content (list, lengths 1-171) | tag (dict) |
---|---|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Retrieve cilium configuration ``` cilium-dbg config get <config name> [flags] ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Cilium configuration options"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_config_get.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"ark backup get\" layout: docs Get backups Get backups ``` ark backup get [flags] ``` ``` -h, --help help for get --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups"
}
] |
{
"category": "Runtime",
"file_name": "ark_backup_get.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Ceph Common Issues <!-- markdownlint-disable MD024 --> <!-- allow duplicate headers in this file --> <!-- omit in toc --> - - - - - - - - - - - - - - - - - - - - - - - - - - Many of these problem cases are hard to summarize down to a short phrase that adequately describes the problem. Each problem will start with a bulleted list of symptoms. Keep in mind that all symptoms may not apply depending on the configuration of Rook. If the majority of the symptoms are seen there is a fair chance you are experiencing that problem. If after trying the suggestions found on this page and the problem is not resolved, the Rook team is very happy to help you troubleshoot the issues in their Slack channel. Once you have , proceed to the `#ceph` channel to ask for assistance. See also the . There are two main categories of information you will need to investigate issues in the cluster: Kubernetes status and logs documented Ceph cluster status (see upcoming section) After you verify the basic health of the running pods, next you will want to run Ceph tools for status of the storage components. There are two ways to run the Ceph tools, either in the Rook toolbox or inside other Rook pods that are already running. Logs on a specific node to find why a PVC is failing to mount See the for a script that will help you gather the logs Other artifacts: The monitors that are expected to be in quorum: `kubectl -n <cluster-namespace> get configmap rook-ceph-mon-endpoints -o yaml | grep data` The provides a simple environment to run Ceph tools. Once the pod is up and running, connect to the pod to execute Ceph commands to evaluate that current state of the cluster. ```console kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l \"app=rook-ceph-tools\" -o jsonpath='{.items[*].metadata.name}') bash ``` Here are some common commands to troubleshoot a Ceph cluster: `ceph status` `ceph osd status` `ceph osd df` `ceph osd utilization` `ceph osd pool stats` `ceph osd tree` `ceph pg stat` The first two status commands provide the overall cluster health. The normal state for cluster operations is HEALTHOK, but will still function when the state is in a HEALTHWARN state. If you are in a WARN state, then the cluster is in a condition that it may enter the HEALTHERROR state at which point *all* disk I/O operations are halted. If a HEALTHWARN state is observed, then one should take action to prevent the cluster from halting when it enters the HEALTH_ERROR state. There are many Ceph sub-commands to look at and manipulate Ceph objects, well beyond the scope this document. See the for more details of gathering information about the health of the cluster. In addition, there are other helpful hints and some best practices located in the . Of particular note, there are scripts for collecting logs and gathering OSD information there. Execution of the `ceph` command hangs PersistentVolumes are not being created Large amount of slow requests are blocking Large amount of stuck requests are blocking One or more MONs are restarting periodically Create a to investigate the current state of Ceph. Here is an example of what one might see. In this case the `ceph status` command would just hang so a CTRL-C needed to be"
},
{
"data": "```console kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status ceph status ^CCluster connection interrupted or timed out ``` Another indication is when one or more of the MON pods restart frequently. Note the 'mon107' that has only been up for 16 minutes in the following output. ```console $ kubectl -n rook-ceph get all -o wide --show-all NAME READY STATUS RESTARTS AGE IP NODE po/rook-ceph-mgr0-2487684371-gzlbq 1/1 Running 0 17h 192.168.224.46 k8-host-0402 po/rook-ceph-mon107-p74rj 1/1 Running 0 16m 192.168.224.28 k8-host-0402 rook-ceph-mon1-56fgm 1/1 Running 0 2d 192.168.91.135 k8-host-0404 rook-ceph-mon2-rlxcd 1/1 Running 0 2d 192.168.123.33 k8-host-0403 rook-ceph-osd-bg2vj 1/1 Running 0 2d 192.168.91.177 k8-host-0404 rook-ceph-osd-mwxdm 1/1 Running 0 2d 192.168.123.31 k8-host-0403 ``` What is happening here is that the MON pods are restarting and one or more of the Ceph daemons are not getting configured with the proper cluster information. This is commonly the result of not specifying a value for `dataDirHostPath` in your Cluster CRD. The `dataDirHostPath` setting specifies a path on the local host for the Ceph daemons to store configuration and data. Setting this to a path like `/var/lib/rook`, reapplying your Cluster CRD and restarting all the Ceph daemons (MON, MGR, OSD, RGW) should solve this problem. After the Ceph daemons have been restarted, it is advisable to restart the . Rook operator is running Either a single mon starts or the mons start very slowly (at least several minutes apart) The crash-collector pods are crashing No mgr, osd, or other daemons are created except the CSI driver When the operator is starting a cluster, the operator will start one mon at a time and check that they are healthy before continuing to bring up all three mons. If the first mon is not detected healthy, the operator will continue to check until it is healthy. If the first mon fails to start, a second and then a third mon may attempt to start. However, they will never form quorum and the orchestration will be blocked from proceeding. The crash-collector pods will be blocked from starting until the mons have formed quorum the first time. There are several common causes for the mons failing to form quorum: The operator pod does not have network connectivity to the mon pod(s). The network may be configured incorrectly. One or more mon pods are in running state, but the operator log shows they are not able to form quorum A mon is using configuration from a previous installation. See the for cleaning the previous cluster. A firewall may be blocking the ports required for the Ceph mons to form quorum. Ensure ports 6789 and 3300 are enabled. See the for more details. There may be MTU mismatch between different networking components. Some networks may be more susceptible to mismatch than others. If Kubernetes CNI or hosts enable jumbo frames (MTU 9000), Ceph will use large packets to maximize network bandwidth. If other parts of the networking chain don't support jumbo frames, this could result in lost or rejected packets unexpectedly. First look at the logs of the operator to confirm if it is able to connect to the mons. ```console kubectl -n rook-ceph logs -l app=rook-ceph-operator ``` Likely you will see an error similar to the following that the operator is timing out when connecting to the mon. The last command is `ceph mon_status`, followed by a timeout message five minutes later. 
```console 2018-01-21 21:47:32.375833 I | exec: Running command: ceph mon_status --cluster=rook --conf=/var/lib/rook/rook-ceph/rook.config --keyring=/var/lib/rook/rook-ceph/client.admin.keyring --format json --out-file /tmp/442263890 2018-01-21 21:52:35.370533 I | exec: 2018-01-21 21:52:35.071462 7f96a3b82700 0 monclient(hunting): authenticate timed out after 300 2018-01-21 21:52:35.071462 7f96a3b82700 0 monclient(hunting): authenticate timed out after 300 2018-01-21 21:52:35.071524 7f96a3b82700 0 librados:"
},
{
"data": "authentication error (110) Connection timed out 2018-01-21 21:52:35.071524 7f96a3b82700 0 librados: client.admin authentication error (110) Connection timed out [errno 110] error connecting to the cluster ``` The error would appear to be an authentication error, but it is misleading. The real issue is a timeout. If you see the timeout in the operator log, verify if the mon pod is running (see the next section). If the mon pod is running, check the network connectivity between the operator pod and the mon pod. A common issue is that the CNI is not configured correctly. To verify the network connectivity: Get the endpoint for a mon Curl the mon from the operator pod For example, this command will curl the first mon from the operator: ```console $ kubectl -n rook-ceph exec deploy/rook-ceph-operator -- curl $(kubectl -n rook-ceph get svc -l app=rook-ceph-mon -o jsonpath='{.items[0].spec.clusterIP}'):3300 2>/dev/null ceph v2 ``` If \"ceph v2\" is printed to the console, the connection was successful. If the command does not respond or otherwise fails, the network connection cannot be established. Second we need to verify if the mon pod started successfully. ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-mon NAME READY STATUS RESTARTS AGE rook-ceph-mon-a-69fb9c78cd-58szd 1/1 CrashLoopBackOff 2 47s ``` If the mon pod is failing as in this example, you will need to look at the mon pod status or logs to determine the cause. If the pod is in a crash loop backoff state, you should see the reason by describing the pod. ```console $ kubectl -n rook-ceph describe pod -l mon=rook-ceph-mon0 ... Last State: Terminated Reason: Error Message: The keyring does not match the existing keyring in /var/lib/rook/rook-ceph-mon0/data/keyring. You may need to delete the contents of dataDirHostPath on the host from a previous deployment. ... ``` See the solution in the next section regarding cleaning up the `dataDirHostPath` on the nodes. This is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the `dataDirHostPath` setting in the cluster CRD and is typically set to `/var/lib/rook`. To fix the issue you will need to delete all components of Rook and then delete the contents of `/var/lib/rook` (or the directory specified by `dataDirHostPath`) on each of the hosts in the cluster. Then when the cluster CRD is applied to start a new cluster, the rook-operator should start all the pods as expected. !!! caution Deleting the `dataDirHostPath` folder is destructive to the storage. Only delete the folder if you are trying to permanently purge the Rook cluster. See the for more details. When you create a PVC based on a rook storage class, it stays pending indefinitely For the Wordpress example, you might see two PVCs in pending state. ```console $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql-pv-claim Pending rook-ceph-block 8s wp-pv-claim Pending rook-ceph-block 16s ``` There are two common causes for the PVCs staying in pending state: There are no OSDs in the cluster The CSI provisioner pod is not running or is not responding to the request to provision the storage To confirm if you have OSDs in your cluster, connect to the and run the `ceph status` command. You should see that you have at least one OSD `up` and `in`. The minimum number of OSDs required depends on the `replicated.size` setting in the pool created for the storage class. 
In a \"test\" cluster, only one OSD is required (see `storageclass-test.yaml`). In the production storage class example (`storageclass.yaml`), three OSDs would be"
},
{
"data": "```console $ ceph status cluster: id: a0452c76-30d9-4c1a-a948-5d8405f19a7c health: HEALTH_OK services: mon: 3 daemons, quorum a,b,c (age 11m) mgr: a(active, since 10m) osd: 1 osds: 1 up (since 46s), 1 in (since 109m) ``` If you don't see the expected number of OSDs, let's investigate why they weren't created. On each node where Rook looks for OSDs to configure, you will see an \"osd prepare\" pod. ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-osd-prepare NAME ... READY STATUS RESTARTS AGE rook-ceph-osd-prepare-minikube-9twvk 0/2 Completed 0 30m ``` See the section on to investigate the logs. The CSI driver may not be responding to the requests. Look in the logs of the CSI provisioner pod to see if there are any errors during the provisioning. There are two provisioner pods: ```console kubectl -n rook-ceph get pod -l app=csi-rbdplugin-provisioner ``` Get the logs of each of the pods. One of them should be the \"leader\" and be responding to requests. ```console kubectl -n rook-ceph logs csi-cephfsplugin-provisioner-d77bb49c6-q9hwq csi-provisioner ``` See also the . Lastly, if you have OSDs `up` and `in`, the next step is to confirm the operator is responding to the requests. Look in the Operator pod logs around the time when the PVC was created to confirm if the request is being raised. If the operator does not show requests to provision the block image, the operator may be stuck on some other operation. In this case, restart the operator pod to get things going again. If the \"osd prepare\" logs didn't give you enough clues about why the OSDs were not being created, please review your configuration. The common misconfigurations include: If `useAllDevices: true`, Rook expects to find local devices attached to the nodes. If no devices are found, no OSDs will be created. If `useAllDevices: false`, OSDs will only be created if `deviceFilter` is specified. Only local devices attached to the nodes will be configurable by Rook. In other words, the devices must show up under `/dev`. The devices must not have any partitions or filesystems on them. Rook will only configure raw devices. Partitions are not yet supported. OSD pods are failing to start You have started a cluster after tearing down another cluster When an OSD starts, the device or directory will be configured for consumption. If there is an error with the configuration, the pod will crash and you will see the CrashLoopBackoff status for the pod. Look in the osd pod logs for an indication of the failure. ```console $ kubectl -n rook-ceph logs rook-ceph-osd-fl8fs ... ``` One common case for failure is that you have re-deployed a test cluster and some state may remain from a previous deployment. If your cluster is larger than a few nodes, you may get lucky enough that the monitors were able to start and form quorum. However, now the OSDs pods may fail to start due to the old state. Looking at the OSD pod logs you will see an error about the file already existing. ```console $ kubectl -n rook-ceph logs rook-ceph-osd-fl8fs ... 
2017-10-31 20:13:11.187106 I | mkfs-osd0: 2017-10-31 20:13:11.186992 7f0059d62e00 -1 bluestore(/var/lib/rook/osd0) _read_fsid unparsable uuid 2017-10-31 20:13:11.187208 I | mkfs-osd0: 2017-10-31 20:13:11.187026 7f0059d62e00 -1 bluestore(/var/lib/rook/osd0) _setup_block_symlink_or_file failed to create block symlink to /dev/disk/by-partuuid/651153ba-2dfc-4231-ba06-94759e5ba273: (17) File exists 2017-10-31 20:13:11.187233 I | mkfs-osd0: 2017-10-31 20:13:11.187038 7f0059d62e00 -1 bluestore(/var/lib/rook/osd0) mkfs failed, (17) File exists 2017-10-31 20:13:11.187254 I | mkfs-osd0: 2017-10-31 20:13:11.187042 7f0059d62e00 -1 OSD::mkfs: ObjectStore::mkfs failed with error (17) File exists 2017-10-31 20:13:11.187275 I | mkfs-osd0: 2017-10-31"
},
{
"data": "7f0059d62e00 -1 ERROR: error creating empty object store in /var/lib/rook/osd0: (17) File exists ``` If the error is from the file that already exists, this is a common problem reinitializing the Rook cluster when the local directory used for persistence has not been purged. This directory is the `dataDirHostPath` setting in the cluster CRD and is typically set to `/var/lib/rook`. To fix the issue you will need to delete all components of Rook and then delete the contents of `/var/lib/rook` (or the directory specified by `dataDirHostPath`) on each of the hosts in the cluster. Then when the cluster CRD is applied to start a new cluster, the rook-operator should start all the pods as expected. No OSD pods are started in the cluster Devices are not configured with OSDs even though specified in the Cluster CRD One OSD pod is started on each node instead of multiple pods for each device First, ensure that you have specified the devices correctly in the CRD. The has several ways to specify the devices that are to be consumed by the Rook storage: `useAllDevices: true`: Rook will consume all devices it determines to be available `deviceFilter`: Consume all devices that match this regular expression `devices`: Explicit list of device names on each node to consume Second, if Rook determines that a device is not available (has existing partitions or a formatted filesystem), Rook will skip consuming the devices. If Rook is not starting OSDs on the devices you expect, Rook may have skipped it for this reason. To see if a device was skipped, view the OSD preparation log on the node where the device was skipped. Note that it is completely normal and expected for OSD prepare pod to be in the `completed` state. After the job is complete, Rook leaves the pod around in case the logs need to be investigated. ```console $ kubectl -n rook-ceph get pod -l app=rook-ceph-osd-prepare NAME READY STATUS RESTARTS AGE rook-ceph-osd-prepare-node1-fvmrp 0/1 Completed 0 18m rook-ceph-osd-prepare-node2-w9xv9 0/1 Completed 0 22m rook-ceph-osd-prepare-node3-7rgnv 0/1 Completed 0 22m ``` ```console $ kubectl -n rook-ceph logs rook-ceph-osd-prepare-node1-fvmrp provision [...] ``` Here are some key lines to look for in the log: ```console 2019-05-30 19:02:57.353171 W | cephosd: skipping device sda that is in use 2019-05-30 19:02:57.452168 W | skipping device \"sdb5\": [\"Used by ceph-disk\"] Insufficient space (<5GB) on vgs Insufficient space (<5GB) LVM detected Has BlueStore device label locked read-only 2019-05-30 19:02:57.535598 I | cephosd: device sdc to be configured by ceph-volume 2019-05-30 19:02:59.844642 I | Type Path LV Size % of device 2019-05-30 19:02:59.844651 I | - 2019-05-30 19:02:59.844677 I | [data] /dev/sdc 7.00 GB 100% ``` Either update the CR with the correct settings, or clean the partitions or filesystem from your devices. To clean devices from a previous install see the . After the settings are updated or the devices are cleaned, trigger the operator to analyze the devices again by restarting the operator. Each time the operator starts, it will ensure all the desired devices are configured. The operator does automatically deploy OSDs in most scenarios, but an operator restart will cover any scenarios that the operator doesn't detect automatically. ```console $ kubectl -n rook-ceph delete pod -l app=rook-ceph-operator [...] ``` This issue is fixed in Rook v1.3 or later. 
After issuing a `reboot` command, node never returned online Only a power cycle helps On a node running a pod with a Ceph persistent volume ```console mount | grep rbd /dev/rbdx on ... (rw,relatime, ..., noquota) ``` When the reboot command is issued, network interfaces are terminated before disks are"
},
{
"data": "This results in the node hanging as repeated attempts to unmount Ceph persistent volumes fail with the following error: ```console libceph: connect [monitor-ip]:6789 error -101 ``` The node needs to be before reboot. After the successful drain, the node can be rebooted as usual. Because `kubectl drain` command automatically marks the node as unschedulable (`kubectl cordon` effect), the node needs to be uncordoned once it's back online. Drain the node: ```console kubectl drain <node-name> --ignore-daemonsets --delete-local-data ``` Uncordon the node: ```console kubectl uncordon <node-name> ``` More than one shared filesystem (CephFS) has been created in the cluster A pod attempts to mount any other shared filesystem besides the first* one that was created The pod incorrectly gets the first filesystem mounted instead of the intended filesystem The only solution to this problem is to upgrade your kernel to `4.7` or higher. This is due to a mount flag added in the kernel version `4.7` which allows to chose the filesystem by name. For additional info on the kernel version requirement for multiple shared filesystems (CephFS), see . You can set a given log level and apply it to all the Ceph daemons at the same time. For this, make sure the toolbox pod is running, then determine the level you want (between 0 and 20). You can find the list of all subsystems and their default values in . Be careful when increasing the level as it will produce very verbose logs. Assuming you want a log level of 1, you will run: ```console $ kubectl -n rook-ceph exec deploy/rook-ceph-tools -- set-ceph-debug-level 1 ceph config set global debug_context 1 ceph config set global debug_lockdep 1 [...] ``` Once you are done debugging, you can revert all the debug flag to their default value by running the following: ```console kubectl -n rook-ceph exec deploy/rook-ceph-tools -- set-ceph-debug-level default ``` They are cases where looking at Kubernetes logs is not enough for diverse reasons, but just to name a few: not everyone is familiar for Kubernetes logging and expects to find logs in traditional directories logs get eaten (buffer limit from the log engine) and thus not requestable from Kubernetes So for each daemon, `dataDirHostPath` is used to store logs, if logging is activated. Rook will bindmount `dataDirHostPath` for every pod. Let's say you want to enable logging for `mon.a`, but only for this daemon. Using the toolbox or from inside the operator run: ```console ceph config set mon.a logtofile true ``` This will activate logging on the filesystem, you will be able to find logs in `dataDirHostPath/$NAMESPACE/log`, so typically this would mean `/var/lib/rook/rook-ceph/log`. You don't need to restart the pod, the effect will be immediate. To disable the logging on file, simply set `logtofile` to `false`. There is no progress on I/O from/to one of RBD devices (`/dev/rbd` or `/dev/nbd*`). After that, the whole worker node hangs up. This happens when the following conditions are satisfied. The problematic RBD device and the corresponding OSDs are co-located. There is an XFS filesystem on top of this device. In addition, when this problem happens, you can see the following messages in `dmesg`. ```console $ dmesg ... [51717.039319] INFO: task kworker/2:1:5938 blocked for more than 120 seconds. [51717.039361] Not tainted 4.15.0-72-generic #81-Ubuntu [51717.039388] \"echo 0 > /proc/sys/kernel/hungtasktimeout_secs\" disables this message. ... 
``` It's the so-called `hung_task` problem and means that there is a deadlock in the kernel. For more detail, please refer to . This problem will be solved by the following two fixes. Linux kernel: A minor feature that is introduced by"
},
{
"data": "It will be included in Linux v5.6. Ceph: A fix that uses the above-mentioned kernel's feature. The Ceph community will probably discuss this fix after releasing Linux v5.6. You can bypass this problem by using ext4 or any other filesystems rather than XFS. Filesystem type can be specified with `csi.storage.k8s.io/fstype` in StorageClass resource. `ceph status` shows \"too few PGs per OSD\" warning as follows. ```console $ ceph status cluster: id: fd06d7c3-5c5c-45ca-bdea-1cf26b783065 health: HEALTH_WARN too few PGs per OSD (16 < min 30) [...] ``` The meaning of this warning is written in . However, in many cases it is benign. For more information, please see . Please refer to if you want to know the proper `pg_num` of pools and change these values. There is a critical flaw in OSD on LV-backed PVC. LVM metadata can be corrupted if both the host and OSD container modify it simultaneously. For example, the administrator might modify it on the host, while the OSD initialization process in a container could modify it too. In addition, if `lvmetad` is running, the possibility of occurrence gets higher. In this case, the change of LVM metadata in OSD container is not reflected to LVM metadata cache in host for a while. If you still decide to configure an OSD on LVM, please keep the following in mind to reduce the probability of this issue. Disable `lvmetad.` Avoid configuration of LVs from the host. In addition, don't touch the VGs and physical volumes that back these LVs. Avoid incrementing the `count` field of `storageClassDeviceSets` and create a new LV that backs an OSD simultaneously. You can know whether the above-mentioned tag exists with the command: `sudo lvs -o lvname,lvtags`. If the `lvtag` field is empty in an LV corresponding to the OSD lvtags, this OSD encountered the problem. In this case, please or replace with other new OSD before restarting. This problem doesn't happen in newly created LV-backed PVCs because OSD container doesn't modify LVM metadata anymore. The existing lvm mode OSDs work continuously even thought upgrade your Rook. However, using the raw mode OSDs is recommended because of the above-mentioned problem. You can replace the existing OSDs with raw mode OSDs by retiring them and adding new OSDs one by one. See the documents and . If the Kernel is configured with a low , the OSD prepare job might fail with the following error: ```text exec: stderr: 2020-09-17T00:30:12.145+0000 7f0c17632f40 -1 bdev(0x56212de88700 /var/lib/ceph/osd/ceph-0//block) aiostart io_setup(2) failed with EAGAIN; try increasing /proc/sys/fs/aio-max-nr ``` To overcome this, you need to increase the value of `fs.aio-max-nr` of your sysctl configuration (typically `/etc/sysctl.conf`). You can do this with your favorite configuration management system. Alternatively, you can have a to apply the configuration for you on all your nodes. Users running Rook versions v1.6.0-v1.6.7 may observe unwanted OSDs on partitions that appear unexpectedly and seemingly randomly, which can corrupt existing OSDs. Unexpected partitions are created on host disks that are used by Ceph OSDs. This happens more often on SSDs than HDDs and usually only on disks that are 875GB or larger. Many tools like `lsblk`, `blkid`, `udevadm`, and `parted` will not show a partition table type for the partition. Newer versions of `blkid` are generally able to recognize the type as \"atari\". The underlying issue causing this is Atari partition (sometimes identified as AHDI) support in the Linux kernel. 
Atari partitions have very relaxed specifications compared to other partition types, and it is relatively easy for random data written to a disk to appear as an Atari partition to the Linux"
},
{
"data": "Ceph's Bluestore OSDs have an anecdotally high probability of writing data on to disks that can appear to the kernel as an Atari partition. Below is an example of `lsblk` output from a node where phantom Atari partitions are present. Note that `sdX1` is never present for the phantom partitions, and `sdX2` is 48G on all disks. `sdX3` is a variable size and may not always be present. It is possible for `sdX4` to appear, though it is an anecdotally rare event. ```console NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sdb 8:16 0 3T 0 disk sdb2 8:18 0 48G 0 part sdb3 8:19 0 6.1M 0 part sdc 8:32 0 3T 0 disk sdc2 8:34 0 48G 0 part sdc3 8:35 0 6.2M 0 part sdd 8:48 0 3T 0 disk sdd2 8:50 0 48G 0 part sdd3 8:51 0 6.3M 0 part ``` You can see for more detailed information and discussion. If you are using Rook v1.6, you must first update to v1.6.8 or higher to avoid further incidents of OSD corruption caused by these Atari partitions. An old workaround suggested using `deviceFilter: ^sd[a-z]+$`, but this still results in unexpected partitions. Rook will merely stop creating new OSDs on the partitions. It does not fix a related issue that `ceph-volume` that is unaware of the Atari partition problem. Users who used this workaround are still at risk for OSD failures in the future. To resolve the issue, immediately update to v1.6.8 or higher. After the update, no corruption should occur on OSDs created in the future. Next, to get back to a healthy Ceph cluster state, focus on one corrupted disk at a time and one disk at a time. As an example, you may have `/dev/sdb` with two unexpected partitions (`/dev/sdb2` and `/dev/sdb3`) as well as a second corrupted disk `/dev/sde` with one unexpected partition (`/dev/sde2`). First, remove the OSDs associated with `/dev/sdb`, `/dev/sdb2`, and `/dev/sdb3`. There might be only one, or up to 3 OSDs depending on how your system was affected. Again see the . Use `dd` to wipe the first sectors of the partitions followed by the disk itself. E.g., `dd if=/dev/zero of=/dev/sdb2 bs=1M` `dd if=/dev/zero of=/dev/sdb3 bs=1M` `dd if=/dev/zero of=/dev/sdb bs=1M` Then wipe clean `/dev/sdb` to prepare it for a new OSD. See for details. After this, scale up the Rook operator to deploy a new OSD to `/dev/sdb`. This will allow Ceph to use `/dev/sdb` for data recovery and replication while the next OSDs are removed. Now Repeat steps 1-4 for `/dev/sde` and `/dev/sde2`, and continue for any other corrupted disks. If your Rook cluster does not have any critical data stored in it, it may be simpler to uninstall Rook completely and redeploy with v1.6.8 or higher. Configuration settings passed as environment variables do not take effect as expected. For example, the discover daemonset is not created, even though `ROOKENABLEDISCOVERY_DAEMON=\"true\"` is set. Inspect the `rook-ceph-operator-config` ConfigMap for conflicting settings. The ConfigMap takes precedence over the environment. The ConfigMap , even if all actual configuration is supplied through the environment. Look for lines with the `op-k8sutil` prefix in the operator logs. These lines detail the final values, and source, of the different configuration variables. 
Verify that both of the following messages are present in the operator logs: ```log rook-ceph-operator-config-controller successfully started rook-ceph-operator-config-controller done reconciling ``` If it does not exist, create an empty ConfigMap: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: rook-ceph-operator-config namespace: rook-ceph # namespace:operator data: {} ``` If the ConfigMap exists, remove any keys that you wish to configure through the environment."
}
] |
{
"category": "Runtime",
"file_name": "ceph-common-issues.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide shows you how to configure Piraeus Datastore when using . To complete this guide, you should be familiar with: editing `LinstorCluster` resources. Because k0s store their state in a separate directory (`/var/lib/k0s`) the LINSTOR CSI Driver needs to be updated to use a new path for mounting volumes. To change the LINSTOR CSI Driver, so that it uses the k0s state paths, apply the following `LinstorCluster`: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: csiNode: enabled: true podTemplate: spec: containers: name: linstor-csi volumeMounts: mountPath: /var/lib/k0s/kubelet name: publish-dir mountPropagation: Bidirectional name: csi-node-driver-registrar args: --v=5 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/k0s/kubelet/plugins/linstor.csi.linbit.com/csi.sock --health-port=9809 volumes: name: publish-dir hostPath: path: /var/lib/k0s/kubelet name: registration-dir hostPath: path: /var/lib/k0s/kubelet/plugins_registry name: plugin-dir hostPath: path: /var/lib/k0s/kubelet/plugins/linstor.csi.linbit.com ```"
}
] |
{
"category": "Runtime",
"file_name": "k0s.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document is a reference of the API types introduced by Kilo. Note: this document is generated from code comments. When contributing a change to this document, please do so by changing the code comments. DNSOrIP represents either a DNS name or an IP address. When both are given, the IP address, as it is more specific, override the DNS name. | Field | Description | Scheme | Required | | -- | -- | | -- | | dns | DNS must be a valid RFC 1123 subdomain. | string | false | | ip | IP must be a valid IP address. | string | false | Peer is a WireGuard peer that should have access to the VPN. | Field | Description | Scheme | Required | | -- | -- | | -- | | metadata | Standard objects metadata. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#metadata | | false | | spec | Specification of the desired behavior of the Kilo Peer. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/api-conventions.md#spec-and-status | | true | PeerEndpoint represents a WireGuard endpoint, which is an IP:port tuple. | Field | Description | Scheme | Required | | -- | -- | | -- | | dnsOrIP | DNSOrIP is a DNS name or an IP address. | | true | | port | Port must be a valid port number. | uint32 | true | PeerList is a list of peers. | Field | Description | Scheme | Required | | -- | -- | | -- | | metadata | Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds | | false | | items | List of peers. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md | | true | PeerSpec is the description and configuration of a peer. | Field | Description | Scheme | Required | | -- | -- | | -- | | allowedIPs | AllowedIPs is the list of IP addresses that are allowed for the given peer's tunnel. | []string | true | | endpoint | Endpoint is the initial endpoint for connections to the peer. | * | false | | persistentKeepalive | PersistentKeepalive is the interval in seconds of the emission of keepalive packets by the peer. This defaults to 0, which disables the feature. | int | false | | presharedKey | PresharedKey is the optional symmetric encryption key for the peer. | string | false | | publicKey | PublicKey is the WireGuard public key for the peer. | string | true |"
}
] |
{
"category": "Runtime",
"file_name": "api.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Thread Naming ================ Gluster processes spawn many threads; some threads are created by libglusterfs library, while others are created by xlators. When gfapi library is used in an application, some threads belong to the application and some are spawned by gluster libraries. We also have features where n number of threads are spawned to act as worker threads for same operation. In all the above cases, it is useful to be able to determine the list of threads that exist in runtime. Naming threads when you create them is the easiest way to provide that information to kernel so that it can then be queried by any means. How to name threads We have two wrapper functions in libglusterfs for creating threads. They take name as an argument and set thread name after its creation. ```C gfthreadcreate (pthreadt *thread, const pthreadattr_t *attr, void (start_routine)(void ), void arg, const char *name) ``` ```C gfthreadcreatedetached (pthreadt *thread, void (start_routine)(void ), void arg, const char *name) ``` As max name length for a thread in POSIX is only 16 characters including the '\\0' character, you have to be a little creative with naming. Also, it is important that all Gluster threads have common prefix. Considering these conditions, we have \"glfs_\" as prefix for all the threads created by these wrapper functions. It is responsibility of the owner of thread to provide the suffix part of the name. It does not have to be a descriptive name, as it has only 10 letters to work with. However, it should be unique enough such that it can be matched with a table which describes it. If n number of threads are spwaned to perform same function, it is must that the threads are numbered. Table of thread names Thread names don't have to be a descriptive; however, it should be unique enough such that it can be matched with a table below without ambiguity. bdaio - block device aio brfsscan - bit rot fs scanner brhevent - bit rot event handler brmon - bit rot monitor brosign - bit rot one shot signer brpobj - bit rot object processor brsproc - bit rot scrubber brssign - bit rot stub signer brswrker - bit rot worker clogc - changelog consumer clogcbki - changelog callback invoker clogd - changelog dispatcher clogecon - changelog reverse connection clogfsyn - changelog fsync cloghcon - changelog history consumer clogjan - changelog janitor clogpoll - changelog poller clogproc - changelog process clogro - changelog rollover ctrcomp - change time recorder compaction dhtdf - dht defrag task dhtdg - dht defrag start dhtfcnt - dht rebalance file counter ecshd - ec heal daemon epollN - epoll thread fusenoti - fuse notify fuseproc - fuse main thread gdhooks - glusterd hooks glfspoll - gfapi poller thread idxwrker - index worker iosdump - io stats dump iotwr - io thread worker leasercl - lease recall memsweep - sweeper thread for mem pools nfsauth - nfs auth nfsnsm - nfs nsm nfsudp - nfs udp mount nlmmon - nfs nlm/nsm mon posixaio - posix aio posixfsy - posix fsync posixhc - posix heal posixjan - posix janitor posixrsv - posix reserve quiesce - quiesce dequeue rdmaAsyn - rdma async event handler rdmaehan - rdma completion handler rdmarcom - rdma receive completion handler rdmascom - rdma send completion handler rpcsvcrh - rpcsvc request handler scleanup - socket cleanup shdheal - self heal daemon sigwait - glusterfsd sigwaiter spoller - socket poller sprocN - syncop worker thread tbfclock - token bucket filter token generator thread timer - timer thread upreaper - upcall reaper"
}
] |
{
"category": "Runtime",
"file_name": "thread-naming.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: FUSE SDK Overview The Alluxio POSIX API is a feature that allows mounting training datasets in specific storage services (e.g. S3, HDFS) to the local filesystem and provides caching capabilities to speed up I/O access to frequently used data. There are two kinds of caching capabilities: 1. local caching only 2. local caching + distributed caching. Differences between the two solutions are listed below, choose your desired solution based on training requirements and available resources. <table class=\"table table-striped\"> <tr> <th>Category</th> <th>Local Caching</th> <th>Distributed Caching</th> </tr> <tr> <td>Prerequisite</td> <td>N/A</td> <td>Require a running Alluxio cluster (master + worker)</td> </tr> <tr> <td>Caching Capability</td> <td>Bounded by local storage size</td> <td>Bounded by Alluxio cluster storage size</td> </tr> <tr> <td>Suited Workloads</td> <td>Single node training with large dataset. Distributed training with no data shuffle between nodes</td> <td>Multiple training nodes or training tasks share the same dataset</td> </tr> </table> See to quickly setup your FUSE SDK local cache solution which can connects to your desired storage services. provides different local cache capabilities to speed up your workloads and reduce the pressure of storage services. [Local Kernel Data Cache Configuration]({{ '/en/fuse-sdk/Local-Cache-Tuning.html#local-kernel-data-cache-configuration' | relativize_url }}) [Local Userspace Data Cache Configuration]({{ '/en/fuse-sdk/Local-Cache-Tuning.html#local-userspace-data-cache-configuration' | relativize_url }}) [Local Kernel Metadata Cache Configuration]({{ '/en/fuse-sdk/Local-Cache-Tuning.html#local-kernel-metadata-cache-configuration' | relativize_url }}) [Local Userspace Metadata Cache Configuration]({{ '/en/fuse-sdk/Local-Cache-Tuning.html#local-userspace-metadata-cache-configuration' | relativize_url }}) provides advanced FUSE SDK tuning tips for performance optimization or debugging. FUSE SDK can connect to a shared distributed caching service. For more information, please refer to"
}
] |
{
"category": "Runtime",
"file_name": "FUSE-SDK-Overview.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document provides an overview on how to run Kata containers with ACRN hypervisor and device model. ACRN is a flexible, lightweight Type-1 reference hypervisor built with real-time and safety-criticality in mind. ACRN uses an open source platform making it optimized to streamline embedded development. Some of the key features being: Small footprint - Approx. 25K lines of code (LOC). Real Time - Low latency, faster boot time, improves overall responsiveness with hardware. Adaptability - Multi-OS support for guest operating systems like Linux, Android, RTOSes. Rich I/O mediators - Allows sharing of various I/O devices across VMs. Optimized for a variety of IoT (Internet of Things) and embedded device solutions. Please refer to ACRN for more details on ACRN hypervisor and device model. This document requires the presence of the ACRN hypervisor and Kata Containers on your system. Install using the instructions available through the following links: ACRN supported . > Note: Please make sure to have a minimum of 4 logical processors (HT) or cores. ACRN setup. For networking, ACRN supports either MACVTAP or TAP. If MACVTAP is not enabled in the Service OS, please follow the below steps to update the kernel: ```sh $ git clone https://github.com/projectacrn/acrn-kernel.git $ cd acrn-kernel $ cp kernelconfigsos .config $ sed -i \"s/# CONFIGMACVLAN is not set/CONFIGMACVLAN=y/\" .config $ sed -i '$ i CONFIG_MACVTAP=y' .config $ make clean && make olddefconfig && make && sudo make modulesinstall INSTALLMOD_PATH=out/ ``` Login into Service OS and update the kernel with MACVTAP support: ```sh $ sudo mount /dev/sda1 /mnt $ sudo scp -r <user name>@<host address>:<your workspace>/acrn-kernel/arch/x86/boot/bzImage /mnt/EFI/org.clearlinux/ $ sudo scp -r <user name>@<host address>:<your workspace>/acrn-kernel/out/lib/modules/* /lib/modules/ $ conf_file=$(sed -n '$ s/default //p'"
},
{
"data": "$ kernelimg=$(sed -n 2p /mnt/loader/entries/$conffile | cut -d'/' -f4) $ sudo sed -i \"s/$kernelimg/bzImage/g\" /mnt/loader/entries/$conffile $ sync && sudo umount /mnt && sudo reboot ``` Kata Containers installation: Automated installation does not seem to be supported for Clear Linux, so please use steps. Note: Create rootfs image and not initrd image. In order to run Kata with ACRN, your container stack must provide block-based storage, such as device-mapper. Note: Currently, by design you can only launch one VM from Kata Containers using ACRN hypervisor (SDC scenario). Based on feedback from community we can increase number of VMs. To configure Docker for device-mapper and Kata, Stop Docker daemon if it is already running. ```bash $ sudo systemctl stop docker ``` Set `/etc/docker/daemon.json` with the following contents. ``` { \"storage-driver\": \"devicemapper\" } ``` Restart docker. ```bash $ sudo systemctl daemon-reload $ sudo systemctl restart docker ``` Configure to use `kata-runtime`. To configure Kata Containers with ACRN, copy the generated `configuration-acrn.toml` file when building the `kata-runtime` to either `/etc/kata-containers/configuration.toml` or `/usr/share/defaults/kata-containers/configuration.toml`. The following command shows full paths to the `configuration.toml` files that the runtime loads. It will use the first path that exists. (Please make sure the kernel and image paths are set correctly in the `configuration.toml` file) ```bash $ sudo kata-runtime --show-default-config-paths ``` Warning: Please offline CPUs using script, else VM launches will fail. ```bash $ sudo ./offline_cpu.sh ``` Start an ACRN based Kata Container, ```bash $ sudo docker run -ti --runtime=kata-runtime busybox sh ``` You will see ACRN(`acrn-dm`) is now running on your system, as well as a `kata-shim`. You should obtain an interactive shell prompt. Verify that all the Kata processes terminate once you exit the container. ```bash $ ps -ef | grep -E \"kata|acrn\" ``` Validate ACRN hypervisor by using `kata-runtime kata-env`, ```sh $ kata-runtime kata-env | awk -v RS= '/\\[Hypervisor\\]/' [Hypervisor] MachineType = \"\" Version = \"DM version is: 1.2-unstable-254577a6-dirty (daily tag:acrn-2019w27.4-140000p) Path = \"/usr/bin/acrn-dm\" BlockDeviceDriver = \"virtio-blk\" EntropySource = \"/dev/urandom\" Msize9p = 0 MemorySlots = 10 Debug = false UseVSock = false SharedFS = \"\" ```"
}
] |
{
"category": "Runtime",
"file_name": "how-to-use-kata-containers-with-acrn.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - File | Pointer to string | | [optional] Socket | Pointer to string | | [optional] Mode | string | | Iommu | Pointer to bool | | [optional] [default to false] `func NewConsoleConfig(mode string, ) *ConsoleConfig` NewConsoleConfig instantiates a new ConsoleConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewConsoleConfigWithDefaults() *ConsoleConfig` NewConsoleConfigWithDefaults instantiates a new ConsoleConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *ConsoleConfig) GetFile() string` GetFile returns the File field if non-nil, zero value otherwise. `func (o ConsoleConfig) GetFileOk() (string, bool)` GetFileOk returns a tuple with the File field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *ConsoleConfig) SetFile(v string)` SetFile sets File field to given value. `func (o *ConsoleConfig) HasFile() bool` HasFile returns a boolean if a field has been set. `func (o *ConsoleConfig) GetSocket() string` GetSocket returns the Socket field if non-nil, zero value otherwise. `func (o ConsoleConfig) GetSocketOk() (string, bool)` GetSocketOk returns a tuple with the Socket field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *ConsoleConfig) SetSocket(v string)` SetSocket sets Socket field to given value. `func (o *ConsoleConfig) HasSocket() bool` HasSocket returns a boolean if a field has been set. `func (o *ConsoleConfig) GetMode() string` GetMode returns the Mode field if non-nil, zero value otherwise. `func (o ConsoleConfig) GetModeOk() (string, bool)` GetModeOk returns a tuple with the Mode field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *ConsoleConfig) SetMode(v string)` SetMode sets Mode field to given value. `func (o *ConsoleConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o ConsoleConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *ConsoleConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *ConsoleConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "ConsoleConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "By now, vCPU threads of Kata Containers are scheduled randomly to CPUs. And each pod would request a specific set of CPUs which we call it CPU set (just the CPU set meaning in Linux cgroups). If the number of vCPU threads are equal to that of CPUs claimed in CPU set, we can then pin each vCPU thread to one specified CPU, to reduce the cost of random scheduling. Two ways are provided to use this vCPU thread pinning feature: through `QEMU` configuration file and through annotations. Finally the pinning parameter is passed to `HypervisorConfig`. | API Info | Value | |-|--| | Package | `golang.org/x/sys/unix` | | Method | `unix.SchedSetaffinity(thread_id, &unixCPUSet)` | | Official Doc Page | https://pkg.go.dev/golang.org/x/sys/unix#SchedSetaffinity | As shown in Section 1, when `num(vCPU threads) == num(CPUs in CPU set)`, we shall pin each vCPU thread to a specified CPU. And when this condition is broken, we should restore to the original random scheduling pattern. So when may `num(CPUs in CPU set)` change? There are 5 possible scenes: | Possible scenes | Related Code | |--|--| | when creating a container | File Sandbox.go, in method `CreateContainer` | | when starting a container | File Sandbox.go, in method `StartContainer` | | when deleting a container | File Sandbox.go, in method `DeleteContainer` | | when updating a container | File Sandbox.go, in method `UpdateContainer` | | when creating multiple containers | File Sandbox.go, in method `createContainers` | We can split the whole process into the following steps. Related methods are `checkVCPUsPinning` and `resetVCPUsPinning`, in file Sandbox.go."
}
] |
{
"category": "Runtime",
"file_name": "vcpu-threads-pinning.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "To integrate with custom authentication methods using the ), MinIO provides an STS API extension called `AssumeRoleWithCustomToken`. After configuring the plugin, use the generated Role ARN with `AssumeRoleWithCustomToken` to get temporary credentials to access object storage. To make an STS API request with this method, send a POST request to the MinIO endpoint with following query parameters: | Parameter | Type | Required | | |--||-|-| | Action | String | Yes | Value must be `AssumeRoleWithCustomToken` | | Version | String | Yes | Value must be `2011-06-15` | | Token | String | Yes | Token to be authenticated by identity plugin | | RoleArn | String | Yes | Must match the Role ARN generated for the identity plugin | | DurationSeconds | Integer | No | Duration of validity of generated credentials. Must be at least 900. | The validity duration of the generated STS credentials is the minimum of the `DurationSeconds` parameter (if passed) and the validity duration returned by the Identity Management Plugin. XML response for this API is similar to Sample request with `curl`: ```sh curl -XPOST 'http://localhost:9001/?Action=AssumeRoleWithCustomToken&Version=2011-06-15&Token=aaa&RoleArn=arn:minio:iam:::role/idmp-vGxBdLkOc8mQPU1-UQbBh-yWWVQ' ``` Prettified Response: ```xml <?xml version=\"1.0\" encoding=\"UTF-8\"?> <AssumeRoleWithCustomTokenResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\"> <AssumeRoleWithCustomTokenResult> <Credentials> <AccessKeyId>24Y5H9VHE14H47GEOKCX</AccessKeyId> <SecretAccessKey>H+aBfQ9B1AeWWb++84hvp4tlFBo9aP+hUTdLFIeg</SecretAccessKey> <Expiration>2022-05-25T19:56:34Z</Expiration> <SessionToken>eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiIyNFk1SDlWSEUxNEg0N0dFT0tDWCIsImV4cCI6MTY1MzUwODU5NCwiZ3JvdXBzIjpbImRhdGEtc2NpZW5jZSJdLCJwYXJlbnQiOiJjdXN0b206QWxpY2UiLCJyb2xlQXJuIjoiYXJuOm1pbmlvOmlhbTo6OnJvbGUvaWRtcC14eHgiLCJzdWIiOiJjdXN0b206QWxpY2UifQ.1tO1LmlUNXiy-wl-ZbkJLWTpaPlhaGqHehsi21lNAmAGCImHHsPb-GA4lRq6GkvHAODN5ZYCf_S-OwpOOdxFwA</SessionToken> </Credentials> <AssumedUser>custom:Alice</AssumedUser> </AssumeRoleWithCustomTokenResult> <ResponseMetadata> <RequestId>16F26E081E36DE63</RequestId> </ResponseMetadata> </AssumeRoleWithCustomTokenResponse> ```"
}
] |
{
"category": "Runtime",
"file_name": "custom-token-identity.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Please note: K8up is a young Open Source project, evolved from the project sponsor . As such, we're in the process of setting up a proper project governance. If you have any wish, let use know via a . The K8up community adheres to the following principles: Open: K8up is open source. Welcoming and respectful: See . Transparent and accessible: Changes to the K8up organization, K8up code repositories, and CNCF related activities (e.g. level, involvement, etc.) are done in public. Merit: Ideas and contributions are accepted according to their technical merit and alignment with project objectives, scope, and design principles. The K8up roadmap is on GitHub, see . Every contribution is welcome. Please see our how to best contribute. The k8up-io GitHub project maintainers team reflects the list of Maintainers. K8up is a CNCF sandbox project. As such, K8up might be involved in CNCF (or other CNCF projects) related marketing, events, or activities. Please read our code of conduct here: . The code of conduct is overseen by the K8up project maintainers. Possible code of conduct violations should be emailed to the project maintainers [email protected]. If the possible violation is against one of the project maintainers, that member will be recused from voting on the issue. Such issues must be escalated to the appropriate CNCF contact, and CNCF may choose to intervene. The following licenses and contributor agreements will be used for K8up projects: for code for new contributions"
}
] |
{
"category": "Runtime",
"file_name": "GOVERNANCE.md",
"project_name": "K8up",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: User Command Line Interface {% comment %} This is a generated file created by running command \"bin/alluxio generate user-cli\" The command parses the golang command definitions and descriptions to generate the markdown in this file {% endcomment %} Alluxio's command line interface provides user access to various operations, such as: Start or stop processes Filesystem operations Administrative commands Invoke the executable to view the possible subcommands: ```shell $ ./bin/alluxio Usage: bin/alluxio [command] Available Commands: cache Worker-related file system and format operations. conf Get, set, and validate configuration settings, primarily those defined in conf/alluxio-site.properties exec Run the main method of an Alluxio class, or end-to-end tests on an Alluxio cluster. fs Operations to interface with the Alluxio filesystem generate Generate files used in documentation help Help about any command info Retrieve and/or display info about the running Alluxio cluster init Initialization operations such as format and validate job Command line tool for interacting with the job service. journal Journal related operations process Start or stop cluster processes Flags: --debug-log True to enable debug logging Use \"bin/alluxio [command] --help\" for more information about a command. ``` To set JVM system properties as part of the command, set the `-D` flag in the form of `-Dproperty=value`. To attach debugging java options specified by `$ALLUXIOUSERATTACH_OPTS`, set the `--attach-debug` flag Note that, as a part of Alluxio deployment, the Alluxio shell will also take the configuration in `${ALLUXIOHOME}/conf/alluxio-site.properties` when it is run from Alluxio installation at `${ALLUXIOHOME}`. Worker-related file system and format operations. Usage: `bin/alluxio cache format` The format command formats the Alluxio worker on this host. This deletes all the cached data stored by the worker. Data in the under storage will not be changed. Warning: Format should only be called when the worker is not running Examples: ```shell $ ./bin/alluxio cache format ``` Usage: `bin/alluxio cache free [flags]` Synchronously free cached files along a path or held by a specific worker Flags: `--path`: The file or directory to free (Default: \"\") `--worker`: The worker to free (Default: \"\") Examples: ```shell $ ./bin/alluxio cache free --path /path/to/file ``` ```shell $ ./bin/alluxio cache free --worker <workerHostName> ``` Get, set, and validate configuration settings, primarily those defined in conf/alluxio-site.properties Usage: `bin/alluxio conf get [key] [flags]` The get command prints the configured value for the given key. If the key is invalid, it returns a nonzero exit code. If the key is valid but isn't set, an empty string is printed. If no key is specified, the full configuration is printed. Note: This command does not require the Alluxio cluster to be running. Flags: `--master`: Show configuration properties used by the master (Default: false) `--source`: Show source of the configuration property instead of the value (Default: false) `--unit`: Unit of the value to return, converted to correspond to the given unit. 
E.g., with \"--unit KB\", a configuration value of \"4096B\" will return 4 Possible options include B, KB, MB, GB, TP, PB, MS, S, M, H, D (Default: \"\") Examples: ```shell $ ./bin/alluxio conf get ``` ```shell $ ./bin/alluxio conf get alluxio.master.hostname ``` ```shell $ ./bin/alluxio conf get --master ``` ```shell $ ./bin/alluxio conf get --source ``` ```shell $ ./bin/alluxio conf get alluxio.user.block.size.bytes.default --unit KB $ ./bin/alluxio conf get alluxio.master.journal.flush.timeout --unit S ``` Usage: `bin/alluxio conf log [flags]` The log command returns the current value of or updates the log level of a particular class on specific"
},
{
"data": "Users are able to change Alluxio server-side log levels at runtime. The --target flag specifies which processes to apply the log level change to. The target could be of the form <master|workers|jobmaster|jobworkers|host:webPort[:role]> and multiple targets can be listed as comma-separated entries. The role can be one of master,worker,jobmaster,jobworker. Using the role option is useful when an Alluxio process is configured to use a non-standard web port (e.g. if an Alluxio master does not use 19999 as its web port). The default target value is the primary master, primary job master, all workers and job workers. Note: This command requires the Alluxio cluster to be running. Flags: `--level`: If specified, sets the specified logger at the given level (Default: \"\") `--name`: (Required) Logger name (ex. alluxio.master.file.DefaultFileSystemMaster) `--target`: A target name among <master|workers|jobmaster|jobworkers|host:webPort[:role]>. Defaults to master,workers,jobmaster,jobworkers (Default: []) Examples: ```shell $ ./bin/alluxio conf log --logName alluxio.master.file.DefaultFileSystemMaster --target=master --level=DEBUG ``` ```shell $ ./bin/alluxio conf log --logName alluxio.worker.dora.PagedDoraWorker.java --target=myHostName:worker --level=WARN ``` Run the main method of an Alluxio class, or end-to-end tests on an Alluxio cluster. Usage: `bin/alluxio exec basicIOTest [flags]` Run all end-to-end tests or a specific test, on an Alluxio cluster. Flags: `--directory`: Alluxio path for the tests working directory. Default: / (Default: \"\") `--operation`: The operation to test, either BASIC or BASICNONBYTE_BUFFER. By default both operations are tested. (Default: \"\") `--readType`: The read type to use, one of NOCACHE, CACHE, CACHEPROMOTE. By default all readTypes are tested. (Default: \"\") `--workers`: Alluxio worker addresses to run tests on. If not specified, random ones will be used. (Default: \"\") `--writeType`: The write type to use, one of MUSTCACHE, CACHETHROUGH, THROUGH. By default all writeTypes are tested. (Default: \"\") Examples: ```shell $ ./bin/alluxio exec basicIOTest ``` ```shell $ ./bin/alluxio exec basicIOtest --operation BASIC --readType NO_CACHE --writeType THROUGH ``` Usage: `bin/alluxio exec class [flags]` Run the main method of an Alluxio class. Flags: `--jar`: Determine a JAR file to run. (Default: \"\") `--m`: Determine a module to run. (Default: \"\") Usage: `bin/alluxio exec hdfsMountTest [flags]` Tests runs a set of validations against the given hdfs path. Flags: `--option`: options associated with this mount point. (Default: \"\") `--path`: (Required) specifies the HDFS path you want to validate. `--readonly`: mount point is readonly in Alluxio. (Default: false) `--shared`: mount point is shared. (Default: false) Usage: `bin/alluxio exec ufsIOTest [flags]` A benchmarking tool for the I/O between Alluxio and UFS. This test will measure the I/O throughput between Alluxio workers and the specified UFS path. Each worker will create concurrent clients to first generate test files of the specified size then read those files. The write/read I/O throughput will be measured in the process. Flags: `--cluster`: specifies the benchmark is run in the Alluxio cluster. If not specified, this benchmark will run locally. (Default: false) `--cluster-limit`: specifies how many Alluxio workers to run the benchmark concurrently. If >0, it will only run on that number of workers. If 0, it will run on all available cluster workers. 
If <0, will run on the workers from the end of the worker list. This flag is only used if --cluster is enabled. (Default: 0) `--io-size`: specifies the amount of data each thread writes/reads. (Default: \"\") `--java-opt`: The java options to add to the command line to for the task. This can be repeated. The options must be quoted and prefixed with a space. For example: --java-opt \" -Xmx4g\" --java-opt \""
},
{
"data": "(Default: []) `--path`: (Required) specifies the path to write/read temporary data in. `--threads`: specifies the number of threads to concurrently use on each worker. (Default: 4) Examples: ```shell $ ./bin/alluxio runUfsIOTest --path hdfs://<hdfs-address> ``` ```shell $ ./bin/alluxio runUfsIOTest --path hdfs://<hdfs-address> --cluster --cluster-limit 1 ``` ```shell $ ./bin/alluxio runUfsIOTest --path hdfs://<hdfs-address> --cluster --cluster-limit 2 --io-size 512m --threads 2 ``` Usage: `bin/alluxio exec ufsTest [flags]` Test the integration between Alluxio and the given UFS to validate UFS semantics Flags: `--path`: (Required) the full UFS path to run tests against. `--test`: Test name, this option can be passed multiple times to indicate multipleZ tests (Default: []) Operations to interface with the Alluxio filesystem For commands that take Alluxio URIs as an argument such as ls or mkdir, the argument should be either A complete Alluxio URI, such as alluxio://<masterHostname>:<masterPort>/<path> A path without its scheme header, such as /path, in order to use the default hostname and port set in alluxio-site.properties Note: All fs commands require the Alluxio cluster to be running. Most of the commands which require path components allow wildcard arguments for ease of use. For example, the command \"bin/alluxio fs rm '/data/2014*'\" deletes anything in the data directory with a prefix of 2014. Some shells will attempt to glob the input paths, causing strange errors. As a workaround, you can disable globbing (depending on the shell type; for example, set -f) or by escaping wildcards For example, the command \"bin/alluxio fs cat /\\\\*\" uses the escape backslash character twice. This is because the shell script will eventually call a java program which should have the final escaped parameters \"cat /\\\\*\". Usage: `bin/alluxio fs cat [path]` The cat command prints the contents of a file in Alluxio to the shell. Examples: ```shell $ ./bin/alluxio fs cat /output/part-00000 ``` Usage: `bin/alluxio fs check-cached [path] [flags]` Checks if files under a path have been cached in alluxio. Flags: `--limit`: Limit number of files to check (Default: 1000) `--sample`: Sample ratio, 10 means sample 1 in every 10 files. (Default: 1) Usage: `bin/alluxio fs checksum [path]` The checksum command outputs the md5 value of a file in Alluxio. This can be used to verify the contents of a file stored in Alluxio. Examples: ```shell $ ./bin/alluxio fs checksum /LICENSE md5sum: bf0513403ff54711966f39b058e059a3 md5 LICENSE MD5 (LICENSE) = bf0513403ff54711966f39b058e059a3 ``` Usage: `bin/alluxio fs chgrp [group] [path] [flags]` The chgrp command changes the group of the file or directory in Alluxio. Alluxio supports file authorization with POSIX file permissions. The file owner or superuser can execute this command. Flags: `--recursive`,`-R`: change the group recursively for all files and directories under the given path (Default: false) Examples: ```shell $ ./bin/alluxio fs chgrp alluxio-group-new /input/file1 ``` Usage: `bin/alluxio fs chmod [mode] [path] [flags]` The chmod command changes the permission of a file or directory in Alluxio. The permission mode is represented as an octal 3 digit value. Refer to https://en.wikipedia.org/wiki/Chmod#Numerical_permissions for a detailed description of the modes. 
Flags: `--recursive`,`-R`: change the permission recursively for all files and directories under the given path (Default: false) Examples: ```shell $ ./bin/alluxio fs chmod 755 /input/file1 ``` Usage: `bin/alluxio fs chown <owner>[:<group>] <path> [flags]` The chown command changes the owner of a file or directory in"
},
{
"data": "The ownership of a file can only be altered by a superuser Flags: `--recursive`,`-R`: change the owner recursively for all files and directories under the given path (Default: false) Examples: ```shell $ ./bin/alluxio fs chown alluxio-user /input/file1 ``` Usage: `bin/alluxio fs consistent-hash [--create]|[--compare <1stCheckFilePath> <2ndCheckFilePath>]|[--clean] [flags]` This command is for checking whether the consistent hash ring is changed or not Flags: `--clean`: Delete generated check data (Default: false) `--compare`: Compare check files to see if the hash ring has changed (Default: false) `--create`: Generate check file (Default: false) Usage: `bin/alluxio fs cp [srcPath] [dstPath] [flags]` Copies a file or directory in the Alluxio filesystem or between local and Alluxio filesystems. The file:// scheme indicates a local filesystem path and the alluxio:// scheme or no scheme indicates an Alluxio filesystem path. Flags: `--buffer-size`: Read buffer size when coping to or from local, with defaults of 64MB and 8MB respectively (Default: \"\") `--preserve`,`-p`: Preserve file permission attributes when copying files; all ownership, permissions, and ACLs will be preserved (Default: false) `--recursive`,`-R`: True to copy the directory subtree to the destination directory (Default: false) `--thread`: Number of threads used to copy files in parallel, defaults to 2 * CPU cores (Default: 0) Examples: ```shell $ ./bin/alluxio fs cp /file1 /file2 ``` ```shell $ ./bin/alluxio fs cp file:///file1 /file2 ``` ```shell $ ./bin/alluxio fs cp alluxio:///file1 file:///file2 ``` ```shell $ ./bin/alluxio fs cp -R /dir1 /dir2 ``` Usage: `bin/alluxio fs head [path] [flags]` The head command prints the first 1KB of data of a file to the shell. Specifying the -c flag sets the number of bytes to print. Flags: `--bytes`,`-c`: Byte size to print (Default: \"\") Examples: ```shell $ ./bin/alluxio fs head -c 2048 /output/part-00000 ``` Usage: `bin/alluxio fs location [path]` Displays the list of hosts storing the specified file. Usage: `bin/alluxio fs ls [path] [flags]` The ls command lists all the immediate children in a directory and displays the file size, last modification time, and in memory status of the files. Using ls on a file will only display the information for that specific file. The ls command will also load the metadata for any file or immediate children of a directory from the under storage system to Alluxio namespace if it does not exist in Alluxio. It queries the under storage system for any file or directory matching the given path and creates a mirror of the file in Alluxio backed by that file. Only the metadata, such as the file name and size, are loaded this way and no data transfer occurs. 
Flags: `--help`: help for this command (Default: false) `--human-readable`,`-h`: Print sizes in human readable format (Default: false) `--list-dir-as-file`,`-d`: List directories as files (Default: false) `--load-metadata`,`-f`: Force load metadata for immediate children in a directory (Default: false) `--omit-mount-info`,`-m`: Omit mount point related information such as the UFS path (Default: false) `--pinned-files`,`-p`: Only show pinned files (Default: false) `--recursive`,`-R`: List subdirectories recursively (Default: false) `--reverse`,`-r`: Reverse sorted order (Default: false) `--sort`: Sort entries by column, one of {creationTime|inMemoryPercentage|lastAccessTime|lastModificationTime|name|path|size} (Default: \"\") `--timestamp`: Display specified timestamp of entry, one of {createdTime|lastAccessTime|lastModifiedTime} (Default: \"\") Examples: ```shell $ ./bin/alluxio fs ls /s3/data ``` ```shell $ ./bin/alluxio fs ls -f /s3/data ``` Usage: `bin/alluxio fs mkdir [path1 path2 ...]` The mkdir command creates a new directory in the Alluxio filesystem. It is recursive and will create any parent directories that do not exist. Note that the created directory will not be created in the under storage system until a file in the directory is persisted to the underlying storage. Using mkdir on an invalid or existing path will"
},
{
"data": "Examples: ```shell $ ./bin/alluxio fs mkdir /users $ ./bin/alluxio fs mkdir /users/Alice $ ./bin/alluxio fs mkdir /users/Bob ``` Usage: `bin/alluxio fs mv [srcPath] [dstPath]` The mv command moves a file or directory to another path in Alluxio. The destination path must not exist or be a directory. If it is a directory, the file or directory will be placed as a child of the directory. The command is purely a metadata operation and does not affect the data blocks of the file. Examples: ```shell $ ./bin/alluxio fs mv /data/2014 /data/archives/2014 ``` Usage: `bin/alluxio fs rm [path] [flags]` The rm command removes a file from Alluxio space and the under storage system. The file will be unavailable immediately after this command returns, but the actual data may be deleted a while later. Flags: `--alluxio-only`: True to only remove data and metadata from Alluxio cache (Default: false) `--recursive`,`-R`: True to recursively remove files within the specified directory subtree (Default: false) `--skip-ufs-check`,`-U`: True to skip checking if corresponding UFS contents are in sync (Default: false) Examples: ```shell $ ./bin/alluxio fs rm /tmp/unused-file ``` ```shell $ ./bin/alluxio fs rm --alluxio-only --skip-ufs-check /tmp/unused-file2 ``` Usage: `bin/alluxio fs stat [flags]` The stat command dumps the FileInfo representation of a file or a directory to the shell. Flags: `--file-id`: File id of file (Default: \"\") `--format`,`-f`: Display info in the given format: \"%N\": name of the file \"%z\": size of file in bytes \"%u\": owner \"%g\": group name of owner \"%i\": file id of the file \"%y\": modification time in UTC in 'yyyy-MM-dd HH:mm:ss' format \"%Y\": modification time as Unix timestamp in milliseconds \"%b\": Number of blocks allocated for file (Default: \"\") `--path`: Path to file or directory (Default: \"\") Examples: ```shell $ ./bin/alluxio fs stat /data/2015/logs-1.txt ``` ```shell $ ./bin/alluxio fs stat /data/2015 ``` ```shell $ ./bin/alluxio fs stat -f %z /data/2015/logs-1.txt ``` ```shell $ ./bin/alluxio fs stat -fileId 12345678 ``` Usage: `bin/alluxio fs tail [path] [flags]` The tail command prints the last 1KB of data of a file to the shell. Specifying the -c flag sets the number of bytes to print. Flags: `--bytes`: Byte size to print (Default: \"\") Examples: ```shell $ ./bin/alluxio fs tail -c 2048 /output/part-00000 ``` Usage: `bin/alluxio fs test [path] [flags]` Test a property of a path, returning 0 if the property is true, or 1 otherwise Flags: `--dir`,`-d`: Test if path is a directory (Default: false) `--exists`,`-e`: Test if path exists (Default: false) `--file`,`-f`: Test if path is a file (Default: false) `--not-empty`,`-s`: Test if path is not empty (Default: false) `--zero`,`-z`: Test if path is zero length (Default: false) Usage: `bin/alluxio fs touch [path]` Create a 0 byte file at the specified path, which will also be created in the under file system Generate files used in documentation Usage: `bin/alluxio generate doc-tables` Generate configuration and metric tables used in documentation Usage: `bin/alluxio generate docs` Generate all documentation files Usage: `bin/alluxio generate user-cli [flags]` Generate content for"
},
{
"data": "Flags: `--help`,`-h`: help for user-cli (Default: false) Retrieve and/or display info about the running Alluxio cluster Usage: `bin/alluxio info cache [flags]` Reports worker capacity information Flags: `--live`: Only show live workers for capacity report (Default: false) `--lost`: Only show lost workers for capacity report (Default: false) `--worker`: Only show specified workers for capacity report, labeled by hostname or IP address (Default: []) Usage: `bin/alluxio info collect [command] [flags]` Collects information such as logs, config, metrics, and more from the running Alluxio cluster and bundle into a single tarball [command] must be one of the following values: all: runs all the commands below cluster: runs a set of Alluxio commands to collect information about the Alluxio cluster conf: collects the configuration files under ${ALLUXIO_HOME}/config/ env: runs a set of linux commands to collect information about the cluster jvm: collects jstack from the JVMs log: collects the log files under ${ALLUXIO_HOME}/logs/ metrics: collects Alluxio system metrics WARNING: This command MAY bundle credentials. Inspect the output tarball for any sensitive information and remove it before sharing with others. Flags: `--additional-logs`: Additional file name prefixes from ${ALLUXIO_HOME}/logs to include in the tarball, inclusive of the default log files (Default: []) `--end-time`: Logs that do not contain entries before this time will be ignored, format must be like 2006-01-02T15:04:05 (Default: \"\") `--exclude-logs`: File name prefixes from ${ALLUXIO_HOME}/logs to exclude; this is evaluated after adding files from --additional-logs (Default: []) `--exclude-worker-metrics`: True to skip worker metrics collection (Default: false) `--include-logs`: File name prefixes from ${ALLUXIO_HOME}/logs to include in the tarball, ignoring the default log files; cannot be used with --exclude-logs or --additional-logs (Default: []) `--local`: True to only collect information from the local machine (Default: false) `--max-threads`: Parallelism of the command; use a smaller value to limit network I/O when transferring tarballs (Default: 1) `--output-dir`: (Required) Output directory to write collect info tarball to `--start-time`: Logs that do not contain entries after this time will be ignored, format must be like 2006-01-02T15:04:05 (Default: \"\") Usage: `bin/alluxio info doctor [type]` Runs doctor configuration or storage command Usage: `bin/alluxio info nodes` Show all registered workers' status Usage: `bin/alluxio info report [arg] [flags]` Reports Alluxio running cluster information [arg] can be one of the following values: jobservice: job service metrics information metrics: metrics information summary: cluster summary ufs: under storage system information Defaults to summary if no arg is provided Flags: `--format`: Set output format, any of [json, yaml] (Default: \"\") Usage: `bin/alluxio info version` Print Alluxio version. Initialization operations such as format and validate Usage: `bin/alluxio init clear-os-cache` The clear-os-cache command drops the OS buffer cache Usage: `bin/alluxio init copy-dir [path]` The copy-dir command copies the directory at given path to all master nodes listed in conf/masters and all worker nodes listed in conf/workers. Note: This command does not require the Alluxio cluster to be running. 
Examples: ```shell $ ./bin/alluxio init copy-dir conf/alluxio-site.properties ``` Usage: `bin/alluxio init format [flags]` The format command formats the Alluxio master and all its workers. Running this command on an existing Alluxio cluster deletes everything persisted in Alluxio, including cached data and any metadata information. Data in under storage will not be changed. Warning: Formatting is required when you run Alluxio for the first time. It should only be called while the cluster is not running. Flags: `--localFileSystem`,`-s`: Only format if underfs is local and doesn't already exist (Default: false) `--skip-master`: Skip formatting journal on all masters (Default: false) `--skip-worker`: Skip formatting cache on all workers (Default: false) Usage: `bin/alluxio init validate [flags]` Validate Alluxio configuration or environment Flags: `--type`: Decide the type to validate. Valid inputs: [conf, env] (Default: \"\") Examples: ```shell $ ./bin/alluxio init validate --type conf ``` ```shell $ ./bin/alluxio init validate --type env ``` Command line tool for interacting with the job"
},
{
"data": "Usage: `bin/alluxio job load [flags]` The load command moves data from the under storage system into Alluxio storage. For example, load can be used to prefetch data for analytics jobs. If load is run on a directory, files in the directory will be recursively loaded. Flags: `--bandwidth`: [submit] Single worker read bandwidth limit (Default: \"\") `--format`: [progress] Format of output, either TEXT or JSON (Default: \"\") `--metadata-only`: [submit] Only load file metadata (Default: false) `--partial-listing`: [submit] Use partial directory listing, initializing load before reading the entire directory but cannot report on certain progress details (Default: false) `--path`: (Required) [all] Source path of load operation `--progress`: View progress of submitted job (Default: false) `--skip-if-exists`: [submit] Skip existing fullly cached files (Default: false) `--stop`: Stop running job (Default: false) `--submit`: Submit job (Default: false) `--verbose`: [progress] Verbose output (Default: false) `--verify`: [submit] Run verification when load finishes and load new files if any (Default: false) Examples: ```shell $ ./bin/alluxio job load --path /path --submit ``` ```shell $ ./bin/alluxio job load --path /path --progress Progress for loading path '/path': Settings: bandwidth: unlimited verify: false Job State: SUCCEEDED Files Processed: 1000 Bytes Loaded: 125.00MB Throughput: 2509.80KB/s Block load failure rate: 0.00% Files Failed: 0 ``` ```shell $ ./bin/alluxio job load --path /path --stop ``` Journal related operations Usage: `bin/alluxio journal checkpoint` The checkpoint command creates a checkpoint the leading Alluxio master's journal. This command is mainly used for debugging and to avoid master journal logs from growing unbounded. Checkpointing requires a pause in master metadata changes, so use this command sparingly to avoid interfering with other users of the system. Usage: `bin/alluxio journal format` The format command formats the local Alluxio master's journal. Warning: Formatting should only be called while the cluster is not running. Usage: `bin/alluxio journal read [flags]` The read command parses the current journal and outputs a human readable version to the local folder. This command may take a while depending on the size of the journal. Note: This command requies that the Alluxio cluster is NOT running. Flags: `--end`: end log sequence number (exclusive) (Default: -1) `--input-dir`: input directory on-disk to read the journal content from (Default: \"\") `--master`: name of the master class (Default: \"\") `--output-dir`: output directory to write journal content to (Default: \"\") `--start`: start log sequence number (inclusive) (Default: 0) Examples: ```shell $ ./bin/alluxio readJournal Dumping journal of type EMBEDDED to /Users/alluxio/journal_dump-1602698211916 2020-10-14 10:56:51,960 INFO RaftStorageDirectory - Lock on /Users/alluxio/alluxio/journal/raft/02511d47-d67c-49a3-9011-abb3109a44c1/in_use.lock acquired by nodename 78602@alluxio-user 2020-10-14 10:56:52,254 INFO RaftJournalDumper - Read 223 entries from log /Users/alluxio/alluxio/journal/raft/02511d47-d67c-49a3-9011-abb3109a44c1/current/log_0-222. ``` Start or stop cluster processes Usage: `bin/alluxio process start [flags]` Starts a single process locally or a group of similar processes across the cluster. For starting a group, it is assumed the local host has passwordless SSH access to other nodes in the cluster. 
The command will parse the hostnames to run on by reading the conf/masters and conf/workers files, depending on the process type. Flags: `--async`,`-a`: Asynchronously start processes without monitoring for start completion (Default: false) `--skip-kill-prev`,`-N`: Avoid killing previous running processes when starting (Default: false) Usage: `bin/alluxio process stop [flags]` Stops a single process locally or a group of similar processes across the cluster. For stopping a group, it is assumed the local host has passwordless SSH access to other nodes in the cluster. The command will parse the hostnames to run on by reading the conf/masters and conf/workers files, depending on the process type. Flags: `--soft`,`-s`: Soft kill only, don't forcibly kill the process (Default: false)"
}
] |
{
"category": "Runtime",
"file_name": "User-CLI.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In this article we will show you two serverless functions in Rust and WasmEdge deployed on Netlify. One is the image processing function, the other one is the TensorFlow inference function. For more insights on why WasmEdge on Netlify, please refer to the article . Since our demo WebAssembly functions are written in Rust, you will need a . Make sure that you install the `wasm32-wasi` compiler target as follows, in order to generate WebAssembly bytecode. ```bash rustup target add wasm32-wasi ``` The demo application front end is written in , and deployed on Netlify. We will assume that you already have the basic knowledge of how to work with Next.js and Netlify. Our first demo application allows users to upload an image and then invoke a serverless function to turn it into black and white. A deployed on Netlify is available. Fork the to get started. To deploy the application on Netlify, just . This repo is a standard Next.js application for the Netlify platform. The backend serverless function is in the folder. The file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the black-white image to the `STDOUT`. ```rust use hex; use std::io::{self, Read}; use image::{ImageOutputFormat, ImageFormat}; fn main() { let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); let imageformatdetected: ImageFormat = image::guess_format(&buf).unwrap(); let img = image::loadfrommemory(&buf).unwrap(); let filtered = img.grayscale(); let mut buf = vec![]; match imageformatdetected { ImageFormat::Gif => { filtered.write_to(&mut buf, ImageOutputFormat::Gif).unwrap(); }, _ => { filtered.write_to(&mut buf, ImageOutputFormat::Png).unwrap(); }, }; io::stdout().write_all(&buf).unwrap(); io::stdout().flush().unwrap(); } ``` You can use Rusts `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-grayscale/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/grayscale.wasm ../../ ``` The Netlify function runs upon setting up the serverless environment. It installs the WasmEdge runtime, and then compiles each WebAssembly bytecode program into a native `so` library for faster execution. The script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice runs the compiled `grayscale.so` file generated by for better performance. ```javascript const fs = require('fs'); const { spawn } = require('child_process'); const path = require('path'); module.exports = (req, res) => { const wasmedge = spawn( path.join(dirname, 'wasmedge'), [path.join(dirname, 'grayscale.so')]); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { let buf = Buffer.concat(d); res.setHeader('Content-Type', req.headers['image-type']); res.send(buf); }); wasmedge.stdin.write(req.body); wasmedge.stdin.end(''); } ``` That's"
},
{
"data": "and you now have a Netlify Jamstack app with a high-performance Rust and WebAssembly based serverless backend. The application allows users to upload an image and then invoke a serverless function to classify the main subject on the image. It is in as the previous example but in the `tensorflow` branch. The backend serverless function for image classification is in the folder in the `tensorflow` branch. The file contains the Rust programs source code. The Rust program reads image data from the `STDIN`, and then outputs the text output to the `STDOUT`. It utilizes the WasmEdge Tensorflow API to run the AI inference. ```rust pub fn main() { // Step 1: Load the TFLite model let modeldata: &[u8] = includebytes!(\"models/mobilenetv11.0224/mobilenetv11.0224_quant.tflite\"); let labels = includestr!(\"models/mobilenetv11.0224/labelsmobilenetquantv1224.txt\"); // Step 2: Read image from STDIN let mut buf = Vec::new(); io::stdin().readtoend(&mut buf).unwrap(); // Step 3: Resize the input image for the tensorflow model let flatimg = wasmedgetensorflowinterface::loadjpgimageto_rgb8(&buf, 224, 224); // Step 4: AI inference let mut session = wasmedgetensorflowinterface::Session::new(&modeldata, wasmedgetensorflow_interface::ModelType::TensorFlowLite); session.addinput(\"input\", &flatimg, &[1, 224, 224, 3]) .run(); let resvec: Vec<u8> = session.getoutput(\"MobilenetV1/Predictions/Reshape_1\"); // Step 5: Find the food label that responds to the highest probability in res_vec // ... ... let mut label_lines = labels.lines(); for i in 0..maxindex { label_lines.next(); } // Step 6: Generate the output text let classname = labellines.next().unwrap().to_string(); if max_value > 50 { println!(\"It {} a <a href='https://www.google.com/search?q={}'>{}</a> in the picture\", confidence.tostring(), classname, class_name); } else { println!(\"It does not appears to be any food item in the picture.\"); } } ``` You can use the `cargo` tool to build the Rust program into WebAssembly bytecode or native code. ```bash cd api/functions/image-classification/ cargo build --release --target wasm32-wasi ``` Copy the build artifacts to the `api` folder. ```bash cp target/wasm32-wasi/release/classify.wasm ../../ ``` Again, the script installs WasmEdge runtime and its Tensorflow dependencies in this application. It also compiles the `classify.wasm` bytecode program to the `classify.so` native shared library at the time of deployment. The script loads the WasmEdge runtime, starts the compiled WebAssembly program in WasmEdge, and passes the uploaded image data via `STDIN`. Notice runs the compiled `classify.so` file generated by for better performance. ```javascript const fs = require('fs'); const { spawn } = require('child_process'); const path = require('path'); module.exports = (req, res) => { const wasmedge = spawn( path.join(dirname, 'wasmedge-tensorflow-lite'), [path.join(dirname, 'classify.so')], {env: {'LDLIBRARYPATH': dirname}} ); let d = []; wasmedge.stdout.on('data', (data) => { d.push(data); }); wasmedge.on('close', (code) => { res.setHeader('Content-Type', `text/plain`); res.send(d.join('')); }); wasmedge.stdin.write(req.body); wasmedge.stdin.end(''); } ``` You can now and have a web app for subject classification. Next, it's your turn to develop Rust serverless functions in Netlify using the as a template. Looking forward to your great work."
}
] |
{
"category": "Runtime",
"file_name": "netlify.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Firmware | Pointer to string | | [optional] Kernel | Pointer to string | | [optional] Cmdline | Pointer to string | | [optional] Initramfs | Pointer to string | | [optional] `func NewPayloadConfig() *PayloadConfig` NewPayloadConfig instantiates a new PayloadConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewPayloadConfigWithDefaults() *PayloadConfig` NewPayloadConfigWithDefaults instantiates a new PayloadConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *PayloadConfig) GetFirmware() string` GetFirmware returns the Firmware field if non-nil, zero value otherwise. `func (o PayloadConfig) GetFirmwareOk() (string, bool)` GetFirmwareOk returns a tuple with the Firmware field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PayloadConfig) SetFirmware(v string)` SetFirmware sets Firmware field to given value. `func (o *PayloadConfig) HasFirmware() bool` HasFirmware returns a boolean if a field has been set. `func (o *PayloadConfig) GetKernel() string` GetKernel returns the Kernel field if non-nil, zero value otherwise. `func (o PayloadConfig) GetKernelOk() (string, bool)` GetKernelOk returns a tuple with the Kernel field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PayloadConfig) SetKernel(v string)` SetKernel sets Kernel field to given value. `func (o *PayloadConfig) HasKernel() bool` HasKernel returns a boolean if a field has been set. `func (o *PayloadConfig) GetCmdline() string` GetCmdline returns the Cmdline field if non-nil, zero value otherwise. `func (o PayloadConfig) GetCmdlineOk() (string, bool)` GetCmdlineOk returns a tuple with the Cmdline field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PayloadConfig) SetCmdline(v string)` SetCmdline sets Cmdline field to given value. `func (o *PayloadConfig) HasCmdline() bool` HasCmdline returns a boolean if a field has been set. `func (o *PayloadConfig) GetInitramfs() string` GetInitramfs returns the Initramfs field if non-nil, zero value otherwise. `func (o PayloadConfig) GetInitramfsOk() (string, bool)` GetInitramfsOk returns a tuple with the Initramfs field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PayloadConfig) SetInitramfs(v string)` SetInitramfs sets Initramfs field to given value. `func (o *PayloadConfig) HasInitramfs() bool` HasInitramfs returns a boolean if a field has been set."
}
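The reference above only lists the generated accessors, so a minimal Go usage sketch may help; it is not taken from the Kata Containers sources. Only `NewPayloadConfigWithDefaults`, the `Set*`/`Get*` methods and `HasFirmware` come from the documentation, while the import path, the kernel path and the command line are placeholder assumptions.

```go
package main

import (
	"fmt"

	// Placeholder import path for the generated client package; the real
	// module path is not part of this reference.
	openapi "example.com/generated/openapi"
)

func main() {
	// Start from defaults, then set only the payload fields we need.
	payload := openapi.NewPayloadConfigWithDefaults()
	payload.SetKernel("/path/to/vmlinux")                // placeholder path
	payload.SetCmdline("console=hvc0 root=/dev/vda1 rw") // placeholder command line

	// Has* helpers report whether an optional field has been set.
	if !payload.HasFirmware() {
		fmt.Println("no firmware configured; booting the kernel payload directly")
	}

	// Getters return the zero value when a field is unset.
	fmt.Println("kernel:", payload.GetKernel())
}
```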
] |
{
"category": "Runtime",
"file_name": "PayloadConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Set up Velero on your platform\" layout: docs You can run Velero with a cloud provider or on-premises. For detailed information about the platforms that Velero supports, see . You can run Velero in any namespace, which requires additional customization. See . You can also use Velero's integration with restic, which requires additional setup. See . The Velero client includes an `install` command to specify the settings for each supported cloud provider. You can install Velero for the included cloud providers using the following command: ```bash velero install \\ --provider <YOUR_PROVIDER> \\ --bucket <YOUR_BUCKET> \\ [--secret-file <PATHTOFILE>] \\ [--no-secret] \\ [--backup-location-config] \\ [--snapshot-location-config] \\ [--namespace] \\ [--use-volume-snapshots] \\ [--use-restic] \\ [--pod-annotations] \\ ``` When using node-based IAM policies, `--secret-file` is not required, but `--no-secret` is required for confirmation. For provider-specific instructions, see: When using restic on a storage provider that doesn't currently have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation. To see the YAML applied by the `velero install` command, use the `--dry-run -o yaml` arguments. For more complex installation needs, use either the generated YAML, or the Helm chart. You can run Velero in an on-premises cluster in different ways depending on your requirements. First, you must select an object storage backend that Velero can use to store backup data. contains information on various options that are supported or have been reported to work by users. is an option if you want to keep your backup data on-premises and you are not using another storage platform that offers an S3-compatible object storage API. Second, if you need to back up persistent volume data, you must select a volume backup solution. contains information on the supported options. For example, if you use for persistent storage, you can install their Velero plugin to get native Portworx snapshots as part of your Velero backups. If there is no native snapshot plugin available for your storage platform, you can use Velero's , which provides a platform-agnostic backup solution for volume data. Whether you run Velero on a cloud provider or on-premises, if you have more than one volume snapshot location for a given volume provider, you can specify its default location for backups by setting a server flag in your Velero deployment"
},
{
"data": "For details, see the documentation topics for individual cloud providers. By default, the Velero deployment requests 500m CPU, 128Mi memory and sets a limit of 1000m CPU, 256Mi. Default requests and limits are not set for the restic pods as CPU/Memory usage can depend heavily on the size of volumes being backed up. If you need to customize these resource requests and limits, you can set the following flags in your `velero install` command: ``` velero install \\ --provider <YOUR_PROVIDER> \\ --bucket <YOUR_BUCKET> \\ --secret-file <PATHTOFILE> \\ --velero-pod-cpu-request <CPU_REQUEST> \\ --velero-pod-mem-request <MEMORY_REQUEST> \\ --velero-pod-cpu-limit <CPU_LIMIT> \\ --velero-pod-mem-limit <MEMORY_LIMIT> \\ [--use-restic] \\ [--restic-pod-cpu-request <CPU_REQUEST>] \\ [--restic-pod-mem-request <MEMORY_REQUEST>] \\ [--restic-pod-cpu-limit <CPU_LIMIT>] \\ [--restic-pod-mem-limit <MEMORY_LIMIT>] ``` Values for these flags follow the same format as . If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`: ```bash kubectl delete namespace/velero clusterrolebinding/velero kubectl delete crds -l component=velero ``` When installing using the Helm chart, the provider's credential information will need to be appended into your values. The easiest way to do this is with the `--set-file` argument, available in Helm 2.10 and higher. ```bash helm install --set-file credentials.secretContents.cloud=./credentials-velero stable/velero ``` See your cloud provider's documentation for the contents and creation of the `credentials-velero` file. After you set up the Velero server, try these examples: Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Create a backup: ```bash velero backup create nginx-backup --include-namespaces nginx-example ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example ``` Wait for the namespace to be deleted. Restore your lost resources: ```bash velero restore create --from-backup nginx-backup ``` NOTE: For Azure, you must run Kubernetes version 1.7.2 or later to support PV snapshotting of managed disks. Start the sample nginx app: ```bash kubectl apply -f examples/nginx-app/with-pv.yaml ``` Create a backup with PV snapshotting: ```bash velero backup create nginx-backup --include-namespaces nginx-example ``` Simulate a disaster: ```bash kubectl delete namespaces nginx-example ``` Because the default for dynamically-provisioned PVs is \"Delete\", these commands should trigger your cloud provider to delete the disk that backs the PV. Deletion is asynchronous, so this may take some time. Before continuing to the next step, check your cloud provider to confirm that the disk no longer exists. Restore your lost resources: ```bash velero restore create --from-backup nginx-backup ```"
}
] |
{
"category": "Runtime",
"file_name": "install-overview.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "nictagadm add [-v] [-l] [-p prop=value,...] <name> [mac] nictagadm delete [-v] [-f] <name> nictagadm exists [-lv] <name> [name1]... nictagadm list [-v] [-l | -L] [-p] [-d delim] nictagadm update [-v] [-p prop=value,...] <name> [mac] nictagadm vms [-v] <name> The nictagadm tool allows you to add, update, delete and display information about SmartOS nic tags. Both standard nic tags and local-only etherstubs can be managed. Nic tags are used in SmartOS to refer to a physical nic without needing its underlying MAC address or interface name. Both vmadm(8) and the SmartOS config file use them as identifiers. In addition, the nic tag is used to describe the maximum mtu of the network. When the system is started, the physical device will be programmed with the MTU that is the maximum of all of the specified tags. The MTU is not updated live. For nodes with /usbkey present, nictagadm will update /usbkey/config as appropriate, and attempt to mount the original USB key and update its copy as well. This allows the nic tags to persist across reboots. For nodes without /usbkey present, nic tags will not persist across reboots unless the nic tag parameters are passed in as boot parameters at the next reboot. The following options are valid for all commands. -v Output verbose diagnostic information. -h Print help and exit. -? Print help and exit. The following subcommands and options are supported: add [-l] [-p prop=value,...] <name> [mac] Create a new nic tag on the system, named name. If the '-l' option is specified, the nic tag will cause an etherstub to be created which is a virtual switch on the local host. When creating an etherstub, a mac address is not necessary. When creating a nic tag otherwise, the mac address is necessary. The mac address may either be specified in the property list or as an optional final"
},
{
"data": "For a full list of valid properties, see the section PROPERTIES. -l Create an etherstub. -p prop=value,... A comma-separate list of properties to set to the specified values. delete [-f] <name> Deletes an existing tag on the system, unless it's in use by any VMs. The use of -f skips this check. -f Delete the nic tag regardless of existing VMs. exists [-l] <name> [name1]... Tests to see if a nic tag exists with name. If it exists, the program exits 0, otherwise it exists non-zero. -l Only emit the names of nic tags that don't exist to stderr. help Print help and exit. list [-l | -L] [-p] [-d delim] List nic tags on the system. -l Only list etherstubs. -L Don't list etherstubs. -p Output in a parseable form. -d delim Sets the output delimeter to delim. The default delimiter is ':'. update [-p prop=value,...] <name> [mac] Updates the properties of a nic tag. For a full list of properties see the section PROPERTIES. For backwards compatibility, the mac address may be specified as an optional final argument. If used, it should be specified via -p. -p prop=value,... A comma-separate list of properties to set to the specified values. vms <tag name> Lists UUIDs of VMs using a nic tag. The following properties are accepted for use with the nictagadm -p options: mac Indicates the MAC address of the physical device that this nic tag should be created over. mtu Indicates the maximum transmission unit (mtu) to be associated with this nic tag. The corresponding physical network interface will have its MTU set to at least this value, the actual value will be the maximum of all of the associated nic tags. The valid range for the MTU of a nic tag is from 1500 to 9000 bytes. The following exit values are returned: 0 Successful completion. 1 An error occurred. 2 Invalid usage. dladm(8), sysinfo(8), vmadm(8)"
}
] |
{
"category": "Runtime",
"file_name": "nictagadm.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(migrate-from-lxc)= Incus provides a tool (`lxc-to-incus`) that you can use to import LXC containers into your Incus server. The LXC containers must exist on the same machine as the Incus server. The tool analyzes the LXC containers and migrates both their data and their configuration into new Incus containers. ```{note} Alternatively, you can use the `incus-migrate` tool within a LXC container to migrate it to Incus (see {ref}`import-machines-to-instances`). However, this tool does not migrate any of the LXC container configuration. ``` If the tool isn't provided alongside your Incus installation, you can build it yourself. Make sure that you have `go` ({ref}`requirements-go`) installed and get the tool with the following command: go install github.com/lxc/incus/cmd/lxc-to-incus@latest You can migrate one container at a time or all of your LXC containers at the same time. ```{note} Migrated containers use the same name as the original containers. You cannot migrate containers with a name that already exists as an instance name in Incus. Therefore, rename any LXC containers that might cause name conflicts before you start the migration process. ``` Before you start the migration process, stop the LXC containers that you want to migrate. Run `sudo lxc-to-incus [flags]` to migrate the containers. For example, to migrate all containers: sudo lxc-to-incus --all To migrate only the `lxc1` container: sudo lxc-to-incus --containers lxc1 To migrate two containers (`lxc1` and `lxc2`) and use the `my-storage` storage pool in Incus: sudo lxc-to-incus --containers lxc1,lxc2 --storage my-storage To test the migration of all containers without actually running it: sudo lxc-to-incus --all --dry-run To migrate all containers but limit the `rsync` bandwidth to 5000 KB/s: sudo lxc-to-incus --all --rsync-args --bwlimit=5000 Run `sudo lxc-to-incus --help` to check all available flags. ```{note} If you get an error that the `linux64` architecture isn't supported, either update the tool to the latest version or change the architecture in the LXC container configuration from `linux64` to either `amd64` or `x86_64`. ``` The tool analyzes the LXC configuration and the configuration of the container (or containers) and migrates as much of the configuration as possible. You will see output similar to the following: ```{terminal} :input: sudo lxc-to-incus --containers lxc1 Parsing LXC configuration Checking for unsupported LXC configuration keys Checking for existing containers Checking whether container has already been migrated Validating whether incomplete AppArmor support is enabled Validating whether mounting a minimal /dev is enabled Validating container rootfs Processing network configuration Processing storage configuration Processing environment configuration Processing container boot configuration Processing container apparmor configuration Processing container seccomp configuration Processing container SELinux configuration Processing container capabilities configuration Processing container architecture configuration Creating container Transferring container: lxc1: ... Container 'lxc1' successfully created ``` After the migration process is complete, you can check and, if necessary, update the configuration in Incus before you start the migrated Incus container."
}
] |
{
"category": "Runtime",
"file_name": "migrate_from_lxc.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Erasure coding implementation ============================= This document provides information about how has been implemented into ec translator. It describes the algorithm used and the optimizations made, but it doesn't contain a full description of the mathematical background needed to understand erasure coding in general. It either describes the other parts of ec not directly related to the encoding/decoding procedure, like synchronization or fop management. Introduction EC is based on erasure code. It's a very old code. It's not considered the best one nowadays, but is good enough and it's one of the few codes that is not covered by any patent and can be freely used. To define the Reed-Solomon code we use 3 parameters: Key fragments (K) It represents the minimum number of healthy fragments that will be needed to be able to recover the original data. Any subset of K out of the total number of fragments will serve. Redundancy fragments (R) It represents the number of extra fragments to compute for each original data block. This value determines how many fragments can be lost before being unable to recover the original data. Fragment size (S) This determines the size of each fragment. The original data block size is computed as S * K. Currently this values is fixed to 512 bytes. Total number of fragments (N = K + R) This isn't a real parameter but it will be useful to simplify the following descriptions. From the point of view of the implementation, it only consists on matrix multiplications. There are two kinds of matrices to use for Reed-Solomon: This kind of matrix has the particularity that K of the encoded fragments are simply a copy of the original data, divided into K pieces. Thus no real encoding needs to be done for them and only the R redundancy fragments need to be computed. This kind of matrices contain one KxK submatrix that is the . Non-systematic This kind of matrix doesn't contain an identity submatrix. This means that all of the N fragments need to be encoded, requiring more computation. On the other hand, these matrices have some nice properties that allow faster implementations of some algorithms, like the matrix inversion used to decode the data. Another advantage of non-systematic matrices is that the decoding time is constant, independently of how many fragments are lost, while systematic approach can suffer from performance degradation when one fragment is lost. All non-systematic matrices can be converted to systematic ones, but then we lose the good properties of the non-systematic. We have to choose betwee best peek performance (systematic) and performance stability (non-systematic). Encoding procedure To encode a block of data we need a KxN matrix where each subset of K rows is . In other words, the determinant of each KxK submatrix is not 0. There are some known ways to obtain this kind of matrices. EC uses a small variation of a matrix known as where each element of the matrix is defined as: a(i, j) = i ^ (K - j) where i is the row from 1 to N, and j is the column from 1 to K. This is exactly the Vandermonde Matrix but with the elements of each row in reverse order. This change is made to be able to implement a small optimization in the matrix"
},
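To make the matrix definition above concrete, here is a short Go sketch (illustrative only, not code from the translator) that builds the coefficient matrix from a(i, j) = i ^ (K - j) using plain integers; the actual implementation evaluates the same formula with 8-bit Galois Field elements. With K = 5 and N = 7 it prints the seven rows used in the worked example that follows.

```go
package main

import "fmt"

// buildMatrix returns the N x K coding matrix whose element in row i,
// column j (both 1-based) is i^(K-j): a Vandermonde row in reverse order.
func buildMatrix(k, n int) [][]int {
	m := make([][]int, n)
	for i := 1; i <= n; i++ {
		row := make([]int, k)
		for j := 1; j <= k; j++ {
			p := 1
			for e := 0; e < k-j; e++ { // integer power i^(k-j)
				p *= i
			}
			row[j-1] = p
		}
		m[i-1] = row
	}
	return m
}

func main() {
	// K = 5 data fragments plus R = 2 redundancy fragments gives N = 7 rows.
	for _, row := range buildMatrix(5, 7) {
		fmt.Println(row)
	}
}
```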
{
"data": "Once we have the matrix, we only need to compute the multiplication of this matrix by a vector composed of K elements of data coming from the original data block. / \\ / \\ | 1 1 1 1 1 | / \\ | a + b + c + d + e = t | | 16 8 4 2 1 | | a | | 16a + 8b + 4c + 2d + e = u | | 81 27 9 3 1 | | b | = | 81a + 27b + 9c + 3d + e = v | | 256 64 16 4 1 | * | c | | 256a + 64b + 16c + 4d + e = w | | 625 125 25 5 1 | | d | | 625a + 125b + 25c + 5d + e = x | | 1296 216 36 6 1 | | e | | 1296a + 216b + 36c + 6d + e = y | | 2401 343 49 7 1 | \\ / | 2401a + 343b + 49c + 7d + e = z | \\ / \\ / The optimization that can be done here is this: 16a + 8b + 4c + 2d + e = 2(2(2(2a + b) + c) + d) + e So all the multiplications are always by the number of the row (2 in this case) and we don't need temporal storage for intermediate results: a *= 2 a += b a *= 2 a += c a *= 2 a += d a *= 2 a += e Once we have the result vector, each element is a fragment that needs to be stored in a separate place. Decoding procedure To recover the data we need exactly K of the fragments. We need to know which K fragments we have (i.e. the original row number from which each fragment was calculated). Once we have this data we build a square KxK matrix composed by the rows corresponding to the given fragments and invert it. With the inverted matrix, we can recover the original data by multiplying it with the vector composed by the K"
},
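The nested evaluation described above ("multiply by the row number, add the next element") is Horner's scheme applied to one row of the matrix. Below is a small Go sketch with ordinary integer arithmetic, purely for illustration; the translator performs the same loop with Galois Field multiplication and xor instead of addition.

```go
package main

import "fmt"

// encodeRow evaluates one matrix row against the data vector without any
// temporary storage: acc = ((((a*i)+b)*i+c)*i+d)*i+e for row number i.
func encodeRow(i int, data []int) int {
	acc := 0
	for _, d := range data {
		acc = acc*i + d
	}
	return acc
}

func main() {
	// One block split into K = 5 elements (a..e); the values are arbitrary.
	data := []int{3, 1, 4, 1, 5}
	for i := 1; i <= 7; i++ {
		// Row 2 yields 16a + 8b + 4c + 2d + e, row 3 yields 81a + 27b + ...
		fmt.Printf("fragment %d = %d\n", i, encodeRow(i, data))
	}
}
```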
{
"data": "In our previous example, if we consider that we have recovered fragments t, u, v, x and z, corresponding to rows 1, 2, 3, 5 and 7, we can build the following matrix: / \\ | 1 1 1 1 1 | | 16 8 4 2 1 | | 81 27 9 3 1 | | 625 125 25 5 1 | | 2401 343 49 7 1 | \\ / And invert it: / \\ | 1/48 -1/15 1/16 -1/48 1/240 | | -17/48 16/15 -15/16 13/48 -11/240 | | 101/48 -86/15 73/16 -53/48 41/240 | | -247/48 176/15 -129/16 83/48 -61/240 | | 35/8 -7 35/8 -7/8 1/8 | \\ / Multiplying it by the vector (t, u, v, x, z) we recover the original data (a, b, c, d, e): / \\ / \\ / \\ | 1/48 -1/15 1/16 -1/48 1/240 | | t | | a | | -17/48 16/15 -15/16 13/48 -11/240 | | u | | b | | 101/48 -86/15 73/16 -53/48 41/240 | * | v | = | c | | -247/48 176/15 -129/16 83/48 -61/240 | | x | | d | | 35/8 -7 35/8 -7/8 1/8 | | z | | e | \\ / \\ / \\ / Galois Field This encoding/decoding procedure is quite complex to compute using regular mathematical operations and it's not well suited for what we want to do (note that matrix elements can grow unboundly). To solve this problem, exactly the same procedure is done inside a of characteristic 2, which is a finite field with some interesting properties that make it specially useful for fast operations using computers. There are two main differences when we use this specific Galois Field: All regular additions are replaced by bitwise xor's For todays computers it's not really faster to execute an xor compared to an addition, however replacing additions by xor's inside a multiplication has many advantages (we will make use of this to optimize the multiplication). Another consequence of this change is that additions and substractions are really the same xor operation. The elements of the matrix are bounded The field uses a modulus that keep all possible elements inside a delimited region, avoiding really big numbers and fixing the number of bits needed to represent each value. In the current implementation EC uses 8 bits per field element. It's very important to understand how multiplications are computed inside a Galois Field to be able to understand how has it been optimized. We'll start with a simple 'old school' multiplication but in base 2. For example, if we want to multiply 7 5 (111b 101b in binary), we do the following: 1 1 1 (= 7) 1 0 1 (= 5) -- 1 1 1 (= 7) 0 0 0 (= 0) 1 1 1 (= 7) -- 1 0 0 0 1 1 (= 35) This is quite simple. Note that the addition of the third column generates a carry that is propagated to all the other left columns. The next step is to define the modulus of the field. Suppose we use 11 as the modulus. Then we convert the result into an element of the field by dividing by the modulus and taking the residue. We also use the 'old school' method in binary: 1 0 0 0 1 1 (= 35) | 1 0 1 1 (= 11) 0 0 0 0 - 0 1 1 (= 3) 1 0 0 0 1 1 0 1 1 -- 0 0 1 1 0 1 1 0 1 1 0 0 1 0 (= 2) So, 7 * 5 in a field with modulus 11 is 2. Note that the main objective in each iteration of the division is to make higher bits equal to 0 when possible (if it's not possible in one iteration, it will be zeroed on the next). If we do the same but changing additions with xors we get this: 1 1 1 (= 7) 1 0 1 (= 5) -- 1 1 1 (= 7) x 0 0 0 (= 0) x 1 1 1 (= 7) -- 1 1 0 1 1 (= 27) In this case, the xor of the third column doesn't generate any carry. Now we need to divide by the"
},
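The "multiplication with xors" step shown above fits in a few lines of Go. This sketch only reproduces the carry-less product (7 times 5 gives 11011b = 27); it is an illustration, not code taken from EC.

```go
package main

import "fmt"

// clmul multiplies a and b as binary polynomials: the shifted partial
// products are combined with xor, so no carries are ever propagated.
func clmul(a, b uint) uint {
	var prod uint
	for i := 0; b>>i != 0; i++ {
		if b>>i&1 == 1 {
			prod ^= a << i
		}
	}
	return prod
}

func main() {
	p := clmul(7, 5)
	fmt.Printf("%b = %d\n", p, p) // 11011 = 27, as in the worked example
}
```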
{
"data": "We can also use 11 as the modulus since it still satisfies the needed conditions to work on a Galois Field of characteristic 2 with 3 bits: 1 1 0 1 1 (= 27) | 1 0 1 1 (= 11) x 1 0 1 1 - 1 1 1 (= 7) 0 1 1 0 1 x 1 0 1 1 -- 0 1 1 0 1 x 1 0 1 1 0 1 1 0 (= 6) Note that, in this case, to make zero the higher bit we need to consider the result of the xor operation, not the addition operation. So, 7 * 5 in a Galois Field of 3 bits with modulus 11 is 6. Optimization To compute all these operations in a fast way some methods have been traditionally used. Maybe the most common is the . The problem with this method is that it requires 3 lookups for each byte multiplication, greatly amplifying the needed memory bandwidth and making it difficult to take advantage of any SIMD support on the processor. What EC does to improve the performance is based on the following property (using the 3 bits Galois Field of the last example): A B mod N = (A b{2} * 4 mod N) x (A b{1} 2 mod N) x (A * b{0} mod N) This is basically a rewrite of the steps made in the previous example to multiply two numbers but moving the modulus calculation inside each intermediate result. What we can see here is that each term of the xor can be zeroed if the corresponding bit of B is 0, so we can ignore that factor. If the bit is 1, we need to compute A multiplied by a power of two and take the residue of the division by the modulus. We can precompute these values: A0 = A (we don't need to compute the modulus here) A1 = A0 * 2 mod N A2 = A1 * 2 mod N Having these values we only need to add those corresponding to bits set to 1 in B. Using our previous example: A = 1 1 1 (= 7) B = 1 0 1 (= 5) A0 = 1 1 1 (= 7) A1 = 1 1 1 * 1 0 mod 1 0 1 1 = 1 0 1 (= 5) A2 = 1 0 1 * 1 0 mod 1 0 1 1 = 0 0 1 (= 1) Since only bits 0 and 2 are 1 in B, we add A0 and A2: A0 + A2 = 1 1 1 x 0 0 1 = 1 1 0 (= 6) If we carefully look at what we are doing when computing each Ax, we see that we do two basic things: Shift the original value one bit to the left If the highest bit is 1, xor with the modulus Let's write this in a detailed way (representing each bit): Original value: a{2} a{1} a{0} Shift 1 bit: a{2} a{1} a{0} 0 If a{2} is 0 we already have the result: a{1} a{0} 0 If a{2} is 1 we need to xor with the modulus: 1 a{1} a{0} 0 x 1 0 1 1 = a{1} (a{0} x 1) 1 An important thing to see here is that if a{2} is 0, we can get the same result by xoring with all 0 instead of the"
},
{
"data": "For this reason we can rewrite the modulus as this: Modulus: a{2} 0 a{2} a{2} This means that the modulus will be 0 0 0 0 is a{2} is 0, so the value won't change, and it will be 1 0 1 1 if a{2} is 1, giving the correct result. So, the computation is simply: Original value: a{2} a{1} a{0} Shift 1 bit: a{2} a{1} a{0} 0 Apply modulus: a{1} (a{0} x a{2}) a{2} We can compute all Ax using this method. We'll get this: A0 = a{2} a{1} a{0} A1 = a{1} (a{0} x a{2}) a{2} A2 = (a{0} x a{2}) (a{1} x a{2}) a{1} Once we have all terms, we xor the ones corresponding to the bits set to 1 in B. In out example this will be A0 and A2: Result: (a{2} x a{0} x a{2}) (a{1} x a{1} x a{2}) (a{0} x a{1}) We can easily see that we can remove some redundant factors: Result: a{0} a{2} (a{0} x a{1}) This way we have come up with a simply set of equations to compute the multiplication of any number by 5. If A is 1 1 1 (= 7), the result must be 1 1 0 (= 6) using the equations, as we expected. If we try another numbe for A, like 0 1 0 (= 2), the result must be 0 0 1 (= 1). This seems a really fast way to compute the multiplication without using any table lookup. The problem is that this is only valid for B = 5. For other values of B another set of equations will be found. To solve this problem we can pregenerate the equations for all possible values of B. Since the Galois Field we use is small, this is feasible. One thing to be aware of is that, in general, two equations for different bits of the same B can share common subexpressions. This gives space for further optimizations to reduce the total number of xors used in the final equations for a given B. However this is not easy to find, since finding the smallest number of xors that give the correct result is an NP-Problem. For EC an exhaustive search has been made to find the best combinations for each possible value. Implementation -- All this seems great from the hardware point of view, but implementing this using normal processor instructions is not so easy because we would need a lot of shifts, ands, xors and ors to move the bits of each number to the correct position to compute the equation and then another shift to put each bit back to its final place. For example, to implement the functions to multiply by 5, we would need something like this: Bit 2: T2 = (A & 1) << 2 Bit 1: T1 = (A & 4) >> 1 Bit 0: T0 = ((A >> 1) x A) & 1 Result: T2 + T1 + T0 This doesn't look good. So here we make a change in the way we get and process the data: instead of reading full numbers into variables and operate with them afterwards, we use a single independent variable for each bit of the number. Assume that we can read and write independent bits from memory (later we'll see how to solve this problem when this is not"
},
{
"data": "In this case, the code would look something like this: Bit 2: T2 = Mem[2] Bit 1: T1 = Mem[1] Bit 0: T0 = Mem[0] Computation: T1 ^= T0 Store result: Mem[2] = T0 Mem[1] = T2 Mem[0] = T1 Note that in this case we handle the final reordering of bits simply by storing the right variable to the right place, without any shifts, ands nor ors. In fact we only have memory loads, memory stores and xors. Note also that we can do all the computations directly using the variables themselves, without additional storage. This true for most of the values, but in some cases an additional one or two temporal variables will be needed to store intermediate results. The drawback of this approach is that additions, that are simply a xor of two numbers will need as many xors as bits are in each number. SIMD optimization -- So we have a good way to compute the multiplications, but even using this we'll need several operations for each byte of the original data. We can improve this by doing multiple multiplications using the same set of instructions. With the approach taken in the implementation section, we can see that in fact it's really easy to add SIMD support to this method. We only need to store in each variable one bit from multiple numbers. For example, when we load T2 from memory, instead of reading the bit 2 of the first number, we can read the bit 2 of the first, second, third, fourth, ... numbers. The same can be done when loading T1 and T0. Obviously this needs to have a special encoding of the numbers into memory to be able to do that in a single operation, but since we can choose whatever encoding we want for EC, we have chosen to have exactly that. We interpret the original data as a stream of bits, and we split it into subsequences of length L, each containing one bit of a number. Every S subsequences form a set of numbers of S bits that are encoded and decoded as a single group. This repeats for any remaining data. For example, in a simple case with L = 8 and S = 3, the original data would contain something like this (interpreted as a sequence of bits, offsets are also bit-based): Offset 0: a{0} b{0} c{0} d{0} e{0} f{0} g{0} h{0} Offset 8: a{1} b{1} c{1} d{1} e{1} f{1} g{1} h{1} Offset 16: a{2} b{2} c{2} d{2} e{2} f{2} g{2} h{2} Offset 24: i{0} j{0} k{0} l{0} m{0} n{0} o{0} p{0} Offset 32: i{1} j{1} k{1} l{1} m{1} n{1} o{1} p{1} Offset 40: i{2} j{2} k{2} l{2} m{2} n{2} o{2} p{2} Note: If the input file is not a multiple of S * L, 0-padding is done. Here we have 16 numbers encoded, from A to P. This way we can easily see that reading the first byte of the file will read all bits 0 of number A, B, C, D, E, F, G and H. The same happens with bits 1 and 2 when we read the second and third bytes respectively. Using this encoding and the implementation described above, we can see that the same set of instructions will be computing the multiplication of 8 numbers at the same time. This can be further improved if we use L = 64 with 64 bits variables on 64-bits"
},
{
"data": "It's even faster if we use L = 128 using SSE registers or L = 256 using AVX registers on Intel processors. Currently EC uses L = 512 and S = 8. This means that numbers are packed in blocks of 512 bytes and gives space for even bigger processor registers up to 512 bits. Conclusions -- This method requires a single variable/processor register for each bit. This can be challenging if we want to avoid additional memory accesses, even if we use modern processors that have many registers. However, the implementation we chose for the Vandermonde Matrix doesn't require temporary storage, so we don't need a full set of 8 new registers (one for each bit) to store partial computations. Additionally, the computation of the multiplications requires, at most, 2 extra registers, but this is afordable. Xors are a really fast operation in modern processors. Intel CPU's can dispatch up to 3 xors per CPU cycle if there are no dependencies with ongoing previous instructions. Worst case is 1 xor per cycle. So, in some configurations, this method could be very near to the memory speed. Another interesting thing of this method is that all data it needs to operate is packed in small sequential blocks of memory, meaning that it can take advantage of the faster internal CPU caches. Results For the particular case of 8 bits, EC can compute each multiplication using 12.8 xors on average (without counting 0 and 1 that do not require any xor). Some numbers require less, like 2 that only requires 3 xors. Having all this, we can check some numbers to see the performance of this method. Maybe the most interesting thing is the average number of xors needed to encode a single byte of data. To compute this we'll need to define some variables: K: Number of data fragments R: Number of redundancy fragments N: K + R B: Number of bits per number A: Average number of xors per number Z: Bits per CPU register (can be up to 256 for AVX registers) X: Average number of xors per CPU cycle L: Average cycles per load S: Average cycles per store G: Core speed in Hz Total number of bytes processed for a single matrix multiplication: __Read__: K B * Z / 8 __Written__: N B * Z / 8 Total number of memory accesses: __Loads__: K B * N __Stores__: B N We need to read the same K B Z bits, in registers of Z bits, N times, one for each row of the matrix. However the last N - 1 reads could be made from the internal CPU caches if conditions are good. Total number of operations: __Additions__: (K - 1) N __Multiplications__: K N Total number of xors: B (K - 1) N + A K N = N ((A + B) K - B) Xors per byte: 8 N ((A + B) K - B) / (K B * Z) CPU cycles per byte: 8 N ((A + B) K - B) / (K B Z X) + 8 L N / Z + (loads) 8 S N / (K * Z) (stores) Bytes per second: G / {CPU cycles per byte} Some xors per byte numbers for specific configurations (B=8): Z=64 Z=128 Z=256 K=2/R=1 0.79 0.39 0.20 K=4/R=2 1.76 0.88 0.44 K=4/R=3 2.06 1.03 0.51 K=8/R=3 3.40 1.70 0.85 K=8/R=4 3.71 1.86 0.93 K=16/R=4 6.34 3.17 1.59"
}
] |
{
"category": "Runtime",
"file_name": "ec-implementation.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
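The record above walks through multiplication in a 3-bit Galois field with modulus 1011b and the precomputed xor equations for multiplying by 5. The sketch below is an editor-added illustration (not taken from the Gluster sources) that reproduces both computations in Python so the worked examples (7 * 5 = 6 and 2 * 5 = 1) can be checked directly.

```python
# Editor-added sketch (not GlusterFS code): multiplication in GF(2^3) with
# modulus x^3 + x + 1 (binary 1011), done the two ways described above.

MODULUS = 0b1011  # x^3 + x + 1


def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply of two 3-bit values, then reduce by the modulus."""
    product = 0
    for bit in range(3):              # 'old school' multiplication, xor instead of add
        if (b >> bit) & 1:
            product ^= a << bit
    for bit in range(4, 2, -1):       # clear bits 4 and 3 with shifted copies of the modulus
        if (product >> bit) & 1:
            product ^= MODULUS << (bit - 3)
    return product


def mul_by_5_bitsliced(a: int) -> int:
    """Precomputed xor equations for B = 5: bit2 = a0, bit1 = a2, bit0 = a0 ^ a1."""
    a0, a1, a2 = a & 1, (a >> 1) & 1, (a >> 2) & 1
    return (a0 << 2) | (a2 << 1) | (a0 ^ a1)


assert gf_mul(7, 5) == 6 == mul_by_5_bitsliced(7)  # the worked example: 7 * 5 = 6
assert gf_mul(2, 5) == 1 == mul_by_5_bitsliced(2)  # and 2 * 5 = 1
```

In a real implementation the bit-sliced equations operate on whole registers of packed bits rather than single bits, which is what gives the SIMD speedup described in the record.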
[
{
"data": "title: Create NFS Shares sidebar_position: 9 description: Learn how to use the NFS protocol to share directories within the JuiceFS file system. NFS (Network File System) is a network file-sharing protocol that allows different computers to share files and directories over a network. It was originally developed by Sun Microsystems and is a standard way of file sharing between Unix and Unix-like systems. The NFS protocol enables clients to access remote file systems as if they were local, achieving transparent remote file access. When you need to share directories from the JuiceFS file system through NFS, you can simply use the `juicefs mount` command to mount the file system. Then, you can create NFS shares with the JuiceFS mount point or subdirectories. :::note `juicefs mount` mounts the file system as a local user-space file system through the FUSE interface, making it identical to the local file system in terms of appearance and usage. Hence, it can be directly used to create NFS shares. ::: To configure NFS shares, you need to install the relevant software packages on both the server and client sides. Let's take Ubuntu/Debian systems as an example: Create a host for NFS sharing (with the JuiceFS file system also mounted on this server). ```shell sudo apt install nfs-kernel-server ``` All Linux hosts that need to access NFS shares should install the client software. ```shell sudo apt install nfs-common ``` Assuming the JuiceFS is mounted on the server system at the path `/mnt/myjfs`, if you want to set the `media` subdirectory as an NFS share, you can add the following configuration to the `/etc/exports` file on the server system: ``` \"/mnt/myjfs/media\" *(rw,sync,nosubtreecheck,fsid=1) ``` The syntax for NFS share configuration is as follows: ``` <Share Path> <Allowed IPs>(options) ``` For example, if you want to restrict the mounting of this share to hosts in the `192.168.1.0/24` IP range and avoid squashing root privileges, you can modify it as follows: ``` \"/mnt/myjfs/media\" 192.168.1.0/24(rw,async,nosubtreecheck,norootsquash,fsid=1) ``` Explanation of the share options: `rw`: Represents read and write permissions. If read-only access is desired, use"
},
{
"data": "`sync` and `async`: `sync` enables synchronous writes, meaning that when writing to the NFS share, the client waits for the server's confirmation of successful data write before proceeding with subsequent operations. `async`, on the other hand, allows asynchronous writes. In this mode, the client does not wait for the server's confirmation of successful write before proceeding with subsequent operations. `nosubtreecheck`: Disables subtree checking, allowing clients to mount both the parent and child directories of the NFS share. This can reduce some security but improve NFS compatibility. Setting it to `subtree_check` enables subtree checking, allowing clients to only mount the NFS share and its subdirectories. `norootsquash`: Controls the mapping behavior of the client's root user when accessing the NFS share. By default, when the client mounts the NFS share as root, the server maps it to a non-privileged user (usually nobody or nfsnobody), which is known as root squashing. Enabling this option cancels the root squashing, giving the client the same root user privileges as the server. This option comes with certain security risks and should be used with caution. `fsid`: A file system identifier used to identify different file systems on NFS. In NFSv4, the root directory of NFS is defined as fsid=0, and other file systems need to be numbered uniquely under it. Here, JuiceFS is an externally mounted FUSE file system, so it needs to be assigned a unique identifier. For NFS shares, the sync (synchronous writes) mode can improve data reliability but always requires waiting for the server's confirmation before proceeding with the next operation. This may result in lower write performance. For JuiceFS, which is a cloud-based distributed file system, network latency also needs to be considered. Using the sync mode can often lead to lower write performance due to network latency. In most cases, when creating NFS shares with JuiceFS, it is recommended to set the write mode to async (asynchronous writes) to avoid sacrificing write performance. If data reliability must be prioritized and sync mode is necessary, it is recommended to configure JuiceFS with a high-performance SSD as a local cache with sufficient capacity and enable the writeback cache mode."
}
] |
{
"category": "Runtime",
"file_name": "nfs.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
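As a small illustration of the export syntax discussed above (not part of the JuiceFS documentation), the Python sketch below assembles an `/etc/exports` line from the options described. The path, client network, and `fsid` are example values, and the real NFS option names are underscore-separated (`no_subtree_check`, `no_root_squash`).

```python
# Hypothetical helper (not from the JuiceFS docs): build an /etc/exports entry
# for a JuiceFS subdirectory using the options discussed above.

def exports_line(path: str, clients: str, options: list[str]) -> str:
    return f'"{path}" {clients}({",".join(options)})'


line = exports_line("/mnt/myjfs/media", "192.168.1.0/24",
                    ["rw", "async", "no_subtree_check", "no_root_squash", "fsid=1"])
print(line)
# Append the line to /etc/exports (as root), then reload the NFS server's
# export table with: exportfs -ra
```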
[
{
"data": "FUSE refers to the \"wire protocol\" between kernel and userspace and related specifications. fuse refers to the kernel subsystem and also to the GlusterFs translator. The desrcibes how interrupt handling happens in fuse. This document describes the internal API in the fuse translator with which interrupt can be handled. The API being internal (to be used only in fuse-bridge.c; the functions are not exported to a header file). ``` enum fuseinterruptstate { / ... / INTERRUPT_SQUELCHED, INTERRUPT_HANDLED, / ... / }; typedef enum fuseinterruptstate fuseinterruptstate_t; struct fuseinterruptrecord; typedef struct fuseinterruptrecord fuseinterruptrecord_t; typedef void (fuse_interrupt_handler_t)(xlator_t this, fuseinterruptrecord_t *); struct fuseinterruptrecord { fuseinheadert fusein_header; void *data; /* ... */ }; fuseinterruptrecord_t * fuseinterruptrecordnew(fuseinheadert *finh, fuseinterrupthandler_t handler); void fuseinterruptrecordinsert(xlatort this, fuse_interrupt_record_t fir); gfbooleant fuseinterruptfinishfop(callframet *frame, xlatort *this, gfbooleant sync, void datap); void fuseinterruptfinishinterrupt(xlatort this, fuse_interrupt_record_t fir, fuseinterruptstate_t intstat, gfbooleant sync, void datap); ``` The code demonstrates the usage of the API through `fuse_flush()`. (It's a dummy implementation only for demonstration purposes.) Flush is chosen because a `FLUSH` interrupt is easy to trigger (see tests/features/interrupt.t). Interrupt handling for flush is switched on by `--fuse-flush-handle-interrupt` (a hidden glusterfs command line flag). The implementation of flush interrupt is contained in the `fuseflushinterrupt_handler()` function and blocks guarded by the ``` if (priv->flushhandleinterrupt) { ... ``` conditional (where `priv` is a `*fuseprivatet`). \"Regular\" fuse fops and interrupt handlers interact via a list containing interrupt records. If a fop wishes to have its interrupts handled, it needs to set up an interrupt record and insert it into the list; also when it's to finish (ie. in its \"cbk\" stage) it needs to delete the record from the list. If no interrupt happens, basically that's all to it - a list insertion and deletion. However, if an interrupt comes for the fop, the interrupt FUSE request will carry the data identifying an ongoing fop (that is, its `unique`), and based on that, the interrupt record will be looked up in the list, and the specific interrupt handler (a member of the interrupt record) will be called. Usually the fop needs to share some data with the interrupt handler to enable it to perform its task (also shared via the interrupt record). The interrupt API offers two approaches to manage shared data: Async or reference-counting strategy: from the point on when the interrupt record is inserted to the list, it's owned jointly by the regular fop and the prospective interrupt handler. Both of them need to check before they return if the other is still holding a reference; if not, then they are responsible for reclaiming the shared"
},
{
"data": "Sync or borrow strategy: the interrupt handler is considered a borrower of the shared data. The interrupt handler should not reclaim the shared data. The fop will wait for the interrupt handler to finish (ie., the borrow to be returned), then it has to reclaim the shared data. The user of the interrupt API need to call the following functions to instrument this control flow: `fuseinterruptrecord_insert()` in the fop to insert the interrupt record to the list; `fuseinterruptfinish_fop()`in the fop (cbk) and `fuseinterruptfinish_interrupt()`in the interrupt handler to perform needed synchronization at the end their tenure. The data management strategies are implemented by the `fuseinterruptfinish_*()` functions (which have an argument to specify which strategy to use); these routines take care of freeing the interrupt record itself, while the reclamation of the shared data is left to the API user. A given FUSE fop can be enabled to handle interrupts via the following steps: Define a handler function (of type `fuseinterrupthandler_t`). It should implement the interrupt handling logic and in the end call (directly or as async callback) `fuseinterruptfinish_interrupt()`. The `intstat` argument to `fuseinterruptfinish_interrupt` should be either `INTERRUPTSQUELCHED` or `INTERRUPTHANDLED`. `INTERRUPT_SQUELCHED` means that the interrupt could not be delivered and the fop is going on uninterrupted. `INTERRUPT_HANDLED` means that the interrupt was actually handled. In this case the fop will be answered from interrupt context with errno `EINTR` (that is, the fop should not send a response to the kernel). (the enum `fuseinterruptstate` includes further members, which are reserved for internal use). We return to the `sync` and `datap` arguments later. In the `fuse_<FOP>` function create an interrupt record using `fuseinterruptrecordnew()`, passing the incoming `fusein_header` and the above handler function to it. Arbitrary further data can be referred to via the `data` member of the interrupt record that is to be passed on from fop context to interrupt context. When it's set up, pass the interrupt record to `fuseinterruptrecord_insert()`. In `fuse<FOP>cbk` call `fuseinterruptfinish_fop()`. `fuseinterruptfinish_fop()` returns a Boolean according to whether the interrupt was handled. If it was, then the FUSE request is already answered and the stack gets destroyed in `fuseinterruptfinish_fop` so `fuse<FOP>cbk()` can just return (zero). Otherwise follow the standard cbk logic (answer the FUSE request and destroy the stack -- these are typically accomplished by `fuseerrcbk()`). The last two argument of `fuseinterruptfinish_fop()` and `fuseinterruptfinishinterrupt()` are `gfboolean_t sync` and `void datap`. `sync` represents the strategy for freeing the interrupt record. The interrupt handler and the fop handler are in race to get at the interrupt record first (interrupt handler for purposes of doing the interrupt handling, fop handler for purposes of deactivating the interrupt record upon completion of the fop"
},
{
"data": "If `sync` is true, then the fop handler will wait for the interrupt handler to finish and it takes care of freeing. If `sync` is false, the loser of the above race will perform freeing. Freeing is done within the respective interrupt finish routines, except for the `data` field of the interrupt record; with respect to that, see the discussion of the `datap` parameter below. The strategy has to be consensual, that is, `fuseinterruptfinish_fop()` and `fuseinterruptfinish_interrupt()` must pass the same value for `sync`. If dismantling the resources associated with the interrupt record is simple, `sync = gffalse` is the suggested choice; `sync = gftrue` can be useful in the opposite case, when dismantling those resources would be inconvenient to implement in two places or to enact in non-fop context. If `datap` is `NULL`, the `data` member of the interrupt record will be freed within the interrupt finish routine. If it points to a valid `void *` pointer, and if caller is doing the cleanup (see `sync` above), then that pointer will be directed to the `data` member of the interrupt record and it's up to the caller what it's doing with it. If `sync` is true, interrupt handler can use `datap = NULL`, and fop handler will have `datap` point to a valid pointer. If `sync` is false, and handlers pass a pointer to a pointer for `datap`, they should check if the pointed pointer is NULL before attempting to deal with the data. The kernel acknowledges a successful interruption for a given FUSE request if the filesystem daemon answers it with errno EINTR; upon that, the syscall which induced the request will be abruptly terminated with an interrupt, rather than returning a value. In glusterfs, this can be arranged in two ways. If the interrupt handler wins the race for the interrupt record, ie. `fuseinterruptfinishfop()` returns true to `fuse<FOP>_cbk()`, then, as said above, `fuse<FOP>cbk()` does not need to answer the FUSE request. That's because then the interrupt handler will take care about answering it (with errno EINTR). If `fuseinterruptfinishfop()` returns false to `fuse<FOP>_cbk()`, then this return value does not inform the fop handler whether there was an interrupt or not. This return value occurs both when fop handler won the race for the interrupt record against the interrupt handler, and when there was no interrupt at all. However, the internal logic of the fop handler might detect from other circumstances that an interrupt was delivered. For example, the fop handler might be sleeping, waiting for some data to arrive, so that a premature wakeup (with no data present) occurs if the interrupt handler intervenes. In such cases it's the responsibility of the fop handler to reply the FUSE request with errro EINTR."
}
] |
{
"category": "Runtime",
"file_name": "fuse-interrupt.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
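The record above describes how a fop and its interrupt handler jointly own an interrupt record, with the last side to finish reclaiming the shared data (the async, reference-counting strategy). The Python sketch below is an editor-added conceptual model of that ownership handoff, not the actual C API in fuse-bridge.c; all names are illustrative.

```python
# Editor-added conceptual model; the real API is C inside fuse-bridge.c and all
# names below are illustrative. It mirrors the async (reference-counting)
# strategy: the record starts with two owners, and whichever side finishes last
# is the one that reclaims the shared data.

import threading


class InterruptRecord:
    def __init__(self, data):
        self.data = data              # state shared between the fop and the handler
        self._lock = threading.Lock()
        self._refs = 2                # one reference each

    def finish(self) -> bool:
        """Called once by the fop cbk and once by the interrupt handler.
        Returns True only for the side that finishes last."""
        with self._lock:
            self._refs -= 1
            return self._refs == 0


rec = InterruptRecord(data={"fd": 42})
for side in ("interrupt handler", "fop cbk"):
    if rec.finish():
        rec.data = None               # the loser of the race frees the shared data
        print(f"{side} reclaimed the shared data")
```

The sync (borrow) strategy would instead have the fop wait for `finish()` from the handler before reclaiming the data itself.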
[
{
"data": "MinIO collects the following metrics at the cluster level. Metrics may include one or more labels, such as the server that calculated that metric. These metrics can be obtained from any MinIO server once per collection by using the following URL: ```shell https://HOSTNAME:PORT/minio/v2/metrics/cluster ``` Replace ``HOSTNAME:PORT`` with the hostname of your MinIO deployment. For deployments behind a load balancer, use the load balancer hostname instead of a single node hostname. | Name | Description | |:-|:-| | `minioauditfailed_messages` | Total number of messages that failed to send since start. | | `minioaudittargetqueuelength` | Number of unsent messages in queue for target. | | `minioaudittotal_messages` | Total number of messages sent since start. | | Name | Description | |:|:| | `minioclustercapacityrawfree_bytes` | Total free capacity online in the cluster. | | `minioclustercapacityrawtotal_bytes` | Total capacity online in the cluster. | | `minioclustercapacityusablefree_bytes` | Total free usable capacity online in the cluster. | | `minioclustercapacityusabletotal_bytes` | Total usable capacity online in the cluster. | | `minioclusterobjectssizedistribution` | Distribution of object sizes across a cluster | | `minioclusterobjectsversiondistribution` | Distribution of object versions across a cluster | | `minioclusterusageobjecttotal` | Total number of objects in a cluster | | `minioclusterusagetotalbytes` | Total cluster usage in bytes | | `minioclusterusageversiontotal` | Total number of versions (includes delete marker) in a cluster | | `minioclusterusagedeletemarkertotal` | Total number of delete markers in a cluster | | `minioclusterusagetotalbytes` | Total cluster usage in bytes | | `minioclusterbucket_total` | Total number of buckets in the cluster | | Name | Description | |:|:--| | `minioclusterdriveofflinetotal` | Total drives offline in this cluster. | | `minioclusterdriveonlinetotal` | Total drives online in this cluster. | | `minioclusterdrive_total` | Total drives in this cluster. | | Name | Description | |:|:-| | `minioclusterilmtransitionedbytes` | Total bytes transitioned to a tier. | | `minioclusterilmtransitionedobjects` | Total number of objects transitioned to a tier. | | `minioclusterilmtransitionedversions` | Total number of versions transitioned to a tier. | | Name | Description | |:|:--| | `minioclusterkms_online` | Reports whether the KMS is online (1) or offline (0). | | `minioclusterkmsrequesterror` | Number of KMS requests that failed due to some error. (HTTP 4xx status code). | | `minioclusterkmsrequestfailure` | Number of KMS requests that failed due to some internal failure. (HTTP 5xx status code). | | `minioclusterkmsrequestsuccess` | Number of KMS requests that succeeded. | | `minioclusterkms_uptime` | The time the KMS has been up and running in seconds. | | Name | Description | |:--|:--| | `minioclusternodesofflinetotal` | Total number of MinIO nodes offline. | | `minioclusternodesonlinetotal` | Total number of MinIO nodes online. 
| | `minioclusterwrite_quorum` | Maximum write quorum across all pools and sets | | `minioclusterhealth_status` | Get current cluster health status | | `minioclusterhealtherasuresethealingdrives` | Count of healing drives in the erasure set | | `minioclusterhealtherasuresetonlinedrives` | Count of online drives in the erasure set | | `minioclusterhealtherasuresetreadquorum` | Get read quorum of the erasure set | | `minioclusterhealtherasuresetwritequorum` | Get write quorum of the erasure set | | `minioclusterhealtherasureset_status` | Get current health status of the erasure set | Metrics marked as ``Site Replication Only`` only populate on deployments with configurations. For deployments with or configurations, these metrics populate instead under the"
},
{
"data": "| Name | Description |:--|:| | `minioclusterreplicationlasthourfailedbytes` | (Site Replication Only) Total number of bytes failed at least once to replicate in the last full hour. | | `minioclusterreplicationlasthourfailedcount` | (Site Replication Only) Total number of objects which failed replication in the last full hour. | | `minioclusterreplicationlastminutefailedbytes` | Total number of bytes failed at least once to replicate in the last full minute. | | `minioclusterreplicationlastminutefailedcount` | Total number of objects which failed replication in the last full minute. | | `minioclusterreplicationtotalfailedbytes` | (Site Replication Only_) Total number of bytes failed at least once to replicate since server start. | | `minioclusterreplicationtotalfailedcount` | (Site Replication Only_) Total number of objects which failed replication since server start. | | `minioclusterreplicationreceivedbytes` | (Site Replication Only) Total number of bytes replicated to this cluster from another source cluster. | | `minioclusterreplicationreceivedcount` | (Site Replication Only) Total number of objects received by this cluster from another source cluster. | | `minioclusterreplicationsentbytes` | (Site Replication Only) Total number of bytes replicated to the target cluster. | | `minioclusterreplicationsentcount` | (Site Replication Only) Total number of objects replicated to the target cluster. | | `minioclusterreplicationcredentialerrors` | (Site Replication Only) Total number of replication credential errors since server start | | `minioclusterreplicationproxiedgetrequeststotal` | (Site Replication Only)Number of GET requests proxied to replication target | | `minioclusterreplicationproxiedheadrequeststotal` | (Site Replication Only)Number of HEAD requests proxied to replication target | | `minioclusterreplicationproxieddeletetaggingrequeststotal` | (Site Replication Only_)Number of DELETE tagging requests proxied to replication target | | `minioclusterreplicationproxiedgettaggingrequeststotal` | (Site Replication Only_)Number of GET tagging requests proxied to replication target | | `minioclusterreplicationproxiedputtaggingrequeststotal` | (Site Replication Only_)Number of PUT tagging requests proxied to replication target | | `minioclusterreplicationproxiedgetrequestsfailures` | (Site Replication Only)Number of failures in GET requests proxied to replication target | | `minioclusterreplicationproxiedheadrequestsfailures` | (Site Replication Only)Number of failures in HEAD requests proxied to replication target | | `minioclusterreplicationproxieddeletetaggingrequestsfailures` | (Site Replication Only_)Number of failures proxying DELETE tagging requests to replication target | | `minioclusterreplicationproxiedgettaggingrequestsfailures` | (Site Replication Only_)Number of failures proxying GET tagging requests to replication target | | `minioclusterreplicationproxiedputtaggingrequestsfailures` | (Site Replication Only_)Number of failures proxying PUT tagging requests to replication target | Metrics marked as ``Site Replication Only`` only populate on deployments with configurations. For deployments with or configurations, these metrics populate instead under the endpoint. 
| Name | Description |:--|:| | `minioclusterreplicationcurrentactive_workers` | Total number of active replication workers | | `minioclusterreplicationaverageactive_workers` | Average number of active replication workers | | `minioclusterreplicationmaxactive_workers` | Maximum number of active replication workers seen since server start | | `minioclusterreplicationlinkonline` | Reports whether the replication link is online (1) or offline"
},
{
"data": "| | `minioclusterreplicationlinkofflinedurationseconds` | Total duration of replication link being offline in seconds since last offline event | | `minioclusterreplicationlinkdowntimedurationseconds` | Total downtime of replication link in seconds since server start | | `minioclusterreplicationaveragelinklatencyms` | Average replication link latency in milliseconds | | `minioclusterreplicationmaxlinklatencyms` | Maximum replication link latency in milliseconds seen since server start | | `minioclusterreplicationcurrentlinklatencyms` | Current replication link latency in milliseconds | | `minioclusterreplicationcurrenttransfer_rate` | Current replication transfer rate in bytes/sec | | `minioclusterreplicationaveragetransfer_rate` | Average replication transfer rate in bytes/sec | | `minioclusterreplicationmaxtransfer_rate` | Maximum replication transfer rate in bytes/sec seen since server start | | `minioclusterreplicationlastminutequeuedcount` | Total number of objects queued for replication in the last full minute | | `minioclusterreplicationlastminutequeuedbytes` | Total number of bytes queued for replication in the last full minute | | `minioclusterreplicationaveragequeued_count` | Average number of objects queued for replication since server start | | `minioclusterreplicationaveragequeued_bytes` | Average number of bytes queued for replication since server start | | `minioclusterreplicationmaxqueued_bytes` | Maximum number of bytes queued for replication seen since server start | | `minioclusterreplicationmaxqueued_count` | Maximum number of objects queued for replication seen since server start | | `minioclusterreplicationrecentbacklog_count` | Total number of objects seen in replication backlog in the last 5 minutes | | Name | Description | |:|:--| | `miniohealobjectserrorstotal` | Objects for which healing failed in current self healing run. | | `miniohealobjectshealtotal` | Objects healed in current self healing run. | | `miniohealobjects_total` | Objects scanned in current self healing run. | | `miniohealtimelastactivitynanoseconds` | Time elapsed (in nano seconds) since last self healing activity. | | Name | Description | |:|:--| | `miniointernodetrafficdialavgtime` | Average time of internodes TCP dial calls. | | `miniointernodetrafficdial_errors` | Total number of internode TCP dial timeouts and errors. | | `miniointernodetrafficerrors_total` | Total number of failed internode calls. | | `miniointernodetrafficreceived_bytes` | Total number of bytes received from other peer nodes. | | `miniointernodetrafficsent_bytes` | Total number of bytes sent to the other peer nodes. | | Name | Description | |:--|:--| | `minionotifycurrentsendinprogress` | Number of concurrent async Send calls active to all targets (deprecated, please use 'minionotifytargetcurrentsendin_progress' instead) | | `minionotifyeventserrorstotal` | Events that were failed to be sent to the targets (deprecated, please use 'minionotifytargetfailedevents' instead) | | `minionotifyeventssenttotal` | Total number of events sent to the targets (deprecated, please use 'minionotifytargettotalevents' instead) | | `minionotifyeventsskippedtotal` | Events that were skipped to be sent to the targets due to the in-memory queue being full | | `minionotifytargetcurrentsendinprogress` | Number of concurrent async Send calls active to the target | | `minionotifytargetqueuelength` | Number of events currently staged in the queue_dir configured for the target. 
| | `minionotifytargettotalevents` | Total number of events sent (or) queued to the target | | Name | Description | |:-|:| | `minios3requests4xxerrors_total` | Total number S3 requests with (4xx) errors. | | `minios3requests5xxerrors_total` | Total number S3 requests with (5xx) errors. | | `minios3requestscanceledtotal` | Total number S3 requests canceled by the client. | | `minios3requestserrorstotal` | Total number S3 requests with (4xx and 5xx) errors. | | `minios3requestsincomingtotal` | Volatile number of total incoming S3 requests. | | `minios3requestsinflighttotal` | Total number of S3 requests currently in flight. | | `minios3requestsrejectedauth_total` | Total number S3 requests rejected for auth failure. | | `minios3requestsrejectedheader_total` | Total number S3 requests rejected for invalid header. | | `minios3requestsrejectedinvalid_total` | Total number S3 invalid requests. | | `minios3requestsrejectedtimestamp_total` | Total number S3 requests rejected for invalid timestamp. | | `minios3requests_total` | Total number S3 requests. | | `minios3requestswaitingtotal` | Number of S3 requests in the waiting queue. | | `minios3requeststtfbseconds_distribution` | Distribution of the time to first byte across API calls. | | `minios3trafficreceivedbytes` | Total number of s3 bytes received. | | `minios3trafficsentbytes` | Total number of s3 bytes sent. | | Name | Description | |:|:| | `miniosoftwarecommit_info` | Git commit hash for the MinIO release. | | `miniosoftwareversion_info` | MinIO Release tag for the server. | | Name | Description | |:|:--| | `minionodedrivefreebytes` | Total storage available on a drive. | | `minionodedrivefreeinodes` | Total free"
},
{
"data": "| | `minionodedrivelatencyus` | Average last minute latency in s for drive API storage operations. | | `minionodedriveofflinetotal` | Total drives offline in this node. | | `minionodedriveonlinetotal` | Total drives online in this node. | | `minionodedrive_total` | Total drives in this node. | | `minionodedrivetotalbytes` | Total storage on a drive. | | `minionodedriveusedbytes` | Total storage used on a drive. | | `minionodedriveerrorstimeout` | Total number of drive timeout errors since server start | | `minionodedriveerrorsioerror` | Total number of drive I/O errors since server start | | `minionodedriveerrorsavailability` | Total number of drive I/O errors, timeouts since server start | | `minionodedriveiowaiting` | Total number I/O operations waiting on drive | | Name | Description | |:-|:| | `minionodeiamlastsyncdurationmillis` | Last successful IAM data sync duration in milliseconds. | | `minionodeiamsincelastsyncmillis` | Time (in milliseconds) since last successful IAM data sync. | | `minionodeiamsyncfailures` | Number of failed IAM data syncs since server start. | | `minionodeiamsyncsuccesses` | Number of successful IAM data syncs since server start. | | Name | Description | |:-|:--| | `minionodeilmexpirypending_tasks` | Number of pending ILM expiry tasks in the queue. | | `minionodeilmtransitionactive_tasks` | Number of active ILM transition tasks. | | `minionodeilmtransitionpending_tasks` | Number of pending ILM transition tasks in the queue. | | `minionodeilmtransitionmissedimmediatetasks` | Number of missed immediate ILM transition tasks. | | `minionodeilmversionsscanned` | Total number of object versions checked for ilm actions since server start. | | `minionodeilmactioncountdeleteaction` | Total action outcome of lifecycle checks since server start for deleting object | | `minionodeilmactioncountdeleteversion_action` | Total action outcome of lifecycle checks since server start for deleting a version | | `minionodeilmactioncounttransitionaction` | Total action outcome of lifecycle checks since server start for transition of an object | | `minionodeilmactioncounttransitionversion_action` | Total action outcome of lifecycle checks since server start for transition of a particular object version | | `minionodeilmactioncountdeleterestored_action` | Total action outcome of lifecycle checks since server start for deletion of temporarily restored object | | `minionodeilmactioncountdeleterestoredversionaction` | Total action outcome of lifecycle checks since server start for deletion of a temporarily restored version | | `minionodeilmactioncountdeleteallversionsaction` | Total action outcome of lifecycle checks since server start for deletion of all versions | | Name | Description | |:|:-| | `minionodetiertierttlbsecondsdistribution` | Distribution of time to last byte for objects downloaded from warm tier | | `minionodetierrequestssuccess` | Number of requests to download object from warm tier that were successful | | `minionodetierrequestsfailure` | Number of requests to download object from warm tier that were failure | | Name | Description | |:-|:-| | `minionodefiledescriptorlimit_total` | Limit on total number of open file descriptors for the MinIO Server process. | | `minionodefiledescriptoropen_total` | Total number of open file descriptors by the MinIO Server process. | | `minionodegoroutinetotal` | Total number of go routines running. 
| | `minionodeiorcharbytes` | Total bytes read by the process from the underlying storage system including cache, /proc/[pid]/io rchar. | | `minionodeioreadbytes` | Total bytes read by the process from the underlying storage system, /proc/[pid]/io read_bytes. | | `minionodeiowcharbytes` | Total bytes written by the process to the underlying storage system including page cache, /proc/[pid]/io wchar. | | `minionodeiowritebytes` | Total bytes written by the process to the underlying storage system, /proc/[pid]/io write_bytes. | | `minionodeprocesscputotal_seconds` | Total user and system CPU time spent in seconds. | | `minionodeprocessresidentmemory_bytes` | Resident memory size in"
},
{
"data": "| | `minionodeprocessvirtualmemory_bytes` | Virtual memory size in bytes. | | `minionodeprocessstarttimeseconds` | Start time for MinIO process per node, time in seconds since Unix epoc. | | `minionodeprocessuptimeseconds` | Uptime for MinIO process per node in seconds. | | Name | Description | |:-|:| | `minionodescannerbucketscans_finished` | Total number of bucket scans finished since server start. | | `minionodescannerbucketscans_started` | Total number of bucket scans started since server start. | | `minionodescannerdirectoriesscanned` | Total number of directories scanned since server start. | | `minionodescannerobjectsscanned` | Total number of unique objects scanned since server start. | | `minionodescannerversionsscanned` | Total number of object versions scanned since server start. | | `minionodesyscallreadtotal` | Total read SysCalls to the kernel. /proc/[pid]/io syscr. | | `minionodesyscallwritetotal` | Total write SysCalls to the kernel. /proc/[pid]/io syscw. | | `miniousagelastactivitynano_seconds` | Time elapsed (in nano seconds) since last scan activity. | MinIO collects the following metrics at the bucket level. Each metric includes the ``bucket`` label to identify the corresponding bucket. Metrics may include one or more additional labels, such as the server that calculated that metric. These metrics can be obtained from any MinIO server once per collection by using the following URL: ```shell https://HOSTNAME:PORT/minio/v2/metrics/bucket ``` Replace ``HOSTNAME:PORT`` with the hostname of your MinIO deployment. For deployments behind a load balancer, use the load balancer hostname instead of a single node hostname. | Name | Description | |:--|:--| | `miniobucketobjectssizedistribution` | Distribution of object sizes in the bucket, includes label for the bucket name. | | `miniobucketobjectsversiondistribution` | Distribution of object sizes in a bucket, by number of versions | These metrics only populate on deployments with or configurations. For deployments with configured, select metrics populate under the endpoint. | Name | Description | |:-|:| | `miniobucketreplicationlastminutefailedbytes` | Total number of bytes failed at least once to replicate in the last full minute. | | `miniobucketreplicationlastminutefailedcount` | Total number of objects which failed replication in the last full minute. | | `miniobucketreplicationlasthourfailedbytes` | Total number of bytes failed at least once to replicate in the last full hour. | | `miniobucketreplicationlasthourfailedcount` | Total number of objects which failed replication in the last full hour. | | `miniobucketreplicationtotalfailed_bytes` | Total number of bytes failed at least once to replicate since server start. | | `miniobucketreplicationtotalfailed_count` | Total number of objects which failed replication since server start. | | `miniobucketreplicationlatencyms` | Replication latency in milliseconds. | | `miniobucketreplicationreceivedbytes` | Total number of bytes replicated to this bucket from another source bucket. | | `miniobucketreplicationreceivedcount` | Total number of objects received by this bucket from another source bucket. | | `miniobucketreplicationsentbytes` | Total number of bytes replicated to the target bucket. | | `miniobucketreplicationsentcount` | Total number of objects replicated to the target"
},
{
"data": "| | `miniobucketreplicationcredentialerrors` | Total number of replication credential errors since server start | | `miniobucketreplicationproxiedgetrequeststotal` | Number of GET requests proxied to replication target | | `miniobucketreplicationproxiedheadrequeststotal` | Number of HEAD requests proxied to replication target | | `miniobucketreplicationproxieddeletetaggingrequests_total` | Number of DELETE tagging requests proxied to replication target | | `miniobucketreplicationproxiedgettaggingrequests_total` | Number of GET tagging requests proxied to replication target | | `miniobucketreplicationproxiedputtaggingrequests_total` | Number of PUT tagging requests proxied to replication target | | `miniobucketreplicationproxiedgetrequestsfailures` | Number of failures in GET requests proxied to replication target | | `miniobucketreplicationproxiedheadrequestsfailures` | Number of failures in HEAD requests proxied to replication target | | `miniobucketreplicationproxieddeletetaggingrequests_failures` | Number of failures in DELETE tagging proxy requests to replication target | | `miniobucketreplicationproxiedgettaggingrequests_failures` |Number of failures in GET tagging proxy requests to replication target | | `miniobucketreplicationproxiedputtaggingrequests_failures` | Number of failures in PUT tagging proxy requests to replication target | | Name | Description | |:--|:| | `miniobuckettrafficreceivedbytes` | Total number of S3 bytes received for this bucket. | | `miniobuckettrafficsentbytes` | Total number of S3 bytes sent for this bucket. | | Name | Description | |:-|:--| | `miniobucketusageobjecttotal` | Total number of objects. | | `miniobucketusageversiontotal` | Total number of versions (includes delete marker) | | `miniobucketusagedeletemarkertotal` | Total number of delete markers. | | `miniobucketusagetotalbytes` | Total bucket size in bytes. | | `miniobucketquotatotalbytes` | Total bucket quota size in bytes. | | Name | Description | |:--|:-| | `miniobucketrequests4xxerrors_total` | Total number of S3 requests with (4xx) errors on a bucket. | | `miniobucketrequests5xxerrors_total` | Total number of S3 requests with (5xx) errors on a bucket. | | `miniobucketrequestsinflighttotal` | Total number of S3 requests currently in flight on a bucket. | | `miniobucketrequests_total` | Total number of S3 requests on a bucket. | | `miniobucketrequestscanceledtotal` | Total number S3 requests canceled by the client. | | `miniobucketrequeststtfbseconds_distribution` | Distribution of time to first byte across API calls per bucket. | MinIO collects the following resource metrics at the node level. Each metric includes the `server` label to identify the corresponding node. Metrics may include one or more additional labels, such as the drive path, interface name, etc. These metrics can be obtained from any MinIO server once per collection by using the following URL: ```shell https://HOSTNAME:PORT/minio/v2/metrics/resource ``` Replace `HOSTNAME:PORT` with the hostname of your MinIO deployment. For deployments behind a load balancer, use the load balancer hostname instead of a single node hostname. | Name | Description | |:-|:| | `minionodedrivetotalbytes` | Total bytes on a drive. | | `minionodedriveusedbytes` | Used bytes on a drive. | | `minionodedrivetotalinodes` | Total inodes on a drive. | | `minionodedriveusedinodes` | Total inodes used on a drive. | | `minionodedrivereadsper_sec` | Reads per second on a drive. | | `minionodedrivereadskbpersec` | Kilobytes read per second on a drive. 
| | `minionodedrivereadsawait` | Average time for read requests to be served on a drive. | | `minionodedrivewritesper_sec` | Writes per second on a drive. | | `minionodedrivewriteskbpersec` | Kilobytes written per second on a drive. | | `minionodedrivewritesawait` | Average time for write requests to be served on a drive. | | `minionodedrivepercutil` | Percentage of time the disk was busy since uptime. | | Name | Description | |:|:--| | `minionodeifrxbytes` | Bytes received on the interface in 60s. | | `minionodeifrxbytes_avg` | Bytes received on the interface in 60s (avg) since uptime. | | `minionodeifrxbytes_max` | Bytes received on the interface in 60s (max) since uptime. | | `minionodeifrxerrors` | Receive errors in 60s. | | `minionodeifrxerrors_avg` | Receive errors in 60s (avg). | | `minionodeifrxerrors_max` | Receive errors in 60s (max). | | `minionodeiftxbytes` | Bytes transmitted in 60s. | | `minionodeiftxbytes_avg` | Bytes transmitted in 60s (avg). | | `minionodeiftxbytes_max` | Bytes transmitted in 60s (max). | | `minionodeiftxerrors` | Transmit errors in 60s. | | `minionodeiftxerrors_avg` | Transmit errors in 60s"
},
{
"data": "| | `minionodeiftxerrors_max` | Transmit errors in 60s (max). | | Name | Description | |:-|:-| | `minionodecpuavguser` | CPU user time. | | `minionodecpuavguser_avg` | CPU user time (avg). | | `minionodecpuavguser_max` | CPU user time (max). | | `minionodecpuavgsystem` | CPU system time. | | `minionodecpuavgsystem_avg` | CPU system time (avg). | | `minionodecpuavgsystem_max` | CPU system time (max). | | `minionodecpuavgidle` | CPU idle time. | | `minionodecpuavgidle_avg` | CPU idle time (avg). | | `minionodecpuavgidle_max` | CPU idle time (max). | | `minionodecpuavgiowait` | CPU ioWait time. | | `minionodecpuavgiowait_avg` | CPU ioWait time (avg). | | `minionodecpuavgiowait_max` | CPU ioWait time (max). | | `minionodecpuavgnice` | CPU nice time. | | `minionodecpuavgnice_avg` | CPU nice time (avg). | | `minionodecpuavgnice_max` | CPU nice time (max). | | `minionodecpuavgsteal` | CPU steam time. | | `minionodecpuavgsteal_avg` | CPU steam time (avg). | | `minionodecpuavgsteal_max` | CPU steam time (max). | | `minionodecpuavgload1` | CPU load average 1min. | | `minionodecpuavgload1_avg` | CPU load average 1min (avg). | | `minionodecpuavgload1_max` | CPU load average 1min (max). | | `minionodecpuavgload1_perc` | CPU load average 1min (percentage). | | `minionodecpuavgload1percavg` | CPU load average 1min (percentage) (avg). | | `minionodecpuavgload1percmax` | CPU load average 1min (percentage) (max). | | `minionodecpuavgload5` | CPU load average 5min. | | `minionodecpuavgload5_avg` | CPU load average 5min (avg). | | `minionodecpuavgload5_max` | CPU load average 5min (max). | | `minionodecpuavgload5_perc` | CPU load average 5min (percentage). | | `minionodecpuavgload5percavg` | CPU load average 5min (percentage) (avg). | | `minionodecpuavgload5percmax` | CPU load average 5min (percentage) (max). | | `minionodecpuavgload15` | CPU load average 15min. | | `minionodecpuavgload15_avg` | CPU load average 15min (avg). | | `minionodecpuavgload15_max` | CPU load average 15min (max). | | `minionodecpuavgload15_perc` | CPU load average 15min (percentage). | | `minionodecpuavgload15percavg` | CPU load average 15min (percentage) (avg). | | `minionodecpuavgload15percmax` | CPU load average 15min (percentage) (max). | | Name | Description | |:-|:| | `minionodemem_available` | Available memory on the node. | | `minionodememavailableavg` | Available memory on the node (avg). | | `minionodememavailablemax` | Available memory on the node (max). | | `minionodemem_buffers` | Buffers memory on the node. | | `minionodemembuffersavg` | Buffers memory on the node (avg). | | `minionodemembuffersmax` | Buffers memory on the node (max). | | `minionodemem_cache` | Cache memory on the node. | | `minionodememcacheavg` | Cache memory on the node (avg). | | `minionodememcachemax` | Cache memory on the node (max). | | `minionodemem_free` | Free memory on the node. | | `minionodememfreeavg` | Free memory on the node (avg). | | `minionodememfreemax` | Free memory on the node (max). | | `minionodemem_shared` | Shared memory on the node. | | `minionodememsharedavg` | Shared memory on the node (avg). | | `minionodememsharedmax` | Shared memory on the node (max). | | `minionodemem_total` | Total memory on the node. | | `minionodememtotalavg` | Total memory on the node (avg). | | `minionodememtotalmax` | Total memory on the node (max). | | `minionodemem_used` | Used memory on the node. | | `minionodememusedavg` | Used memory on the node (avg). | | `minionodememusedmax` | Used memory on the node (max). 
| | `minionodememusedperc` | Used memory percentage on the node. | | `minionodememusedperc_avg` | Used memory percentage on the node (avg). | | `minionodememusedperc_max` | Used memory percentage on the node (max). |"
}
] |
{
"category": "Runtime",
"file_name": "list.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
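As an aside on consuming the endpoints listed above (not part of the MinIO documentation), the sketch below scrapes the cluster metrics endpoint and prints a few capacity gauges. The hostname is a placeholder, the endpoint may require a bearer token depending on deployment configuration, and the scraped names appear underscore-separated (e.g. `minio_cluster_capacity_raw_total_bytes`) in the Prometheus output.

```python
# Editor-added sketch: scrape the cluster endpoint named above and print a few
# capacity gauges. The hostname is a placeholder; add an Authorization header if
# your deployment requires a bearer token.

import urllib.request

ENDPOINT = "https://minio.example.com:9000/minio/v2/metrics/cluster"  # placeholder

WANTED = {
    "minio_cluster_capacity_raw_total_bytes",
    "minio_cluster_capacity_usable_free_bytes",
    "minio_cluster_nodes_online_total",
}

with urllib.request.urlopen(ENDPOINT) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("#"):                      # skip HELP/TYPE comment lines
            continue
        name = line.split("{", 1)[0].split(" ", 1)[0]
        if name in WANTED:
            print(line)
```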
[
{
"data": "title: Load Balancing and Fault Resilience with weaveDNS menu_order: 20 search_type: Documentation It is permissible to register multiple containers with the same name: weaveDNS returns all addresses, in a random order, for each request. This provides a basic load balancing capability. Expanding the , let us start an additional `pingme` container on a second host, and then run some ping tests. ``` host2$ weave launch $HOST1 host2$ eval $(weave env) host2$ docker run -dti --name=pingme weaveworks/ubuntu root@ubuntu:/# ping -nq -c 1 pingme PING pingme.weave.local (10.32.0.2) 56(84) bytes of data. ... root@ubuntu:/# ping -nq -c 1 pingme PING pingme.weave.local (10.40.0.1) 56(84) bytes of data. ... root@ubuntu:/# ping -nq -c 1 pingme PING pingme.weave.local (10.40.0.1) 56(84) bytes of data. ... root@ubuntu:/# ping -nq -c 1 pingme PING pingme.weave.local (10.32.0.2) 56(84) bytes of data. ... ``` Notice how the ping reaches different addresses. However, due to most DNS resolver libraries prefer certain addresses over others, to the point where in some circumstances the same address is always chosen. To avoid this behaviour, applications may want to perform their own address selection, e.g. by choosing a random entry from the result of . WeaveDNS removes the addresses of any container that dies. This offers a simple way to implement redundancy. E.g. if in our example we stop one of the `pingme` containers and re-run the ping tests, eventually (within ~30s at most, since that is the weaveDNS ) we will only be hitting the address of the container that is still alive. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "load-balance-fault-weavedns.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
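The record above suggests that applications perform their own address selection rather than rely on resolver ordering. Below is a minimal editor-added sketch of that idea in Python; the service name and port are just the example values from this record.

```python
# Editor-added sketch of application-side selection: resolve the weaveDNS name
# and pick a random address rather than relying on resolver ordering.

import random
import socket


def pick_address(name: str, port: int) -> str:
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    addresses = [info[4][0] for info in infos]
    return random.choice(addresses)   # spread connections across all registered containers


print(pick_address("pingme.weave.local", 80))
```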
[
{
"data": "layout: global title: Documentation Conventions This documentation provides a writing style guide that portrays professionalism and efficiency in delivering technical content in Alluxio documentation. The C's, in order of importance: Be correct Be concise Be consistent Be ceremonial or formal (because ceremonial was the best synonym to formal that started with a C) No documentation is better than incorrect documentation. Information conveyed is accurate Use a spell checker to fix typos Capitalize acronyms Ex. AWS, TTL, UFS, API, URL, SSH, I/O Capitalize proper nouns Ex. Alluxio, Hadoop, Java No one wants to read more words than necessary. Use the , the same tone used when issuing a command \"Run the command to start the process\" Not* \"Next, you can run the command to start the process\" \"Include a SocketAppender in the configuration...\" Not* \"A SocketAppender can be included in the configuration...\" Use the \"The process fails when misconfigured\" Not* \"The process will fail when misconfigured\" Not* \"It is known that starting the process will fail when misconfigured\" Dont use unnecessary punctuation Avoid using parentheses to de-emphasize a section Incorrect example: \"Alluxio serves as a new data access layer in the ecosystem, residing between any persistent storage systems (such as Amazon S3, Microsoft Azure Object Store, or Apache HDFS) and computation frameworks (such as Apache Spark, Presto, or Hadoop MapReduce).\" Reduce the use of dependent clauses that add no content Remove usages of the following: For example, ... However, ... First, ... There are many technical terms used throughout; it can potentially cause confusion when the same idea is expressed in multiple ways. See terminology table below When in doubt, search to see how similar documentation expresses the same term Code-like text should be annotated with backticks File paths Property keys and values Bash commands or flags Code blocks should be annotated with the associated file or usage type,"
},
{
"data": "```` ```java```` for Java source code ```` ```properties```` for a Java property file ```` ```shell```` for an interactive session in shell ```` ```bash```` for a shell script Alluxio prefixed terms, such as namespace, cache, or storage, should be preceded by \"the\" to differentiate from the commonly used term, but remain in lowercase if not a proper noun Ex. The data will be copied into the Alluxio storage*. Ex. When a new file is added to the Alluxio namespace*, ... Ex. The Alluxio master* never reads or writes data directly ... Documentation is not a conversation. Dont follow the same style as you would use when chatting with someone. Use the , also known as the Oxford comma, when listing items Example: \"Alluxio integrates with storage systems such as Amazon S3, Apache HDFS, and Microsoft Azure Object Store.\" Note the last comma after \"HDFS\". Avoid using contractions; remove the apostrophe and expand Dont -> Do not One space separates the ending period of a sentence and starting character of the next sentence; this has been the norm . Avoid using abbreviations Doc -> Documentation <table class=\"table table-striped\"> <tr> <th>Correct, preferred term</th> <th>Incorrect or less preferred term</th> </tr> <tr> <td markdown=\"span\">File system</td> <td markdown=\"span\">Filesystem</td> </tr> <tr> <td markdown=\"span\">Leading master</td> <td markdown=\"span\">Leader, lead master, primary master</td> </tr> <tr> <td markdown=\"span\">Standby master</td> <td markdown=\"span\">Backup master, following master, follower master</td> </tr> <tr> <td markdown=\"span\">Containerized</td> <td markdown=\"span\">Dockerized</td> </tr> <tr> <td markdown=\"span\">Superuser</td> <td markdown=\"span\">Super-user, super user</td> </tr> <tr> <td markdown=\"span\">I/O</td> <td markdown=\"span\">i/o, IO</td> </tr> <tr> <td markdown=\"span\">High availability mode</td> <td markdown=\"span\">Fault tolerance mode (Use of \"fault tolerance\" is fine, but not when interchangeable with \"high availability\")</td> </tr> <tr> <td markdown=\"span\">Hostname</td> <td markdown=\"span\">Host name</td> </tr> </table> Each sentence starts in a new line for ease of reviewing diffs. We do not have an official maximum characters per line for documentation files, but feel free to split sentences into separate lines to avoid needing to scroll horizontally to read."
}
] |
{
"category": "Runtime",
"file_name": "Documentation-Conventions.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(benchmark-performance)= The performance of your Incus server or cluster depends on a lot of different factors, ranging from the hardware, the server configuration, the selected storage driver and the network bandwidth to the overall usage patterns. To find the optimal configuration, you should run benchmark tests to evaluate different setups. Incus provides a benchmarking tool for this purpose. This tool allows you to initialize or launch a number of containers and measure the time it takes for the system to create the containers. If you run this tool repeatedly with different configurations, you can compare the performance and evaluate which is the ideal configuration. If the `incus-benchmark` tool isn't provided with your installation, you can build it from source. Make sure that you have `go` (see {ref}`requirements-go`) installed and install the tool with the following command: go install github.com/lxc/incus/cmd/incus-benchmark@latest Run `incus-benchmark [action]` to measure the performance of your Incus setup. The benchmarking tool uses the current Incus configuration. If you want to use a different project, specify it with `--project`. For all actions, you can specify the number of parallel threads to use (default is to use a dynamic batch size). You can also choose to append the results to a CSV report file and label them in a certain way. See `incus-benchmark help` for all available actions and flags. Before you run the benchmark, select what kind of image you want to use. Local image : If you want to measure the time it takes to create a container and ignore the time it takes to download the image, you should copy the image to your local image store before you run the benchmarking tool. To do so, run a command similar to the following and specify the fingerprint (for example, `2d21da400963`) of the image when you run `incus-benchmark`: incus image copy"
},
{
"data": "local: You can also assign an alias to the image and specify that alias (for example, `ubuntu`) when you run `incus-benchmark`: incus image copy images:ubuntu/22.04 local: --alias ubuntu Remote image : If you want to include the download time in the overall result, specify a remote image (for example, `images:ubuntu/22.04`). The default image that `incus-benchmark` uses is the latest Ubuntu image (`images:ubuntu`), so if you want to use this image, you can leave out the image name when running the tool. Run the following command to create a number of containers: incus-benchmark init --count <number> <image> Add `--privileged` to the command to create privileged containers. For example: ```{list-table} :header-rows: 1 - Command Description - `incus-benchmark init --count 10 --privileged` Create ten privileged containers that use the latest Ubuntu image. - `incus-benchmark init --count 20 --parallel 4 images:alpine/edge` Create 20 containers that use the Alpine Edge image, using four parallel threads. - `incus-benchmark init 2d21da400963` Create one container that uses the local image with the fingerprint `2d21da400963`. - `incus-benchmark init --count 10 ubuntu` Create ten containers that use the image with the alias `ubuntu`. ``` If you use the `init` action, the benchmarking containers are created but not started. To start the containers that you created, run the following command: incus-benchmark start Alternatively, use the `launch` action to both create and start the containers: incus-benchmark launch --count 10 <image> For this action, you can add the `--freeze` flag to freeze each container right after it starts. Freezing a container pauses its processes, so this flag allows you to measure the pure launch times without interference of the processes that run in each container after startup. To delete the benchmarking containers that you created, run the following command: incus-benchmark delete ```{note} You must delete all existing benchmarking containers before you can run a new benchmark. ```"
}
] |
{
"category": "Runtime",
"file_name": "benchmark_performance.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
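The benchmarking record above lends itself to a small wrapper script. The following sketch is illustrative only and not part of the Incus documentation: it pins the image locally so download time is excluded, then times `incus-benchmark launch` at a few batch sizes, deleting the benchmark containers between runs as the text requires. Only flags shown above (`--count`, `--parallel`, `--alias`) are used; appending to the CSV report is left to `incus-benchmark help`, since the exact flag is not named in the record.

```bash
#!/usr/bin/env bash
# Sketch: compare incus-benchmark launch times across batch sizes.
# Assumes incus and incus-benchmark are installed and usable by this user;
# the image alias "ubuntu" is carried over from the example in the record.
set -euo pipefail

# Copy the image into the local store once so downloads do not skew results.
incus image copy images:ubuntu/22.04 local: --alias ubuntu

for count in 5 10 20; do
    echo "=== launching ${count} containers ==="
    SECONDS=0
    incus-benchmark launch --count "${count}" --parallel 4 ubuntu
    echo "count=${count} wall-clock=${SECONDS}s"
    # All benchmark containers must be deleted before the next run can start.
    incus-benchmark delete
done
```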
[
{
"data": "The governance model adopted in Vineyard is influenced by many CNCF projects. Open: Vineyard is open source community. See . Welcoming and respectful: See . Transparent and accessible: Work and collaboration should be done in public. Merit: Ideas and contributions are accepted according to their technical merit and alignment with project objectives, scope and design principles. Classify GitHub issues and perform pull request reviews for other maintainers and the community. During GitHub issue classification, apply all applicable to each new issue. Use your best judgement to apply labels, since they are extremely useful for follow-up of future issues. Maintainers are expected to respond to assigned Pull Requests in a reasonable time frame. Participate when called upon in the security release process. Note that although this should be a rare occurrence, if a serious vulnerability is found, the process may take up to several full days of work to implement. In general continue to be willing to spend at least 20% of your time working on Vineyard (1 day per week). Talk to one of the existing project to show your interest in becoming a maintainer. Becoming a maintainer generally means that you are going to be spending substantial time (>20%) on Vineyard for the foreseeable future. We will expect you to start contributing increasingly complicated PRs, under the guidance of the existing maintainers. We may ask you to do some PRs from our backlog. As you gain experience with the code base and our standards, we will ask you to do code reviews for incoming PRs. After a period of approximately 3 months of working together and making sure we see eye to eye, the existing maintainers will confer and decide whether to grant maintainer status or not. We make no guarantees on the length of time this will take, but 3 months is an approximate goal. If a maintainer is no longer interested or cannot perform the maintainer duties listed above, they should volunteer to be moved to emeritus status. The Vineyard community will never forcefully remove a current Maintainer, unless a maintainer fails to meet the principles of Vineyard community. Decisions are made based on consensus between maintainers. In extreme cases, a simple majority voting process is invoked where each maintainer receives one vote. Proposals and ideas can either be submitted for agreement via a github issue or PR, or by sending an email to . In general, we prefer that technical issues and maintainer membership are amicably worked out between the persons involved. If a dispute cannot be decided independently, get a third-party maintainer (e.g. a mutual contact with some background on the issue, but not involved in the conflict) to intercede and the final decision will be made. Decision making process should be transparent to adhere to the principles of Vineyard project. The Vineyard is aligned with the CNCF Code of Conduct. Some contents in this documents have been borrowed from and ."
}
] |
{
"category": "Runtime",
"file_name": "GOVERNANCE.md",
"project_name": "Vineyard",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "As a rook-ceph user, I should be able to setup mirroring across multiple clusters with overlapping networks in order to protect my data in case of any disaster. Sometimes users wish to connect two Kubernetes clusters into a single logical cluster. Often, both clusters may have a standardized install with overlapping CIDRs (Service CIDR and/or Pod network CIDR). The Kubernetes \"sig-multicluster\" community SIG defines a Multi-Cluster Services (MCS) API for providing cross-cluster connectivity from pods to remote services using global IPs. In order to support Kubernetes clusters connected by an MCS API-compatible application, Rook needs to use \"clusterset\" IPs instead of the Services' cluster IPs. Peer clusters should be connected using an MCS API compatible application. For example For scenarios like RBD mirroring, peers need direct access to Ceph OSDs. Each OSD will have to have a standard ClusterIP Service created for it to allow this. The OSD Service will be created only when multi-cluster networking support is enabled. The reference implementation used for development of this feature is . In the reference implementation, it is important for Services to be of `type` ClusterIP. Headless Services don't have internal routing between OSDs local to a cluster or any other Ceph daemons local to the cluster. Provide an option to enable `multiClusterService` in the `cephCluster` CR ``` yaml spec: network: multiClusterService: enabled: true ``` `ServiceExport` CR is used to specify which services should be exposed across all the clusters in the cluster set. The exported service becomes accessible as `<service>.<ns>.svc.clusterset.local`. Create ServiceExport resource for each mon and OSD service. ```yaml apiVersion: multicluster.x-k8s.io/v1alpha1 kind: ServiceExport name: <name> namespace: rook-ceph ``` Here, the ServiceExport resource name should be the name of the service that should be exported. For each mon and OSDs service: Create a corresponding `ServiceExport` resource. Verify that status of the `ServiceExport` should be `true`. Sample Status: ```yaml Status: Conditions: Last Transition Time: 2020-12-01T12:35:32Z Message: Awaiting sync of the ServiceImport to the broker Reason: AwaitingSync Status: False Type: Valid Last Transition Time: 2020-12-01T12:35:32Z Message: Service was successfully synced to the broker Reason: Status: True Type: Valid ``` Obtain the global IP for the exported service by issuing a DNS query to `<service>.<ns>.svc.clusterset.local`. Use this IP in the `--public-addr` flag when creating the mon or OSD deployment. The OSD will bind to the POD IP with the flag `--public-bind-addr` Ensure that mon endpoints configMap has the global IPs. The mons don't work on updating the IPs after they're already running. The monmap remembers the mon Public IP, so if it changes, they will see it as an error state and not respond on the new one. If user enables `multiClusterService` on an existing cluster where mons are already using the Cluster IP of the kubernetes service, then the operator should failover each mon to start a new mon."
}
] |
{
"category": "Runtime",
"file_name": "multi-cluster-service.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
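As a hedged illustration of the design above, the sketch below creates a `ServiceExport` for every mon and OSD Service in the `rook-ceph` namespace and then resolves one clusterset DNS name. It assumes an MCS-compatible implementation (for example Submariner) is already installed and that the Services carry the usual `app=rook-ceph-mon` / `app=rook-ceph-osd` labels; adjust the selector if your labels differ.

```bash
#!/usr/bin/env bash
# Sketch: export Rook mon/OSD Services across the cluster set and verify DNS.
# The label selector and the busybox probe image are assumptions.
set -euo pipefail
NS=rook-ceph

for svc in $(kubectl -n "${NS}" get svc -l 'app in (rook-ceph-mon,rook-ceph-osd)' \
                 -o jsonpath='{.items[*].metadata.name}'); do
    cat <<EOF | kubectl apply -f -
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: ${svc}
  namespace: ${NS}
EOF
done

# Each ServiceExport should report a Valid/synced condition once accepted.
kubectl -n "${NS}" get serviceexport

# Exported services become reachable as <service>.<ns>.svc.clusterset.local.
first=$(kubectl -n "${NS}" get serviceexport -o jsonpath='{.items[0].metadata.name}')
kubectl -n "${NS}" run dns-probe -i --rm --restart=Never --image=busybox -- \
    nslookup "${first}.${NS}.svc.clusterset.local"
```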
[
{
"data": "layout: global title: List of Metrics There are two types of metrics in Alluxio, cluster-wide aggregated metrics, and per-process detailed metrics. Cluster metrics are collected and calculated by the leading master and displayed in the metrics tab of the web UI. These metrics are designed to provide a snapshot of the cluster state and the overall amount of data and metadata served by Alluxio. Process metrics are collected by each Alluxio process and exposed in a machine-readable format through any configured sinks. Process metrics are highly detailed and are intended to be consumed by third-party monitoring tools. Users can then view fine-grained dashboards with time-series graphs of each metric, such as data transferred or the number of RPC invocations. Metrics in Alluxio have the following format for master node metrics: ``` Master.[metricName].[tag1].[tag2]... ``` Metrics in Alluxio have the following format for non-master node metrics: ``` [processType].[metricName].[tag1].[tag2]...[hostName] ``` There is generally an Alluxio metric for every RPC invocation, to Alluxio or to the under store. Tags are additional pieces of metadata for the metric such as user name or under storage location. Tags can be used to further filter or aggregate on various characteristics. Workers and clients send metrics data to the Alluxio master through heartbeats. The interval is defined by property `alluxio.master.worker.heartbeat.interval` and `alluxio.user.metrics.heartbeat.interval` respectively. Bytes metrics are aggregated value from workers or clients. Bytes throughput metrics are calculated on the leading master. The values of bytes throughput metrics equal to bytes metrics counter value divided by the metrics record time and shown as bytes per minute. <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.cluster-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td markdown=\"span\">{{ item.metricType }}</td> <td markdown=\"span\">{{ site.data.generated.en.cluster-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> Metrics shared by the all Alluxio server and client processes. <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.process-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td markdown=\"span\">{{ item.metricType }}</td> <td markdown=\"span\">{{ site.data.generated.en.process-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> Metrics shared by the Alluxio server processes. 
<table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.server-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td markdown=\"span\">{{ item.metricType }}</td> <td markdown=\"span\">{{ site.data.generated.en.server-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.master-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td>{{ item.metricType }}</td> <td>{{ site.data.generated.en.master-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> <table class=\"table table-striped\"> <tr> <th>Metric Name</th> <th>Description</th> </tr> <tr> <td markdown=\"span\">`Master.CapacityTotalTier{TIER_NAME}`</td> <td markdown=\"span\">Total capacity in tier `{TIER_NAME}` of the Alluxio file system in bytes</td> </tr> <tr> <td markdown=\"span\">`Master.CapacityUsedTier{TIER_NAME}`</td> <td markdown=\"span\">Used capacity in tier `{TIER_NAME}` of the Alluxio file system in bytes</td> </tr> <tr> <td markdown=\"span\">`Master.CapacityFreeTier{TIER_NAME}`</td> <td markdown=\"span\">Free capacity in tier `{TIER_NAME}` of the Alluxio file system in bytes</td> </tr> <tr> <td markdown=\"span\">`Master.UfsSessionCount-Ufs:{UFS_ADDRESS}`</td> <td markdown=\"span\">The total number of currently opened UFS sessions to connect to the given `{UFS_ADDRESS}`</td> </tr> <tr> <td markdown=\"span\">`Master.{UFSRPCNAME}.UFS:{UFSADDRESS}.UFSTYPE:{UFS_TYPE}.User:{USER}`</td> <td markdown=\"span\">The details UFS rpc operation done by the current master</td> </tr> <tr> <td markdown=\"span\">`Master.PerUfsOp{UFSRPCNAME}.UFS:{UFS_ADDRESS}`</td> <td markdown=\"span\">The aggregated number of UFS operation `{UFSRPCNAME}` ran on UFS `{UFS_ADDRESS}` by leading master</td> </tr> <tr> <td markdown=\"span\">`Master.{LEADINGMASTERRPC_NAME}`</td> <td markdown=\"span\">The duration statistics of RPC calls exposed on leading master</td> </tr> </table> <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.worker-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td>{{ item.metricType }}</td> <td>{{ site.data.generated.en.worker-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td"
},
{
"data": "<td markdown=\"span\">The total number of currently opened UFS sessions to connect to the given `{UFS_ADDRESS}`</td> </tr> <tr> <td markdown=\"span\">`Worker.{RPC_NAME}`</td> <td markdown=\"span\">The duration statistics of RPC calls exposed on workers</td> </tr> </table> Each client metric will be recorded with its local hostname or `alluxio.user.app.id` is configured. If `alluxio.user.app.id` is configured, multiple clients can be combined into a logical application. <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.client-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td>{{ item.metricType }}</td> <td>{{ site.data.generated.en.client-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> Fuse is a long-running Alluxio client. Depending on the launching ways, Fuse metrics show as client metrics when Fuse client is launching in a standalone AlluxioFuse process. worker metrics when Fuse client is embedded in the AlluxioWorker process. Fuse metrics includes: <table class=\"table table-striped\"> <tr><th style=\"width:35%\">Name</th><th style=\"width:15%\">Type</th><th style=\"width:50%\">Description</th></tr> {% for item in site.data.generated.fuse-metrics %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.metricName }}\"></a> `{{ item.metricName }}`</td> <td>{{ item.metricType }}</td> <td>{{ site.data.generated.en.fuse-metrics[item.metricName] }}</td> </tr> {% endfor %} </table> Fuse reading/writing file count can be used as the indicators for Fuse application pressure. If a large amount of concurrent read/write occur in a short period of time, each of the read/write operations may take longer time to finish. When a user or an application runs a filesystem command under Fuse mount point, this command will be processed and translated by operating system which will trigger the related Fuse operations exposed in . The count of how many times each operation is called, and the duration of each call will be recorded with metrics name `Fuse.<FUSEOPERATIONNAME>` dynamically. The important Fuse metrics include: <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`Fuse.readdir`</td> <td markdown=\"span\">The duration metrics of listing a directory</td> </tr> <tr> <td markdown=\"span\">`Fuse.getattr`</td> <td markdown=\"span\">The duration metrics of getting the metadata of a file</td> </tr> <tr> <td markdown=\"span\">`Fuse.open`</td> <td markdown=\"span\">The duration metrics of opening a file for read or overwrite</td> </tr> <tr> <td markdown=\"span\">`Fuse.read`</td> <td markdown=\"span\">The duration metrics of reading a part of a file</td> </tr> <tr> <td markdown=\"span\">`Fuse.create`</td> <td markdown=\"span\">The duration metrics of creating a file for write</td> </tr> <tr> <td markdown=\"span\">`Fuse.write`</td> <td markdown=\"span\">The duration metrics of writing a file</td> </tr> <tr> <td markdown=\"span\">`Fuse.release`</td> <td markdown=\"span\">The duration metrics of closing a file after read or write. 
Note that release is async so fuse threads will not wait for release to finish</td> </tr> <tr> <td markdown=\"span\">`Fuse.mkdir`</td> <td markdown=\"span\">The duration metrics of creating a directory</td> </tr> <tr> <td markdown=\"span\">`Fuse.unlink`</td> <td markdown=\"span\">The duration metrics of removing a file or a directory</td> </tr> <tr> <td markdown=\"span\">`Fuse.rename`</td> <td markdown=\"span\">The duration metrics of renaming a file or a directory</td> </tr> <tr> <td markdown=\"span\">`Fuse.chmod`</td> <td markdown=\"span\">The duration metrics of modifying the mode of a file or a directory</td> </tr> <tr> <td markdown=\"span\">`Fuse.chown`</td> <td markdown=\"span\">The duration metrics of modifying the user and/or group ownership of a file or a directory</td> </tr> </table> Fuse related metrics include: `Client.TotalRPCClients` shows the total number of RPC clients exist that is using to or can be used to connect to master or worker for operations. Worker metrics with `Direct` keyword. When Fuse is embedded in worker process, it can go through worker internal API to read from / write to this worker. The related metrics are ended with `Direct`. For example, `Worker.BytesReadDirect` shows how many bytes are served by this worker to its embedded Fuse client for read. If `alluxio.user.block.read.metrics.enabled=true` is configured, `Client.BlockReadChunkRemote` will be recorded. This metric shows the duration statistics of reading data from remote workers via gRPC. `Client.TotalRPCClients` and `Fuse.TotalCalls` metrics are good indicator of the current load of the Fuse applications. If applications"
},
{
"data": "Tensorflow) are running on top of Alluxio Fuse but these two metrics show a much lower value than before, the training job may be stuck with Alluxio. The following metrics are collected on each instance (Master, Worker or Client). <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`name`</td> <td markdown=\"span\">The name of the JVM</td> </tr> <tr> <td markdown=\"span\">`uptime`</td> <td markdown=\"span\">The uptime of the JVM</td> </tr> <tr> <td markdown=\"span\">`vendor`</td> <td markdown=\"span\">The current JVM vendor</td> </tr> </table> <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`PS-MarkSweep.count`</td> <td markdown=\"span\">Total number of mark and sweep</td> </tr> <tr> <td markdown=\"span\">`PS-MarkSweep.time`</td> <td markdown=\"span\">The time used to mark and sweep</td> </tr> <tr> <td markdown=\"span\">`PS-Scavenge.count`</td> <td markdown=\"span\">Total number of scavenge</td> </tr> <tr> <td markdown=\"span\">`PS-Scavenge.time`</td> <td markdown=\"span\">The time used to scavenge</td> </tr> </table> Alluxio provides overall and detailed memory usage information. Detailed memory usage information of code cache, compressed class space, metaspace, PS Eden space, PS old gen, and PS survivor space is collected in each process. A subset of the memory usage metrics are listed as following: <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`total.committed`</td> <td markdown=\"span\">The amount of memory in bytes that is guaranteed to be available for use by the JVM</td> </tr> <tr> <td markdown=\"span\">`total.init`</td> <td markdown=\"span\">The amount of the memory in bytes that is available for use by the JVM</td> </tr> <tr> <td markdown=\"span\">`total.max`</td> <td markdown=\"span\">The maximum amount of memory in bytes that is available for use by the JVM</td> </tr> <tr> <td markdown=\"span\">`total.used`</td> <td markdown=\"span\">The amount of memory currently used in bytes</td> </tr> <tr> <td markdown=\"span\">`heap.committed`</td> <td markdown=\"span\">The amount of memory from heap area guaranteed to be available</td> </tr> <tr> <td markdown=\"span\">`heap.init`</td> <td markdown=\"span\">The amount of memory from heap area available at initialization</td> </tr> <tr> <td markdown=\"span\">`heap.max`</td> <td markdown=\"span\">The maximum amount of memory from heap area that is available</td> </tr> <tr> <td markdown=\"span\">`heap.usage`</td> <td markdown=\"span\">The amount of memory from heap area currently used in GB</td> </tr> <tr> <td markdown=\"span\">`heap.used`</td> <td markdown=\"span\">The amount of memory from heap area that has been used</td> </tr> <tr> <td markdown=\"span\">`pools.Code-Cache.used`</td> <td markdown=\"span\">Used memory of collection usage from the pool from which memory is used for compilation and storage of native code</td> </tr> <tr> <td markdown=\"span\">`pools.Compressed-Class-Space.used`</td> <td markdown=\"span\">Used memory of collection usage from the pool from which memory is use for class metadata</td> </tr> <tr> <td markdown=\"span\">`pools.PS-Eden-Space.used`</td> <td markdown=\"span\">Used memory of collection usage from the pool from which memory is initially allocated for most objects</td> </tr> 
<tr> <td markdown=\"span\">`pools.PS-Survivor-Space.used`</td> <td markdown=\"span\">Used memory of collection usage from the pool containing objects that have survived the garbage collection of the Eden space</td> </tr> </table> <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`loaded`</td> <td markdown=\"span\">The total number of classes loaded</td> </tr> <tr> <td markdown=\"span\">`unloaded`</td> <td markdown=\"span\">The total number of unloaded classes</td> </tr> </table> <table class=\"table table-striped\"> <tr> <th style=\"width:35%\">Metric Name</th> <th style=\"width:65%\">Description</th> </tr> <tr> <td markdown=\"span\">`count`</td> <td markdown=\"span\">The current number of live threads</td> </tr> <tr> <td markdown=\"span\">`daemon.count`</td> <td markdown=\"span\">The current number of live daemon threads</td> </tr> <tr> <td markdown=\"span\">`peak.count`</td> <td markdown=\"span\">The peak live thread count</td> </tr> <tr> <td markdown=\"span\">`total_started.count`</td> <td markdown=\"span\">The total number of threads started</td> </tr> <tr> <td markdown=\"span\">`deadlock.count`</td> <td markdown=\"span\">The number of deadlocked threads</td> </tr> <tr> <td markdown=\"span\">`deadlock`</td> <td markdown=\"span\">The call stack of each thread related deadlock</td> </tr> <tr> <td markdown=\"span\">`new.count`</td> <td markdown=\"span\">The number of threads with new state</td> </tr> <tr> <td markdown=\"span\">`blocked.count`</td> <td markdown=\"span\">The number of threads with blocked state</td> </tr> <tr> <td markdown=\"span\">`runnable.count`</td> <td markdown=\"span\">The number of threads with runnable state</td> </tr> <tr> <td markdown=\"span\">`terminated.count`</td> <td markdown=\"span\">The number of threads with terminated state</td> </tr> <tr> <td markdown=\"span\">`timed_waiting.count`</td> <td markdown=\"span\">The number of"
}
] |
{
"category": "Runtime",
"file_name": "Metrics-List.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
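To inspect the metrics described in the record above without configuring a sink, the leading master's web server can be scraped directly. The sketch below assumes the default master web port (19999) and a JSON metrics endpoint at `/metrics/json/`; both are assumptions about a default deployment, so adjust them to match yours. The `Cluster.Bytes` prefix follows the cluster metric naming used in the record.

```bash
#!/usr/bin/env bash
# Sketch: list cluster-wide byte counters from the leading master's metrics endpoint.
# ALLUXIO_MASTER and the /metrics/json/ path are assumptions; override as needed.
set -euo pipefail
ALLUXIO_MASTER="${ALLUXIO_MASTER:-localhost:19999}"

# The endpoint serves Dropwizard-style JSON with gauges, counters and timers.
curl -sf "http://${ALLUXIO_MASTER}/metrics/json/" \
  | jq -r '.counters
           | to_entries[]
           | select(.key | startswith("Cluster.Bytes"))
           | "\(.key)\t\(.value.count)"' \
  | sort
```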
[
{
"data": "`BlobNode` is a single-machine storage engine module, mainly responsible for organizing data to disk, reading data from disk, deleting data from disk, and executing background tasks. BlobNode configuration is based on the , and the following configuration instructions mainly apply to private configuration for BlobNode. | Configuration Item | Description | Required | |:|:|:| | Public Configuration | Such as server ports, running logs, and audit logs, refer to the section | Yes | | disks | List of disk paths to register | Yes | | disable_sync | Whether to disable disk sync. A value of true means that sync is disabled, which can improve write performance. | No | | rack | Rack number. This field is required when clustermgr opens rack isolation. | No | | host | Blobnode service address for this machine, which needs to be registered with clustermgr | Yes | | mustmountpoint | Verify whether the registered path is a mount point. It is recommended to enable this in production environments. | No | | data_qos | Data QoS hierarchical flow control. It is recommended to enable this in production environments. | No | | meta_config | Metadata-related configuration, including the cache size of RocksDB. | No | | clustermgr | Clustermgr service address information, etc. | Yes | ```json { \"cluster_id\": \"cluster ID\", \"idc\": \"IDC\", \"rack\": \"rack\", \"host\": \"service address IP:port\", \"droppedbidrecord\": { \"dir\": \"directory for dropped bid records\", \"chunkbits\": \"n: file size, 2^n bytes\" }, \"disks\": [ { \"auto_format\": \"whether to automatically create directories\", \"disable_sync\": \"whether to disable disk sync\", \"path\": \"data storage directory\", \"max_chunks\": \"maximum number of chunks per disk\" }, { \"auto_format\": \"same as above\", \"disable_sync\": \"same as above\", \"path\": \"same as above: each disk needs to be configured and registered with clustermgr\", \"max_chunks\": \"same as above\" } ], \"disk_config\": { \"diskreservedspace_B\": \"reserved space per disk. The available space will be reduced by this value. Default is 60GB.\", \"compactreservedspace_B\": \"reserved space for compression per disk. Default is 20GB.\", \"chunkprotectionM\": \"protection period for released chunks. 
If the last modification time is within this period, release is not allowed.\", \"chunkcompactinterval_S\": \"interval for the compression task\", \"chunkcleaninterval_S\": \"interval for the chunk cleaning task\", \"chunkgccreatetimeprotection_M\": \"protection period for the creation time of chunks during cleaning\", \"chunkgcmodifytimeprotection_M\": \"protection period for the modification time of chunks during cleaning\", \"diskusageinterval_S\": \"interval for updating disk space usage\", \"diskcleantrashintervalS\": \"interval for cleaning disk garbage data\", \"disktrashprotection_M\": \"protection period for disk garbage data\", \"allowcleantrash\": \"whether to allow cleaning garbage\", \"disablemodifyin_compacting\": \"whether to disallow chunk modification during compression\", \"compactminsize_threshold\": \"minimum chunk size for compression\", \"compacttriggerthreshold\": \"chunk size at which compression is triggered\", \"compactemptyrate_threshold\": \"hole rate at which chunks can be compressed\", \"needcompactcheck\": \"whether to check the consistency of data before and after compression\", \"allowforcecompact\": \"whether to allow forced compression through the interface, bypassing compression conditions\", \"compactbatchsize\": \"number of bids per batch for compression\", \"mustmountpoint\": \"whether the data storage directory must be a mount point\", \"metricreportinterval_S\": \"interval for metric reporting\", \"data_qos\": { \"diskbandwidthMBPS\": \"bandwidth threshold per disk. When the bandwidth reaches this value, the bandwidth limit for each level is adjusted to bandwidth_MBPS*factor.\", \"disk_iops\": \"IOPS threshold per disk. When the IOPS reaches this value, the IOPS limit for each level is adjusted to the level configuration's"
},
{
"data": "\"flow_conf\": { \"level0\": { \"bandwidth_MBPS\": \"(level0 is for user read/write IO) bandwidth limit in MB/s\", \"iops\": \"IOPS limit\", \"factor\": \"limiting factor\" }, \"level1\": { \"bandwidth_MBPS\": \"(level1 is for shard repair IO) bandwidth limit in MB/s\", \"iops\": \"same as above\", \"factor\": \"same as above\" }, \"level2\": { \"bandwidth_MBPS\": \"(level2 is for disk repair, delete, and compact IO) bandwidth limit in MB/s\", \"iops\": \"same as above\", \"factor\": \"same as above\" }, \"level3\": { \"bandwidth_MBPS\": \"(level3 is for balance, drop, and manual migrate IO) bandwidth limit in MB/s\", \"iops\": \"same as above\", \"factor\": \"same as above\" }, \"level4\": { \"bandwidth_MBPS\": \"(level4 is for inspect IO) bandwidth limit in MB/s\", \"iops\": \"same as above\", \"factor\": \"same as above\" } } } }, \"meta_config\": { \"metarootprefix\": \"unified meta data storage directory configuration. Can be configured to an SSD disk to improve metadata read/write speed. Not configured by default.\", \"batchprocesscount\": \"number of batch processing requests for metadata, including deletion and writing\", \"support_inline\": \"whether to enable inline writing of small files to metadata storage RocksDB\", \"tinyfilethresholdB\": \"threshold for small files, can be set to less than or equal to 128k\", \"writepriratio\": \"ratio of write requests for metadata batch processing. The specific number is batchprocesscount*writepriratio\", \"sync\": \"whether to enable disk sync\", \"rocksdb_option\": { \"lrucache\": \"cache size\", \"writebuffersize\": \"RocksDB write buffer size\" }, \"meta_qos\": { \"level0\": { \"iops\": \"IOPS limit for user read/write IO, metadata flow control configuration\" }, \"level1\": { \"iops\": \"IOPS limit for shard repair IO\" }, \"level2\": { \"iops\": \"IOPS limit for disk repair, delete, and compact IO\" }, \"level3\": { \"iops\": \"IOPS limit for balance, drop, and manual migrate IO\" }, \"level4\": { \"iops\": \"IOPS limit for inspect IO\" } } }, \"clustermgr\": { \"hosts\": \"clustermgr service address\" }, \"blobnode\": { \"clienttimeoutms\": \"timeout for blobnode client used in background tasks\" }, \"scheduler\": { \"hostsyncinterval_ms\": \"backend node synchronization time for scheduler client used in background tasks\" }, \"chunkprotectionperiod_S\": \"protection period for expired epoch chunks based on creation time\", \"putqpslimitperdisk\": \"concurrency control for single-disk writes\", \"getqpslimitperdisk\": \"concurrency control for single-disk reads\", \"getqpslimitperkey\": \"concurrency control for reads of a single shard\", \"deleteqpslimitperdisk\": \"concurrency control for single-disk deletions\", \"shardrepairconcurrency\": \"concurrency control for background task shard repair\", \"flock_filename\": \"process file lock path\" } ``` ```json { \"bind_addr\": \":8889\", \"log\": { \"level\": 2 }, \"cluster_id\": 10001, \"idc\": \"bjht\", \"rack\": \"HT02-B11-F4-402-0406\", \"host\": \"http://10.39.34.185:8889\", \"droppedbidrecord\": { \"dir\": \"/home/service/ebs-blobnode/package/droppedbids/\", \"chunkbits\": 29 }, \"disks\": [ {\"autoformat\": true,\"disablesync\": true,\"path\": \"/home/service/var/data1\"}, {\"autoformat\": true,\"disablesync\": true,\"path\": \"/home/service/var/data2\"} ], \"disk_config\": { \"chunkcleaninterval_S\": 60, \"chunkprotectionM\": 30, \"diskcleantrashintervalS\": 60, \"disktrashprotection_M\": 1440, \"metricreportinterval_S\": 300, \"needcompactcheck\": true, 
\"allowforcecompact\": true, \"allowcleantrash\": true, \"mustmountpoint\": true, \"data_qos\": { \"diskbandwidthMBPS\": 200, \"disk_iops\": 8000, \"flow_conf\": { \"level0\": { \"bandwidth_MBPS\": 200, \"iops\": 4000, \"factor\": 1 }, \"level1\": { \"bandwidth_MBPS\": 40, \"iops\": 2000, \"factor\": 0.5 }, \"level2\": { \"bandwidth_MBPS\": 40, \"iops\": 2000, \"factor\": 0.5 }, \"level3\": { \"bandwidth_MBPS\": 40, \"iops\": 2000, \"factor\": 0.5 }, \"level4\": { \"bandwidth_MBPS\": 20, \"iops\": 1000, \"factor\": 0.5 } } } }, \"meta_config\": { \"sync\": false, \"rocksdb_option\": { \"lrucache\": 268435456, \"writebuffersize\": 52428800 }, \"meta_qos\": { \"level0\": { \"iops\": 8000 }, \"level1\": { \"iops\": 8000 }, \"level2\": { \"iops\": 8000 }, \"level3\": { \"iops\": 8000 }, \"level4\": { \"iops\": 8000 } } }, \"clustermgr\": { \"hosts\": [ \"http://10.39.30.78:9998\", \"http://10.39.32.224:9998\", \"http://10.39.32.234:9998\" ], \"transport_config\": { \"maxconnsper_host\": 4, \"auth\": { \"enable_auth\": false, \"secret\": \"b2e5e2ed-6fca-47ce-bfbc-5e8f0650603b\" } } }, \"blobnode\": { \"clienttimeoutms\": 5000 }, \"scheduler\": { \"hostsyncinterval_ms\": 3600000 }, \"chunkprotectionperiod_S\": 600, \"putqpslimitperdisk\": 1024, \"getqpslimitperdisk\": 1024, \"getqpslimitperkey\": 1024, \"deleteqpslimitperdisk\": 64, \"shardrepairconcurrency\": 100, \"flockfilename\": \"/home/service/ebs-blobnode/package/run/blobnode.0.flock\", \"auditlog\": { \"logdir\": \"/home/service/ebs-blobnode/_package/run/auditlog/ebs-blobnode\", \"chunkbits\": 29, \"logfilesuffix\": \".log\", \"backup\": 10, \"filters\": [ {\"should\": {\"match\": {\"path\": [\"list\", \"metrics\", \"/shard/get/\"]}}} ], \"metric_config\": { \"idc\": \"bjht\", \"service\": \"BLOBNODE\", \"team\": \"ocs\", \"enablehttpmethod\": true, \"enablereqlength_cnt\": true, \"enableresplength_cnt\": true, \"enablerespduration\": true, \"maxapilevel\": 3, \"size_buckets\": [ 0,"
}
] |
{
"category": "Runtime",
"file_name": "blobnode.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
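The `data_qos` settings in the record above are easier to reason about with a worked example: once a disk reaches its bandwidth or IOPS threshold, each flow level is throttled to its configured limit multiplied by `factor`. The sketch below is illustrative only; it reads the per-level numbers out of a local copy of the configuration with `jq` (the file name `blobnode.conf` is an assumption) and prints the effective limits under disk saturation.

```bash
#!/usr/bin/env bash
# Sketch: print effective per-level QoS limits when a disk is saturated,
# i.e. bandwidth_MBPS * factor and iops * factor for every flow level.
# Assumes the blobnode JSON config is available as ./blobnode.conf.
set -euo pipefail
CONF="${1:-blobnode.conf}"

jq -r '.disk_config.data_qos.flow_conf
       | to_entries[]
       | [ .key,
           ((.value.bandwidth_MBPS * .value.factor) | tostring) + " MB/s",
           ((.value.iops * .value.factor) | floor | tostring) + " iops" ]
       | @tsv' "${CONF}" \
  | column -t
```

With the sample values shown above, level1 through level3 drop from 40 MB/s and 2000 IOPS to 20 MB/s and 1000 IOPS while the disk stays at its 200 MB/s or 8000 IOPS threshold, and level0 (user I/O) keeps its full allocation because its factor is 1.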
[
{
"data": "name: Enhancement Request about: Suggest an enhancement to the Submariner project labels: enhancement <!-- Please only use this template for submitting enhancement requests --> What would you like to be added: Why is this needed:"
}
] |
{
"category": "Runtime",
"file_name": "enhancement.md",
"project_name": "Submariner",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Cleanup Rook provides the following clean up options: To tear down the cluster, the following resources need to be cleaned up: The resources created under Rook's namespace (default `rook-ceph`) such as the Rook operator created by `operator.yaml` and the cluster CR `cluster.yaml`. `/var/lib/rook/rook-ceph`: Path on each host in the cluster where configuration is stored by the ceph mons and osds Devices used by the OSDs If the default namespaces or paths such as `dataDirHostPath` are changed in the example yaml files, these namespaces and paths will need to be changed throughout these instructions. If tearing down a cluster frequently for development purposes, it is instead recommended to use an environment such as that can easily be reset without worrying about any of these steps. First clean up the resources from applications that consume the Rook storage. These commands will clean up the resources from the example application and walkthroughs (unmount volumes, delete volume claims, etc). ```console kubectl delete -f ../wordpress.yaml kubectl delete -f ../mysql.yaml kubectl delete -n rook-ceph cephblockpool replicapool kubectl delete storageclass rook-ceph-block kubectl delete -f csi/cephfs/kube-registry.yaml kubectl delete storageclass csi-cephfs ``` !!! important After applications have been cleaned up, the Rook cluster can be removed. It is important to delete applications before removing the Rook operator and Ceph cluster. Otherwise, volumes may hang and nodes may require a restart. !!! warning DATA WILL BE PERMANENTLY DELETED AFTER DELETING THE `CephCluster` To instruct Rook to wipe the host paths and volumes, edit the `CephCluster` and add the `cleanupPolicy`: ```console kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{\"spec\":{\"cleanupPolicy\":{\"confirmation\":\"yes-really-destroy-data\"}}}' ``` Once the cleanup policy is enabled, any new configuration changes in the CephCluster will be blocked. Nothing will happen until the deletion of the CR is requested, so this `cleanupPolicy` change can still be reverted if needed. Checkout more details about the `cleanupPolicy` Delete the `CephCluster` CR. ```console kubectl -n rook-ceph delete cephcluster rook-ceph ``` Verify that the cluster CR has been deleted before continuing to the next step. ```console kubectl -n rook-ceph get cephcluster ``` If the `cleanupPolicy` was applied, wait for the `rook-ceph-cleanup` jobs to be completed on all the"
},
{
"data": "These jobs will perform the following operations: Delete the namespace directory under `dataDirHostPath`, for example `/var/lib/rook/rook-ceph`, on all the nodes Wipe the data on the drives on all the nodes where OSDs were running in this cluster !!! note The cleanup jobs might not start if the resources created on top of Rook Cluster are not deleted completely. See Remove the Rook operator, RBAC, and CRDs, and the `rook-ceph` namespace. ```console kubectl delete -f operator.yaml kubectl delete -f common.yaml kubectl delete -f crds.yaml ``` !!! attention The final cleanup step requires deleting files on each host in the cluster. All files under the `dataDirHostPath` property specified in the cluster CRD will need to be deleted. Otherwise, inconsistent state will remain when a new cluster is started. If the `cleanupPolicy` was not added to the CephCluster CR before deleting the cluster, these manual steps are required to tear down the cluster. Connect to each machine and delete the namespace directory under `dataDirHostPath`, for example `/var/lib/rook/rook-ceph`. Disks on nodes used by Rook for OSDs can be reset to a usable state. Note that these scripts are not one-size-fits-all. Please use them with discretion to ensure you are not removing data unrelated to Rook. A single disk can usually be cleared with some or all of the steps below. ```console DISK=\"/dev/sdX\" sgdisk --zap-all $DISK dd if=/dev/zero of=\"$DISK\" bs=1M count=100 oflag=direct,dsync blkdiscard $DISK partprobe $DISK ``` Ceph can leave LVM and device mapper data on storage drives, preventing them from being redeployed. These steps can clean former Ceph drives for reuse. Note that this only needs to be run once on each node. If you have only one Rook cluster and all Ceph disks are being wiped, run the following command. ```console ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove % rm -rf /dev/ceph-* rm -rf /dev/mapper/ceph--* ``` If disks are still reported locked, rebooting the node often helps clear LVM-related holds on disks. If there are multiple Ceph clusters and some disks are not wiped yet, it is necessary to manually determine which disks map to which device mapper devices. The most common issue cleaning up the cluster is that the `rook-ceph` namespace or the cluster CRD remain indefinitely in the `terminating` state. A namespace cannot be removed until all of its resources are removed, so determine which resources are pending termination. If a pod is still terminating, consider forcefully terminating the pod (`kubectl -n rook-ceph delete pod <name>`). ```console kubectl -n rook-ceph get pod ``` If the cluster CRD still exists even though it has been deleted, see the next section on removing the"
},
{
"data": "```console kubectl -n rook-ceph get cephcluster ``` When a Cluster CRD is created, a is added automatically by the Rook operator. The finalizer will allow the operator to ensure that before the cluster CRD is deleted, all block and file mounts will be cleaned up. Without proper cleanup, pods consuming the storage will be hung indefinitely until a system reboot. The operator is responsible for removing the finalizer after the mounts have been cleaned up. If for some reason the operator is not able to remove the finalizer (i.e., the operator is not running anymore), delete the finalizer manually with the following command: ```console for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do kubectl get -n rook-ceph \"$CRD\" -o name | \\ xargs -I {} kubectl patch -n rook-ceph {} --type merge -p '{\"metadata\":{\"finalizers\": []}}' done ``` If the namespace is still stuck in Terminating state, check which resources are holding up the deletion and remove their finalizers as well: ```console kubectl api-resources --verbs=list --namespaced -o name \\ | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph ``` Rook adds a finalizer `ceph.rook.io/disaster-protection` to resources critical to the Ceph cluster so that the resources will not be accidentally deleted. The operator is responsible for removing the finalizers when a CephCluster is deleted. If the operator is not able to remove the finalizers (i.e., the operator is not running anymore), remove the finalizers manually: ```console kubectl -n rook-ceph patch configmap rook-ceph-mon-endpoints --type merge -p '{\"metadata\":{\"finalizers\": []}}' kubectl -n rook-ceph patch secrets rook-ceph-mon --type merge -p '{\"metadata\":{\"finalizers\": []}}' ``` To keep your data safe in the cluster, Rook disallows deleting critical cluster resources by default. To override this behavior and force delete a specific custom resource, add the annotation `rook.io/force-deletion=\"true\"` to the resource and then delete it. Rook will start a cleanup job that will delete all the related ceph resources created by that custom resource. For example, run the following commands to clean the `CephFilesystemSubVolumeGroup` resource named `my-subvolumegroup` ``` console kubectl -n rook-ceph annotate cephfilesystemsubvolumegroups.ceph.rook.io my-subvolumegroup rook.io/force-deletion=\"true\" kubectl -n rook-ceph delete cephfilesystemsubvolumegroups.ceph.rook.io my-subvolumegroup ``` Once the cleanup job is completed successfully, Rook will remove the finalizers from the deleted custom resource. This cleanup is supported only for the following custom resources: | Custom Resource | Ceph Resources to be cleaned up | | -- | - | | CephFilesystemSubVolumeGroup | CSI stored RADOS OMAP details for pvc/volumesnapshots, subvolume snapshots, subvolume clones, subvolumes | | CephBlockPoolRadosNamespace | Images and snapshots in the RADOS namespace|"
}
] |
{
"category": "Runtime",
"file_name": "ceph-teardown.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
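For the multi-cluster case above, where the record says you must manually determine which disks map to which device mapper devices, the read-only sketch below uses standard LVM and block-device tooling to show the ceph mappings and the physical disks behind them before anything is wiped. These are generic `dmsetup`/`lvs`/`lsblk` commands rather than Rook-specific tooling, and the `ceph--` name prefix is an assumption about how `ceph-volume` names its volume groups.

```bash
#!/usr/bin/env bash
# Sketch: map ceph device mapper entries back to their physical disks.
# Read-only; run it on each storage node before wiping any drive.
set -euo pipefail

echo "== device mapper entries that look like ceph-volume LVs =="
dmsetup ls | grep '^ceph--' || echo "(none found)"

echo
echo "== ceph logical volumes and the devices backing them =="
lvs -o vg_name,lv_name,devices --noheadings 2>/dev/null | grep ceph || echo "(none found)"

echo
echo "== full block device tree (disk -> LVM member -> holder) =="
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT
```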
[
{
"data": "Adding a new FOP ================ Steps to be followed when adding a new FOP to GlusterFS: Edit `glusterfs.h` and add a `GFFOP*` constant. Edit `xlator.[ch]` and: add the new prototype for fop and callback. edit `xlator_fops` structure. Edit `xlator.c` and add to fill_defaults. Edit `protocol.h` and add struct necessary for the new FOP. Edit `defaults.[ch]` and provide default implementation. Edit `call-stub.[ch]` and provide stub implementation. Edit client-protocol and add your FOP. Edit server-protocol and add your FOP. Implement your FOP in any translator for which the default implementation is not sufficient."
}
] |
{
"category": "Runtime",
"file_name": "adding-fops.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
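Because the checklist in the record above spans many files, one low-tech way to avoid missing a spot is to grep the source tree for an existing FOP and mirror every hit for the new one. The sketch below is a generic illustration of that approach; the identifier patterns (`GF_FOP_*`, `fop_*`, `default_*`) and the choice of `stat` as the reference FOP are assumptions about a typical GlusterFS checkout rather than part of the original document.

```bash
#!/usr/bin/env bash
# Sketch: list every file that wires up an existing FOP ("stat" by default),
# as a checklist of the places a new FOP will likely need parallel edits.
set -euo pipefail
REF_FOP="${1:-stat}"   # existing FOP used as the template
SRC="${2:-.}"          # root of the GlusterFS source tree

grep -rniE "GF_FOP_${REF_FOP}\b|fop_${REF_FOP}\b|default_${REF_FOP}\b" \
     --include='*.c' --include='*.h' "${SRC}" \
  | cut -d: -f1 | sort | uniq -c | sort -rn
```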
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display individual pcap recorder ``` cilium-dbg recorder get <recorder id> [flags] ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Introspect or mangle pcap recorder"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_recorder_get.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
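A minimal usage note for the command reference above, with the recorder ID `1` used purely as an illustrative placeholder:

```bash
# Fetch one recorder as JSON and pretty-print it (requires jq).
cilium-dbg recorder get 1 -o json | jq '.'

# The same recorder rendered as YAML, using the other output format listed above.
cilium-dbg recorder get 1 -o yaml
```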
[
{
"data": "All notable changes to this project will be documented in this file. This project adheres to . : JSON: Fix complex number encoding with negative imaginary part. Thanks to @hemantjadon. : JSON: Fix inaccurate precision when encoding float32. Enhancements: : Avoid panicking in Sampler core if the level is out of bounds. : Reduce the size of BufferedWriteSyncer by aligning the fields better. Thanks to @lancoLiu and @thockin for their contributions to this release. Bugfixes: : Fix nil dereference in logger constructed by `zap.NewNop`. Enhancements: : Add `zapcore.BufferedWriteSyncer`, a new `WriteSyncer` that buffers messages in-memory and flushes them periodically. : Add `zapio.Writer` to use a Zap logger as an `io.Writer`. : Add `zap.WithClock` option to control the source of time via the new `zapcore.Clock` interface. : Avoid panicking in `zap.SugaredLogger` when arguments of `w` methods don't match expectations. : Add support for filtering by level or arbitrary matcher function to `zaptest/observer`. : Comply with `io.StringWriter` and `io.ByteWriter` in Zap's `buffer.Buffer`. Thanks to @atrn0, @ernado, @heyanfu, @hnlq715, @zchee for their contributions to this release. Bugfixes: : Encode `<nil>` for nil `error` instead of a panic. , : Update minimum version constraints to address vulnerabilities in dependencies. Enhancements: : Improve alignment of fields of the Logger struct, reducing its size from 96 to 80 bytes. : Support `grpclog.LoggerV2` in zapgrpc. : Support URL-encoded POST requests to the AtomicLevel HTTP handler with the `application/x-www-form-urlencoded` content type. : Support multi-field encoding with `zap.Inline`. : Speed up SugaredLogger for calls with a single string. : Add support for filtering by field name to `zaptest/observer`. Thanks to @ash2k, @FMLS, @jimmystewpot, @Oncilla, @tsoslow, @tylitianrui, @withshubh, and @wziww for their contributions to this release. Bugfixes: : Fix missing newline in IncreaseLevel error messages. : Fix panic in JSON encoder when encoding times or durations without specifying a time or duration encoder. : Honor CallerSkip when taking stack traces. : Fix the default file permissions to use `0666` and rely on the umask instead. : Encode `<nil>` for nil `Stringer` instead of a panic error log. Enhancements: : Added `zapcore.TimeEncoderOfLayout` to easily create time encoders for custom layouts. : Added support for a configurable delimiter in the console encoder. : Optimize console encoder by pooling the underlying JSON encoder. : Add ability to include the calling function as part of logs. : Add `StackSkip` for including truncated stacks as a field. : Add options to customize Fatal behaviour for better testability. Thanks to @SteelPhase, @tmshn, @lixingwang, @wyxloading, @moul, @segevfiner, @andy-retailnext and @jcorbin for their contributions to this release. Bugfixes: : Fix handling of `Time` values out of `UnixNano` range. : Fix `IncreaseLevel` being reset after a call to `With`. Enhancements: : Add `WithCaller` option to supersede the `AddCaller`"
},
{
"data": "This allows disabling annotation of log entries with caller information if previously enabled with `AddCaller`. : Deprecate `NewSampler` constructor in favor of `NewSamplerWithOptions` which supports a `SamplerHook` option. This option adds support for monitoring sampling decisions through a hook. Thanks to @danielbprice for their contributions to this release. Bugfixes: : Fix panic on attempting to build a logger with an invalid Config. : Vendoring Zap with `go mod vendor` no longer includes Zap's development-time dependencies. : Fix issue introduced in 1.14.0 that caused invalid JSON output to be generated for arrays of `time.Time` objects when using string-based time formats. Thanks to @YashishDua for their contributions to this release. Enhancements: : Optimize calls for disabled log levels. : Add millisecond duration encoder. : Add option to increase the level of a logger. : Optimize time formatters using `Time.AppendFormat` where possible. Thanks to @caibirdme for their contributions to this release. Enhancements: : Add `Intp`, `Stringp`, and other similar `p` field constructors to log pointers to primitives with support for `nil` values. Thanks to @jbizzle for their contributions to this release. Enhancements: : Migrate to Go modules. Enhancements: : Add `zapcore.OmitKey` to omit keys in an `EncoderConfig`. : Add `RFC3339` and `RFC3339Nano` time encoders. Thanks to @juicemia, @uhthomas for their contributions to this release. Bugfixes: : Fix `MapObjectEncoder.AppendByteString` not adding value as a string. : Fix incorrect call depth to determine caller in Go 1.12. Enhancements: : Add `zaptest.WrapOptions` to wrap `zap.Option` for creating test loggers. : Don't panic when encoding a String field. : Disable HTML escaping for JSON objects encoded using the reflect-based encoder. Thanks to @iaroslav-ciupin, @lelenanam, @joa, @NWilson for their contributions to this release. Bugfixes: : MapObjectEncoder should not ignore empty slices. Enhancements: : Reduce number of allocations when logging with reflection. , : Expose a registry for third-party logging sinks. Thanks to @nfarah86, @AlekSi, @JeanMertz, @philippgille, @etsangsplk, and @dimroc for their contributions to this release. Enhancements: : Make log level configurable when redirecting the standard library's logger. : Add a logger that writes to a `testing.TB`. : Add a top-level alias for `zapcore.Field` to clean up GoDoc. Bugfixes: : Add a missing import comment to `go.uber.org/zap/buffer`. Thanks to @DiSiqueira and @djui for their contributions to this release. Bugfixes: : Store strings when using AddByteString with the map encoder. Enhancements: : Add `NewStdLogAt`, which extends `NewStdLog` by allowing the user to specify the level of the logged messages. Enhancements: : Omit zap stack frames from stacktraces. : Add a `ContextMap` method to observer logs for simpler field validation in tests. Enhancements: and : Support errors produced by `go.uber.org/multierr`. : Support user-supplied encoders for logger names. Bugfixes: : Fix a bug that incorrectly truncated deep stacktraces. Thanks to @richard-tunein and @pavius for their contributions to this release. This release fixes two"
},
{
"data": "Bugfixes: : Support a variety of case conventions when unmarshaling levels. : Fix a panic in the observer. This release adds a few small features and is fully backward-compatible. Enhancements: : Add a `LineEnding` field to `EncoderConfig`, allowing users to override the Unix-style default. : Preserve time zones when logging times. : Make `zap.AtomicLevel` implement `fmt.Stringer`, which makes a variety of operations a bit simpler. This release adds an enhancement to zap's testing helpers as well as the ability to marshal an AtomicLevel. It is fully backward-compatible. Enhancements: : Add a substring-filtering helper to zap's observer. This is particularly useful when testing the `SugaredLogger`. : Make `AtomicLevel` implement `encoding.TextMarshaler`. This release adds a gRPC compatibility wrapper. It is fully backward-compatible. Enhancements: : Add a `zapgrpc` package that wraps zap's Logger and implements `grpclog.Logger`. This release fixes two bugs and adds some enhancements to zap's testing helpers. It is fully backward-compatible. Bugfixes: : Fix caller path trimming on Windows. : Fix a panic when attempting to use non-existent directories with zap's configuration struct. Enhancements: : Add filtering helpers to zaptest's observing logger. Thanks to @moitias for contributing to this release. This is zap's first stable release. All exported APIs are now final, and no further breaking changes will be made in the 1.x release series. Anyone using a semver-aware dependency manager should now pin to `^1`. Breaking changes: : Add byte-oriented APIs to encoders to log UTF-8 encoded text without casting from `[]byte` to `string`. : To support buffering outputs, add `Sync` methods to `zapcore.Core`, `zap.Logger`, and `zap.SugaredLogger`. : Rename the `testutils` package to `zaptest`, which is less likely to clash with other testing helpers. Bugfixes: : Make the ISO8601 time formatters fixed-width, which is friendlier for tab-separated console output. : Remove the automatic locks in `zapcore.NewCore`, which allows zap to work with concurrency-safe `WriteSyncer` implementations. : Stop reporting errors when trying to `fsync` standard out on Linux systems. : Report the correct caller from zap's standard library interoperability wrappers. Enhancements: : Add a registry allowing third-party encodings to work with zap's built-in `Config`. : Make the representation of logger callers configurable (like times, levels, and durations). : Allow third-party encoders to use their own buffer pools, which removes the last performance advantage that zap's encoders have over plugins. : Add `CombineWriteSyncers`, a convenience function to tee multiple `WriteSyncer`s and lock the result. : Make zap's stacktraces compatible with mid-stack inlining (coming in Go 1.9). : Export zap's observing logger as `zaptest/observer`. This makes it easier for particularly punctilious users to unit test their application's logging. Thanks to @suyash, @htrendev, @flisky, @Ulexus, and @skipor for their contributions to this release. This is the third release candidate for zap's stable release. There are no breaking"
},
{
"data": "Bugfixes: : Byte slices passed to `zap.Any` are now correctly treated as binary blobs rather than `[]uint8`. Enhancements: : Users can opt into colored output for log levels. : In addition to hijacking the output of the standard library's package-global logging functions, users can now construct a zap-backed `log.Logger` instance. : Frames from common runtime functions and some of zap's internal machinery are now omitted from stacktraces. Thanks to @ansel1 and @suyash for their contributions to this release. This is the second release candidate for zap's stable release. It includes two breaking changes. Breaking changes: : Zap's global loggers are now fully concurrency-safe (previously, users had to ensure that `ReplaceGlobals` was called before the loggers were in use). However, they must now be accessed via the `L()` and `S()` functions. Users can update their projects with ``` gofmt -r \"zap.L -> zap.L()\" -w . gofmt -r \"zap.S -> zap.S()\" -w . ``` and : RC1 was mistakenly shipped with invalid JSON and YAML struct tags on all config structs. This release fixes the tags and adds static analysis to prevent similar bugs in the future. Bugfixes: : Redirecting the standard library's `log` output now correctly reports the logger's caller. Enhancements: and : Zap now transparently supports non-standard, rich errors like those produced by `github.com/pkg/errors`. : Though `New(nil)` continues to return a no-op logger, `NewNop()` is now preferred. Users can update their projects with `gofmt -r 'zap.New(nil) -> zap.NewNop()' -w .`. : Incorrectly importing zap as `github.com/uber-go/zap` now returns a more informative error. Thanks to @skipor and @chapsuk for their contributions to this release. This is the first release candidate for zap's stable release. There are multiple breaking changes and improvements from the pre-release version. Most notably: Zap's import path is now \"go.uber.org/zap\"* — all users will need to update their code. User-facing types and functions remain in the `zap` package. Code relevant largely to extension authors is now in the `zapcore` package. The `zapcore.Core` type makes it easy for third-party packages to use zap's internals but provide a different user-facing API. `Logger` is now a concrete type instead of an interface. A less verbose (though slower) logging API is included by default. Package-global loggers `L` and `S` are included. A human-friendly console encoder is included. A declarative config struct allows common logger configurations to be managed as configuration instead of code. Sampling is more accurate, and doesn't depend on the standard library's shared timer heap. This is a minor version, tagged to allow users to pin to the pre-1.0 APIs and upgrade at their leisure. Since this is the first tagged release, there are no backward compatibility concerns and all functionality is new. Early zap adopters should pin to the 0.1.x minor version until they're ready to upgrade to the upcoming stable release."
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This enhancement adds support for user configured (storage class, secrets) encrypted volumes, this in return means that backups of that volume end up also being encrypted. // TODO: create issue // TODO: create issue // TODO: create issue user is able to create & use an encrypted volume with cipher customization options user is able to configure the keys that are used for encryption user is able to take backups from an encrypted volume user is able to restore an encrypted backup to a new encrypted volume external key management support, currently keys utilize kubernetes secrets rotating key support, user can do this manually though as a workaround securing of secrets, user is responsible for cluster setup and security of the secrets All regular longhorn operations should also be supported for encrypted volumes, therefore the only user story that is mentioned is how to create and use an encrypted volume. create a storage class with (encrypted=true) and either a global secret or a per volume secret create the secret for that volume in the configured namespace with customization options of the cipher for instance `cipher`, `key-size` and `hash` create a pvc that references the created storage class volume will be created then encrypted during first use afterwards a regular filesystem that lives on top of the encrypted volume will be exposed to the pod Creation and usage of an encrypted volume requires 2 things: the storage class needs to specify `encrypted: \"true\"` as part of its parameters. secrets need to be created and reference for the csi operations need to be setup. see below examples for different types of secret usage. The kubernetes sidecars are responsible for retrieval of the secret and passing it to the csi driver. If the secret hasn't been created the PVC will remain in the Pending State. And the side cars will retry secret retrieval periodically, once it's available the sidecar container will call `Controller::CreateVolume` and pass the secret after which longhorn will create a volume. The below storage class uses a global secret named `longhorn-crypto` in the `longhorn-system` namespace. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: longhorn-crypto-global provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"3\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" encrypted: \"true\" csi.storage.k8s.io/provisioner-secret-name: \"longhorn-crypto\" csi.storage.k8s.io/provisioner-secret-namespace: \"longhorn-system\" csi.storage.k8s.io/node-publish-secret-name: \"longhorn-crypto\" csi.storage.k8s.io/node-publish-secret-namespace: \"longhorn-system\" csi.storage.k8s.io/node-stage-secret-name: \"longhorn-crypto\" csi.storage.k8s.io/node-stage-secret-namespace: \"longhorn-system\" ``` The global secret reference by the `longhorn-crypto-global` storage class. This type of setup means that all volumes share the same encryption key. ```yaml apiVersion: v1 kind: Secret metadata: name: longhorn-crypto namespace: longhorn-system stringData: CRYPTOKEYVALUE: \"Simple passphrase\" CRYPTOKEYPROVIDER: \"secret\" # this is optional we currently only support direct keys via secrets CRYPTOKEYCIPHER: \"aes-xts-plain64\" # this is optional CRYPTOKEYHASH: \"sha256\" # this is optional CRYPTOKEYSIZE: \"256\" # this is optional ``` The below storage class uses a per volume secret, the name and namespace of the secret is based on the pvc values. 
These templates will be resolved by the external sidecars and the resolved values end up as Secret refs on the PV. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: longhorn-crypto-per-volume provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"3\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" encrypted: \"true\" csi.storage.k8s.io/provisioner-secret-name: ${pvc.name} csi.storage.k8s.io/provisioner-secret-namespace: ${pvc.namespace} csi.storage.k8s.io/node-publish-secret-name: ${pvc.name} csi.storage.k8s.io/node-publish-secret-namespace: ${pvc.namespace} csi.storage.k8s.io/node-stage-secret-name: ${pvc.name} csi.storage.k8s.io/node-stage-secret-namespace: ${pvc.namespace} ``` add a `Encrypted` boolean to the `Volume` struct utilized by the http client, this ends up being stored in `Volume.Spec.encrypted` of the volume cr. Storing the `Encrypted` value is necessary to support encryption for RWX"
},
{
"data": "Host requires `dm_crypt` kernel module as well as `cryptsetup` installed. We utilize the below parameters from a secret, `CRYPTOKEYPROVIDER` allows us in the future to add other key management systems `CRYPTOKEYCIPHER` allow users to choose the cipher algorithm when creating an encrypted volume by `cryptsetup` `CRYPTOKEYHASH` specifies the hash used in the LUKS key setup scheme and volume key digest `CRYPTOKEYSIZE` sets the key size in bits. The argument has to be a multiple of 8 and the maximum interactive passphrase length is 512 (characters) ```yaml CRYPTOKEYVALUE: \"Simple passphrase\" CRYPTOKEYPROVIDER: \"secret\" # this is optional we currently only support direct keys via secrets CRYPTOKEYCIPHER: \"aes-xts-plain64\" # this is optional CRYPTOKEYHASH: \"sha256\" # this is optional CRYPTOKEYSIZE: \"256\" # this is optional ``` utilize host `dm_crypt` kernel module for device encryption utilize host installed `cryptsetup` for configuration of the crypto device add csi driver `NodeStageVolume` support to handle device global per node mounting, we skip mounting for volumes that are being used via `VolumeMode: Block` refactor csi driver NodePublishVolume to bind mount the `stagingpath` into the `targetpath` we utilize a bind mount for `VolumeMode: Mount` we do a regular device file creation for `VolumeMode: Block` during csi `NodeStageVolume` encrypt (first time use) / open regular longhorn device this exposes a crypto mapped device (/dev/mapper/<volume-name>) mount crypto device into `staging_path` during csi `NodeUnstageVolume` unmount `staging_path` close crypto device create a storage class with (encrypted=true) and either a global secret or a per volume secret create the secret for that volume in the configured namespace create a pvc that references the created storage class create a pod that uses that pvc for a volume mount wait for pod up and healthy create a storage class with (encrypted=true) and either a global secret or a per volume secret create the secret with customized options of the cipher for that volume in the configured namespace create a pvc that references the created storage class create a pod that uses that pvc for a volume mount wait for pod up and healthy check if the customized options of the cipher are correct create a storage class with (encrypted=true) and either a global secret or a per volume secret create a pvc that references the created storage class create a pod that uses that pvc for a volume mount verify pvc remains in pending state verify pod remains in creation state create a storage class with (encrypted=true) and either a global secret or a per volume secret create the secret for that volume in the configured namespace create a pvc that references the created storage class create a pod that uses that pvc for a volume mount wait for pod up and healthy write known test pattern into fs verify absence (grep) of known test pattern after reading block device content `/dev/longhorn/<volume-name>` create a storage class with (encrypted=true) and either a global secret or a per volume secret create the secret for that volume in the configured namespace create a pvc that references the created storage class create a pod that uses that pvc for a volume mount wait for pod up and healthy scale down pod change `CRYPTOKEYVALUE` of secret scale up pod verify pod remains in pending state (failure to mount volume) requires new pvc's since encryption would overwrite the previously created filesystem. 
(The CSI driver prevents this.) The host requires the `dm_crypt` kernel module as well as `cryptsetup` installed. Supporting external key vaults is possible in the future with some additional implementation. Supporting rotating keys is possible in the future with some additional implementation."
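To make the per-volume test flow above concrete, here is a minimal sketch of a PVC that would use the `longhorn-crypto-per-volume` storage class defined earlier. The PVC name, namespace, and size are illustrative assumptions; with the templated secret references shown above, the matching secret would need the same name and namespace as this PVC.

```yaml
# Illustrative only: names and size are assumptions, not part of the proposal.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-pvc        # the per-volume secret must also be named "encrypted-pvc"
  namespace: default         # ...and created in this same namespace
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-crypto-per-volume
  resources:
    requests:
      storage: 2Gi
```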
}
] |
{
"category": "Runtime",
"file_name": "20221024-pv-encryption.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. --> <!-- Copyright 2019 Joyent, Inc. Copyright 2024 MNX Cloud, Inc. --> Thanks for using SmartOS and for considering contributing to it! All changes to Triton project repositories go through code review via a GitHub pull request. If you're making a substantial change, you probably want to contact developers first. If you have any trouble with the contribution process, please feel free to contact developers [on the mailing list or IRC](README.md#community). Note that larger Triton project changes are typically designed and discussed via [\"Requests for Discussion (RFDs)\"](https://github.com/TritonDataCenter/rfd). SmartOS repositories use the [Triton Engineering Guidelines](https://github.com/TritonDataCenter/eng/blob/master/docs/index.md). Notably: The #master or #main branch should be first-customer-ship (FCS) quality at all times. Don't push anything until it's tested. All repositories should be \"make check\" clean at all times. All repositories should have tests that run cleanly at all times. There are two separate issue trackers that are relevant for SmartOS code: An internal JIRA instance. A JIRA ticket has an ID like `OS-7260`, where \"OS\" is the JIRA project name -- in this case used by the and related repos. A read-only view of most JIRA tickets is made available at <https://smartos.org/bugview/> (e.g. <https://smartos.org/bugview/OS-7260>). GitHub issues for the relevant repo, e.g. <https://github.com/TritonDataCenter/smartos-ui/issues>. Before Triton was open sourced, Joyent engineering used a private JIRA instance. While we continue to use JIRA internally, we also use GitHub issues for tracking -- primarily to allow interaction with those without access to JIRA. All persons and/or organizations contributing to, or intercting with our repositories or communities are required to abide by the ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "See . Support external ip for ChunkServer to divide inter-cluster communication and communication with clients. The Curveopstool supports printing space-info for each logical pool. Add snapshot clone tool. The curve-ansible adds deploy_monitor.yml. Braft completely non-intrusive modifications. Replace HMacSha256 implementation. Fixed the bug that chunkserver_deploy.sh not able to delete the previous fstab record when deploy one. Fixed mds abort when exiting. Fixed curvefs_tool (curve) missing support for password. Fixed the bug that data from Etcd would not placed in LRU cache. Fixed ChunkServer not aborting when read returns an internal error. <hr/> <hr/> <hr/>"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.0.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: Troubleshooting This page is a collection of high-level guides and tips regarding how to diagnose issues encountered in Alluxio. Note: this doc is not intended to be the full list of Alluxio questions. Join the to chat with users and developers, or post questions on {:target=\"_blank\"}. Alluxio generates Master, Worker and Client logs under the dir `${ALLUXIO_HOME}/logs`. See for more information. You can find details about and . Java remote debugging makes it easier to debug Alluxio at the source level without modifying any code. You will need to set the JVM remote debugging parameters before starting the process. There are several ways to add the remote debugging parameters; you can export the following configuration properties in shell or `conf/alluxio-env.sh`: ```shell export ALLUXIOMASTERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=60001\" export ALLUXIOWORKERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=60002\" ``` ```shell export ALLUXIOMASTERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=*:60001\" export ALLUXIOWORKERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=*:60002\" ``` In general, you can use `ALLUXIO<PROCESS>ATTACH_OPTS` to specify how an Alluxio process should be attached to. `suspend={y | n}` will decide whether the JVM process waits until the debugger connects or not. `address` determines which port the Alluxio process will use to be attached to by a debugger. If left blank, it will choose an open port by itself. After completing this setup, learn how . If you want to debug shell commands (e.g. `bin/alluxio fs ls /`), you can set the `ALLUXIOUSERATTACH_OPTS` in `conf/alluxio-env.sh` as above: ```shell export ALLUXIOUSERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=60000\" ``` ```shell export ALLUXIOUSERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=*:60000\" ``` After setting this parameter, you can add the `-debug` flag to start a debug server such as `bin/alluxio fs ls / --attach-debug`. After completing this setup, learn how . There exists a {:target=\"_blank\"}. Start the process or shell command of interest, then create a new java remote configuration, set the debug server's host and port, and start the debug session. If you set a breakpoint which can be reached, the IDE will enter debug mode. You can inspect the current context's variables, call stack, thread list, and expression evaluation. Alluxio has a `collectInfo` command that collect information to troubleshoot an Alluxio cluster. `collectInfo` will run a set of sub-commands that each collects one aspect of system information, as explained below. In the end the collected information will be bundled into one tarball which contains a lot of information regarding your Alluxio cluster. The tarball size mostly depends on your cluster size and how much information you are collecting. For example, `collectLog` operation can be costly if you have huge amounts of logs. Other commands typically do not generate files larger than 1MB. The information in the tarball will help you troubleshoot your cluster. Or you can share the tarball with someone you trust to help troubleshoot your Alluxio cluster. The `collectInfo` command will SSH to each node and execute the set of sub-commands. In the end of execution the collected information will be written to files and tarballed. 
Each individual tarball will be collected to the issuing node. Then all the tarballs will be bundled into the final tarball, which contains all information about the Alluxio cluster. NOTE: Be careful if your configuration contains credentials like AWS keys! You should ALWAYS CHECK what is in the tarball and REMOVE the sensitive information from the tarball before sharing it with someone! `collectAlluxioInfo` will run a set of Alluxio commands that collect information about the Alluxio cluster, like `bin/alluxio info report` etc. When the Alluxio cluster is not running, this command will fail to collect some"
},
{
"data": "This sub-command will run both `bin/alluxio conf get` which collects local configuration properties, and `bin/alluxio conf get --master --source` which prints configuration properties that are received from the master. Both of them mask credential properties. The difference is the latter command fails if the Alluxio cluster is not up. `collectConfig` will collect all the configuration files under `${alluxio.work.dir}/conf`. From Alluxio 2.4, the `alluxio-site.properties` file will not be copied, as many users tend to put their plaintext credentials to the UFS in this file. Instead, the `collectAlluxioInfo` will run a `alluxio conf get` command which prints all the configuration properties, with the credential fields masked. The will collect all the current node configuration. So in order to collect Alluxio configuration in the tarball, please make sure `collectAlluxioInfo` sub-command is run. WARNING: If you put credential fields in the configuration files except alluxio-site.properties (eg. `alluxio-env.sh`), DO NOT share the collected tarball with anybody unless you have manually obfuscated them in the tarball! `collectLog` will collect all the logs under `${alluxio.work.dir}/logs`. NOTE: Roughly estimate how much log you are collecting before executing this command! `collectMetrics` will collect Alluxio metrics served at `http://${alluxio.master.hostname}:${alluxio.master.web.port}/metrics/json/` by default. The metrics will be collected multiple times to see the progress. `collectJvmInfo` will collect information about the existing JVMs on each node. This is done by running a `jps` command then `jstack` on each found JVM process. This will be done multiple times to see if the JVMs are making progress. `collectEnv` will run a set of bash commands to collect information about the running node. This runs system troubleshooting commands like `env`, `hostname`, `top`, `ps` etc. WARNING: If you stored credential fields in environment variables like AWSACCESSKEY or in process start parameters like `-Daws.access.key=XXX`, DO NOT share the collected tarball with anybody unless you have manually obfuscated them in the tarball! `all` will run all the sub-commands above. The `collectInfo` command has the below options. ```shell Collects information such as logs, config, metrics, and more from the running Alluxio cluster and bundle into a single tarball [command] must be one of the following values: all runs all the commands below cluster: runs a set of Alluxio commands to collect information about the Alluxio cluster conf: collects the configuration files under ${ALLUXIO_HOME}/config/ env: runs a set of linux commands to collect information about the cluster jvm: collects jstack from the JVMs log: collects the log files under ${ALLUXIO_HOME}/logs/ metrics: collects Alluxio system metrics WARNING: This command MAY bundle credentials. To understand the risks refer to the docs here. 
https://docs.alluxio.io/os/user/edge/en/operation/Troubleshooting.html#collect-alluxio-cluster-information Usage: bin/alluxio info collect [command] [flags] Flags: --additional-logs strings Additional file name prefixes from ${ALLUXIO_HOME}/logs to include in the tarball, inclusive of the default log files --attach-debug True to attach debug opts specified by $ALLUXIOUSERATTACH_OPTS --end-time string Logs that do not contain entries before this time will be ignored, format must be like 2006-01-02T15:04:05 --exclude-logs strings File name prefixes from ${ALLUXIO_HOME}/logs to exclude; this is evaluated after adding files from --additional-logs --exclude-worker-metrics True to skip worker metrics collection -h, --help help for collect --include-logs strings File name prefixes from ${ALLUXIO_HOME}/logs to include in the tarball, ignoring the default log files; cannot be used with --exclude-logs or --additional-logs -D, --java-opts strings Alluxio properties to apply,"
},
{
"data": "-Dkey=value --local True to only collect information from the local machine --max-threads int Parallelism of the command; use a smaller value to limit network I/O when transferring tarballs (default 1) --output-dir string Output directory to write collect info tarball to --start-time string Logs that do not contain entries after this time will be ignored, format must be like 2006-01-02T15:04:05 ``` `--output-dir` is a required flag, specifying the directory to write the final tarball to. Options: `--max-threads threadNum` option configures how many threads to use for concurrently collecting information and transmitting tarballs. When the cluster has a large number of nodes, or large log files, the network IO for transmitting tarballs can be significant. Use this parameter to constrain the resource usage of this command. `--local` option specifies the `collectInfo` command to run only on `localhost`. That means the command will only collect information about the `localhost`. If your cluster does not have password-less SSH across nodes, you will need to run with `--local` option locally on each node in the cluster, and manually gather all outputs. If your cluster has password-less SSH across nodes, you can run without `--local` command, which will essentially distribute the task to each node and gather the locally collected tarballs for you. `--help` option asks the command to print the help message and exit. `--additional-logs <filename-prefixes>` specifies extra log file name prefixes to include. By default, only log files recognized by Alluxio will be collected by the `collectInfo` command. The recognized files include below: ``` logs/master.log*, logs/master.out*, logs/job_master.log*, logs/job_master.out*, logs/master_audit.log*, logs/worker.log*, logs/worker.out*, logs/job_worker.log*, logs/job_worker.out*, logs/proxy.log*, logs/proxy.out*, logs/task.log*, logs/task.out*, logs/user/* ``` Other than mentioned above, `--additional-logs <filename-prefixes>` specifies that files whose names start with the prefixes in `<filename-prefixes>` should be collected. This will be checked after the exclusions defined in `--exclude-logs`. `<filename-prefixes>` specifies the filename prefixes, separated by commas. `--exclude-logs <filename-prefixes>` specifies file name prefixes to ignore from the default list. `--include-logs <filename-prefixes>` specifies only to collect files whose names start with the specified prefixes, and ignore all the rest. You CANNOT use `--include-logs` option together with either `--additional-logs` or `--exclude-logs`, because it is ambiguous what you want to include. `--end-time <datetime>` specifies a datetime after which the log files can be ignored. A log file will be ignore if the file was created after this end time. The first couple of lines of the log file will be parsed, in order to infer when the log file started. The `<datetime>` is a datetime string like `2020-06-27T11:58:53`. The parsable datetime formats include below: ``` \"2020-01-03 12:10:11,874\" \"2020-01-03 12:10:11\" \"2020-01-03 12:10\" \"20/01/03 12:10:11\" \"20/01/03 12:10\" 2020-01-03T12:10:11.874+0800 2020-01-03T12:10:11 2020-01-03T12:10 ``` `--start-time <datetime>` specifies a datetime before with the log files can be ignored. A log file will be ignored if the last modified time is before this start time. There are some special characters and patterns in file path names that are not supported in Alluxio. 
Please avoid creating file path names with these patterns or acquire additional handling from client end. Question mark (`'?'`) Pattern with period (`./` and `../`) Backslash (`'\\'`) If you are operating your Alluxio cluster it is possible you may notice a message in the logs like: ``` LEAK: <>.close() was not called before resource is garbage-collected. See https://docs.alluxio.io/os/user/stable/en/administration/Troubleshooting.html#resource-leak-detection for more information about this message. ``` Alluxio has a built-in detection mechanism to help identify potential resource leaks. This message indicates there is a bug in the Alluxio code which is causing a resource leak. If this message appears during cluster operation, please [open a GitHub Issue](https://github.com/Alluxio/alluxio/issues/new/choose){:target=\"_blank\"} as a bug report and share your log message and any relevant stack traces that are shared with"
},
{
"data": "By default, Alluxio samples a portion of some resource allocations when detecting these leaks, and for each tracked resource record the object's recent accesses. The sampling rate and access tracking will result in a resource and performance penalty. The amount of overhead introduced by the leak detector can be controlled through the property `alluxio.leak.detector.level`. Valid values are `DISABLED`: no leak tracking or logging is performed, lowest overhead `SIMPLE`: samples and tracks only leaks and does not log recent accesses. minimal overhead `ADVANCED`: samples and tracks recent accesses, higher overhead `PARANOID`: tracks for leaks on every resource allocation, highest overhead. Alluxio master periodically checks its resource usage, including CPU and memory usage, and several internal data structures that are performance critical. This interval is configured by `alluxio.master.throttle.heartbeat.interval` (defaults to 3 seconds). On every sampling point in time (PIT), Alluxio master takes a snapshot of its resource usage. A continuous number of PIT snapshots (number configured by alluxio.master.throttle.observed.pit.number, defaults to 3) will be saved and used to generate the aggregated resource usage which is used to decide the system status. Each PIT includes the following metrics. ``` directMemUsed=5268082, heapMax=59846950912, heapUsed=53165684872, cpuLoad=0.4453061982287778, pitTotalJVMPauseTimeMS=190107, totalJVMPauseTimeMS=0, rpcQueueSize=0, pitTimeMS=1665995384998} ``` `directMemUsed`: direct memory allocated by `ByteBuffer.allocateDirect` `heapMax` : the allowed max heap size `heapUsed` : the heap memory used `cpuLoad` : the cpu load `pitTotalJVMPauseTimeMS` : aggregated total JVM pause time from the beginning `totalJVMPauseTimeMS` : the JVM pause time since last PIT `rpcQueueSize` : the rpc queue size `pitTimeMS` : the timestamp in millisecond when this snapshot is taken The aggregated server indicators are the certain number of continuous PITs, this one is generated in a sliding window. The alluxio master has a derived indicator `Master.system.status` that is based on the heuristic algorithm. ``` \"Master.system.status\" : { \"value\" : \"STRESSED\" } ``` The possible statuses are: `IDLE` `ACTIVE` `STRESSED` `OVERLOADED` The system status is mainly decided by the JVM pause time and the free heap memory. Usually the status transition is `IDLE` <> `ACTIVE` <> `STRESSED` <> `OVERLOADED` If the JVM pause time is longer than `alluxio.master.throttle.overloaded.heap.gc.time`, the system status is directly set to `OVERLOADED`. If the used heap memory is less than the low used heap memory boundary threshold, the system.status is deescalated. If the used heap memory is less than the upper used heap memory boundary threshold, the system.status is unchanged. If the aggregated used heap memory is greater than the upper used heap memory boundary threshold, the sytem.status is escalated. 
As the used heap memory grows or shrinks, the value of the system status will update if it crosses any of the thresholds defined by the configurations below The thresholds are ```properties alluxio.master.throttle.overloaded.heap.gc.time ``` ```properties alluxio.master.throttle.active.heap.used.ratio alluxio.master.throttle.stressed.heap.used.ratio alluxio.master.throttle.overloaded.heap.used.ratio ``` If the system status is `STRESSED` or `OVERLOADED`, `WARN` level log would be printed containing the following the filesystem indicators: ``` 2022-10-17 08:29:41,998 WARN SystemMonitor - System transition status is UNCHANGED, status is STRESSED, related Server aggregate indicators:ServerIndicator{directMemUsed=15804246, heapMax=58686177280, heapUsed=157767176816, cpuLoad=1.335918594686334, pitTotalJVMPauseTimeMS=62455, totalJVMPauseTimeMS=6, rpcQueueSize=0, pitTimeMS=1665989354196}, pit indicators:ServerIndicator{directMemUsed=5268082, heapMax=59846950912, heapUsed=48601091600, cpuLoad=0.4453061982287778, pitTotalJVMPauseTimeMS=190107, totalJVMPauseTimeMS=0, rpcQueueSize=0, pitTimeMS=1665995381998} 2022-10-17 08:29:41,998 WARN SystemMonitor - The delta filesystem indicators FileSystemIndicator{Master.DeletePathOps=0, Master.PathsDeleted=0, Master.MetadataSyncPathsFail=0, Master.CreateFileOps=0, Master.ListingCacheHits=0, Master.MetadataSyncSkipped=3376, Master.UfsStatusCacheSize=0, Master.CreateDirectoryOps=0, Master.FileBlockInfosGot=0, Master.MetadataSyncPrefetchFail=0, Master.FilesCompleted=0, Master.RenamePathOps=0, Master.MetadataSyncSuccess=0, Master.MetadataSyncActivePaths=0, Master.FilesCreated=0, Master.PathsRenamed=0, Master.FilesPersisted=658, Master.CompletedOperationRetryCount=0, Master.ListingCacheEvictions=0, Master.MetadataSyncTimeMs=0, Master.SetAclOps=0, Master.PathsMounted=0, Master.FreeFileOps=0, Master.PathsUnmounted=0, Master.CompleteFileOps=0, Master.NewBlocksGot=0, Master.GetNewBlockOps=0, Master.ListingCacheMisses=0, Master.FileInfosGot=3376, Master.GetFileInfoOps=3376, Master.GetFileBlockInfoOps=0, Master.UnmountOps=0, Master.MetadataSyncPrefetchPaths=0, Master.getConfigHashInProgress=0, Master.MetadataSyncPathsSuccess=0, Master.FilesFreed=0, Master.MetadataSyncNoChange=0, Master.SetAttributeOps=0, Master.getConfigurationInProgress=0, Master.MetadataSyncPendingPaths=0, Master.DirectoriesCreated=0, Master.ListingCacheLoadTimes=0, Master.MetadataSyncPrefetchSuccess=0, Master.MountOps=0, Master.UfsStatusCacheChildrenSize=0, Master.MetadataSyncPrefetchOpsCount=0, Master.registerWorkerStartInProgress=0, Master.MetadataSyncPrefetchCancel=0, Master.MetadataSyncPathsCancel=0, Master.MetadataSyncPrefetchRetries=0, Master.MetadataSyncFail=0, Master.MetadataSyncOpsCount=3376} ``` The monitoring indicators describe the system status in a heuristic way to have a basic understanding of its load."
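As an illustration of how these knobs fit together, the thresholds could be set in `alluxio-site.properties` as below. Every value here is an assumption for demonstration, not a recommended default, and should be tuned to the master's heap size and GC behavior.

```properties
# Illustrative values only.
alluxio.master.throttle.heartbeat.interval=3sec
alluxio.master.throttle.overloaded.heap.gc.time=10sec
alluxio.master.throttle.active.heap.used.ratio=0.5
alluxio.master.throttle.stressed.heap.used.ratio=0.7
alluxio.master.throttle.overloaded.heap.used.ratio=0.85
```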
}
] |
{
"category": "Runtime",
"file_name": "Troubleshooting.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List SRv6 SID entries List SRv6 SID entries. ``` cilium-dbg bpf srv6 sid [flags] ``` ``` -h, --help help for sid -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the SRv6 routing rules"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_srv6_sid.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The basic configuration is shared by each module and mainly includes server ports, logs, and audit logs. ```json { \"bind_addr\": \"host:port\", \"auditlog\": { \"logdir\": \"audit log path\", \"chunkbits\": \"audit log file size, equal to 2^chunkbits bytes\", \"rotate_new\": \"whether to enable a new log file for each restart, true or false\", \"logfilesuffix\": \"log file suffix, for example `.log`\", \"backup\": \"number of files to keep, not set or 0 means no limit\", \"log_format\": \"Use text or JSON format, with text format being the default\", \"metric_config\": { \"idc\": \"IDC number\", \"service\": \"service name\", \"tag\": \"tag\", \"team\": \"team\", \"enablereqlength_cnt\": \"whether to enable request length statistics, true or false, default is false\", \"enableresplength_cnt\": \"whether to enable response length statistics, true or false, default is false\", \"enablerespduration\": \"whether to enable response latency, true or false, default is false\", \"maxapilevel\": \"maximum API level, such as 2 for /get/name\" }, \"filters\": \"Filter log by multi-criteria matching of log's fields\" }, \"auth\": { \"enable_auth\": \"whether to enable authentication, true or false, default is false\", \"secret\": \"authentication key\" }, \"shutdowntimeouts\": \"service shutdown timeout\", \"log\":{ \"level\": \"log level, debug, info, warn, error, panic, fatal\", \"filename\": \"log storage path\", \"maxsize\": \"maximum size of each log file\", \"maxage\": \"number of days to keep\", \"maxbackups\": \"number of log files to keep\" } } ```"
}
] |
{
"category": "Runtime",
"file_name": "base.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(instances-access-files)= You can manage files inside an instance using the Incus client without needing to access the instance through the network. Files can be individually edited or deleted, pushed from or pulled to the local machine. Alternatively, you can mount the instance's file system onto the local machine. For containers, these file operations always work and are handled directly by Incus. For virtual machines, the `incus-agent` process must be running inside of the virtual machine for them to work. To edit an instance file from your local machine, enter the following command: incus file edit <instancename>/<pathto_file> For example, to edit the `/etc/hosts` file in the instance, enter the following command: incus file edit my-container/etc/hosts ```{note} The file must already exist on the instance. You cannot use the `edit` command to create a file on the instance. ``` To delete a file from your instance, enter the following command: incus file delete <instancename>/<pathto_file> To pull a file from your instance to your local machine, enter the following command: incus file pull <instancename>/<pathtofile> <localfile_path> For example, to pull the `/etc/hosts` file to the current directory, enter the following command: incus file pull my-instance/etc/hosts . Instead of pulling the instance file into a file on the local system, you can also pull it to stdout and pipe it to stdin of another command. This can be useful, for example, to check a log file: incus file pull my-instance/var/log/syslog - | less To pull a directory with all contents, enter the following command: incus file pull -r <instancename>/<pathtodirectory> <locallocation> To push a file from your local machine to your instance, enter the following command: incus file push <localfilepath> <instancename>/<pathto_file> To push a directory with all contents, enter the following command: incus file push -r <locallocation> <instancename>/<pathtodirectory> You can mount an instance file system into a local path on your client. To do so, make sure that you have `sshfs` installed. Then run the following command: incus file mount <instancename>/<pathtodirectory> <locallocation> You can then access the files from your local machine. Alternatively, you can set up an SSH SFTP listener. This method allows you to connect with any SFTP client and with a dedicated user name. To do so, first set up the listener by entering the following command: incus file mount <instance_name> [--listen <address>:<port>] For example, to set up the listener on a random port on the local machine (for example, `127.0.0.1:45467`): incus file mount my-instance If you want to access your instance files from outside your local network, you can pass a specific address and port: incus file mount my-instance --listen 192.0.2.50:2222 ```{caution} Be careful when doing this, because it exposes your instance remotely. ``` To set up the listener on a specific address and a random port: incus file mount my-instance --listen 192.0.2.50:0 The command prints out the assigned port and a user name and password for the connection. ```{tip} You can specify a user name by passing the `--auth-user` flag. ``` Use this information to access the file system. 
For example, if you want to use `sshfs` to connect, enter the following command: sshfs <username>@<address>:<path_to_directory> <local_location> -p <port> For example: sshfs <username>@192.0.2.50:/home my-instance-files -p 35147 You can then access the file system of your instance at the specified location on the local machine."
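Putting the individual commands together, a typical round trip with a local copy might look like this (the instance and file names are placeholders):

```
incus file pull my-instance/etc/hosts ./hosts
# edit the local copy, then push it back
incus file push ./hosts my-instance/etc/hosts
# or edit the file in place without keeping a local copy
incus file edit my-instance/etc/hosts
```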
}
] |
{
"category": "Runtime",
"file_name": "instances_access_files.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "As the apiextensions.k8s.io API moves to GA, structural schema in Custom Resource Definitions (CRDs) will become required. This document proposes updating the CRD generation logic as part of `velero install` to include structural schema for each Velero CRD. Enable structural schema and validation for Velero Custom Resources. Update Velero codebase to use Kubebuilder for controller/code generation. Solve for keeping CRDs in the Velero Helm chart up-to-date. Currently, Velero CRDs created by the `velero install` command do not contain any structural schema. The CRD is simply using the name and plurals from the info. Updating the info returned by that method would be one way to add support for structural schema when generating the CRDs, but this would require manually describing the schema and would duplicate information from the API structs (e.g. comments describing a field). Instead, the project from Kubebuilder provides tooling for generating CRD manifests (YAML) from the Velero API types. This document proposes adding controller-tools to the project to automatically generate CRDs, and use these generated CRDs as part of `velero install`. controller-tools works by reading the Go files that contain the API type definitions. It uses a combination of the struct fields, types, tags and comments to build the OpenAPIv3 schema for the CRDs. The tooling makes some assumptions based on conventions followed in upstream Kubernetes and the ecosystem, which involves some changes to the Velero API type definitions, especially around optional fields. In order for controller-tools to read the Go files containing Velero API type definitions, the CRDs need to be generated at build time, as these files are not available at runtime (i.e. the Go files are not accessible by the compiled binary). These generated CRD manifests (YAML) will then need to be available to the `pkg/install` package for it to include when installing Velero resources. API type definitions need to be updated to correctly identify optional and required fields for each API type. Upstream Kubernetes defines all optional fields using the `omitempty` tag as well as a `// +optional` annotation above the field (e.g. see ). controller-tools will mark a field as optional if it sees either the tag or the annotation, but to keep consistent with upstream, optional fields will be updated to use both indicators (as by the Kubebuilder project). Additionally, upstream Kubernetes defines the metav1.ObjectMeta, metav1.ListMeta, Spec and Status as . Some Velero API types set the `omitempty` tag on Status, but not on other fields - these will all need to be updated to be made optional. Below is a list of the Velero API type fields and what changes (if any) will be"
},
{
"data": "Note that this only includes fields used in the spec, all status fields will become"
},
{
"data": "| Type | Field | Changes | ||-|-| | BackupSpec | IncludedNamespaces | make optional | | | ExcludedNamespaces | make optional | | | IncludedResources | make optional | | | ExcludedResources | make optional | | | LabelSelector | make optional | | | SnapshotVolumes | make optional | | | TTL | make optional | | | IncludeClusterResources | make optional | | | Hooks | make optional | | | StorageLocation | make optional | | | VolumeSnapshotLocations | make optional | | BackupHooks | Resources | make optional | | BackupResourceHookSpec | Name | none (required) | | | IncludedNamespaces | make optional | | | ExcludedNamespaces | make optional | | | IncludedResources | make optional | | | ExcludedResources | make optional | | | LabelSelector | make optional | | | PreHooks | make optional | | | PostHooks | make optional | | BackupResourceHook | Exec | none (required) | | ExecHook | Container | make optional | | | Command | required, validation: MinItems=1 | | | OnError | make optional | | | Timeout | make optional | | HookErrorMode | | validation: Enum | | BackupStorageLocationSpec | Provider | none (required) | | | Config | make optional | | | StorageType | none (required) | | | AccessMode | make optional | | StorageType | ObjectStorage | make required | | ObjectStorageLocation | Bucket | none (required) | | | Prefix | make optional | | BackupStorageLocationAccessMode | | validation: Enum | | DeleteBackupRequestSpec | BackupName | none (required) | | DownloadRequestSpec | Target | none (required) | | DownloadTarget | Kind | none (required) | | | Name | none (required) | | DownloadTargetKind | | validation: Enum | | PodVolumeBackupSpec | Node | none (required) | | | Pod | none (required) | | | Volume | none (required) | | | BackupStorageLocation | none (required) | | | RepoIdentifier | none (required) | | | Tags | make optional | | PodVolumeRestoreSpec | Pod | none (required) | | | Volume | none (required) | | | BackupStorageLocation | none (required) | | | RepoIdentifier | none (required) | | | SnapshotID | none (required) | | ResticRepositorySpec | VolumeNamespace | none (required) | | | BackupStorageLocation | none (required) | | | ResticIdentifier | none (required) | | | MaintenanceFrequency | none (required) | | RestoreSpec | BackupName | none (required) - should be set to \"\" if using ScheduleName | | | ScheduleName | make optional | | | IncludedNamespaces | make optional | | | ExcludedNamespaces | make optional | | | IncludedResources | make optional | | | ExcludedResources | make optional | | | NamespaceMapping | make optional | | | LabelSelector | make optional | | | RestorePVs | make optional | | | IncludeClusterResources | make optional | | ScheduleSpec | Template | none (required) | | | Schedule | none (required) | | VolumeSnapshotLocationSpec | Provider | none (required) | | | Config | make optional | The build image will be updated as follows to include the controller-tool tooling: ```diff diff --git a/hack/build-image/Dockerfile b/hack/build-image/Dockerfile index b69a8c8a..07eac9c6 100644 a/hack/build-image/Dockerfile +++ b/hack/build-image/Dockerfile @@ -21,6 +21,8 @@ RUN mkdir -p /go/src/k8s.io && \\ git clone -b kubernetes-1.15.3 https://github.com/kubernetes/apimachinery && \\ cd /go/src/k8s.io/code-generator && GO111MODULE=on go mod vendor && \\ go get -d sigs.k8s.io/controller-tools/cmd/controller-gen && \\ cd /go/src/sigs.k8s.io/controller-tools && GO111MODULE=on go mod vendor && \\ go get golang.org/x/tools/cmd/goimports && \\ cd /go/src/golang.org/x/tools && \\ git 
checkout 40a48ad93fbe707101afb2099b738471f70594ec && \\ ``` To tie in the CRD manifest generation with existing"
}
] |
{
"category": "Runtime",
"file_name": "generating-velero-crds-with-structural-schema.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Filesystem Mirroring Ceph filesystem mirroring is a process of asynchronous replication of snapshots to a remote CephFS file system. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. It is generally useful when planning for Disaster Recovery. Mirroring is for clusters that are geographically distributed and stretching a single cluster is not possible due to high latencies. This guide assumes you have created a Rook cluster as explained in the main The following will enable mirroring on the filesystem: ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: myfs namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPools: name: replicated failureDomain: host replicated: size: 3 preserveFilesystemOnDelete: true metadataServer: activeCount: 1 activeStandby: true mirroring: enabled: true peers: secretNames: snapshotSchedules: path: / interval: 24h # daily snapshots snapshotRetention: path: / duration: \"h 24\" ``` Launch the `rook-ceph-fs-mirror` pod on the source storage cluster, which deploys the `cephfs-mirror` daemon in the cluster: ```console kubectl create -f deploy/examples/filesystem-mirror.yaml ``` Please refer to for more information. Once mirroring is enabled, Rook will by default create its own so that it can be used by another cluster. The bootstrap peer token can be found in a Kubernetes Secret. The name of the Secret is present in the Status field of the CephFilesystem CR: ```yaml status: info: fsMirrorBootstrapPeerSecretName: fs-peer-token-myfs ``` This secret can then be fetched like so: ```console eyJmc2lkIjoiOTFlYWUwZGQtMDZiMS00ZDJjLTkxZjMtMTMxMWM5ZGYzODJiIiwiY2xpZW50X2lkIjoicmJkLW1pcnJvci1wZWVyIiwia2V5IjoiQVFEN1psOWZ3V1VGRHhBQWdmY0gyZi8xeUhYeGZDUTU5L1N0NEE9PSIsIm1vbl9ob3N0IjoiW3YyOjEwLjEwMS4xOC4yMjM6MzMwMCx2MToxMC4xMDEuMTguMjIzOjY3ODldIn0= ``` The decoded secret must be saved in a file before importing. ```console ``` See the CephFS mirror documentation on . Further refer to CephFS mirror documentation to . To check the `mirror daemon status`, please run the following command from the pod. For example : ```console ``` ```json [ { \"daemon_id\": 906790, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"myfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": \"a24a3366-8130-4d55-aada-95fa9d3ff94d\", \"remote\": { \"client_name\": \"client.mirror\", \"cluster_name\": \"91046889-a6aa-4f74-9fb0-f7bb111666b4\", \"fs_name\": \"myfs\" }, \"stats\": { \"failure_count\": 0, \"recovery_count\": 0 } } ] } ] } ] ``` Please refer to the `--admin-daemon` socket commands from the CephFS mirror documentation to verify and run the commands from the `rook-ceph-fs-mirror` pod: ```console ``` Fetch the `ceph-client.fs-mirror` daemon admin socket file from the `/var/run/ceph` directory: ```console ``` ```console ``` ```json { \"rados_inst\": \"X.X.X.X:0/2286593433\", \"peers\": { \"a24a3366-8130-4d55-aada-95fa9d3ff94d\": { \"remote\": { \"client_name\": \"client.mirror\", \"cluster_name\": \"91046889-a6aa-4f74-9fb0-f7bb111666b4\", \"fs_name\": \"myfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } } ``` For getting `peer synchronization status`: ```console ``` ```json { \"/volumes/_nogroup/subvol-1\": { \"state\": \"idle\", \"lastsyncedsnap\": { \"id\": 4, \"name\": \"snap2\" }, \"snaps_synced\": 0, \"snaps_deleted\": 0, \"snaps_renamed\": 0 } } ```"
}
] |
{
"category": "Runtime",
"file_name": "filesystem-mirroring.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In this example, we'll present how to manipulate the linear memory with the APIs defined in . The code in the following example is verified on wasmedge-sdk v0.5.0 wasmedge-sys v0.10.0 wasmedge-types v0.3.0 Before talking about the code, let's first see the wasm module we use in this example. In the wasm module, a linear memory of 1-page (64KiB) size is defined; in addition, three functions are exported from this module: `getat`, `setat`, and `mem_size`. ```wasm (module (type $memsizet (func (result i32))) (type $getatt (func (param i32) (result i32))) (type $setatt (func (param i32) (param i32))) (memory $mem 1) (func $getat (type $getat_t) (param $idx i32) (result i32) (i32.load (local.get $idx))) (func $setat (type $setat_t) (param $idx i32) (param $val i32) (i32.store (local.get $idx) (local.get $val))) (func $memsize (type $memsize_t) (result i32) (memory.size)) (export \"getat\" (func $getat)) (export \"setat\" (func $setat)) (export \"memsize\" (func $memsize)) (export \"memory\" (memory $mem))) ``` Next, we'll demonstrate how to manipulate the linear memory by calling the exported functions. Let's start off by getting all imports right away so you can follow along ```rust // please add this feature if you're using rust of version < 1.63 // #![feature(explicitgenericargswithimpl_trait)] use wasmedge_sdk::{params, wat2wasm, Executor, Module, Store, WasmVal}; ``` To load a `Module`, `wasmedge-sdk` defines two methods: loads a wasm module from a file, and meanwhile, validates the loaded wasm module. loads a wasm module from an array of in-memory bytes, and meanwhile, validates the loaded wasm module. Here we use `Module::from_bytes` method to load our wasm module from an array of in-memory bytes. ```rust let wasm_bytes = wat2wasm( r#\" (module (type $memsizet (func (result i32))) (type $getatt (func (param i32) (result i32))) (type $setatt (func (param i32) (param i32))) (memory $mem 1) (func $getat (type $getat_t) (param $idx i32) (result i32) (i32.load (local.get $idx))) (func $setat (type $setat_t) (param $idx i32) (param $val i32) (i32.store (local.get $idx) (local.get $val))) (func $memsize (type $memsize_t) (result i32) (memory.size)) (export \"getat\" (func $getat)) (export \"setat\" (func $setat)) (export \"memsize\" (func $memsize)) (export \"memory\" (memory $mem))) \"#"
},
{
"data": ")?; // loads a wasm module from the given in-memory bytes let module = Module::frombytes(None, &wasmbytes)?; ``` The module returned by `Module::from_bytes` is a compiled module, also called AST Module in WasmEdge terminology. To use it in WasmEdge runtime environment, we need to instantiate the AST module. We use API to achieve the goal. ```rust // create an executor let mut executor = Executor::new(None, None)?; // create a store let mut store = Store::new()?; // register the module into the store let externinstance = store.registernamed_module(&mut executor, \"extern\", &module)?; ``` In the code above, we register the AST module into a `Store`, in which the module is instantiated, and as a result, a named `extern` is returned. In the previous section, we get an instance by registering a compiled module into the runtime environment. Now we retrieve the memory instance from the module instance, and make use of the APIs defined in to manipulate the linear memory. ```rust // get the exported memory instance let mut memory = extern_instance .memory(\"memory\") .okorelse(|| anyhow::anyhow!(\"failed to get memory instance named 'memory'\"))?; // check memory size assert_eq!(memory.size(), 1); asserteq!(memory.datasize(), 65536); // grow memory size memory.grow(2)?; assert_eq!(memory.size(), 3); asserteq!(memory.datasize(), 3 * 65536); // get the exported functions: \"setat\" and \"getat\" let setat = externinstance .func(\"set_at\") .okorelse(|| anyhow::Error::msg(\"Not found exported function named 'set_at'.\"))?; let getat = externinstance .func(\"get_at\") .okorelse(|| anyhow::Error::msg(\"Not found exported function named 'get_at`.\"))?; // call the exported function named \"set_at\" let mem_addr = 0x2220; let val = 0xFEFEFFE; setat.call(&mut executor, params!(memaddr, val))?; // call the exported function named \"get_at\" let returns = getat.call(&mut executor, params!(memaddr))?; asserteq!(returns[0].toi32(), val); // call the exported function named \"set_at\" let pagesize = 0x10000; let memaddr = (pagesize * 2) - std::mem::sizeofval(&val) as i32; let val = 0xFEA09; setat.call(&mut executor, params!(memaddr, val))?; // call the exported function named \"get_at\" let returns = getat.call(&mut executor, params!(memaddr))?; asserteq!(returns[0].toi32(), val); ``` The comments in the code explain the meaning of the code sample above, so we don't describe more. The complete code of this example can be found in ."
}
] |
{
"category": "Runtime",
"file_name": "memory_manipulation.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"ark delete restore\" layout: docs Delete a restore Delete a restore ``` ark delete restore NAME [flags] ``` ``` -h, --help help for restore ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Delete ark resources"
}
] |
{
"category": "Runtime",
"file_name": "ark_delete_restore.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Longhorn's storage stack, based on iSCSI and a customized protocol, has limitations such as increased I/O latencies and reduced IOPS due to the longer data path. This makes it less suitable for latency-critical applications. To overcome these challenges, Longhorn introduces the Storage Performance Development Kit (SPDK) to enhance overall performance. With SPDK integration, Longhorn optimizes system efficiency, addresses latency concerns, and provides a high-performance storage solution capable of meeting diverse workload demands. Introduce backend store drivers `v1`: legacy data path `v2`: a newly introduced data path based on SPDK Introduce disk types and management Support volume creation, attachment, detachment and deletion Support orphaned replica collection Support runtime replica rebuilding Support changing number of replicas of a volume Support volume expansion Support volume backup Longhorn's storage stack is built upon iSCSI and a customized protocol. However, the longer data path associated with this architecture introduces certain limitations, resulting in increased I/O latencies and reduced IOPS. Consequently, Longhorn may not be the ideal choice for latency-critical applications, as the performance constraints could impede their deployment on the platform. By incorporating SPDK, Longhorn leverages its capabilities to significantly improve performance levels. The integration of SPDK enables Longhorn to optimize system efficiency, mitigate latency concerns, and deliver a high-performance storage solution that can better meet the demands of diverse workloads. Environment Setup Configure Kernel Modules (uio and uiopcigeneric) and Huge Pages for SPDK ```bash kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/prerequisite/longhorn-spdk-setup.yaml ``` Install NVMe Userspace Tool and Load `nvme-tcp` Kernel Module nvme-cli on each node and make sure that the version of nvme-cli is equal to or greater than version `1.12` . ```bash kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/prerequisite/longhorn-nvme-cli-installation.yaml ``` Restart `kubelet` Modifying the Huge Page configuration of a node requires either a restart of kubelet or a complete reboot of the node. This step is crucial to ensure that the changes take effect and are properly applied. Install Longhorn system Enable SPDK Support Enable the SPDK feature by changing the `v2-data-engine` setting to `true` after installation. Following this, the instance-manager pods shall be automatically restarted. Add Disks for volumes using v2 data engine Legacy disks are classified as `filesystem`-type disks Add one or multiple `block`-type disks into `node.Spec.Disks` ```bash block-disk-example1: allowScheduling: true evictionRequested: false path: /path/to/block/device storageReserved: 0 tags: [] diskType: block ``` Create a storage class utilizing the enhanced performance capabilities offered by SPDK ```bash kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: longhorn-v2-data-engine provisioner:"
},
{
"data": "allowVolumeExpansion: true reclaimPolicy: Delete volumeBindingMode: Immediate parameters: numberOfReplicas: \"2\" staleReplicaTimeout: \"2880\" fromBackup: \"\" fsType: \"ext4\" dataEngine: \"v2\" ``` Create workloads that use Longhorn volumes provisioning based onthe storage class. Global settings `v2-data-engine`: This setting allows users to enable v2 data engine support. Default: false. `v2-data-engine-hugepage-limit`: This setting allows users to specify the 2 MiB hugepage size for v2 data engine. Default: 2048. CRD Introduce `diskType` in `node.Spec.Disks` `filesystem`: disks for legacy volumes. These disks, which are actually directories, store and organize data in a hierarchical manner. `block`: block disks for volumes using v2 data engine The replica scheduler assigns replicas of legacy volumes to `filesystem`-type disks while replicas of volumes using v2 data engine are scheduled to `block`-type disks. Introduce `backendStoreDriver` in `volume.Spec`, `engine.Spec` and `replica.Spec`. `backendStoreDriver` is utilized to differentiate between volume types and their associated data paths. Introduce `Instance`, `Disk` and `SPDK` gRPC services `Instance` gRPC service: It is tasked with managing various operations related to instance management, including creation, deletion, retrieval, listing, and watching. An instance, either an engine or a replica of a legacy volume, represents a process. On the other hand, for replicas of volumes using v2 data engine, an instance represents a logical volume. In the case of an engine for an volume using v2 data engine, an instance is associated with a raid bdev, a frontend NVMe target/initiator pair and a bind mount device. `Disk` gRPC service: It is responsible for managing various disk operations, including creation, deletion, and retrieval. Additionally, it provides functionalities to list or delete replica instances associated with the disks. In the case of a legacy volume, a replica instance is represented as a replica directory on the disk. On the other hand, for an volume using v2 data engine, a replica instance is a replica chained by logical volumes. `SPDK` gRPC service: It manages replicas chained by logical volumes and engines constructed using SPDK raid1 bdevs. In addition, the service is responsible for the communication with `spdk_tgt`. Proxy gRPC service APIs Update gRPC service APIs for support different disk type, filesystem and block, and data engines, v1 and v2. Disk orchestration Within the Longhorn system, an aio bdev and an lvstore are created on top of a block-type disk. Replicas in terms of logical volumes (lvols) are then created on the lvstore. Orphaned replicas collection The features have been integrated into the existing framework for collecting and cleaning up orphaned replicas."
}
] |
{
"category": "Runtime",
"file_name": "20230523-support-spdk-volumes.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
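A minimal pre-flight sketch for the Longhorn v2 (SPDK) entry above: it checks the kernel modules, huge pages, and nvme-cli version that the setup steps require. The module names are written with their usual underscores, and the commented setting-update command is an assumption about how Longhorn settings are edited, so verify both against the Longhorn documentation for your release.

```bash
#!/usr/bin/env bash
# Hypothetical pre-flight check before enabling the Longhorn v2 (SPDK) data engine.
set -euo pipefail

# 1. Kernel modules referenced by the setup steps (uio, uio_pci_generic, nvme-tcp).
for mod in uio uio_pci_generic nvme_tcp; do
  if lsmod | awk '{print $1}' | grep -qx "${mod}"; then
    echo "module ${mod}: loaded"
  else
    echo "module ${mod}: missing (try: sudo modprobe ${mod})"
  fi
done

# 2. 2 MiB huge pages reserved for SPDK; the default limit above is 2048 pages.
grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo

# 3. nvme-cli 1.12 or newer is required on every node.
nvme version

# 4. Assumed way to flip the setting after install; confirm for your version.
# kubectl -n longhorn-system patch settings.longhorn.io v2-data-engine \
#   --type merge -p '{"value":"true"}'
```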
[
{
"data": "oep-number: Replica Scaleup REV1 title: Replica Scaleup authors: \"@vishnuitta\" owners: \"@kmova\" \"@mynktl\" \"@pawanpraka1\" \"@mittachaitu\" editor: \"@vishnuitta\" creation-date: 2019-09-10 last-updated: 2019-09-10 status: provisional * * * * * * * * * * * * * * * This proposal includes design for cStor data plane to allow adding new replicas to it so that replication factor of the volume will be increased. to move replica from one pool to another There are cases arising where OpenEBS user lost multiple copies of data, and, working with available single copy. This proposal is to enable him to add more replicas to it again. This also allows OpenEBS user to distribute volume replicas Increase ReplicationFactor of volume by adding replicas Replace a replica with other one which is used in volume distribution and ephemeral cases Identify replicas which are allowed to connect to target Scaling down replicas Add replicas in GitOps model Workflow to replace non-existing replica operator that detects the need to increase ReplicationFactor As an OpenEBS user, I should be able to add replicas to my volume. As an OpenEBS user, I should be able to move replica across pools. As an OpenEBS user, I should be able to replace non-existing replica with new replica. Currently, cstor-istgt reads replication related configuration, i.e., ReplicationFactor (RF) and ConsistencyFactor (CF), from istgt.conf file. cstor-volume-mgmt is the container that fills the RF and CF details into istgt.conf file by reading from CStorVolume CR. Another property that cstor-istgt uses is 'quorum'. This property is set at replica, and, replica sends this to istgt during handshake phase. This property can take values of 'on' and 'off'. 'on' means that data written to this replica is available for reads. 'off' means that data written to this replica is lost. cstor-pool-mgmt is the container that sets quorum property at replica. It sets property as 'on' if replica is created first time related to that CVR. Otherwise, it sets to 'off'. cstor-pool-mgmt creates replica with quorum as 'on' if status.Phase is Init. If state is 'Recreate', it creates replica (if needed) with quorum as 'off'. At cstor-istgt, if quorum is off for a replica, that replica won't participate in IO related consistency factor checks. Its there to rebuild missing data while its taking ongoing IOs. if quorum is on, that replica participate in deciding the fate of IOs. cstor-istgt returns success/failure to client based on the response from replicas in quorum list. cstor-volume-mgmt reads configuration from CStorVolume CR. spec.replicationFactor spec.consistencyFactor cstor-volume-mgmt updates the above info into istgt.conf file as ReplicationFactor ConsistencyFactor Based on the status of CVR and availability of replica, cstor-pool-mgmt sets quorum property while creating replica. Above mentioned properties are used as below at istgt: Allow replicas (up to max of 5) to connect to target until RF number of healthy replicas are available. At least CF in-quorum replicas are required to perform rebuilding and start IOs Rebuilding will be triggered and replicas become healthy. As RF number of replicas are healthy, other connected replicas, if any, gets disconnected. Due to misconfiguration from user, if old replica connects back instead of the replaced one, there are chances of serving wrong data, and can lead to data corruption. To understand this with example, look at #notes section. There is no data-consistent way of increasing RF and CF of CStorVolume"
},
{
"data": "Increase in replication factor will be provided in declarative way. For replicas identification that can connect to target, replicas information as well need to be stored in CStorVolume CR and istgt.conf. New fields will be added to CStorVolume CR and istgt.conf. In CStorVolume CR, `spec.DesiredReplicationFactor` will be added to help in adding replicas, and, `status.ReplicaList` to store list of known replicas. In istgt.conf, `DesiredReplicationFactor` and `ReplicaList` will be added. status.ReplicaList of CStorVolume CR need to contain the replicas information that are allowed to connect to target. There can be <= RF number of entries, which eventually becomes RF entries. All the replicas in this list will be with quorum property 'on'. Replicas in this list are termed as 'Known Replicas'. If RF < DRF, new replicas are allowed to connect to target. Once it becomes healthy, RF will be increased and new replica will be added to spec.ReplicaList. Replica that need to be replaced/moved will also be done in declarative way. During handshake, replicas will share `ReplicaGUID_Data` related to particular `ReplicaID_CVR` with target. `ReplicaID_CVR` is a unique number related to CVR for which replica is created in dataplane whose GUID is `ReplicaGUID_Data`. If replica is recreated for particular CVR, `ReplicaGUID_Data` will be changed but not `ReplicaID_CVR`. If replica is moved to another pool, new CVR should be created on new pool with `ReplicaID_CVR` same as that of old pool, and status.Phase as 'Recreate'. If new replica need to added to volume, new CVR will be created. So, `ReplicaIDCVR` will be new, and its `ReplicaGUIDData` also will be new. In CStorVolumeReplica, new field `spec.ReplicaID` will be added. During dataset creation, cstor-pool-mgmt will set this at dataset with property as `io.openebs:replicaID`. User will create proper CVR with status.phase as `Recreate`. User will edit CStorVolume CR to set 'DesiredReplicationFactor' Volume is available online as user reduced RF when too many replicas are lost (or) sufficient replicas i.e., CF are available. Leave the DesiredReplicationFactor as it is, and cStor increases the RF if descreased by user. User will delete CVR which is on old pool User will create proper CVR with status.phase as `Recreate` on new pool and `spec.ReplicaID` same as the one of CVR on old pool. cstor-volume-mgmt watches for CStorVolume CR and updates istgt.conf file if there is any change in DesiredReplicationFactor. It will trigger istgtcontrol command so that istgt updates DRF during runtime. A listener will be created on UnixDomain socket in cstor-volume-mgmt container. istgt connects to this listener to update the replica related information if there is any change in the list. cstor-volume-mgmt updates the status.ReplicaList part of CStorVolume CR and sends the success/failure of CR update as response to istgt. During start phase of cstor-volume-mgmt container, it updates the istgt.conf file with values from spec.DesiredReplicationFactor and status.ReplicaList of CStorVolume CR. ``` [Conf file] > ISTGT <(RID&RGUID) REPLICAS <--(R_ID)- CSTOR-POOL-MGMT | /|\\ /|\\ (UnixSocket Conn) | | | | | | (istgtcontrol) (R_ID) \\|/ | | CSTOR-VOLUME-MGMT <->[CStorVolume CR] [CVR CR] | | \\|/ [Conf file] ``` This can be better explained with scenarios. Consider case where 3 replicas (R1, R2, R3) are connected and are fine. Lets say, one replica(R3) was lost and new one (R4) got added. 
After R4 got reconstructed with data, another replica (R2) got lost and a new one (R5) got connected. Once R5 got reconstructed, all replicas have quorum property"
},
{
"data": "But, just that R2 and R3 are not reachable. At a later point of time, if R1, R4, R5 are down and R2, R3 are connected to istgt (which can be due to attachment of old disks), istgt cannot identify that there is missing data with R2, R3. Consider replacing non-existing replica. When user reduced replication factor, and later increased it, if old replicas gets connected, istgt can't identify that there is missing data with replicas. This is needed to identify the replica that gets moved across pools. For the case of replacement also, this is needed to identify the replica. Consider case of RF as 5 and CF as 3. R1, R2, R3, R4 and R5 are replicas. R1, R2 and R3 are online and IOs are happening. R6 connects and gets added. Here, 2 approaches are possible one approach is to add R6 to known replicas of R1, R2, R3, R4, R5 (or) another approach is to replace R4 and R5 with R6. Later R6 also disconnected. And, after few more IOs, R1, R2, R3 also got disconnected. At this point if first approach is followed, if R4, R5 and R6 connects, there will be data inconsistency. if second approach is followed, there will be only 4 replicas. If 5th one, either R4 or R5 connects, it need to become healthy before getting added to to list. This would be time consuming. Current code takes care of reconstructing data to non-quorum replica once it does handshake with target. But, changes are required in allowing replica to perform handshake with target to achieve DesiredReplicationFactor. `spec`'s `rq` need to contain only known replicas, which should be having quorum as 'on'. All other replicas need to get into `nonquorumrq` (whose name can be changed to `unknown_rq`) There won't be any change for `healthyrcount` and `degradedrcount` which looks at `spec`'s `rq` Current implementation says that CF number of in-quorum replicas are needed. But, with addition of known replica list to achieve data consistency, CF number of any known replicas need to be connected. Below are the steps to allow replica handshake with istgt: If currently connected replicas count >= DRF, reject this if it is non-quorum (or) another connected replica (in the order non-quorum, not-in-the-known-list) if the new one is part of known list with `ReplicaIDCVR` or `ReplicaGUIDData`, and, raise a log/alert as too many replicas are connecting. If DesiredRF number of known replicas are connected, i.e., (`healthyrcount` + `degradedrcount` == DRF), reject all and raise log/alert Make sure connected replica is an appropriate one to the volume, by checking replica name with vol name Allow replica if it is in known list Allow replica if its `ReplicaID_CVR` is in known list, but add to unknown list (Replacement case)(here, quorum might be off or"
},
{
"data": "It can be on if this replica got disconnected during transitioning to known list) Allow replica if its `ReplicaID_CVR` is NOT in known list only if RF < DRF, but add to unknown list [Replica addition case] If RF number of known replicas are NOT available [new volume or upgrade case] Add to unknown list if quorum is off (replacement case) Add to known list if quorum is on Allow if RF < DRF, but, add to unknown list Reject otherwise Note: Make sure there is only one replica for a given `ReplicaID_CVR` When a replica completes rebuilding and turns healthy, it might be undergoing either replacement or addition. If replica's `ReplicaID_CVR` is already in known list, it is replacement case. Replace the `ReplicaGUIDData` with new one for `ReplicaIDCVR` in known list. If replica's `ReplicaID_CVR` is NOT in known list, it is replica addition case. Add `ReplicaIDCVR` with `ReplicaGUIDData` to known replica list. For the case of adding new replica or a replacement replica, steps to follow for data consistency are: Identify the case when replica turned from quorum off to quorum on state. Let this replica referred as R. If RF number of known replicas are NOT available or `ReplicaID_CVR` is in known list, update CStorVolume CR and in-memory structures (Replica replacement or replica movement case) If RF == DRF, disconnect (Replica scaleup case) If there is no change in CF with increase in RF, update the CStorVolume CR and in-memory structures with increased RF Pause IOs for few seconds, and, make sure there are no pending IOs on R [Why? If CR got updated, and there are pending IOs on R, CF of those IOs MAY NOT be met with new RF] If pending IOs still exists, resume IOs and retry above steps in next iteration. If there are no pending IOs, mark `R` as scaleup replica and resume IOs When any replica is marked as scaleup replica, all write IOs need to be verified to be successful with new and old consistency models. Inform cstor-volume-mgmt with new replicas and replication factor details If updating CR succeeds, update in-memory data structures of istgt If updating CR fails, retry above steps after some time Control plane should never create more than initially configured RF number of CVRs with quorum on. Quorum should be 'off' for replicas that are later created either to re-create replica as data got lost or adding new replica to increase RF. CF is set at Lower_Ceil(RF/2) + 1 following the formula used in control plane. However, data plane can take any number of CF which is less than RF. I/P validation need to be done such that DRF is >= RF Considering the case of reduction in RF, istgt need to update ReplicaList whenever in-memory list doesn't match with the list in conf file. Start old replica and make sure it doesn't connect Not more than DRF number of quorum replicas at any time No non-quorum replicas if DRF number of quorum replicas exists Data consistency checks in the newly added replicas Updated `ReplicaList` in CV CR when new replicas got added Failure case handling to update CV CR Only replicas in ReplicaList should be allowed to connect even with different components restarts replicas that are NOT in ReplicaList should NOT be allowed to connect even with different components restarts Upgrade cases I/P values validation like smaller DRF, reducing DRF/RF Start with 3 replicas and write data. Recreate 2 pools and reduce RF to 1. Increase it to 3 and verify data consistency. All testcases mentioned in `Testcases` section need to be automated Owner acceptance"
}
] |
{
"category": "Runtime",
"file_name": "20190910-replica-scaleup.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
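A hedged sketch of the declarative scale-up flow described in the proposal above, using kubectl to raise the desired replication factor and watch the new replica rebuild. The field casing, namespace, volume name, and label are illustrative assumptions; the proposal predates the final CRD schema, so check the real field names before use.

```bash
#!/usr/bin/env bash
# Illustrative only: raise DesiredReplicationFactor on a cStor volume and
# watch the new CVR rebuild. Names below are placeholders.
set -euo pipefail

VOLUME="pvc-1234"          # hypothetical CStorVolume name
NAMESPACE="openebs"        # hypothetical namespace

# 1. Request one more replica. Per the design, RF is only raised by the data
#    plane after the new replica becomes healthy and joins the known list.
kubectl -n "${NAMESPACE}" patch cstorvolume "${VOLUME}" --type merge \
  -p '{"spec":{"desiredReplicationFactor":3}}'

# 2. Create (or let the operator create) the new CVR with status.phase=Recreate,
#    then watch it move to Healthy. The label used here is an assumed convention.
kubectl -n "${NAMESPACE}" get cvr \
  -l openebs.io/persistent-volume="${VOLUME}" -w
```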
[
{
"data": "This document discusses threat models associated with the Kata Containers project. Kata was designed to provide additional isolation of container workloads, protecting the host infrastructure from potentially malicious container users or workloads. Since Kata Containers adds a level of isolation on top of traditional containers, the focus is on the additional layer provided, not on traditional container security. This document provides a brief background on containers and layered security, describes the interface to Kata from CRI runtimes, a review of utilized virtual machine interfaces, and then a review of threats. Kata seeks to prevent an untrusted container workload or user of that container workload to gain control of, obtain information from, or tamper with the host infrastructure. In our scenario, an asset is anything on the host system, or elsewhere in the cluster infrastructure. The attacker is assumed to be either a malicious user or the workload itself running within the container. The goal of Kata is to prevent attacks which would allow any access to the defined assets. Traditional containers leverage several key Linux kernel features to provide isolation and a view that the container workload is the only entity running on the host. Key features include `Namespaces`, `cgroups`, `capablities`, `SELinux` and `seccomp`. The canonical runtime for creating such a container is `runc`. In the remainder of the document, the term `traditional-container` will be used to describe a container workload created by runc. Kata Containers provides a second layer of isolation on top of those provided by traditional-containers. The hardware virtualization interface is the basis of this additional layer. Kata launches a lightweight virtual machine, and uses the guests Linux kernel to create a container workload, or workloads in the case of multi-container pods. In Kubernetes and in the Kata implementation, the sandbox is carried out at the pod level. In Kata, this sandbox is created using a virtual machine. A typical Kata Containers deployment uses Kubernetes with a CRI implementation. On every node, Kubelet will interact with a CRI implementor, which will in turn interface with an OCI based runtime, such as Kata Containers. Typical CRI implementors are `cri-o` and `containerd`. The CRI API, as defined at the Kubernetes , results in a few constructs being supported by the CRI implementation, and ultimately in the OCI runtime creating the workloads. In order to run a container inside of the Kata sandbox, several virtual machine devices and interfaces are required. Kata translates sandbox and container definitions to underlying virtualization technologies provided by a set of virtual machine monitors (VMMs) and hypervisors. These devices and their underlying implementations are discussed in detail in the following section. In case of Kata, today the devices which we need in the guest are: Storage: In the current design of Kata Containers, we are reliant on the CRI implementor to assist in image handling and volume management on the host. As a result, we need to support a way of passing to the sandbox the container rootfs, volumes requested by the workload, and any other volumes created to facilitate sharing of secrets and `configmaps` with the containers. Depending on how these are managed, a block based device or file-system sharing is required. Kata Containers does this by way of `virtio-blk` and/or `virtio-fs`. 
Networking: A method for enabling network connectivity with the workload is required. Typically this will be done by providing a `TAP` device to the VMM, and this will be exposed to the guest as a `virtio-net`"
},
{
"data": "It is feasible to pass in a NIC device directly, in which case `VFIO` is leveraged and the device itself will be exposed to the guest. Control: In order to interact with the guest agent and retrieve `STDIO` from containers, a medium of communication is required. This is available via `virtio-vsock`. Devices: `VFIO` is utilized when devices are passed directly to the virtual machine and exposed to the container. Dynamic Resource Management: `ACPI` is utilized to allow for dynamic VM resource management (for example: CPU, memory, device hotplug). This is required when containers are resized, or more generally when containers are added to a pod. How these devices are utilized varies depending on the VMM utilized. We clarify the default settings provided when integrating Kata with the QEMU, Firecracker and Cloud Hypervisor VMMs in the following sections. Each virtio device is implemented by a backend, which may execute within userspace on the host (vhost-user), the VMM itself, or within the host kernel (vhost). While it may provide enhanced performance, vhost devices are often seen as higher risk since an exploit would be already running within the kernel space. While VMM and vhost-user are both in userspace on the host, `vhost-user` generally allows for the back-end process to require less system calls and capabilities compared to a full VMM. The backend for `virtio-blk` and `virtio-scsi` are based in the VMM itself (ring3 in the context of x86) by default for Cloud Hypervisor, Firecracker and QEMU. While `vhost` based back-ends are available for QEMU, it is not recommended. `vhost-user` back-ends are being added for Cloud Hypervisor, they are not utilized in Kata today. `virtio-fs` is supported in Cloud Hypervisor and QEMU. `virtio-fs`'s interaction with the host filesystem is done through a vhost-user daemon, `virtiofsd`. The `virtio-fs` client, running in the guest, will generate requests to access files. `virtiofsd` will receive requests, open the file, and request the VMM to `mmap` it into the guest. When DAX is utilized, the guest will access the host's page cache, avoiding the need for copy and duplication. DAX is still an experimental feature, and is not enabled by default. From the `virtiofsd` : ```This program must be run as the root user. Upon startup the program will switch into a new file system namespace with the shared directory tree as its root. This prevents file system escapes due to symlinks and other file system objects that might lead to files outside the shared directory. The program also sandboxes itself using seccomp(2) to prevent ptrace(2) and other vectors that could allow an attacker to compromise the system after gaining control of the virtiofsd process.``` DAX-less support for `virtio-fs` is available as of the 5.4 Linux kernel. QEMU VMM supports virtio-fs as of v4.2. Cloud Hypervisor supports `virtio-fs`. `virtio-net` has many options, depending on the VMM and Kata configurations. While QEMU has options for `vhost`, `virtio-net` and `vhost-user`, the `virtio-net` backend for Kata defaults to `vhost-net` for performance reasons. The default configuration is being reevaluated. For Firecracker, the `virtio-net` backend is within Firecracker's VMM. For Cloud Hypervisor, the current backend default is within the VMM. `vhost-user-net` support is being added (written in rust, Cloud Hypervisor specific). In QEMU, vsock is backed by `vhost_vsock`, which runs within the kernel itself. 
In Firecracker and Cloud Hypervisor, vsock is backed by a unix-domain-socket in the host's userspace. Utilizing VFIO, devices can be passed through to the virtual machine. We will assess this separately. Exposure to the host is limited to gaps in device pass-through handling. This is supported in QEMU and Cloud Hypervisor, but not Firecracker. ACPI is necessary for hotplug of"
}
] |
{
"category": "Runtime",
"file_name": "threat-model.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
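A rough companion sketch for the back-end discussion above: host-side checks that show which shared-filesystem and vhost back-ends a Kata installation is set up to use. The configuration path and key names are common defaults rather than facts stated in the entry, so treat them as assumptions.

```bash
#!/usr/bin/env bash
# Inspect which back-ends a Kata host is configured for. Paths and keys are
# typical defaults and may differ per distribution or Kata version.
set -euo pipefail

CFG="/usr/share/defaults/kata-containers/configuration.toml"   # assumed default path

# Shared filesystem back-end (for example virtio-fs) selected for the guest.
grep -nE '^\s*shared_fs\s*=' "${CFG}" || echo "shared_fs not set in ${CFG}"

# Kernel-resident back-ends mentioned above: vhost-net and vhost_vsock.
lsmod | grep -E 'vhost_net|vhost_vsock' || echo "no vhost modules loaded"

# VFIO modules, relevant only when devices are passed through to the guest.
lsmod | grep -E '^vfio' || echo "vfio not loaded"
```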
[
{
"data": "title: Helm Charts Overview Rook has published the following Helm charts for the Ceph storage provider: : Starts the Ceph Operator, which will watch for Ceph CRs (custom resources) : Creates Ceph CRs that the operator will use to configure the cluster The Helm charts are intended to simplify deployment and upgrades. Configuring the Rook resources without Helm is also fully supported by creating the directly."
}
] |
{
"category": "Runtime",
"file_name": "helm-charts.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
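Because the entry above only names the two charts, a short installation example may help. The repository URL, chart names, and values follow common Rook usage but are assumptions here; confirm them against the chart documentation for your Rook version.

```bash
#!/usr/bin/env bash
# Typical two-step Helm install: the operator chart first, then the cluster chart.
set -euo pipefail

helm repo add rook-release https://charts.rook.io/release
helm repo update

# 1. Operator chart: runs the Rook operator that watches for Ceph CRs.
helm install --create-namespace -n rook-ceph rook-ceph rook-release/rook-ceph

# 2. Cluster chart: creates the Ceph CRs that the operator acts on.
helm install -n rook-ceph rook-ceph-cluster rook-release/rook-ceph-cluster \
  --set operatorNamespace=rook-ceph
```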
[
{
"data": "This document provides guidance on migrating from other CNIs to Antrea starting from version v1.15.0 onwards. NOTE: The following is a reference list of CNIs and versions for which we have verified the migration process. CNIs and versions that are not listed here might also work. Please create an issue if you run into problems during the migration to Antrea. During the migration process, no Kubernetes resources should be created or deleted, otherwise the migration process might fail or some unexpected problems might occur. | CNI | Version | ||| | Calico | v3.26 | | Flannel | v0.22.0 | The migration process is divided into three steps: Clean up the old CNI. Install Antrea in the cluster. Deploy Antrea migrator. The cleanup process varies across CNIs, typically you should remove the DaemonSet, Deployment, and CRDs of the old CNI from the cluster. For example, if you used `kubectl apply -f <CNI_MANIFEST>` to install the old CNI, you could then use `kubectl delete -f <CNI_MANIFEST>` to uninstall it. The second step is to install Antrea in the cluster. You can follow the to install Antrea. The following is an example of installing Antrea v1.14.1: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.14.1/antrea.yml ``` After Antrea is up and running, you can now deploy Antrea migrator by the following command. The migrator runs as a DaemonSet, `antrea-migrator`, in the cluster, which will restart all non hostNetwork Pods in the cluster in-place and perform necessary network resource cleanup. ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml ``` The reason for restarting all Pods is that Antrea needs to take over the network management and IPAM from the old CNI. In order to avoid the Pods being rescheduled and minimize service downtime, the migrator restarts all non-hostNetwork Pods in-place by restarting their sandbox containers. Therefore, it's expected to see the `RESTARTS` count for these Pods being increased by 1 like below: ```bash $ kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES migrate-example-6d6b97f96b-29qbq 1/1 Running 1 (24s ago) 2m5s 10.10.1.3 test-worker <none> <none> migrate-example-6d6b97f96b-dqx2g 1/1 Running 1 (23s ago) 2m5s 10.10.1.6 test-worker <none> <none> migrate-example-6d6b97f96b-jpflg 1/1 Running 1 (23s ago) 2m5s 10.10.1.5 test-worker <none> <none> ``` When the `antrea-migrator` Pods on all Nodes are in `Running` state, the migration process is completed. You can then remove the `antrea-migrator` DaemonSet safely with the following command: ```bash kubectl delete -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml ```"
}
] |
{
"category": "Runtime",
"file_name": "migrate-to-antrea.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
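A small verification sketch to complement the three migration steps above. It assumes Antrea was installed into the standard kube-system namespace and only inspects objects the entry already names (the antrea-agent DaemonSet, the antrea-migrator pods, and the per-pod RESTARTS count).

```bash
#!/usr/bin/env bash
# Post-migration sanity checks; namespace and object names are the usual defaults.
set -euo pipefail

# Antrea agents should be rolled out on every node.
kubectl -n kube-system rollout status daemonset/antrea-agent

# All antrea-migrator pods should have reached Running before cleanup.
kubectl -n kube-system get pods -o wide | grep antrea-migrator

# Non-hostNetwork workload pods should show RESTARTS incremented by 1 and
# pod IPs allocated by Antrea.
kubectl get pods -A -o wide | head -n 20

# Finally, remove the migrator DaemonSet as described above.
kubectl delete -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-migrator.yml
```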
[
{
"data": "This document contains info on how to setup a Kubernetes cluster to install Sysbox. Each section covers a different Kubernetes distro. ) Create a cluster through or any equivalent tool. Take into account the Sysbox requirements described . Once the cluster is created, proceed to install Sysbox as shown . Create the EKS cluster using an Ubuntu-based AMI for the K8s worker nodes. Ensure the nodes have a minimum of 4 vCPUs each. An easy way to do this is using , the official CLI for AWS EKS. For example, the following cluster configuration yaml creates an EKS cluster with Kubernetes v1.20 and a composed of 3 Ubuntu-based nodes (t3.xlarge instances): ```yaml apiVersion: eksctl.io/v1alpha5 kind: ClusterConfig metadata: name: my-cluster region: us-west-2 version: \"1.20\" managedNodeGroups: name: ubuntu-nodes amiFamily: Ubuntu2004 instanceType: t3.xlarge desiredCapacity: 3 minSize: 3 maxSize: 5 volumeSize: 200 ssh: allow: true publicKeyName: awsEksKey ``` First create the `awsEksKey`: ```console aws ec2 create-key-pair --region us-west-2 --key-name awsEksKey ``` And then create the cluster with: ```console eksctl create cluster --config-file=<your-cluster-config.yaml> 2021-08-11 02:16:41 [] eksctl version 0.59.0 2021-08-11 02:16:41 [] using region us-west-2 ... 2021-08-11 02:43:15 [] EKS cluster \"my-cluster2\" in \"us-west-2\" region is ready ``` The cluster creation process on EKS takes a while (~25 min!). Once the cluster is created, proceed to install Sysbox as shown . NOTES: The installation of Sysbox (which also installs CRI-O on the desired K8s worker nodes) takes between 2->3 minutes on EKS. You can view it's progress by looking at the logs of the sysbox-deploy-k8s pod. ```console $ kubectl -n kube-system logs -f pod/sysbox-deploy-<pod-id> Adding K8s label \"crio-runtime=installing\" to node ... The k8s runtime on this node is now CRI-O. Sysbox installation completed. Done. ``` If the installation takes significantly longer, something is likely wrong. See for troubleshoot info. Create a cluster by following the official . Sysbox can be properly installed over the K8s worker nodes created by AKS by default (i.e., Ubuntu-Bionic + Containerd). However, the default hardware specs (2 vCPU) are not ideal to run Sysbox pods on, so ensure that the nodes have a minimum of 4 vCPUs each. NOTES: The installation of Sysbox (which also installs CRI-O on the desired K8s worker nodes) takes between 1->2 minutes on AKS. If it takes significantly longer than this, something is likely wrong. See for troubleshoot info. Create a cluster with a . Create the K8s worker nodes where Sysbox will be installed using the \"Ubuntu with Containerd\" image templates. Ensure the nodes have a minimum of 4 vCPUs"
},
{
"data": "Do NOT enable secure-boot on the nodes, as this prevents the sysbox-deploy-k8s daemonset from installing the into the kernel. This module is usually present in Ubuntu desktop and server images, but not present in Ubuntu cloud images, so the sysbox-deploy-k8s daemonset must install it. Label the nodes and deploy the Sysbox installation daemonset as shown in the . NOTES: The installation of Sysbox (which also installs CRI-O on the desired K8s worker nodes) takes between 1->2 minutes on GKE. If it takes significantly longer than this, something is likely wrong. See for troubleshoot info. Create a cluster through the UI, or by making use of the RKE provisioning . Take into account the Sysbox node requirements described . Once the cluster is fully operational, proceed to install Sysbox as shown . NOTES: The installation of Sysbox (which also installs CRI-O on the desired K8s worker nodes) takes between 1->2 minutes on RKE clusters. Upon successful installation of Sysbox, all the K8s PODs will be re-spawned through CRI-O. However, the control-plane components (e.g., kubelet) created as Docker containers by the RKE provisioning tool, will continue to be handled by docker. Create an RKE2 cluster through the UI (must be running Rancher v2.6+), or by making use of the RKE2 provisioning . Take into account the Sysbox node requirements described . Once the cluster is fully operational, proceed to install Sysbox as shown . NOTES: The installation of Sysbox (which also installs CRI-O on the desired K8s worker nodes) takes between 1->2 minutes on RKE2 clusters. Create a Lokomotive cluster as described in the . Take into account the Sysbox node requirements described , and the fact that Lokomotive runs atop Flatcar Container Linux distribution, which is only in the Sysbox-EE offering. Once the cluster is fully operational, proceed to install Sysbox as shown . NOTES: The current Sysbox K8s installer does not fully support Lokomotive's \"self-hosted\" approach to manage K8s clusters. In particular, Sysbox is currently unable to interact with the two different sets of Kubelet processes created by Lokomotive. That is, Sysbox is only capable of configuring the Kubelet process. Thereby, for the proper operation of Sysbox within a Lokomotive cluster, the default Kubelet daemon-set must be disabled or eliminated with the following (or equivalent) instruction: ```console $ kubectl delete -n kube-system daemonset.apps/kubelet --cascade=true ``` Sysbox installation in a Lokomotive cluster is strikingly fast -- usually doesn't exceed 20-30 seconds. This is just a consequence of the fact that `sysbox-ee-deploy-k8s` daemon-set prepackages all the required dependencies."
}
] |
{
"category": "Runtime",
"file_name": "install-k8s-distros.md",
"project_name": "Sysbox",
"subcategory": "Container Runtime"
}
|
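A hedged sketch of the install flow the entry above keeps referring to: label the target worker nodes, apply the sysbox-deploy-k8s DaemonSet, and follow its log. The manifest URL is a placeholder and the DaemonSet name is inferred from the pod name shown in the log excerpt above, so take both from the Sysbox installation documentation.

```bash
#!/usr/bin/env bash
# Placeholder walk-through of deploying Sysbox onto selected worker nodes.
set -euo pipefail

# 1. Mark the worker nodes that should get Sysbox (and CRI-O).
kubectl label nodes worker-1 worker-2 sysbox-install=yes --overwrite

# 2. Apply the installer DaemonSet. Placeholder URL: use the manifest from the docs.
kubectl apply -f https://example.com/sysbox-install.yaml

# 3. Follow the installer log; per the entry above it normally completes in
#    roughly 1-3 minutes depending on the platform.
kubectl -n kube-system logs -f daemonset/sysbox-deploy-k8s
```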
[
{
"data": "::: warning Note QoS is a new feature added in v3.2.1. ::: The stability of the master is very important for the entire cluster, so in order to prevent accidents (a large number of retries) or malicious attacks, the interfaces of the master need to be managed by QPS throttling. The QPS throttling for the master interface is based on QPS (how many requests are accepted per second). For interfaces that have not set throttling, no restrictions are imposed. For interfaces that have set throttling, there is a throttling wait timeout to prevent the avalanche effect. Before setting interface throttling, you can query which interfaces the master supports. ```bash curl -v \"http://192.168.0.11:17010/admin/getMasterApiList\" ``` ::: tip Note `192.168.0.11` is the IP address of the master, and the same applies below. ::: The response is as follows: ```json { \"code\": 0, \"data\": { \"adddatanode\": \"/dataNode/add\", \"addmetanode\": \"/metaNode/add\", \"addraftnode\": \"/raftNode/add\", \"adminadddatareplica\": \"/dataReplica/add\", \"adminaddmetareplica\": \"/metaReplica/add\", \"adminchangemetapartitionleader\": \"/metaPartition/changeleader\", \"adminclusterfreeze\": \"/cluster/freeze\", \"adminclusterstat\": \"/cluster/stat\", \"admincreatedatapartition\": \"/dataPartition/create\", \"admincreatemetapartition\": \"/metaPartition/create\", \"admincreatepreloaddatapartition\": \"/dataPartition/createPreLoad\", \"admincreatevol\": \"/admin/createVol\", \"admindatapartitionchangeleader\": \"/dataPartition/changeleader\", \"admindecommissiondatapartition\": \"/dataPartition/decommission\", \"admindecommissionmetapartition\": \"/metaPartition/decommission\", \"admindeletedatareplica\": \"/dataReplica/delete\", \"admindeletemetareplica\": \"/metaReplica/delete\", \"admindeletevol\": \"/vol/delete\", \"admindiagnosedatapartition\": \"/dataPartition/diagnose\", \"admindiagnosemetapartition\": \"/metaPartition/diagnose\", \"admingetallnodesetgrpinfo\": \"/admin/getDomainInfo\", \"admingetcluster\": \"/admin/getCluster\", \"admingetdatapartition\": \"/dataPartition/get\", \"admingetinvalidnodes\": \"/invalid/nodes\", \"admingetip\": \"/admin/getIp\", \"admingetisdomainon\": \"/admin/getIsDomainOn\", \"admingetmasterapilist\": \"/admin/getMasterApiList\", \"admingetnodeinfo\": \"/admin/getNodeInfo\", \"admingetnodesetgrpinfo\": \"/admin/getDomainNodeSetGrpInfo\", \"admingetvol\": \"/admin/getVol\", \"adminlistvols\": \"/vol/list\", \"adminloaddatapartition\": \"/dataPartition/load\", \"adminloadmetapartition\": \"/metaPartition/load\", \"adminsetapiqpslimit\": \"/admin/setApiQpsLimit\", \"adminsetclusterinfo\": \"/admin/setClusterInfo\", \"adminsetdprdonly\": \"/admin/setDpRdOnly\", \"adminsetmetanodethreshold\": \"/threshold/set\", \"adminsetnodeinfo\": \"/admin/setNodeInfo\", \"adminsetnoderdonly\": \"/admin/setNodeRdOnly\", \"adminupdatedatanode\": \"/dataNode/update\", \"adminupdatedomaindatauseratio\": \"/admin/updateDomainDataRatio\", \"adminupdatemetanode\": \"/metaNode/update\", \"adminupdatenodesetcapcity\": \"/admin/updateNodeSetCapcity\", \"adminupdatenodesetid\": \"/admin/updateNodeSetId\", \"adminupdatevol\": \"/vol/update\", \"adminupdatezoneexcluderatio\": \"/admin/updateZoneExcludeRatio\", \"adminvolexpand\": \"/vol/expand\", \"adminvolshrink\": \"/vol/shrink\", \"canceldecommissiondatanode\": \"/dataNode/cancelDecommission\", \"clientdatapartitions\": \"/client/partitions\", \"clientmetapartition\": \"/metaPartition/get\", \"clientmetapartitions\": \"/client/metaPartitions\", 
\"clientvol\": \"/client/vol\", \"clientvolstat\": \"/client/volStat\", \"decommissiondatanode\": \"/dataNode/decommission\", \"decommissiondisk\": \"/disk/decommission\", \"decommissionmetanode\": \"/metaNode/decommission\", \"getallzones\": \"/zone/list\", \"getdatanode\": \"/dataNode/get\", \"getdatanodetaskresponse\": \"/dataNode/response\", \"getmetanode\": \"/metaNode/get\", \"getmetanodetaskresponse\": \"/metaNode/response\", \"gettopologyview\": \"/topo/get\", \"migratedatanode\": \"/dataNode/migrate\", \"migratemetanode\": \"/metaNode/migrate\", \"qosgetclientslimitinfo\": \"/qos/getClientsInfo\", \"qosgetstatus\": \"/qos/getStatus\", \"qosgetzonelimitinfo\": \"/qos/getZoneLimit\", \"qosupdate\": \"/qos/update\", \"qosupdateclientparam\": \"/qos/updateClientParam\", \"qosupdatemasterlimit\": \"/qos/masterLimit\", \"qosupdatezonelimit\": \"/qos/updateZoneLimit\", \"qosupload\": \"/admin/qosUpload\", \"raftstatus\": \"/get/raftStatus\", \"removeraftnode\": \"/raftNode/remove\", \"updatezone\": \"/zone/update\", \"usercreate\": \"/user/create\", \"userdelete\": \"/user/delete\", \"userdeletevolpolicy\": \"/user/deleteVolPolicy\", \"usergetakinfo\": \"/user/akInfo\", \"usergetinfo\": \"/user/info\", \"userlist\": \"/user/list\", \"userremovepolicy\": \"/user/removePolicy\", \"usersofvol\": \"/vol/users\", \"usertransfervol\": \"/user/transferVol\", \"userupdate\": \"/user/update\", \"userupdatepolicy\": \"/user/updatePolicy\" }, \"msg\": \"success\" } ``` Taking the `/dataPartition/get` interface as an example, from the response of the interface query command, you can see that the name of the interface is `admingetdatapartition`, and the command to set the interface throttling is as follows: ```bash curl -v \"http://192.168.0.11:17010/admin/setApiQpsLimit?name=AdminGetDataPartition&limit=2000&timeout=5\" ``` ::: tip Note The value of the `name` parameter is the key under `data` in the response of the interface query command, such as `admingetdatapartition`. The letter case of the value is not case-sensitive and can be written as `AdminGetDataPartition`. ::: | Parameter | Type | Description | |--|--|| | name | string | Interface name (case-insensitive) | | timeout | uint | Interface throttling wait timeout (in seconds) | When the interface throttling is triggered (the QPS limit is reached), subsequent requests will be queued. The default timeout is 5 seconds. If the request is not processed after 5 seconds, a 429 response code will be returned. ```bash curl -v \"http://192.168.0.11:17010/admin/getApiQpsLimit\" ``` The response is as follows: ```json { \"code\": 0, \"msg\": \"success\", \"data\": { \"/admin/getIp\": { \"api_name\": \"admingetip\", \"query_path\": \"/admin/getIp\", \"limit\": 1, \"limiter_timeout\": 5 }, \"/dataPartition/get\": { \"api_name\": \"admingetdatapartition\", \"query_path\": \"/dataPartition/get\", \"limit\": 2000, \"limiter_timeout\": 5 } } } ``` ```bash curl -v \"http://192.168.0.11:17010/admin/rmApiQpsLimit?name=AdminGetDataPartition\" ``` | Parameter | Type | Description | |--|--|--| | name | string | Interface name (case-insensitive) |"
}
] |
{
"category": "Runtime",
"file_name": "qos.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
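Tying the individual curl calls above into one hedged sequence: list the throttleable interfaces, apply a QPS limit to /dataPartition/get, and confirm it took effect. It assumes jq is installed and that the master listens on 192.168.0.11:17010 as in the examples; the exact key spelling in the API-list response should be read from your own cluster's output.

```bash
#!/usr/bin/env bash
# End-to-end QPS throttling example against a CubeFS master.
set -euo pipefail

MASTER="http://192.168.0.11:17010"

# 1. Find the interface name that maps to /dataPartition/get.
curl -s "${MASTER}/admin/getMasterApiList" | jq -r '.data | keys[]' | grep -i datapartition

# 2. Limit it to 2000 requests per second with a 5 second wait timeout.
curl -s "${MASTER}/admin/setApiQpsLimit?name=AdminGetDataPartition&limit=2000&timeout=5"

# 3. Confirm the limit is active.
curl -s "${MASTER}/admin/getApiQpsLimit" | jq '.data."/dataPartition/get"'
```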
[
{
"data": "Thanks for trying out Velero! We welcome all feedback, find all the ways to connect with us on our Community page: You can find details on the Velero maintainers' support process ."
}
] |
{
"category": "Runtime",
"file_name": "SUPPORT.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(network-increase-bandwidth)= You can increase the network bandwidth of your Incus setup by configuring the transmit queue length (`txqueuelen`). This change makes sense in the following scenarios: You have a NIC with 1 GbE or higher on an Incus host with a lot of local activity (instance-instance connections or host-instance connections). You have an internet connection with 1 GbE or higher on your Incus host. The more instances you use, the more you can benefit from this tweak. ```{note} The following instructions use a `txqueuelen` value of 10000, which is commonly used with 10GbE NICs, and a `net.core.netdevmaxbacklog` value of 182757. Depending on your network, you might need to use different values. In general, you should use small `txqueuelen` values with slow devices with a high latency, and high `txqueuelen` values with devices with a low latency. For the `net.core.netdevmaxbacklog` value, a good guideline is to use the minimum value of the `net.ipv4.tcp_mem` configuration. ``` Complete the following steps to increase the network bandwidth on the Incus host: Increase the transmit queue length (`txqueuelen`) of both the real NIC and the Incus NIC (for example, `incusbr0`). You can do this temporarily for testing with the following command: ifconfig <interface> txqueuelen 10000 To make the change permanent, add the following command to your interface configuration in `/etc/network/interfaces`: up ip link set eth0 txqueuelen 10000 Increase the receive queue length (`net.core.netdevmaxbacklog`). You can do this temporarily for testing with the following command: echo 182757 > /proc/sys/net/core/netdevmaxbacklog To make the change permanent, add the following configuration to `/etc/sysctl.conf`: net.core.netdevmaxbacklog = 182757 You must also change the `txqueuelen` value for all Ethernet interfaces in your instances. To do this, use one of the following methods: Apply the same changes as described above for the Incus host. Set the `queue.tx.length` device option on the instance profile or configuration."
}
] |
{
"category": "Runtime",
"file_name": "network_increase_bandwidth.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
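A compact, hedged version of the tuning steps above using the modern ip and sysctl commands, plus the per-instance option mentioned at the end. The sysctl key is written with its standard underscores, and the incus profile command form is an assumption to verify against the CLI help.

```bash
#!/usr/bin/env bash
# Temporary bandwidth tuning on the Incus host plus the instance-side option.
set -euo pipefail

UPLINK="eth0"          # physical NIC
BRIDGE="incusbr0"      # Incus bridge

# Transmit queue length on both the real NIC and the Incus bridge.
ip link set dev "${UPLINK}" txqueuelen 10000
ip link set dev "${BRIDGE}" txqueuelen 10000

# Receive queue length; persist it via /etc/sysctl.conf as described above.
sysctl -w net.core.netdev_max_backlog=182757

# Assumed CLI form for setting the device option on the default profile.
incus profile device set default eth0 queue.tx.length=10000
```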
[
{
"data": "% runc-start \"8\" runc start - start a previously created container runc start container-id The start command executes the process defined in config.json in a container previously created by runc-create(8). runc-create(8), runc(8)."
}
] |
{
"category": "Runtime",
"file_name": "runc-start.8.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
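A short usage sketch for the man page above, assuming an OCI bundle directory that already contains a rootfs and a config.json (for example one generated with runc spec).

```bash
#!/usr/bin/env bash
# Minimal create/start lifecycle for a container named "mycontainer".
set -euo pipefail

cd ./bundle                 # assumed bundle with rootfs/ and config.json

runc create mycontainer     # container exists, process not started yet
runc list                   # status should read "created"

runc start mycontainer      # runs the process defined in config.json
runc list                   # status should now read "running"
```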
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-operator-aws for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Run cilium-operator-aws - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-aws_completion.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
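A brief example of using the command documented above to enable bash completion; the system-wide install path varies by distribution and is only a common default here.

```bash
#!/usr/bin/env bash
# Load bash completion for cilium-operator-aws.
set -euo pipefail

# One-off for the current shell:
source <(cilium-operator-aws completion bash)

# Or install it system-wide (path is distribution dependent):
cilium-operator-aws completion bash | \
  sudo tee /etc/bash_completion.d/cilium-operator-aws > /dev/null
```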
[
{
"data": "This container contains all build-time dependencies in order to build rkt. It currently can be built in: Debian Sid. All commands assume you are running them in your local git checkout of rkt. Configure the path to your git checkout of `rkt` and the build output directory respectively: ``` export SRC_DIR= export BUILDDIR= mkdir -p $BUILDDIR ``` Start the container which will run the , and compile rkt: ``` ./scripts/build-rir.sh ``` You should see rkt building in your rkt container, and once it's finished, the output should be in `$BUILD_DIR` on your host."
}
] |
{
"category": "Runtime",
"file_name": "rkt-build-rkt.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: CephFilesystem CRD Rook allows creation and customization of shared filesystems through the custom resource definitions (CRDs). The following settings are available for Ceph filesystems. !!! note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because both of the defined pools set the to `host` and the `replicated.size` to `3`. The `failureDomain` can also be set to another location type (e.g. `rack`), if it has been added as a `location` in the . ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: myfs namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPools: name: replicated failureDomain: host replicated: size: 3 preserveFilesystemOnDelete: true metadataServer: activeCount: 1 activeStandby: true annotations: placement: resources: ``` (These definitions can also be found in the file) Erasure coded pools require the OSDs to use `bluestore` for the configured . Additionally, erasure coded pools can only be used with `dataPools`. The `metadataPool` must use a replicated pool. !!! note This sample requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the will be set to `host` by default, and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystem metadata: name: myfs-ec namespace: rook-ceph spec: metadataPool: replicated: size: 3 dataPools: name: default replicated: size: 3 name: erasurecoded erasureCoded: dataChunks: 2 codingChunks: 1 metadataServer: activeCount: 1 activeStandby: true ``` IMPORTANT: For erasure coded pools, we have to create a replicated pool as the default data pool and an erasure-coded pool as a secondary pool. (These definitions can also be found in the file. Also see an example in the for how to configure the volume.) `name`: The name of the filesystem to create, which will be reflected in the pool and other resource names. `namespace`: The namespace of the Rook cluster where the filesystem is created. The pools allow all of the settings defined in the Pool CRD spec. For more details, see the settings. In the example above, there must be at least three hosts (size 3) and at least eight devices (6 data + 2 coding chunks) in the cluster. `metadataPool`: The settings used to create the filesystem metadata pool. Must use replication. `dataPools`: The settings to create the filesystem data pools. Optionally (and we highly recommend), a pool name can be specified with the `name` field to override the default generated name; see more below. If multiple pools are specified, Rook will add the pools to the filesystem. Assigning users or files to a pool is left as an exercise for the reader with the . The data pools can use replication or erasure coding. If erasure coding pools are specified, the cluster must be running with bluestore enabled on the OSDs. `name`: (optional, and highly recommended) Override the default generated name of the pool. The final pool name will consist of the filesystem name and pool name, e.g.,"
},
{
"data": "We highly recommend to specify `name` to prevent issues that can arise from modifying the spec in a way that causes Rook to lose the original pool ordering. `preserveFilesystemOnDelete`: If it is set to 'true' the filesystem will remain when the CephFilesystem resource is deleted. This is a security measure to avoid loss of data if the CephFilesystem resource is deleted accidentally. The default value is 'false'. This option replaces `preservePoolsOnDelete` which should no longer be set. (deprecated) `preservePoolsOnDelete`: This option is replaced by the above `preserveFilesystemOnDelete`. For backwards compatibility and upgradeability, if this is set to 'true', Rook will treat `preserveFilesystemOnDelete` as being set to 'true'. The metadata server settings correspond to the MDS daemon settings. `activeCount`: The number of active MDS instances. As load increases, CephFS will automatically partition the filesystem across the MDS instances. Rook will create double the number of MDS instances as requested by the active count. The extra instances will be in standby mode for failover. `activeStandby`: If true, the extra MDS instances will be in active standby mode and will keep a warm cache of the filesystem metadata for faster failover. The instances will be assigned by CephFS in failover pairs. If false, the extra MDS instances will all be on passive standby mode and will not maintain a warm cache of the metadata. `mirroring`: Sets up mirroring of the filesystem `enabled`: whether mirroring is enabled on that filesystem (default: false) `peers`: to configure mirroring peers `secretNames`: a list of peers to connect to. Currently (Ceph Pacific release) only a single* peer is supported where a peer represents a Ceph cluster. `snapshotSchedules`: schedule(s) snapshot.One or more schedules are supported. `path`: filesystem source path to take the snapshot on `interval`: frequency of the snapshots. The interval can be specified in days, hours, or minutes using d, h, m suffix respectively. `startTime`: optional, determines at what time the snapshot process starts, specified using the ISO 8601 time format. `snapshotRetention`: allow to manage retention policies: `path`: filesystem source path to apply the retention on `duration`: `annotations`: Key value pair list of annotations to add. `labels`: Key value pair list of labels to add. `placement`: The mds pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the . `resources`: Set resource requests/limits for the Filesystem MDS Pod(s), see `priorityClassName`: Set priority class name for the Filesystem MDS Pod(s) `startupProbe` : Disable, or override timing and threshold values of the Filesystem MDS startup probe `livenessProbe` : Disable, or override timing and threshold values of the Filesystem MDS livenessProbe. The format of the resource requests/limits structure is the same as described in the . If the memory resource limit is declared Rook will automatically set the MDS configuration `mdscachememory_limit`. The configuration value is calculated with the aim that the actual MDS memory consumption remains consistent with the MDS pods' resource declaration. In order to provide the best possible experience running Ceph in containers, Rook internally recommends the memory for MDS daemons to be at least 4096MB. 
If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log."
}
] |
{
"category": "Runtime",
"file_name": "ceph-filesystem-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
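To make the pool and MDS settings above concrete, here is a hedged sketch of applying the sample filesystem and verifying the result. It assumes the myfs example was saved as filesystem.yaml and that the rook-ceph-tools toolbox described elsewhere in the Rook docs is deployed.

```bash
#!/usr/bin/env bash
# Apply the sample CephFilesystem and check the MDS pods and pools it creates.
set -euo pipefail

kubectl apply -f filesystem.yaml          # the "myfs" example from the entry above

# activeCount: 1 with activeStandby: true should yield two MDS pods.
kubectl -n rook-ceph get pods -l app=rook-ceph-mds

# Inside the toolbox, the named data pool should appear as "myfs-replicated".
TOOLS=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec "${TOOLS}" -- ceph fs ls
kubectl -n rook-ceph exec "${TOOLS}" -- ceph osd pool ls | grep myfs
```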
[
{
"data": "title: Integrating Docker via the Network Plugin (V2) menu_order: 20 search_type: Documentation * Docker Engine version 1.12 introduced . This document describes how to use the Network Plugin V2 of Weave Net. Before using the plugin, please keep in mind the plugin works only in Swarm mode and requires Docker version 1.13 or later. To install the plugin run the following command on each host already participating in a Swarm cluster, i.e. on all master and worker nodes: $ docker plugin install weaveworks/net-plugin:latest_release Docker will pull the plugin from Docker Store, and it will ask to grant privileges before installing the plugin. Afterwards, it will start `weaver` process which will try to connect to Swarm masters running Weave Net. There are several configuration parameters which can be set with: $ docker plugin set weaveworks/net-plugin:latest_release PARAM=VALUE The parameters include: `WEAVE_PASSWORD` - if non empty, it will instruct Weave Net to encrypt traffic - see for more details. `WEAVE_MULTICAST` - set to 1 on each host running the plugin to enable multicast traffic on any Weave Net network. `WEAVE_MTU` - Weave Net defaults to 1376 bytes, but you can set a smaller size if your underlying network has a tighter limit, or set a larger size for better performance if your network supports jumbo frames - see for more details. `IPALLOC_RANGE` - the range of IP addresses used by Weave Net and the subnet they are placed in (CIDR format; default 10.32.0.0/12). See for more details. Before setting any parameter, the plugin has to be disabled with: $ docker plugin disable weaveworks/net-plugin:latest_release To re-enable the plugin run the following command: $ docker plugin enable weaveworks/net-plugin:latest_release After you have launched the plugin, you can create a network for Docker Swarm services by running the following command on any Docker Swarm master node: $ docker network create --driver=weaveworks/net-plugin:latest_release mynetwork Or you can create a network for any Docker container with: $ docker network create --driver=weaveworks/net-plugin:latest_release --attachable mynetwork To start a service attached to the network run, for example: $ docker service create --network=mynetwork ... Or to start a container: $ docker run --network=mynetwork ... See Also *"
}
] |
{
"category": "Runtime",
"file_name": "plugin-v2.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
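The commands above condensed into one hedged sequence, run on a Swarm manager: install the plugin, set encryption and MTU while it is disabled, then attach a service and a standalone container to the same Weave network. All parameter values are illustrative.

```bash
#!/usr/bin/env bash
# Example-only sequence for Docker 1.13+ in Swarm mode.
set -euo pipefail

PLUGIN="weaveworks/net-plugin:latest_release"

docker plugin install --grant-all-permissions "${PLUGIN}"

# Settings can only be changed while the plugin is disabled.
docker plugin disable "${PLUGIN}"
docker plugin set "${PLUGIN}" WEAVE_PASSWORD=s3cret WEAVE_MTU=1376 WEAVE_MULTICAST=1
docker plugin enable "${PLUGIN}"

# Attachable network usable by both services and plain containers.
docker network create --driver="${PLUGIN}" --attachable mynetwork

docker service create --name web --network=mynetwork nginx
docker run --rm --network=mynetwork alpine ping -c1 web
```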
[
{
"data": "This document explains the design, architecture and advanced use cases of the MinIO distributed server. ``` NAME: minio server - start object storage server USAGE: minio server [FLAGS] DIR1 [DIR2..] minio server [FLAGS] DIR{1...64} minio server [FLAGS] DIR{1...64} DIR{65...128} DIR: DIR points to a directory on a filesystem. When you want to combine multiple drives into a single large system, pass one directory per filesystem separated by space. You may also use a '...' convention to abbreviate the directory arguments. Remote directories in a distributed setup are encoded as HTTP(s) URIs. ``` Standalone erasure coded configuration with 4 sets with 16 drives each. ``` minio server dir{1...64} ``` Distributed erasure coded configuration with 64 sets with 16 drives each. ``` minio server http://host{1...16}/export{1...64} ``` Expansion of ellipses and choice of erasure sets based on this expansion is an automated process in MinIO. Here are some of the details of our underlying erasure coding behavior. Erasure coding used by MinIO is erasure coding scheme, which has a total shard maximum of 256 i.e 128 data and 128 parity. MinIO design goes beyond this limitation by doing some practical architecture choices. Erasure set is a single erasure coding unit within a MinIO deployment. An object is sharded within an erasure set. Erasure set size is automatically calculated based on the number of drives. MinIO supports unlimited number of drives but each erasure set can be upto 16 drives and a minimum of 2 drives. We limited the number of drives to 16 for erasure set because, erasure code shards more than 16 can become chatty and do not have any performance advantages. Additionally since 16 drive erasure set gives you tolerance of 8 drives per object by default which is plenty in any practical scenario. Choice of erasure set size is automatic based on the number of drives available, let's say for example if there are 32 servers and 32 drives which is a total of 1024 drives. In this scenario 16 becomes the erasure set size. This is decided based on the greatest common divisor (GCD) of acceptable erasure set sizes ranging from 4 to 16. If total drives has many common divisors the algorithm chooses the minimum amounts of erasure sets possible for a erasure set size of any N. In the example with 1024 drives - 4, 8, 16 are GCD factors. With 16 drives we get a total of 64 possible sets, with 8 drives we get a total of 128 possible sets, with 4 drives we get a total of 256 possible sets. So algorithm automatically chooses 64 sets, which is 16 64 = 1024* drives in total. If total number of nodes are of odd number then GCD algorithm provides affinity towards odd number erasure sets to provide for uniform distribution across nodes. This is to ensure that same number of drives are pariticipating in any erasure set. For example if you have 2 nodes with 180 drives then GCD is 15 but this would lead to uneven distribution, one of the nodes would participate more drives. To avoid this the affinity is given towards nodes which leads to next best GCD factor of 12 which provides uniform distribution. In this algorithm, we also make sure that we spread the drives out evenly. MinIO server expands ellipses passed as arguments. Here is a sample expansion to demonstrate the process. ``` minio server"
},
{
"data": "``` Expected expansion ``` http://host1/export1 http://host2/export1 http://host1/export2 http://host2/export2 http://host1/export3 http://host2/export3 http://host1/export4 http://host2/export4 http://host1/export5 http://host2/export5 http://host1/export6 http://host2/export6 http://host1/export7 http://host2/export7 http://host1/export8 http://host2/export8 ``` A noticeable trait of this expansion is that it chooses unique hosts such the setup provides maximum protection and availability. Choosing an erasure set for the object is decided during `PutObject()`, object names are used to find the right erasure set using the following pseudo code. ```go // hashes the key returning an integer. func sipHashMod(key string, cardinality int, id [16]byte) int { if cardinality <= 0 { return -1 } sip := siphash.New(id[:]) sip.Write([]byte(key)) return int(sip.Sum64() % uint64(cardinality)) } ``` Input for the key is the object name specified in `PutObject()`, returns a unique index. This index is one of the erasure sets where the object will reside. This function is a consistent hash for a given object name i.e for a given object name the index returned is always the same. Write and Read quorum are required to be satisfied only across the erasure set for an object. Healing is also done per object within the erasure set which contains the object. MinIO does erasure coding at the object level not at the volume level, unlike other object storage vendors. This allows applications to choose different storage class by setting `x-amz-storage-class=STANDARD/REDUCED_REDUNDANCY` for each object uploads so effectively utilizing the capacity of the cluster. Additionally these can also be enforced using IAM policies to make sure the client uploads with correct HTTP headers. MinIO also supports expansion of existing clusters in server pools. Each pool is a self contained entity with same SLA's (read/write quorum) for each object as original cluster. By using the existing namespace for lookup validation MinIO ensures conflicting objects are not created. When no such object exists then MinIO simply uses the least used pool to place new objects. ``` minio server http://host{1...32}/export{1...32} http://host{1...12}/export{1...12} ``` In above example there are two server pools 32 * 32 = 1024 drives pool1 12 * 12 = 144 drives pool2 Notice the requirement of common SLA here original cluster had 1024 drives with 16 drives per erasure set with default parity of '4', second pool is expected to have a minimum of 8 drives per erasure set to match the original cluster SLA (parity count) of '4'. '12' drives stripe per erasure set in the second pool satisfies the original pool's parity count. Refer to the sizing guide with details on the default parity count chosen for different erasure stripe sizes MinIO places new objects in server pools based on proportionate free space, per pool. Following pseudo code demonstrates this behavior. ```go func getAvailablePoolIdx(ctx context.Context) int { serverPools := z.getServerPoolsAvailableSpace(ctx) total := serverPools.TotalAvailable() // choose when we reach this many choose := rand.Uint64() % total atTotal := uint64(0) for _, pool := range serverPools { atTotal += pool.Available if atTotal > choose && pool.Available > 0 { return pool.Index } } // Should not happen, but print values just in case. 
panic(fmt.Errorf(\"reached end of serverPools (total: %v, atTotal: %v, choose: %v)\", total, atTotal, choose)) } ``` Standalone erasure coded configuration with 4 sets with 16 drives each, which spawns drives across controllers. ``` minio server /mnt/controller{1...4}/data{1...16} ``` Standalone erasure coded configuration with 16 sets, 16 drives per set, across mounts and controllers. ``` minio server /mnt{1...4}/controller{1...4}/data{1...16} ``` Distributed erasure coded configuration with 2 sets, 16 drives per set across hosts. ``` minio server http://host{1...32}/disk1 ``` Distributed erasure coded configuration with rack level redundancy 32 sets in total, 16 drives per set. ``` minio server http://rack{1...4}-host{1...8}.example.net/export{1...16} ```"
}
] |
{
"category": "Runtime",
"file_name": "DESIGN.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document defines the project governance for Velero. Velero, an open source project, is committed to building an open, inclusive, productive and self-governing open source community focused on building a high quality tool that enables users to safely backup and restore, perform disaster recovery, and migrate Kubernetes cluster resources and persistent volumes. The community is governed by this document with the goal of defining how community should work together to achieve this goal. The following code repositories are governed by Velero community and maintained under the `vmware-tanzu\\Velero` organization. :* Main Velero codebase :* The Helm chart for the Velero server component :* This repository contains Velero plugins for snapshotting CSI backed PVCs using the CSI beta snapshot APIs :* This repository contains the Velero Plugin for vSphere. This plugin is a volume snapshotter plugin that provides crash-consistent snapshots of vSphere block volumes and backup of volume data into S3 compatible storage. :* This repository contains the plugins to support running Velero on AWS, including the object store plugin and the volume snapshotter plugin :* This repository contains the plugins to support running Velero on GCP, including the object store plugin and the volume snapshotter plugin :* This repository contains the plugins to support running Velero on Azure, including the object store plugin and the volume snapshotter plugin :* This repository contains example plugins for Velero Users:* Members that engage with the Velero community via any medium (Slack, GitHub, mailing lists, etc.). Contributors:* Regular contributions to projects (documentation, code reviews, responding to issues, participation in proposal discussions, contributing code, etc.). Maintainers*: The Velero project leaders. They are responsible for the overall health and direction of the project; final reviewers of PRs and responsible for releases. Some Maintainers are responsible for one or more components within a project, acting as technical leads for that component. Maintainers are expected to contribute code and documentation, review PRs including ensuring quality of code, triage issues, proactively fix bugs, and perform maintenance tasks for these components. New maintainers must be nominated by an existing maintainer and must be elected by a supermajority of existing maintainers. Likewise, maintainers can be removed by a supermajority of the existing maintainers or can resign by notifying one of the maintainers. A supermajority is defined as two-thirds of members in the group. A supermajority of is required for certain decisions as outlined above. A supermajority vote is equivalent to the number of votes in favor being at least twice the number of votes against. For example, if you have 5 maintainers, a supermajority vote is 4 votes. Voting on decisions can happen on the mailing list, GitHub, Slack, email, or via a voting service, when appropriate. Maintainers can either vote \"agree, yes, +1\", \"disagree, no, -1\", or \"abstain\". A vote passes when supermajority is met. An abstain vote equals not voting at all. Ideally, all project decisions are resolved by consensus. If impossible, any maintainer may call a"
},
{
"data": "Unless otherwise specified in this document, any vote will be decided by a supermajority of maintainers. Votes by maintainers belonging to the same company will count as one vote; e.g., 4 maintainers employed by fictional company Valerium will only have one combined vote. If voting members from a given company do not agree, the company's vote is determined by a supermajority of voters from that company. If no supermajority is achieved, the company is considered to have abstained. One of the most important aspects in any open source community is the concept of proposals. Large changes to the codebase and / or new features should be preceded by a proposal in our community repo. This process allows for all members of the community to weigh in on the concept (including the technical details), share their comments and ideas, and offer to help. It also ensures that members are not duplicating work or inadvertently stepping on toes by making large conflicting changes. The project roadmap is defined by accepted proposals. Proposals should cover the high-level objectives, use cases, and technical recommendations on how to implement. In general, the community member(s) interested in implementing the proposal should be either deeply engaged in the proposal process or be an author of the proposal. The proposal should be documented as a separated markdown file pushed to the root of the `design` folder in the repository via PR. The name of the file should follow the name pattern `<short meaningful words joined by '-'>_design.md`, e.g: `restore-hooks-design.md`. Use the as a starting point. The proposal PR can follow the GitHub lifecycle of the PR to indicate its status: Open*: Proposal is created and under review and discussion. Merged*: Proposal has been reviewed and is accepted (either by consensus or through a vote). Closed*: Proposal has been reviewed and was rejected (either by consensus or through a vote). To maintain velocity in a project as busy as Velero, the concept of [Lazy Consensus](http://en.osswiki.info/concepts/lazy_consensus) is practiced. Ideas and / or proposals should be shared by maintainers via GitHub with the appropriate maintainer groups (e.g., `@vmware-tanzu/velero-maintainers`) tagged. Out of respect for other contributors, major changes should also be accompanied by a ping on Slack or a note on the Velero mailing list as appropriate. Author(s) of proposal, Pull Requests, issues, etc. will give a time period of no less than five (5) working days for comment and remain cognizant of popular observed world holidays. Other maintainers may chime in and request additional time for review, but should remain cognizant of blocking progress and abstain from delaying progress unless absolutely needed. The expectation is that blocking progress is accompanied by a guarantee to review and respond to the relevant action(s) (proposals, PRs, issues, etc.) in short order. Lazy Consensus is practiced for all projects in the `Velero` org, including the main project repository and the additional repositories. Lazy consensus does not apply to the process of: Removal of maintainers from Velero All substantive changes in Governance require a supermajority agreement by all maintainers."
}
] |
{
"category": "Runtime",
"file_name": "GOVERNANCE.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The jailer is a program designed to isolate the Firecracker process in order to enhance Firecracker's security posture. It is meant to address the security needs of Firecracker only and is not intended to work with other binaries. Additionally, each jailer binary should be used with a statically linked Firecracker binary (with the default musl toolchain) of the same version. Experimental gnu builds are not supported. The jailer is invoked in this manner: ```bash jailer --id <id> \\ --exec-file <exec_file> \\ --uid <uid> \\ --gid <gid> [--parent-cgroup <relative_path>] [--cgroup-version <cgroup-version>] [--cgroup <cgroup>] [--chroot-base-dir <chroot_base>] [--netns <netns>] [--resource-limit <resource=value>] [--daemonize] [--new-pid-ns] [--...extra arguments for Firecracker] ``` `id` is the unique VM identification string, which may contain alphanumeric characters and hyphens. The maximum `id` length is currently 64 characters. `exec_file` is the path to the Firecracker binary that will be exec-ed by the jailer. The filename must include the string `firecracker`. This is enforced because the interaction with the jailer is Firecracker specific. `uid` and `gid` are the uid and gid the jailer switches to as it execs the target binary. `parent-cgroup` is used to allow the placement of microvm cgroups in custom nested hierarchies. By specifying this parameter, the jailer will create a new cgroup named `id` for the microvm in the `<cgroupbase>/<parentcgroup>` subfolder. `cgroup_base` is the cgroup controller root for `cgroup v1` (e.g. `/sys/fs/cgroup/cpu`) or the unified controller hierarchy for `cgroup v2` ( e.g. `/sys/fs/cgroup/unified`. `<parent_cgroup>` is a relative path within that hierarchy. For example, if `--parent-cgroup alluvms/externaluvms` is specified, the jailer will write all cgroup parameters specified through `--cgroup` in `/sys/fs/cgroup/<controllername>/alluvms/external_uvms/<id>`. By default, the parent cgroup is `exec-file`. If there are no `--cgroup` parameters specified and `--group-version=2` was passed, then the jailer will move the process to the specified cgroup. `cgroup-version` is used to select which type of cgroup hierarchy to use for the creation of cgroups. The default value is \"1\" which means that cgroups specified with the `cgroup` argument will be created within a v1 hierarchy. Supported options are \"1\" for cgroup-v1 and \"2\" for cgroup-v2. `cgroup` cgroups can be passed to the jailer to let it set the values when the microVM process is spawned. The `--cgroup` argument must follow this format: `<cgroup_file>=<value>` (e.g `cpuset.cpus=0`). This argument can be used multiple times to set multiple cgroups. This is useful to avoid providing privileged permissions to another process for setting the cgroups before or after the jailer is executed. The `--cgroup` flag can help as well to set Firecracker process cgroups before the VM starts running, with no need to create the entire cgroup hierarchy manually (which requires privileged permissions). `chroot_base` represents the base folder where chroot jails are built. The default is `/srv/jailer`. `netns` represents the path to a network namespace handle. If present, the jailer will use this to join the associated network namespace. For extra security and control over resource usage, `resource-limit` can be used to set bounds to the process resources. The `--resource-limit` argument must follow this format: `<resource>=<value>` (e.g `no-file=1024`) and can be used multiple times to set multiple bounds. 
Currently available resources that can be limited using this argument are: `fsize`: The maximum size in bytes for files created by the process."
},
{
"data": "`no-file`: Specifies a value one greater than the maximum file descriptor number that can be opened by this process. Here is an example on how to set multiple resource limits using this argument: ```bash --resource-limit fsize=250000000 --resource-limit no-file=1024 ``` When present, the `--daemonize` flag causes the jailer to call `setsid()` and redirect all three standard I/O file descriptors to `/dev/null`. When present, the `--new-pid-ns` flag causes the jailer to spawn the provided binary into a new PID namespace. It makes use of the libc `clone()` function with the `CLONE_NEWPID` flag. As a result, the jailer and the process running the exec file have different PIDs. The PID of the child process is stored in the jail root directory inside `<execfilename>.pid`. The jailer adheres to the \"end of command options\" convention, meaning all parameters specified after `--` are forwarded to Firecracker. For example, this can be paired with the `--config-file` Firecracker argument to specify a configuration file when starting Firecracker via the jailer (the file path and the resources referenced within must be valid relative to a jailed Firecracker). Please note the jailer already passes `--id` parameter to the Firecracker process. After starting, the Jailer goes through the following operations: Validate all provided paths and the VM `id`. Close all open file descriptors based on `/proc/<jailer-pid>/fd` except input, output and error. Cleanup all environment variables received from the parent process. Create the `<chrootbase>/<execfile_name>/<id>/root` folder, which will be henceforth referred to as `chrootdir`. `execfile_name` is the last path component of `exec_file` (for example, that would be `firecracker` for `/usr/bin/firecracker`). Nothing is done if the path already exists (it should not, since `id` is supposed to be unique). Copy `exec_file` to `<chrootbase>/<execfilename>/<id>/root/<execfile_name>`. This ensures the new process will not share memory with any other Firecracker process. Set resource bounds for current process and its children through `--resource-limit` argument, by calling `setrlimit()` system call with the specific resource argument. If no limits are provided, the jailer bounds `no-file` to a maximum default value of 2048. Create the `cgroup` sub-folders. The jailer can use either `cgroup v1` or `cgroup v2`. On most systems, this is mounted by default in `/sys/fs/cgroup` (should be mounted by the user otherwise). The jailer will parse `/proc/mounts` to detect where each of the controllers required in `--cgroup` can be found (multiple controllers may share the same path). For each identified location (referred to as `<cgroup_base>`), the jailer creates the `<cgroupbase>/<parentcgroup>/<id>` subfolder, and writes the current pid to `<cgroupbase>/<parentcgroup>/<id>/tasks`. Also, the value passed for each `<cgroup_file>` is written to the file. If `--node` is used the corresponding values are written to the appropriate `cpuset.mems` and `cpuset.cpus` files. Call `unshare()` into a new mount namespace, use `pivot_root()` to switch the old system root mount point with a new one base in `chroot_dir`, switch the current working directory to the new root, unmount the old root mount point, and call `chroot` into the current directory. Use `mknod` to create a `/dev/net/tun` equivalent inside the jail. Use `mknod` to create a `/dev/kvm` equivalent inside the jail. 
Use `chown` to change ownership of the `chroot_dir` (root path `/` as seen by the jailed firecracker), `/dev/net/tun`, `/dev/kvm`. The ownership is changed to the provided `uid:gid`."
},
{
"data": "If `--netns <netns>` is present, attempt to join the specified network namespace. If `--daemonize` is specified, call `setsid()` and redirect `STDIN`, `STDOUT`, and `STDERR` to `/dev/null`. If `--new-pid-ns` is specified, call `clone()` with `CLONE_NEWPID` flag to spawn a new process within a new PID namespace. The new process will assume the role of init(1) in the new namespace. The parent will store child's PID inside `<execfilename>.pid`, while the child drops privileges and `exec()`s into the `<execfilename>`, as described below. Drop privileges via setting the provided `uid` and `gid`. Exec into `<execfilename> --id=<id> --start-time-us=<opaque> --start-time-cpu-us=<opaque>` (and also forward any extra arguments provided to the jailer after `--`, as mentioned in the Jailer Usage section), where: `id`: (`string`) - The `id` argument provided to jailer. `opaque`: (`number`) time calculated by the jailer that it spent doing its work. Lets assume Firecracker is available as `/usr/bin/firecracker`, and the jailer can be found at `/usr/bin/jailer`. We pick the unique id 551e7604-e35c-42b3-b825-416853441234, and we choose to run on NUMA node 0 (in order to isolate the process in the 0th NUMA node we need to set `cpuset.mems=0` and `cpuset.cpus` equals to the CPUs of that NUMA node), using uid 123, and gid 100. For this example, we are content with the default `/srv/jailer` chroot base dir. We start by running: ```bash /usr/bin/jailer --id 551e7604-e35c-42b3-b825-416853441234 --cgroup cpuset.mems=0 --cgroup cpuset.cpus=$(cat /sys/devices/system/node/node0/cpulist) --exec-file /usr/bin/firecracker --uid 123 --gid 100 \\ --netns /var/run/netns/my_netns --daemonize ``` After opening the file descriptors mentioned in the previous section, the jailer will create the following resources (and all their prerequisites, such as the path which contains them): `/srv/jailer/firecracker/551e7604-e35c-42b3-b825-416853441234/root/firecracker` (copied from `/usr/bin/firecracker`) We are going to refer to `/srv/jailer/firecracker/551e7604-e35c-42b3-b825-416853441234/root` as `<chroot_dir>`. Lets also assume the, cpuset cgroups are mounted at `/sys/fs/cgroup/cpuset`. The jailer will create the following subfolder (which will inherit settings from the parent cgroup): `/sys/fs/cgroup/cpuset/firecracker/551e7604-e35c-42b3-b825-416853441234` Its worth noting that, whenever a folder already exists, nothing will be done, and we move on to the next directory that needs to be created. This should only happen for the common `firecracker` subfolder (but, as for creating the chroot path before, we do not issue an error if folders directly associated with the supposedly unique `id` already exist). The jailer then writes the current pid to `/sys/fs/cgroup/cpuset/firecracker/551e7604-e35c-42b3-b825-416853441234/tasks`, It also writes `0` to `/sys/fs/cgroup/cpuset/firecracker/551e7604-e35c-42b3-b825-416853441234/cpuset.mems`, And the corresponding CPUs to `/sys/fs/cgroup/cpuset/firecracker/551e7604-e35c-42b3-b825-416853441234/cpuset.cpus`. Since the `--netns` parameter is specified in our example, the jailer opens `/var/run/netns/my_netns` to get a file descriptor `fd`, uses `setns(fd, CLONE_NEWNET)` to join the associated network namespace, and then closes `fd`. The `--daemonize` flag is also present, so the jailers opens `/dev/null` as RW and keeps the associate file descriptor as `devnullfd` (we do this before going inside the jail), to be used later. Build the chroot jail. 
First, the jailer uses `unshare()` to enter a new mount namespace, and changes the propagation of all mount points in the new namespace to private using `mount(NULL, \"/\", NULL, MS_PRIVATE | MS_REC, NULL)`, as a prerequisite to `pivot_root()`. Another required operation is to bind mount `<chroot_dir>` on top of itself using `mount(<chroot_dir>, <chroot_dir>, NULL, MS_BIND, NULL)`. At this point, the jailer creates the folder `<chroot_dir>/old_root`, changes the current directory to `<chroot_dir>`, and calls `syscall(SYS_pivot_root, \".\", \"old_root\")`.
},
{
"data": "The final steps of building the jail are unmounting `old_root` using `umount2(oldroot, MNTDETACH)`, deleting `old_root` with `rmdir`, and finally calling `chroot(.)` for good measure. From now, the process is jailed in `<chroot_dir>`. Create the special file `/dev/net/tun`, using `mknod(/dev/net/tun, SIFCHR | SIRUSR | S_IWUSR, makedev(10, 200))`, and then call `chown(/dev/net/tun, 123, 100)`, so Firecracker can use it after dropping privileges. This is required to use multiple TAP interfaces when running jailed. Do the same for `/dev/kvm`. Change ownership of `<chroot_dir>` to `uid:gid` so that Firecracker can create its API socket there. Since the `--daemonize` flag is present, call `setsid()` to join a new session, a new process group, and to detach from the controlling terminal. Then, redirect standard file descriptors to `/dev/null` by calling `dup2(devnullfd, STDIN)`, `dup2(devnullfd, STDOUT)`, and `dup2(devnullfd, STDERR)`. Close `devnullfd`, because it is no longer necessary. Finally, the jailer switches the `uid` to `123`, and `gid` to `100`, and execs ```console ./firecracker \\ --id=\"551e7604-e35c-42b3-b825-416853441234\" \\ --start-time-us=<opaque> \\ --start-time-cpu-us=<opaque> ``` Now firecracker creates the socket at `/srv/jailer/firecracker/551e7604-e35c-42b3-b825-416853441234/root/<api-sock>` to interact with the VM. Note: default value for `<api-sock>` is `/run/firecracker.socket`. The user must create hard links for (or copy) any resources which will be provided to the VM via the API (disk images, kernel images, named pipes, etc) inside the jailed root folder. Also, permissions must be properly managed for these resources; for example the user which Firecracker runs as must have both read and write permissions to the backing file for a RW block device. By default the VMs are not asigned to any NUMA node or pinned to any CPU. The user must manage any fine tuning of resource partitioning via cgroups, by using the `--cgroup` command line argument. Its up to the user to handle cleanup after running the jailer. One way to do this involves registering handlers with the cgroup `notifyonrelease` mechanism, while being wary about potential race conditions (the instance crashing before the subscription process is complete, for example). For extra resilience, the `--new-pid-ns` flag enables the Jailer to exec the binary file in a new PID namespace, in order to become a pseudo-init process. Alternatively, the user can spawn the jailer in a new PID namespace via a combination of `clone()` with the `CLONE_NEWPID` flag and `exec()`. We run the jailer as the `root` user; it actually requires a more restricted set of capabilities, but that's to be determined as features stabilize. The jailer can only log messages to stdout/err for now, which is why the logic associated with `--daemonize` runs towards the end, instead of the very beginning. We are working on adding better logging capabilities. When passing the --daemonize option to Firecracker without the --new-ns-pid option, the Firecracker process will have a different PID than the Jailer process and killing the Jailer will not kill the Firecracker process. As a workaround to get Firecracker PID, the Jailer stores the PID of the child process in the jail root directory inside `<execfilename>.pid` for all cases regardless of whether `--new-pid-ns` was provided. The suggested way to fetch Firecracker's PID when using the Jailer is to read the `firecracker.pid` file present in the Jailer's root directory. 
If all the cgroup controllers are bunched up on a single mount point using the \"all\" option, our current program logic will complain it cannot detect individual controller mount points."
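Because cleanup and supervision are left to the user, the `firecracker.pid` file described above is the natural hook for tooling. Below is a small shell sketch under the example's assumptions (default `/srv/jailer` chroot base and the example `id`); adapt the paths for a real setup.

```bash
#!/usr/bin/env bash
# Sketch: read the jailed Firecracker PID for the example microVM above.
# Assumes the default chroot base (/srv/jailer) and the example id; adjust as needed.
ID="551e7604-e35c-42b3-b825-416853441234"
JAIL_ROOT="/srv/jailer/firecracker/${ID}/root"
PID_FILE="${JAIL_ROOT}/firecracker.pid"

if [[ -r "${PID_FILE}" ]]; then
    PID="$(cat "${PID_FILE}")"
    echo "Firecracker for ${ID} runs as PID ${PID}"
    # Example cleanup hook (commented out): kill -TERM "${PID}"
else
    echo "No PID file at ${PID_FILE} yet" >&2
fi
```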
}
] |
{
"category": "Runtime",
"file_name": "jailer.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Src | string | | [default to \"/dev/urandom\"] Iommu | Pointer to bool | | [optional] [default to false] `func NewRngConfig(src string, ) *RngConfig` NewRngConfig instantiates a new RngConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewRngConfigWithDefaults() *RngConfig` NewRngConfigWithDefaults instantiates a new RngConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *RngConfig) GetSrc() string` GetSrc returns the Src field if non-nil, zero value otherwise. `func (o RngConfig) GetSrcOk() (string, bool)` GetSrcOk returns a tuple with the Src field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *RngConfig) SetSrc(v string)` SetSrc sets Src field to given value. `func (o *RngConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o RngConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *RngConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *RngConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "RngConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In this example, we'll use a wasm module, in which a function `run` is exported and it will call a function `sayhello` from an import module named `env`. The imported function `sayhello` has no inputs and outputs, and only prints a greeting message out. The code in the following example is verified on wasmedge-sdk v0.5.0 wasmedge-sys v0.10.0 wasmedge-types v0.3.0 Let's start off by getting all imports right away so you can follow along ```rust // please add this feature if you're using rust of version < 1.63 // #![feature(explicitgenericargswithimpl_trait)] use wasmedge_sdk::{ error::HostFuncError, host_function, params, wat2wasm, Caller, Executor, ImportObjectBuilder, Module, Store, WasmValue, }; ``` First, let's define a native function named `sayhelloworld` that prints out `Hello, World!`. ```rust fn sayhello(caller: &Caller, args: Vec<WasmValue>) -> Result<Vec<WasmValue>, HostFuncError> { println!(\"Hello, world!\"); Ok(vec![]) } ``` To use the native function as an import function in the `WasmEdge` runtime, we need an `ImportObject`. `wasmedge-sdk` defines a , which provides a group of chaining methods used to create an `ImportObject`. Let's see how to do it. ```rust // create an import module let import = ImportObjectBuilder::new() .withfunc::<(), (), !>(\"sayhello\", say_hello, None)? .build(\"env\")?; ``` Now, we have an import module named `env` which holds a host function `say_hello`. As you may notice, the names we used for the import module and the host function are exactly the same as the ones appearing in the wasm module. You can find the wasm module in . Now, let's load a wasm module. `wasmedge-sdk` defines two methods in `Module`: loads a wasm module from a file, and meanwhile, validates the loaded wasm module. loads a wasm module from an array of in-memory bytes, and meanwhile, validates the loaded wasm module. Here we choose `Module::from_bytes` method to load our wasm module from an array of in-memory bytes. ```rust let wasm_bytes = wat2wasm( br#\" (module ;; First we define a type with no parameters and no"
},
{
"data": "(type $noargsnoretst (func (param) (result))) ;; Then we declare that we want to import a function named \"env\" \"say_hello\" with ;; that type signature. (import \"env\" \"sayhello\" (func $sayhello (type $noargsnoretst))) ;; Finally we create an entrypoint that calls our imported function. (func $run (type $noargsnoretst) (call $say_hello)) ;; And mark it as an exported function named \"run\". (export \"run\" (func $run))) \"#, )?; // loads a wasm module from the given in-memory bytes and returns a compiled module let module = Module::frombytes(None, &wasmbytes)?; ``` To register a compiled module, we need to check if it has dependency on some import modules. In the wasm module this statement `(import \"env\" \"sayhello\" (func $sayhello (type $noargsnoretst)))` tells us that it depends on an import module named `env`. Therefore, we need to register the import module first before registering the compiled wasm module. ```rust // loads a wasm module from the given in-memory bytes let module = Module::frombytes(None, &wasmbytes)?; // create an executor let mut executor = Executor::new(None, None)?; // create a store let mut store = Store::new()?; // register the module into the store store.registerimportmodule(&mut executor, &import)?; // register the compiled module into the store and get an module instance let externinstance = store.registernamed_module(&mut executor, \"extern\", &module)?; ``` In the code above we use and to register the import module and the compiled module. `wasmedge-sdk` also provides alternative APIs to do the same thing: and . Now we are ready to run the exported function. ```rust // get the exported function \"run\" let run = extern_instance .func(\"run\") .okorelse(|| anyhow::Error::msg(\"Not found exported function named 'run'.\"))?; // run host function run.call(&mut executor, params!())?; ``` In this example we created an instance of `Executor`, hence, we have two choices to call a : Any one of these two methods requires that you have to get a . In addition, defines a group of methods which can invoke host function in different ways. For details, please reference . The complete example can be found in ."
}
] |
{
"category": "Runtime",
"file_name": "say_hello.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Direct access to local BPF maps ``` -h, --help help for bpf ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Manage authenticated connections between identities - BPF datapath bandwidth settings - Manage runtime config - Connection tracking tables - Manage the egress routing rules - Local endpoint map - BPF filesystem mount - Manage the IPCache mappings for IP/CIDR <-> Identity - ip-masq-agent CIDRs - Load-balancing configuration - BPF datapath traffic metrics - Manage multicast BPF programs - NAT mapping tables - Manage the node IDs - Manage policy related BPF maps - PCAP recorder - Manage compiled BPF template objects - Manage the SRv6 routing rules - Tunnel endpoint map - Manage the VTEP mappings for IP/CIDR <-> VTEP MAC/IP"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"ark delete backup\" layout: docs Delete a backup Delete a backup ``` ark delete backup NAME [flags] ``` ``` --confirm Confirm deletion -h, --help help for backup ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Delete ark resources"
}
] |
{
"category": "Runtime",
"file_name": "ark_delete_backup.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "For the logging capability, Firecracker uses a single Logger object. The Logger can be configured either by sending a `PUT` API Request to the `/logger` path or by command line. You can configure the Logger only once (by using one of these options) and once configured, you can not update it. In order to configure the Logger, first you have to create the resource that will be used for logging: ```bash mkfifo logs.fifo touch logs.file ``` You can configure the Logger by sending the following API command: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT \"http://localhost/logger\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \"log_path\": \"logs.fifo\", \"level\": \"Warning\", \"show_level\": false, \"showlogorigin\": false }\" ``` Details about the required and optional fields can be found in the . If you want to configure the Logger on startup and without using the API socket, you can do that by passing the parameter `--log-path` to the Firecracker process: ```bash ./firecracker --api-sock /tmp/firecracker.socket --log-path <pathtotheloggingfifoorfile> ``` The other Logger fields have, in this case, the default values: `Level -> Warning`, `showlevel -> false`, `showlog_origin -> false`. For configuring these too, you can also pass the following optional parameters: `--level <log_level>`, `--show-level`, `--show-log-origin`: ```bash ./firecracker --api-sock /tmp/firecracker.socket --log-path logs.fifo --level Error --show-level --show-log-origin ``` The `logs.fifo` pipe will store the human readable logs, e.g. errors, warnings etc.(depending on the level). If the path provided is a named pipe, you can use the script below to read from it: ```shell logs=logs.fifo while true do if read line <$logs; then echo $line fi done echo \"Reader exiting\" ``` Otherwise, if the path points to a normal file, you can simply do: ```shell script cat logs.file ```"
}
] |
{
"category": "Runtime",
"file_name": "logger.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The vCPU manager is to manage all vCPU related actions, we will dive into some of the important structure members in this doc. For now, aarch64 vCPU support is still under development, we'll introduce it when we merge `runtime-rs` to the master branch. (issue: #4445) `VcpuConfig` is used to configure guest overall CPU info. `bootvcpucount` is used to define the initial vCPU number. `maxvcpucount` is used to define the maximum vCPU number and it's used for the upper boundary for CPU hotplug feature `threadpercore`, `coresperdie`, `diespersocket` and `socket` are used to define CPU topology. `vpmu_feature` is used to define `vPMU` feature level. If `vPMU` feature is `Disabled`, it means `vPMU` feature is off (by default). If `vPMU` feature is `LimitedlyEnabled`, it means minimal `vPMU` counters are supported (cycles and instructions). If `vPMU` feature is `FullyEnabled`, it means all `vPMU` counters are supported There are four states for vCPU state machine: `running`, `paused`, `waiting_exit`, `exited`. There is a state machine to maintain the task flow. When the vCPU is created, it'll turn to `paused` state. After vCPU resource is ready at VMM, it'll send a `Resume` event to the vCPU thread, and then vCPU state will change to `running`. During the `running` state, VMM will catch vCPU exit and execute different logic according to the exit reason. If the VMM catch some exit reasons that it cannot handle, the state will change to `waiting_exit` and VMM will stop the virtual machine. When the state switches to `waitingexit`, an exit event will be sent to vCPU `exitevt`, event manager will detect the change in `exitevt` and set VMM `exitevtflag` as 1. A thread serving for VMM event loop will check `exitevt_flag` and if the flag is 1, it'll stop the VMM. When the VMM is stopped / destroyed, the state will change to `exited`. Since `Dragonball Sandbox` doesn't support virtualization of ACPI system, we use to establish a direct communication channel between `Dragonball` and Guest in order to trigger vCPU hotplug. To use `upcall`, kernel patches are needed, you can get the patches from page, and we'll provide a ready-to-use guest kernel binary for you to try. vCPU hot plug / hot unplug range is [1, `maxvcpucount`]. Operations not in this range will be invalid."
}
] |
{
"category": "Runtime",
"file_name": "vcpu.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Instructions for Maintainers\" layout: docs toc: \"true\" There are some guidelines maintainers need to follow. We list them here for quick reference, especially for new maintainers. These guidelines apply to all projects in the Velero org, including the main project, the Velero Helm chart, and all other . Please be sure to also go through the guidance under the entire section. PRs require 2 approvals before it is mergeable. The second reviewer usually merges the PR (if you notice a PR open for a while and with 2 approvals, go ahead and merge it!) As you review a PR that is not yet ready to merge, please check if the \"request review\" needs to be refreshed for any reviewer (this is better than @mention at them) Refrain from @mention other maintainers to review the PR unless it is an immediate need. All maintainers already get notified through the automated add to the \"request review\". If it is an urgent need, please add a helpful message as to why it is so people can properly prioritize work. There is no need to manually request reviewers: after the PR is created, all maintainers will be automatically added to the list (note: feel free to remove people if they are on PTO, etc). Be familiar with the policy for the project. Some tips for doing reviews: There are some we aim for We have When reviewing a design document, ensure it follows . Also, when reviewing a PR that implements a previously accepted design, ensure the associated design doc is moved to the folder. Maintainers are expected to create releases for the project. We have parts of the process automated, and full . We are working towards automating more the Velero testing, but there is still a need for manual testing as part of the release process. The manual test cases for release testing are documented . Maintainers are expected to participate in the community support rotation. We have guidelines for how we handle the . Maintainers for the Velero project are highly involved with the open source community. All the online community meetings for the project are listed in our page. The Velero project welcomes contributors of all kinds. We are also always on the look out for a high level of engagement from contributors and opportunities to bring in new maintainers. If this is of interest, take a look at how is decided."
}
] |
{
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This folder contains the copyright templates for source files of NSM project. Below is an example of valid copyright header for `.go` files: ``` // Copyright (c) 2020 Doc.ai and/or its affiliates. // // Copyright (c) 2020 Cisco and/or its affiliates. // // SPDX-License-Identifier: Apache-2.0 // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at: // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. ``` Note you can use your company name instead of `Cisco and/or its affiliates`. Also, source code files can have multi copyright holders, for example: ``` // Copyright (c) 2020 Doc.ai and/or its affiliates. // // Copyright (c) 2020 Cisco and/or its affiliates. // // Copyright (c) 2020 Red Hat Inc. and/or its affiliates. // // Copyright (c) 2020 VMware, Inc. // // SPDX-License-Identifier: Apache-2.0 // // Licensed under the Apache License, Version 2.0 (the \"License\"); // you may not use this file except in compliance with the License. // You may obtain a copy of the License at: // // http://www.apache.org/licenses/LICENSE-2.0 // // Unless required by applicable law or agreed to in writing, software // distributed under the License is distributed on an \"AS IS\" BASIS, // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. // See the License for the specific language governing permissions and // limitations under the License. ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Network Service Mesh",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(requirements-go)= Incus requires Go 1.21 or higher and is only tested with the Golang compiler. We recommend having at least 2GiB of RAM to allow the build to complete. The minimum supported kernel version is 5.4. Incus requires a kernel with support for: Namespaces (`pid`, `net`, `uts`, `ipc` and `mount`) Seccomp Native Linux AIO (, etc.) The following optional features also require extra kernel options: Namespaces (`user` and `cgroup`) AppArmor (including Ubuntu patch for mount mediation) Control Groups (`blkio`, `cpuset`, `devices`, `memory` and `pids`) CRIU (exact details to be found with CRIU upstream) As well as any other kernel feature required by the LXC version in use. Incus requires LXC 5.0.0 or higher with the following build options: `apparmor` (if using Incus' AppArmor support) `seccomp` To run recent version of various distributions, including Ubuntu, LXCFS should also be installed. For virtual machines, QEMU 6.0 or higher is required. Incus uses `cowsql` for its database, to build and set it up, you can run `make deps`. Incus itself also uses a number of (usually packaged) C libraries: `libacl1` `libcap2` `libuv1` (for `cowsql`) `libsqlite3` >= 3.25.0 (for `cowsql`) Make sure you have all these libraries themselves and their development headers (`-dev` packages) installed."
}
] |
{
"category": "Runtime",
"file_name": "requirements.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-bugtool completion fish | source To load completions for every new session, execute once: cilium-bugtool completion fish > ~/.config/fish/completions/cilium-bugtool.fish You will need to start a new shell for this setup to take effect. ``` cilium-bugtool completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-bugtool_completion_fish.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "name: Support Request about: Ask questions about this project <!-- STOP -- PLEASE READ! GitHub is not the right place for support requests. If you're looking for help, post your question on the Sig-Storage Channel. If the matter is security related, please disclose it privately via https://kubernetes.io/security/. -->"
}
] |
{
"category": "Runtime",
"file_name": "support.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "containerd-shim-rune-v2 is a shim for Inclavare Containers(runE). Carrier is a abstract framework to build an enclave for the specified enclave runtime (OcclumGraphene ..) . Go 1.13.x or above. ```bash mkdir -p $GOPATH/src/github.com/alibaba cd $GOPATH/src/github.com/alibaba git clone https://github.com/alibaba/inclavare-containers.git cd shim GOOS=linux make binaries make install ls -l /usr/local/bin/containerd-shim-rune-v2 ``` The Configuration file of Inclavare Containers MUST BE placed into `/etc/inclavare-containers/config.toml` ```toml log_level = \"info\" # \"debug\" \"info\" \"warn\" \"error\" sgxtoolsign = \"/opt/intel/sgxsdk/bin/x64/sgx_sign\" [containerd] socket = \"/run/containerd/containerd.sock\" [epm] socket = \"/var/run/epm/epm.sock\" [enclave_runtime] signature_method = \"server\" [enclave_runtime.occlum] enclaveruntimepath = \"/opt/occlum/build/lib/libocclum-pal.so\" enclavelibospath = \"/opt/occlum/build/lib/libocclum-libos.so\" [enclave_runtime.graphene] ``` Modify containerd configuration file(/etc/containerd/config.toml) and add runtimes rune into it. ```toml [plugins.cri.containerd.runtimes.rune] runtime_type = \"io.containerd.rune.v2\" ``` Add RuntimeClass rune into your kubernetes cluster. ```bash cat <<EOF | kubectl create -f - apiVersion: node.k8s.io/v1beta1 kind: RuntimeClass metadata: name: rune handler: rune scheduling: nodeSelector: EOF ``` ```bash cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: labels: run: helloworld-in-tee name: helloworld-in-tee spec: runtimeClassName: rune containers: command: /bin/hello_world env: name: RUNE_CARRIER value: occlum image: registry.cn-shanghai.aliyuncs.com/larus-test/hello-world:v2 imagePullPolicy: IfNotPresent name: helloworld workingDir: /var/run/rune EOF ```"
}
] |
{
"category": "Runtime",
"file_name": "README-zh_CN.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "][build] ][releases] ][releases] ][license] ](https://bestpractices.coreinfrastructure.org/projects/5388) ](https://api.securityscorecards.dev/projects/github.com/k8up-io/k8up) ](https://artifacthub.io/packages/helm/k8up/k8up) K8up is a Kubernetes backup operator based on that will handle PVC and application backups on a Kubernetes or OpenShift cluster. Just create a `schedule` and a `credentials` object in the namespace youd like to backup. Its that easy. K8up takes care of the rest. It also provides a Prometheus endpoint for monitoring. K8up is production ready. It is used in production deployments since 2019. The documentation is written in AsciiDoc and published with Antora to . It's source is available in the `docs/` directory. Run `make docs-preview` to build the docs and preview changes. K8up is written using . You'll need: A running Kubernetes cluster (minishift, minikube, k3s, ... you name it) Go development environment Your favorite IDE (with a Go plugin) Docker `make` `sed` (or `gsed` for MacOS) To run the end-to-end test (e.g. `make e2e-test`), you additionally need: `helm` (version 3) `jq` `yq` `node` and `npm` `bash` (installed, doesn't have to be your default shell) `base64` `find` These are the most common make targets: `build`, `test`, `docker-build`, `run`, `kind-run`. Run `make help` to get an overview over the relevant targets and their intentions. You can find the project roadmap at . We use to test the code regularly for vulnerabilities and other security issues. If you find any security issue, please follow our process. K8s consists of two main modules: The operator module is the part that runs constantly within K8s and contains the various reconciliation loops. The restic module is our interface to the `restic` binary and is invoked whenever a `Backup` or `Restore` (or similar) custom resource is instantiated. If it's job (like doing a backup or a restore) is done, the process ends. ```asciidoc / api Go Types for the Custom Resource Definitions (CRDs) [o] cmd CLI definition and entrypoints common Code that is not specific to either config Various configuration files for the Operator SDK [o] controllers The reconciliation loops of the operator module [o] docs Out ASCIIdoc code as published on https://k8up.io e2e The Bats-based End-To-End tests envtest Infrastructure code for the integration tests operator Code that is otherwise related to the operator module, but not part of the recommended Operator SDK structure. restic Code that makes up the restic module. ``` If you make changes to the CRD structs you'll need to run code generation. This can be done with make: ```bash make generate ``` CRDs can be either installed on the cluster by running `make install` or using `kubectl apply -f config/crd/apiextensions.k8s.io/v1`. Currently there's an issue using related to how the CRDs are specified. Therefore settle to the second approach for now. You can run the operator in different ways: as a container image (see ) using `make run-operator` (provide your own kubeconfig) using `make kind-run` (uses KIND to install a cluster in docker and provides its own kubeconfig in `testbin/`) using a configuration of your favorite IDE Best is if you have installed somewhere to be able to setup the needed env values. It needs to be reachable from within your dev cluster. You need `node` and `npm` to run the tests, as it runs with"
},
{
"data": "To run e2e tests, execute: ```bash make e2e-test ``` To test just a specific e2e test, run: ```bash make e2e-test -e BATS_FILES=test-02-deployment.bats ``` To remove the local KIND cluster and other e2e resources, run: ```bash make e2e-clean ``` To cleanup all created artifacts, there's always: ```bash make clean ``` There are a number of example configurations in . Apply them using `kubectl apply -f config/samples/somesample.yaml` Read more about our community . The K8up project is present in the in the . We host a monthly community meeting. For more information, head over to . Our code of conduct can be read at . `\"128Mi\"` | Memory request of K8up operator. See . | | securityContext | object | `{}` | Container security context | | serviceAccount.annotations | object | `{}` | Annotations to add to the service account. | | serviceAccount.create | bool | `true` | Specifies whether a service account should be created | | serviceAccount.name | string | `\"\"` | The name of the service account to use. If not set and create is true, a name is generated using the fullname template | | tolerations | list | `[]` | | In `image.repository` the registry domain was moved into its own parameter `image.registry`. K8up 1.x features leader election, this enables rolling updates and multiple replicas. `k8up.enableLeaderElection` defaults to `true`. Disable this for older Kubernetes versions (<= 1.15) `replicaCount` is now configurable, defaults to `1`. Note: Deployment strategy type has changed from `Recreate` to `RollingUpdate`. CRDs need to be installed separately, they are no longer included in this chart. Note: `image.repository` changed from `vshn/k8up` to `k8up-io/k8up`. Note: `image.registry` changed from `quay.io` to `ghcr.io`. Note: `image.tag` changed from `v1.x` to `v2.x`. Please see the . `metrics.prometheusRule.legacyRules` has been removed (no support for OpenShift 3.11 anymore). Note: `k8up.backupImage.repository` changed from `quay.io/vshn/wrestic` to `ghcr.io/k8up-io/k8up` (`wrestic` is not needed anymore in K8up v2). Due to the migration of the chart from to this repo, we decided to make a breaking change for the chart. Only chart archives from version 3.x can be downloaded from the https://k8up-io.github.io/k8up index. No 2.x chart releases will be migrated from the APPUiO Helm repo. Some RBAC roles and role bindings have change the name. In most cases this shouldn't be an issue and Helm should be able to cleanup the old resources without impact on the RBAC permissions. New parameter: `podAnnotations`, default `{}`. New parameter: `service.annotations`, default `{}`. Parameter changed: `image.tag` now defaults to `v2` instead of a pinned version. Parameter changed: `image.pullPolicy` now defaults to `Always` instead of `IfNotPresent`. Note: Renamed ClusterRole `${release-name}-manager-role` to `${release-name}-manager`. Note: Spec of ClusterRole `${release-name}-leader-election-role` moved to `${release-name}-manager`. Note: Renamed ClusterRoleBinding `${release-name}-manager-rolebinding` to `${release-name}`. Note: ClusterRoleBinding `${release-name}-leader-election-rolebinding` removed (not needed anymore). Note: Renamed ClusterRole `${release-name}-k8up-view` to `${release-name}-view`. Note: Renamed ClusterRole `${release-name}-k8up-edit` to `${release-name}-edit`. The image tag is now pinned again and not using a floating tag. Parameter changed: `image.tag` now defaults to a pinned version. Each new K8up version now requires also a new chart version. 
Parameter changed: `image.pullPolicy` now defaults to `IfNotPresent` instead of `Always`. Parameter changed: `k8up.backupImage.repository` is now unset, which defaults to the same image as defined in `image.{registry/repository}`. Parameter changed: `k8up.backupImage.tag` is now unset, which defaults to the same image tag as defined in `image.tag`. <https://github.com/k8up-io/k8up> <!-- Common/Useful Link references from values.yaml -->
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "K8up",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"ark restore delete\" layout: docs Delete a restore Delete a restore ``` ark restore delete NAME [flags] ``` ``` -h, --help help for delete ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restores"
}
] |
{
"category": "Runtime",
"file_name": "ark_restore_delete.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
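For a concrete invocation of the command documented above, here is a minimal usage sketch; the restore name is hypothetical and `ark restore get` is assumed to be available alongside the other restore subcommands:

```bash
# List existing restores to find the one to remove
ark restore get

# Delete a restore by name (operates in the default heptio-ark namespace)
ark restore delete my-app-restore

# Target the namespace explicitly with the documented -n flag
ark restore delete my-app-restore -n heptio-ark
```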
[
{
"data": "title: CephObjectZone CRD Rook allows creation of zones in a ceph cluster for a configuration through a CRD. The following settings are available for Ceph object store zones. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectZone metadata: name: zone-a namespace: rook-ceph spec: zoneGroup: zonegroup-a metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: osd erasureCoded: dataChunks: 2 codingChunks: 1 customEndpoints: \"http://rgw-a.fqdn\" preservePoolsOnDelete: true ``` `name`: The name of the object zone to create `namespace`: The namespace of the Rook cluster where the object zone is created. The pools allow all of the settings defined in the Pool CRD spec. For more details, see the settings. In the example above, there must be at least three hosts (size 3) and at least three devices (2 data + 1 coding chunks) in the cluster. `zonegroup`: The object zonegroup in which the zone will be created. This matches the name of the object zone group CRD. `metadataPool`: The settings used to create all of the object store metadata pools. Must use replication. `dataPool`: The settings to create the object store data pool. Can use replication or erasure coding. `customEndpoints`: Specify the endpoint(s) that will accept multisite replication traffic for this zone. You may include the port in the definition if necessary. For example: \"https://my-object-store.my-domain.net:443\". By default, Rook will set this to the DNS name of the ClusterIP Service created for the CephObjectStore that corresponds to this zone. Most multisite configurations will not exist within the same Kubernetes cluster, meaning the default value will not be useful. In these cases, you will be required to create your own custom ingress resource for the CephObjectStore in order to make the zone available for replication. You must add the endpoint for your custom ingress resource to this list to allow the store to accept replication traffic. In the case of multiple stores (or multiple endpoints for a single store), you are not required to put all endpoints in this list. Only specify the endpoints that should be used for replication traffic. If you update `customEndpoints` to return to an empty list, you must the Rook operator to automatically add the CephObjectStore service endpoint to Ceph's internal configuration. `preservePoolsOnDelete`: If it is set to 'true' the pools used to support the CephObjectZone will remain when it is deleted. This is a security measure to avoid accidental loss of data. It is set to 'true' by default. It is better to check whether data synced with other peer zones before triggering the deletion to avoid accidental loss of data via steps mentioned When deleting a CephObjectZone, deletion will be blocked until all `CephObjectStores` belonging to the zone are removed."
}
] |
{
"category": "Runtime",
"file_name": "ceph-object-zone-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
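The zone spec above notes that after emptying `customEndpoints` the Rook operator must be prompted to re-add the CephObjectStore service endpoint; a common way to do that in a default installation is to restart the operator Deployment. The manifest filename is hypothetical, and `rook-ceph-operator` in the `rook-ceph` namespace is the usual default name, which may differ in a customized cluster:

```bash
# Apply the updated CephObjectZone with customEndpoints removed or emptied
kubectl apply -f object-zone.yaml

# Restart the operator so it re-reconciles the zone's endpoints
# (default Deployment name assumed; adjust if your install renames it)
kubectl -n rook-ceph rollout restart deployment rook-ceph-operator

# Wait for the operator to come back up
kubectl -n rook-ceph rollout status deployment rook-ceph-operator
```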
[
{
"data": "- https://github.com/heptio/ark/releases/tag/v0.10.2 upgrade restic to v0.9.4 & replace --hostname flag with --host (#1156, @skriss) use 'restic stats' instead of 'restic check' to determine if repo exists (#1171, @skriss) Fix concurrency bug in code ensuring restic repository exists (#1235, @skriss) https://github.com/heptio/ark/releases/tag/v0.10.1 Fix minio setup job command (#1118, @acbramley) Add debugging-install link in doc get-started.md (#1131, @hex108) `ark version`: show full git SHA & combine git tree state indicator with git SHA line (#1124, @skriss) Delete spec.priority in pod restore action (#879, @mwieczorek) Allow to use AWS Signature v1 for creating signed AWS urls (#811, @bashofmann) add multizone/regional support to gcp (#765, @wwitzel3) Fixed the newline output when deleting a schedule. (#1120, @jwhitcraft) Remove obsolete make targets and rename 'make goreleaser' to 'make release' (#1114, @skriss) Update to go 1.11 (#1069, @gliptak) Update CHANGELOGs (#1063, @wwitzel3) Initialize empty schedule metrics on server init (#1054, @cbeneke) Added brew reference (#1051, @omerlh) Remove default token from all service accounts (#1048, @ncdc) Add pprof support to the Ark server (#234, @ncdc) https://github.com/heptio/ark/releases/tag/v0.10.0 We've introduced two new custom resource definitions, `BackupStorageLocation` and `VolumeSnapshotLocation`, that replace the `Config` CRD from previous versions. As part of this, you may now configure more than one possible location for where backups and snapshots are stored, and when you create a `Backup` you can select the location where you'd like that particular backup to be stored. See the for an overview of this feature. Ark's plugin system has been significantly refactored to improve robustness and ease of development. Plugin processes are now automatically restarted if they unexpectedly terminate. Additionally, plugin binaries can now contain more than one plugin implementation (e.g. and object store and a block store, or many backup item actions). The sync process, which ensures that Backup custom resources exist for each backup in object storage, has been revamped to run much more frequently (once per minute rather than once per hour), to use significantly fewer cloud provider API calls, and to not generate spurious Kubernetes API errors. Ark can now be configured to store all data under a prefix within an object storage bucket. This means that you no longer need a separate bucket per Ark instance; you can now have all of your clusters' Ark backups go into a single bucket, with each cluster having its own prefix/subdirectory within that bucket. Restic backup data is now automatically stored within the same bucket/prefix as the rest of the Ark data. A separate bucket is no longer required (or allowed). Ark resources (backups, restores, schedules) can now be bulk-deleted through the `ark` CLI, using the `--all` or `--selector` flags, or by specifying multiple resource names as arguments to the `delete`"
},
{
"data": "The `ark` CLI now supports waiting for backups and restores to complete with the `--wait` flag for `ark backup create` and `ark restore create` Restores can be created directly from the most recent backup for a schedule, using `ark restore create --from-schedule SCHEDULE_NAME` Heptio Ark v0.10 contains a number of breaking changes. Upgrading will require some additional steps beyond just updating your client binary and your container image tag. We've provided a to help you with the upgrade process. Please read and follow these instructions carefully to ensure a successful upgrade! The `Config` CRD has been replaced by `BackupStorageLocation` and `VolumeSnapshotLocation` CRDs. The interface for external plugins (object/block stores, backup/restore item actions) has changed. If you have authored any custom plugins, they'll need to be updated for v0.10. The signature has changed to add a `prefix` parameter. Registering plugins has changed. Create a new plugin server with the `NewServer` function, and register plugins with the appropriate functions. See the interface for details. The organization of Ark data in object storage has changed. Existing data will need to be moved around to conform to the new layout. update docs to reference config/ dir within release tarballs goreleaser: update example image tags to match version being released add rbac content, rework get-started for NodePort and publicUrl, add versioning information add content for docs issue 819 add doc explaining locations Added qps and burst to server's client Support a separate URL base for pre-signed URLs Update storage-layout-reorg-v0.10.md lower some noisy logs to debug level add troubleshooting for loadbalancer restores remove code that checks directly for a backup from restore controller Move clearing up of metadata before plugin's actions Document upgrading plugins in the deployment fix goreleaser bugs Add correct link and change role add 0.10 breaking changes warning to readme in main add content for issue 994 address docs issue #978 don't require a default provider VSL if there's only 1 v0.10 changelog add docs page on versions and upgrading goreleaser scripts for building/creating a release on a workstation update restic prerequisite with min k8s version Silence git detached HEAD advice in build container instructions for upgrading to v0.10 sync controller: fill in missing .spec.storageLocation fix bug preventing PV snapshots from v0.10 backups from restoring Run 'make update' to update formatting Update formatting script support restoring/deleting legacy backups with .status.volumeBackups rename variables #967 fix broken link restore storageclasses before pvs and pvcs backup describer: show snapshot summary by default, details optionally remove pvProviderExists param from NewRestoreController create a struct for multiple return of same type in restore_contoroller #967 Corrected grammatical error Specify return arguments"
},
{
"data": "#424: Add CRDs to list of prioritized resources fix bugs in GetBackupVolumeSnapshots and add test remove all references to Config from docs/examples remove Config-related code update restore process using snapshot locations avoid panics if can't get block store during deletion update backup deletion controller for snapshot locations include snapshot locations in created schedule's backup spec azure: update blockstore to allow storing snaps in different resource group close gzip writer before uploading volumesnapshots file store volume snapshot info as JSON in backup storage add --volume-snapshot-locations flag to ark backup create update backup code to work with volume snapshot locations add unit test for getDefaultVolumeSnapshotLocations add default-volume-snapshot-locations to server cmd Default and validate VolumeSnapshotLocations add create CLI command for snapshot locations Add printer for snapshot locations Add volume snapshot CLI get command Add VolumeLocation and Snapshot. upgrade to restic v0.9.3 Remove broken references to docs that are not existing Fixed relative link for image don't require a default backup storage location to exist templatize error message in DeleteOptions add support for bulk deletion to ark schedule delete add azure-specific code to support multi-location restic update restic to support multiple backup storage locations Change link for the support matrix Fix broken storage providers link fix backup storage location example YAMLs only sync a backup location if it's changed since last sync clarify Azure resource group usage in docs Minor code cleanup Fix formatting for live site add documentation on running Ark on-premises have restic share main Ark bucket refactor to make valid dirs part of an object store layout store backups & restores in backups/, restores/ subdirs in obj storage add support for bulk deletion to ark restore delete remove deps used for docs gen remove script for generating docs remove cli reference docs and related scripts Fix infinite sleep in fsfreeze container Add links for Portworx plugin support Fix Portworx name in doc Make fsfreeze image building consistent get a new metadata accessor after calling backup item actions Adding support for the AWSCLUSTERNAME env variable allowing to claim volumes ownership Document single binary plugins Remove ROADMAP.md, update ZenHub link to Ark board convert all controllers to use genericController, logContext -> log Document SignatureDoesNotMatch error and triaging move ObjectStore mock into pkg/cloudprovider/mocks add a BackupStore to pkg/persistence that supports prefixes create pkg/persistence and move relevant code from pkg/cloudprovider into it move object and block store interfaces to their own files Set schedule labels to subsequent backups set azure restic env vars based on default backup location's config Regenerate CLI docs Pin cobra version Update pflag version azure: update documentation and examples azure: refactor to not use helpers/ pkg, validate all env/config inputs azure: support different RGs/storage accounts per backup location azure: fix for breaking change in"
},
{
"data": "bump Azure SDK version and include storage mgmt package server: remove unused code, replace deprecated func controllers: take a newPluginManager func in constructors Update examples and docs for backup locations backup sync: process the default location first generic controller: allow controllers with only a resync func remove Config CRD's BackupStorageProvider & other obsolete code move server's defaultBackupLocation into config struct update sync controller for backup locations Use backup storage location during restore use the default backup storage location for restic Add storage location to backup get/describe download request: fix setting of log level for plugin manager backup deletion: fix setting of log level in plugin manager download request controller: fix bug in determining expiration refactor download request controller test and add test cases download request controller: use backup location for object store backup deletion controller: use backup location for object store Use backup location in the backup controller add create and get CLI commands for backup locations add --default-backup-storage-location flag to server cmd Add --storage-location argument to create commands Correct metadata for BackupStorageLocationList Generate clients for BackupStorageLocation Add BackupStorageLocation API type apply annotations on single line, no restore mode minor word updates and command wrapping Update hooks/fsfreeze example add an ark bug command Add DigitalOcean to S3-compatible backup providers Fix map merging logic Switch Config CRD elements to server flags start using a namespaced label on restored objects, deprecate old label Bring back 'make local' add bulk deletion support to ark backup delete Preserve node ports during restore when annotations hold specification. Add --wait support to ark backup create Document CRD not found errors Extend doc about synchronization Add --wait support to `ark restore create` Only delete unused backup if they are complete remove SnapshotService, replace with direct BlockStore usage Refactor plugin management Add restore failed phase and metrics update testify to latest released version Add schedule command info to quickstart fix bug preventing backup item action item updates from saving Delete backups from etcd if they're not in storage Fix ZenHub link on Readme.md Update gcp-config.md check s3URL scheme upon AWS ObjectStore Init() Add contributor docs for our ZenHub usage cleanup service account action log statement Initialize schedule Prometheus metrics to have them created beforehand (see https://prometheus.io/docs/practices/instrumentation/#avoid-missing-metrics) Clarify that object storage should per-cluster delete old deletion requests for backup when processing a new one return nil error if 404 encountered when deleting snapshots fix tagging latest by using make's ifeq Add commands for context to the bug template Update Ark library code to work with Kubernetes 1.11 Add some basic troubleshooting commands require namespace for backups/etc. 
to exist at server startup switch to using .status.startTimestamp for sorting backups Record backup completion time before uploading Add example ark version command to issue templates Add minor improvements and aws example<Plug>delimitMateCR Skip backup sync if it already exists in k8s restore controller: switch to 'c' for receiver name enable a schedule to be provided as the source for a restore fix up Slack link in troubleshooting on main branch Document how to run the Ark server locally Remove outdated namespace deletion content fix paths use posix-compliant conditional for checking TAG_LATEST Added new templates replace pkg/restore's osFileSystem with pkg/util/filesystem's Update generated Ark code based on the 1.11 k8s.io/code-generator script Update vendored library code for Kubernetes 1.11"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-0.10.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
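To make the CLI-facing v0.10 highlights above more concrete, here is a sketch of the workflows they describe; the backup, schedule, location, and label names are invented for illustration:

```bash
# Create a backup in a specific backup storage location and block until it completes
ark backup create nightly-apps --storage-location secondary --wait

# Restore directly from the most recent backup produced by a schedule
ark restore create --from-schedule daily-backup

# Bulk-delete Ark resources by label selector, or all at once
ark backup delete --selector app=legacy
ark restore delete --all
```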
[
{
"data": "This is an example configuration and quick start guide for the installation of rkt from source on Ubuntu 16.04 GNOME. For a detailed developer's reference, see . In this example ~/Repos is a personal workspace where all repos are stored ```sh $ mkdir ~/Repos && cd ~/Repos $ mkdir -p ~/.local/gopath/src/github.com/rkt $ sudo apt-get install git $ git -C ~/.local/gopath/src/github.com/rkt clone https://github.com/rkt/rkt.git $ ln -s ~/.local/gopath/src/github.com/rkt/rkt rkt ``` On a fresh system installation, few additional software packages are needed to correctly build rkt: ```sh $ sudo ~/Repos/rkt/scripts/install-deps-debian-sid.sh ``` See also . ``` $ cd ~/Downloads $ wget https://storage.googleapis.com/golang/go1.6.1.linux-amd64.tar.gz $ tar -xvf go1.6.1.linux-amd64.tar.gz $ mv go ~/.local ``` Add GO variables to .bashrc file: ```sh export PATH=~/.local/bin:~/.local/go/bin:$PATH export GOPATH=~/.local/gopath export GOROOT=~/.local/go ``` Ccache can save a lot of time. If you build a kernel once, most of the compiled code can just be taken from the cache. Ccache can be configured in a few easy steps: ```sh $ sudo apt-get install ccache $ ccache --max-size=10G $ sudo ln -s /usr/bin/ccache /usr/local/bin/gcc ``` The maximum cache size is 10GB now (the default value is too small to cache kernel compilation). Run the autogen and configure commands with the relevant arguments, for example (kvm as flavor): ```sh $ cd ~/Repos/rkt $ ./autogen.sh && ./configure --enable-functional-tests --enable-incremental-build --with-stage1-flavors=kvm ``` Now build rkt with: ```sh $ make V=2 -j ``` REMEMBER: If you want to test somebody else's changes: ```sh $ git checkout <branch> $ make clean $ ./autogen.sh && ./configure <proper arguments> ``` ```sh $ ./tests/build-and-run-tests.sh -f kvm ``` ```sh $ make functional-check ``` ```sh $ make functional-check GOTESTFUNCARGS='-run TESTNAME_HERE' ``` See more in page. ```sh $ sudo ./build-rkt-*/bin/rkt run --insecure-options=image --interactive docker://busybox $ exit $ sudo ./build-rkt-*/bin/rkt gc --grace-period=0 ``` ```sh for link in $(ip link | grep rkt | cut -d':' -f2 | cut -d'@' -f1); sudo ip link del \"${link}\" done ``` ```sh gofmt -s -w file.go ```"
}
] |
{
"category": "Runtime",
"file_name": "quickstart-dev.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This blog is a space for engineers and community members to share perspectives and deep dives on technology and design within the gVisor project. Though our logo suggests we're in the business of space exploration (or perhaps fighting sea monsters), we're actually in the business of sandboxing Linux containers. When we created gVisor, we had three specific goals in mind; _container-native security, resource efficiency, and platform portability_. To put it simply, gVisor provides efficient defense-in-depth for containers anywhere. This post addresses gVisor's container-native security, specifically how gVisor provides strong isolation between an application and the host OS. Future posts will address resource efficiency (how gVisor preserves container benefits like fast starts, smaller snapshots, and less memory overhead than VMs) and platform portability (run gVisor wherever Linux OCI containers run). Delivering on each of these goals requires careful security considerations and a robust design. gVisor allows the execution of untrusted containers, preventing them from adversely affecting the host. This means that the untrusted container is prevented from attacking or spying on either the host kernel or any other peer userspace processes on the host. For example, if you are a cloud container hosting service, running containers from different customers on the same virtual machine means that compromises expose customer data. Properly configured, gVisor can provide sufficient isolation to allow different customers to run containers on the same host. There are many aspects to the proper configuration, including limiting file and network access, which we will discuss in future posts. gVisor was designed around the premise that any security boundary could potentially be compromised with enough time and resources. We tried to optimize for a solution that was as costly and time-consuming for an attacker as possible, at every layer. Consequently, gVisor was built through a combination of intentional design principles and specific technology choices that work together to provide the security isolation needed for running hostile containers on a host. We'll dig into it in the next section! gVisor was designed with some common principles in mind: Defense-in-Depth, Principle of Least-Privilege, Attack Surface Reduction and Secure-by-Default[^1]. In general, Design Principles outline good engineering practices, but in the case of security, they also can be thought of as a set of tactics. In a real-life castle, there is no single defensive feature. Rather, there are many in combination: redundant walls, scattered draw bridges, small bottle-neck entrances, moats, etc. A simplified version of the design is below ()[^2]: In order to discuss design principles, the following components are important to know: runsc - binary that packages the Sentry, platform, and Gofer(s) that run containers. runsc is the drop-in binary for running gVisor in Docker and Kubernetes. Untrusted Application - container running in the sandbox. Untrusted application/container are used interchangeably in this article. Platform Syscall Switcher - intercepts syscalls from the application and passes them to the Sentry with no further handling. Sentry - The \"application kernel\" in userspace that serves the untrusted application. Each application instance has its own Sentry. The Sentry handles syscalls, routes I/O to gofers, and manages memory and CPU, all in userspace. 
The Sentry is allowed to make limited, filtered syscalls to the host"
},
{
"data": "Gofer - a process that specifically handles different types of I/O for the Sentry (usually disk I/O). Gofers are also allowed to make filtered syscalls to the Host OS. Host OS - the actual OS on which gVisor containers are running, always some flavor of Linux (sorry, Windows/MacOS users). It is important to emphasize what is being protected from the untrusted application in this diagram: the host OS and other userspace applications. In this post, we are only discussing security-related features of gVisor, and you might ask, \"What about performance, compatibility and stability?\" We will cover these considerations in future posts. For gVisor, Defense-in-Depth means each component of the software stack trusts the other components as little as possible. It may seem strange that we would want our own software components to distrust each other. But by limiting the trust between small, discrete components, each component is forced to defend itself against potentially malicious input. And when you stack these components on top of each other, you can ensure that multiple security barriers must be overcome by an attacker. And this leads us to how Defense-in-Depth is applied to gVisor: no single vulnerability should compromise the host. In the \"Attacker's Advantage / Defender's Dilemma,\" the defender must succeed all the time while the attacker only needs to succeed once. Defense in Depth inverts this principle: once the attacker successfully compromises any given software component, they are immediately faced with needing to compromise a subsequent, distinct layer in order to move laterally or acquire more privilege. For example, the untrusted container is isolated from the Sentry. The Sentry is isolated from host I/O operations by serving those requests in separate processes called Gofers. And both the untrusted container and its associated Gofers are isolated from the host process that is running the sandbox. An additional benefit is that this generally leads to more robust and stable software, forcing interfaces to be strictly defined and tested to ensure all inputs are properly parsed and bounds checked. The principle of Least-Privilege implies that each software component has only the permissions it needs to function, and no more. Least-Privilege is applied throughout gVisor. Each component and more importantly, each interface between the components, is designed so that only the minimum level of permission is required for it to perform its function. Specifically, the closer you are to the untrusted application, the less privilege you have. This is evident in how runsc (the drop in gVisor binary for Docker/Kubernetes) constructs the sandbox. The Sentry has the least privilege possible (it can't even open a file!). Gofers are only allowed file access, so even if it were compromised, the host network would be unavailable. Only the runsc binary itself has full access to the host OS, and even runsc's access to the host OS is often limited through capabilities / chroot / namespacing. Designing a system with Defense-in-Depth and Least-Privilege in mind encourages small, separate, single-purpose components, each with very restricted privileges. There are no bugs in unwritten code. In other words, gVisor supports a feature if and only if it is needed to run host Linux containers. There are a lot of things gVisor does not need to"
},
{
"data": "For example, it does not need to support arbitrary device drivers, nor does it need to support video playback. By not implementing what will not be used, we avoid introducing potential bugs in our code. That is not to say gVisor has limited functionality! Quite the opposite, we analyzed what is actually needed to run Linux containers and today the Sentry supports 237 syscalls[^3]<sup>,</sup>[^4], along with the range of critical /proc and /dev files. However, gVisor does not support every syscall in the Linux kernel. There are about 350 syscalls[^5] within the 5.3.11 version of the Linux kernel, many of which do not apply to Linux containers that typically host cloud-like workloads. For example, we don't support old versions of epoll (epollctlold, epollwaitold), because they are deprecated in Linux and no supported workloads use them. Furthermore, any exploited vulnerabilities in the implemented syscalls (or Sentry code in general) only apply to gaining control of the Sentry. More on this in a later post. The Sentry's interactions with the Host OS are restricted in many ways. For instance, no syscall is \"passed-through\" from the untrusted application to the host OS. All syscalls are intercepted and interpreted. In the case where the Sentry needs to call the Host OS, we severely limit the syscalls that the Sentry itself is allowed to make to the host kernel[^6]. For example, there are many file-system based attacks, where manipulation of files or their paths, can lead to compromise of the host[^7]. As a result, the Sentry does not allow any syscall that creates or opens a file descriptor. All file descriptors must be donated to the sandbox. By disallowing open or creation of file descriptors, we eliminate entire categories of these file-based attacks. This does not affect functionality though. For example, during startup, runsc will donate FDs the Sentry that allow for mapping STDIN/STDOUT/STDERR to the sandboxed application. Also the Gofer may donate an FD to the Sentry, allowing for direct access to some files. And most files will be remotely accessed through the Gofers, in which case no FDs are donated to the Sentry. The Sentry itself is only allowed access to specific . Without networking, the Sentry needs 53 host syscalls in order to function, and with networking, it uses an additional 15[^8]. By limiting the allowlist to only these needed syscalls, we radically reduce the amount of host OS attack surface. If any attempts are made to call something outside the allowlist, it is immediately blocked and the sandbox is killed by the Host OS. The Sentry communicates with the Gofer through a local unix domain socket (UDS) via a version of the 9P protocol[^9]. The UDS file descriptor is passed to the sandbox during initialization and all communication between the Sentry and Gofer happens via 9P. We will go more into how Gofers work in future posts. So, of the 350 syscalls in the Linux kernel, the Sentry needs to implement only 237 of them to support containers. At most, the Sentry only needs to call 68 of the host Linux syscalls. In other words, with gVisor, applications get the vast majority (and growing) functionality of Linux containers for only 68 possible syscalls to the Host OS. 350 syscalls to 68 is attack surface reduction. The default choice for a user should be"
},
{
"data": "If users need to run a less secure configuration of the sandbox for the sake of performance or application compatibility, they must make the choice explicitly. An example of this might be a networking application that is performance sensitive. Instead of using the safer, Go-based Netstack in the Sentry, the untrusted container can instead use the host Linux networking stack directly. However, this means the untrusted container will be directly interacting with the host, without the safety benefits of the sandbox. It also means that an attack could directly compromise the host through his path. These less secure configurations are not the default. In fact, the user must take action to change the configuration and run in a less secure mode. Additionally, these actions make it very obvious that a less secure configuration is being used. This can be as simple as forcing a default runtime flag option to the secure option. gVisor does this by always using its internal netstack by default. However, for certain performance sensitive applications, we allow the usage of the host OS networking stack, but it requires the user to actively set a flag[^10]. Technology choices for gVisor mainly involve things that will give us a security boundary. At a higher level, boundaries in software might be describing a great many things. It may be discussing the boundaries between threads, boundaries between processes, boundaries between CPU privilege levels, and more. Security boundaries are interfaces that are designed and built so that entire classes of bugs/vulnerabilities are eliminated. For example, the Sentry and Gofers are implemented using Go. Go was chosen for a number of the features it provided. Go is a fast, statically-typed, compiled language that has efficient multi-threading support, garbage collection and a constrained set of \"unsafe\" operations. Using these features enabled safe array and pointer handling. This means entire classes of vulnerabilities were eliminated, such as buffer overflows and use-after-free. Another example is our use of very strict syscall switching to ensure that the Sentry is always the first software component that parses and interprets the calls being made by the untrusted container. Here is an instance where different platforms use different solutions, but all of them share this common trait, whether it is through the use of ptrace \"a la PTRACE_ATTACH\"[^11] or kvm's ring0[^12]. Finally, one of the most restrictive choices was to use seccomp, to restrict the Sentry from being able to open or create a file descriptor on the host. All file I/O is required to go through Gofers. Preventing the opening or creation of file descriptions eliminates whole categories of bugs around file permissions [^13]. In part 2 of this blog post, we will explore gVisor from an attacker's point of view. We will use it as an opportunity to examine the specific strengths and weaknesses of each gVisor component. We will also use it to introduce Google's Vulnerability Reward Program[^14], and other ways the community can contribute to help make gVisor safe, fast and stable. <br> <br> Updated (2021-07-14): this post was updated to use more inclusive language. <br> -- <!-- mdformat off(mdformat formats this into multiple lines) --> <!-- mdformat on --> <!-- mdformat off(mdformat breaks this url by escaping the parenthesis) --> ) <!-- mdformat on -->"
}
] |
{
"category": "Runtime",
"file_name": "2019-11-18-security-basics.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
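Since the post above describes runsc as the drop-in binary for Docker and Kubernetes, and stresses that weakening the secure defaults (such as using the host network stack) requires an explicit flag, here is a rough sketch of what that wiring can look like. The binary path, runtime name, and the `--network=host` argument are assumptions based on common gVisor setups, not details taken from the post itself:

```bash
# Register runsc as an additional Docker runtime.
# NOTE: merge this into any existing daemon.json rather than overwriting it;
# the binary path is an assumption.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF
sudo systemctl restart docker

# Secure by default: the sandboxed container's networking goes through
# the Sentry's internal netstack, not the host stack.
docker run --rm --runtime=runsc alpine echo "hello from the sandbox"

# Explicit, less-secure opt-out would mean passing a host-networking flag to runsc
# (e.g. "--network=host" via runtimeArgs in daemon.json -- flag name assumed here).
```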
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Cilium Health Client Client for querying the Cilium health status API ``` cilium-health [flags] ``` ``` -D, --debug Enable debug messages -h, --help help for cilium-health -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Generate the autocompletion script for the specified shell - Display local cilium agent status - Check whether the cilium-health API is up - Display cilium connectivity to other nodes"
}
] |
{
"category": "Runtime",
"file_name": "cilium-health.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document borrows concepts, conventions, and text mainly from the following sources, extending them in order to provide a sensible guideline for writing commit messages for OpenEBS projects. Tim Pope's on readable commit messages Thanks to @stephenparish https://gist.github.com/stephenparish/9941e89d80e2bc58a153 Thanks to @abravalheri https://gist.github.com/abravalheri/34aeb7b18d61392251a2 These conventions are aimed at tools to automatically generate useful documentation, or by developers during debugging process. Any line of the commit message cannot be longer than 80 characters! This allows the message to be easier to read on github as well as in various git tools. ``` [TICKET] <type>(<scope>): <subject> <meta> <BLANK LINE> <body> <BLANK LINE> <footer> ``` Subject line may be prefixed for continuous integration purposes or better project management with a ticket id. The ticket id could be a Github Issue, Rally Id, JIRA Id, etc., For example, if you use Rally to track your development, the subject could be \"[TA-1234] test(mayactl): add unit tests for cstor volume list\" feat*: A new feature fix*: A bug fix docs*: Documentation only changes style*: Changes that do not affect the meaning of the code (white-space, formatting, missing semi-colons, etc) refactor*: A code change that neither fixes a bug nor adds a feature perf*: A code change that improves performance test*: Adding missing tests chore*: Changes to the build process or auxiliary tools and libraries such as documentation generation Scope could be anything specifying impacted module/package. For example: when committing to openebs/maya repo, the scope can be components : mayactl, m-apiserver, spc-watcher, cast, install, util, etc. generic : compile, travis-ci, etc. Subject line should contains succinct description of the change. use imperative, present tense: change not changed nor changes don't capitalize first letter no dot (.) at the end Additionally, the end of subject-line may contain twitter-inspired markup `#wip` - indicate for contributors the feature being implemented is not complete yet. Should not be included in changelogs (just the last commit for a feature goes to the changelog). `#nitpick` - the commit does not add useful information. Used when fixing typos, etc... Should not be included in changelogs. just as in `<subject>` use imperative, present tense: change not changed nor changes includes motivation for the change and contrasts with previous behavior All breaking changes have to be mentioned in footer with the description of the change, justification and migration notes ``` BREAKING CHANGE: DEFAULTREPLICANODE_SELECTOR will be ignored The support for using this ENV has been removed. To migrate you are required to mention the node selector in StorageClass Before: Specified as a string in the m-apiserver ENV as follows: name: DEFAULTREPLICANODE_SELECTOR value: \"nodetype=storage\" After: Specify in the StorageClass apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: openebs-jiva-nodeselector annotations: cas.openebs.io/config: | name: ReplicaNodeSelector value: |- nodetype: storage User feedback was to have node-selector configurable per StorageClass to allow scheduling volumes based on the storage attached to the nodes. 
``` Fixed bugs should be listed on a separate line in the footer prefixed with \"Fixes\" keyword like this: ``` Fixes #234 ``` or in case of multiple issues: ``` Fixes #123, #246, #333 ``` If the commit reverts a previous commit, it should begin with revert:, followed by the header of the reverted commit. In the body it should say: This reverts commit <hash>., where the hash is the SHA of the commit being reverted. Here are some PRs that follow the convention proposed in this document. https://github.com/openebs/openebs/pull/1876 https://github.com/openebs/maya/pull/502 https://github.com/openebs/jiva/pull/110 https://github.com/openebs/cstor/pull/38"
}
] |
{
"category": "Runtime",
"file_name": "git-commit-message.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
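Putting the pieces of the convention together, a complete commit message might look like the sketch below; the ticket id, scope, subject, body, and issue number are invented purely for illustration:

```bash
# Read the full message from stdin so the subject, body, and footer stay intact
git commit -F - <<'EOF'
[TA-1234] fix(mayactl): handle missing replica status in cstor volume list

Return an explicit "N/A" status instead of panicking when a replica has
not yet reported its state. This keeps the volume list usable while
replicas are still coming online.

Fixes #123
EOF
```

The subject line stays in the imperative, starts lowercase after the scope, omits the trailing period, and the footer references the fixed issue exactly as described above.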
[
{
"data": "Cobra can generate shell completions for multiple shells. The currently supported shells are: Bash Zsh fish PowerShell Cobra will automatically provide your program with a fully functional `completion` command, similarly to how it provides the `help` command. If you do not wish to use the default `completion` command, you can choose to provide your own, which will take precedence over the default one. (This also provides backwards-compatibility with programs that already have their own `completion` command.) If you are using the generator, you can create a completion command by running ```bash cobra add completion ``` and then modifying the generated `cmd/completion.go` file to look something like this (writing the shell script to stdout allows the most flexible use): ```go var completionCmd = &cobra.Command{ Use: \"completion [bash|zsh|fish|powershell]\", Short: \"Generate completion script\", Long: `To load completions: Bash: $ source <(yourprogram completion bash) $ yourprogram completion bash > /etc/bash_completion.d/yourprogram $ yourprogram completion bash > /usr/local/etc/bash_completion.d/yourprogram Zsh: $ echo \"autoload -U compinit; compinit\" >> ~/.zshrc $ yourprogram completion zsh > \"${fpath[1]}/_yourprogram\" fish: $ yourprogram completion fish | source $ yourprogram completion fish > ~/.config/fish/completions/yourprogram.fish PowerShell: PS> yourprogram completion powershell | Out-String | Invoke-Expression PS> yourprogram completion powershell > yourprogram.ps1 `, DisableFlagsInUseLine: true, ValidArgs: []string{\"bash\", \"zsh\", \"fish\", \"powershell\"}, Args: cobra.ExactValidArgs(1), Run: func(cmd *cobra.Command, args []string) { switch args[0] { case \"bash\": cmd.Root().GenBashCompletion(os.Stdout) case \"zsh\": cmd.Root().GenZshCompletion(os.Stdout) case \"fish\": cmd.Root().GenFishCompletion(os.Stdout, true) case \"powershell\": cmd.Root().GenPowerShellCompletionWithDesc(os.Stdout) } }, } ``` Note: The cobra generator may include messages printed to stdout, for example, if the config file is loaded; this will break the auto-completion script so must be removed. Cobra provides a few options for the default `completion` command. To configure such options you must set the `CompletionOptions` field on the root command. To tell Cobra not to provide the default `completion` command: ``` rootCmd.CompletionOptions.DisableDefaultCmd = true ``` To tell Cobra not to provide the user with the `--no-descriptions` flag to the completion sub-commands: ``` rootCmd.CompletionOptions.DisableNoDescFlag = true ``` To tell Cobra to completely disable descriptions for completions: ``` rootCmd.CompletionOptions.DisableDescriptions = true ``` The generated completion scripts will automatically handle completing commands and flags. However, you can make your completions much more powerful by providing information to complete your program's nouns and flag values. Cobra allows you to provide a pre-defined list of completion choices for your nouns using the `ValidArgs` field. For example, if you want `kubectl get ` to show a list of valid \"nouns\" you have to set them. Some simplified code from `kubectl get` looks like: ```go validArgs []string = { \"pod\", \"node\", \"service\", \"replicationcontroller\" } cmd := &cobra.Command{ Use: \"get [(-o|--output=)json|yaml|template|...] 
(RESOURCE [NAME] | RESOURCE/NAME ...)\", Short: \"Display one or many resources\", Long: get_long, Example: get_example, Run: func(cmd *cobra.Command, args []string) { cobra.CheckErr(RunGet(f, out, cmd, args)) }, ValidArgs: validArgs, } ``` Notice we put the `ValidArgs` field on the `get` sub-command. Doing so will give results like: ```bash $ kubectl get node pod replicationcontroller service ``` If your nouns have aliases, you can define them alongside `ValidArgs` using `ArgAliases`: ```go argAliases []string = { \"pods\", \"nodes\", \"services\", \"svc\", \"replicationcontrollers\", \"rc\" } cmd := &cobra.Command{ ... ValidArgs: validArgs, ArgAliases: argAliases } ``` The aliases are not shown to the user on tab completion, but they are accepted as valid nouns by the completion algorithm if entered manually, e.g. in: ```bash $ kubectl get rc backend frontend database ``` Note that without declaring `rc` as an alias, the completion algorithm would not know to show the list of replication controllers following `rc`. In some cases it is not possible to provide a list of completions in"
},
{
"data": "Instead, the list of completions must be determined at execution-time. In a similar fashion as for static completions, you can use the `ValidArgsFunction` field to provide a Go function that Cobra will execute when it needs the list of completion choices for the nouns of a command. Note that either `ValidArgs` or `ValidArgsFunction` can be used for a single cobra command, but not both. Simplified code from `helm status` looks like: ```go cmd := &cobra.Command{ Use: \"status RELEASE_NAME\", Short: \"Display the status of the named release\", Long: status_long, RunE: func(cmd *cobra.Command, args []string) { RunGet(args[0]) }, ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { if len(args) != 0 { return nil, cobra.ShellCompDirectiveNoFileComp } return getReleasesFromCluster(toComplete), cobra.ShellCompDirectiveNoFileComp }, } ``` Where `getReleasesFromCluster()` is a Go function that obtains the list of current Helm releases running on the Kubernetes cluster. Notice we put the `ValidArgsFunction` on the `status` sub-command. Let's assume the Helm releases on the cluster are: `harbor`, `notary`, `rook` and `thanos` then this dynamic completion will give results like: ```bash $ helm status harbor notary rook thanos ``` You may have noticed the use of `cobra.ShellCompDirective`. These directives are bit fields allowing to control some shell completion behaviors for your particular completion. You can combine them with the bit-or operator such as `cobra.ShellCompDirectiveNoSpace | cobra.ShellCompDirectiveNoFileComp` ```go // Indicates that the shell will perform its default behavior after completions // have been provided (this implies none of the other directives). ShellCompDirectiveDefault // Indicates an error occurred and completions should be ignored. ShellCompDirectiveError // Indicates that the shell should not add a space after the completion, // even if there is a single completion provided. ShellCompDirectiveNoSpace // Indicates that the shell should not provide file completion even when // no completion is provided. ShellCompDirectiveNoFileComp // Indicates that the returned completions should be used as file extension filters. // For example, to complete only files of the form .json or .yaml: // return []string{\"yaml\", \"json\"}, ShellCompDirectiveFilterFileExt // For flags, using MarkFlagFilename() and MarkPersistentFlagFilename() // is a shortcut to using this directive explicitly. // ShellCompDirectiveFilterFileExt // Indicates that only directory names should be provided in file completion. // For example: // return nil, ShellCompDirectiveFilterDirs // For flags, using MarkFlagDirname() is a shortcut to using this directive explicitly. // // To request directory names within another directory, the returned completions // should specify a single directory name within which to search. For example, // to complete directories within \"themes/\": // return []string{\"themes\"}, ShellCompDirectiveFilterDirs // ShellCompDirectiveFilterDirs ``` *Note*: When using the `ValidArgsFunction`, Cobra will call your registered function after having parsed all flags and arguments provided in the command-line. You therefore don't need to do this parsing yourself. For example, when a user calls `helm status --namespace my-rook-ns `, Cobra will call your registered `ValidArgsFunction` after having parsed the `--namespace` flag, as it would have done when calling the `RunE` function. 
Cobra achieves dynamic completion through the use of a hidden command called by the completion script. To debug your Go completion code, you can call this hidden command directly: ```bash $ helm complete status har<ENTER> harbor :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` *Important:* If the noun to complete is empty (when the user has not yet typed any letters of that noun), you must pass an empty parameter to the `complete` command: ```bash $ helm complete status \"\"<ENTER> harbor notary rook thanos :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` Calling the `complete` command directly allows you to run the Go debugger to troubleshoot your"
},
{
"data": "You can also add printouts to your code; Cobra provides the following functions to use for printouts in Go completion code: ```go // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and optionally prints to stderr. cobra.CompDebug(msg string, printToStdErr bool) { cobra.CompDebugln(msg string, printToStdErr bool) // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and to stderr. cobra.CompError(msg string) cobra.CompErrorln(msg string) ``` *Important: You should not* leave traces that print directly to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned above. Most of the time completions will only show sub-commands. But if a flag is required to make a sub-command work, you probably want it to show up when the user types . You can mark a flag as 'Required' like so: ```go cmd.MarkFlagRequired(\"pod\") cmd.MarkFlagRequired(\"container\") ``` and you'll get something like ```bash $ kubectl exec -c --container= -p --pod= ``` As for nouns, Cobra provides a way of defining dynamic completion of flags. To provide a Go function that Cobra will execute when it needs the list of completion choices for a flag, you must register the function using the `command.RegisterFlagCompletionFunc()` function. ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"json\", \"table\", \"yaml\"}, cobra.ShellCompDirectiveDefault }) ``` Notice that calling `RegisterFlagCompletionFunc()` is done through the `command` with which the flag is associated. In our example this dynamic completion will give results like so: ```bash $ helm status --output json table yaml ``` You can also easily debug your Go completion code for flags: ```bash $ helm complete status --output \"\" json table yaml :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` *Important: You should not* leave traces that print to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned further above. 
To limit completions of flag values to file names with certain extensions you can either use the different `MarkFlagFilename()` functions or a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterFileExt`, like so: ```go flagName := \"output\" cmd.MarkFlagFilename(flagName, \"yaml\", \"json\") ``` or ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"yaml\", \"json\"}, ShellCompDirectiveFilterFileExt}) ``` To limit completions of flag values to directory names you can either use the `MarkFlagDirname()` functions or a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterDirs`, like so: ```go flagName := \"output\" cmd.MarkFlagDirname(flagName) ``` or ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return nil, cobra.ShellCompDirectiveFilterDirs }) ``` To limit completions of flag values to directory names within another directory you can use a combination of `RegisterFlagCompletionFunc()` and `ShellCompDirectiveFilterDirs` like so: ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"themes\"}, cobra.ShellCompDirectiveFilterDirs }) ``` Cobra provides support for completion descriptions. Such descriptions are supported for each shell (however, for bash, it is only available in the ). For commands and flags, Cobra will provide the descriptions automatically, based on usage information. For example, using zsh: ``` $ helm s[tab] search -- search for a keyword in charts show -- show information of a chart status -- displays the status of the named release ``` while using fish: ``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) ``` Cobra allows you to add descriptions to your own"
},
{
"data": "Simply add the description text after each completion, following a `\\t` separator. This technique applies to completions returned by `ValidArgs`, `ValidArgsFunction` and `RegisterFlagCompletionFunc()`. For example: ```go ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"harbor\\tAn image registry\", \"thanos\\tLong-term metrics\"}, cobra.ShellCompDirectiveNoFileComp }} ``` or ```go ValidArgs: []string{\"bash\\tCompletions for bash\", \"zsh\\tCompletions for zsh\"} ``` The bash completion script generated by Cobra requires the `bashcompletion` package. You should update the help text of your completion command to show how to install the `bashcompletion` package () You can also configure `bash` aliases for your program and they will also support completions. ```bash alias aliasname=origcommand complete -o default -F start_origcommand aliasname $ aliasname <tab><tab> completion firstcommand secondcommand ``` For backward compatibility, Cobra still supports its bash legacy dynamic completion solution. Please refer to for details. Cobra provides two versions for bash completion. The original bash completion (which started it all!) can be used by calling `GenBashCompletion()` or `GenBashCompletionFile()`. A new V2 bash completion version is also available. This version can be used by calling `GenBashCompletionV2()` or `GenBashCompletionFileV2()`. The V2 version does not support the legacy dynamic completion (see ) but instead works only with the Go dynamic completion solution described in this document. Unless your program already uses the legacy dynamic completion solution, it is recommended that you use the bash completion V2 solution which provides the following extra features: Supports completion descriptions (like the other shells) Small completion script of less than 300 lines (v1 generates scripts of thousands of lines; `kubectl` for example has a bash v1 completion script of over 13K lines) Streamlined user experience thanks to a completion behavior aligned with the other shells `Bash` completion V2 supports descriptions for completions. When calling `GenBashCompletionV2()` or `GenBashCompletionFileV2()` you must provide these functions with a parameter indicating if the completions should be annotated with a description; Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. ``` $ helm s search (search for a keyword in charts) status (display the status of the named release) show (show information of a chart) $ helm s search show status ``` Note: Cobra's default `completion` command uses bash completion V2. If for some reason you need to use bash completion V1, you will need to implement your own `completion` command. Cobra supports native zsh completion generated from the root `cobra.Command`. The generated completion script should be put somewhere in your `$fpath` and be named `_<yourProgram>`. You will need to start a new shell for the completions to become available. Zsh supports descriptions for completions. Cobra will provide the description automatically, based on usage information. Cobra provides a way to completely disable such descriptions by using `GenZshCompletionNoDesc()` or `GenZshCompletionFileNoDesc()`. You can choose to make this a configurable option to your users. 
``` $ helm s[tab] search -- search for a keyword in charts show -- show information of a chart status -- displays the status of the named release $ helm s[tab] search show status ``` Note: Because of backward-compatibility requirements, we were forced to have a different API to disable completion descriptions between `zsh` and `fish`. Custom completions implemented in Bash scripting (legacy) are not supported and will be ignored for `zsh` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `zsh`. You should instead use `RegisterFlagCompletionFunc()`. Cobra 1.1 standardized its zsh completion support to align it with its other shell"
},
{
"data": "Although the API was kept backward-compatible, some small changes in behavior were introduced. Please refer to for details. Cobra supports native fish completions generated from the root `cobra.Command`. You can use the `command.GenFishCompletion()` or `command.GenFishCompletionFile()` functions. You must provide these functions with a parameter indicating if the completions should be annotated with a description; Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. ``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) $ helm s[tab] search show status ``` Note: Because of backward-compatibility requirements, we were forced to have a different API to disable completion descriptions between `zsh` and `fish`. Custom completions implemented in bash scripting (legacy) are not supported and will be ignored for `fish` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `fish`. You should instead use `RegisterFlagCompletionFunc()`. The following flag completion annotations are not supported and will be ignored for `fish`: `BashCompFilenameExt` (filtering by file extension) `BashCompSubdirsInDir` (filtering by directory) The functions corresponding to the above annotations are consequently not supported and will be ignored for `fish`: `MarkFlagFilename()` and `MarkPersistentFlagFilename()` (filtering by file extension) `MarkFlagDirname()` and `MarkPersistentFlagDirname()` (filtering by directory) Similarly, the following completion directives are not supported and will be ignored for `fish`: `ShellCompDirectiveFilterFileExt` (filtering by file extension) `ShellCompDirectiveFilterDirs` (filtering by directory) Cobra supports native PowerShell completions generated from the root `cobra.Command`. You can use the `command.GenPowerShellCompletion()` or `command.GenPowerShellCompletionFile()` functions. To include descriptions use `command.GenPowerShellCompletionWithDesc()` and `command.GenPowerShellCompletionFileWithDesc()`. Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. The script is designed to support all three PowerShell completion modes: TabCompleteNext (default windows style - on each key press the next option is displayed) Complete (works like bash) MenuComplete (works like zsh) You set the mode with `Set-PSReadLineKeyHandler -Key Tab -Function <mode>`. Descriptions are only displayed when using the `Complete` or `MenuComplete` mode. Users need PowerShell version 5.0 or above, which comes with Windows 10 and can be downloaded separately for Windows 7 or 8.1. They can then write the completions to a file and source this file from their PowerShell profile, which is referenced by the `$Profile` environment variable. See `Get-Help about_Profiles` for more info about PowerShell profiles. 
``` $ helm s[tab] search (search for a keyword in charts) show (show information of a chart) status (displays the status of the named release) $ helm s[tab] search show status search for a keyword in charts $ helm s[tab] search show status ``` Custom completions implemented in bash scripting (legacy) are not supported and will be ignored for `powershell` (including the use of the `BashCompCustom` flag annotation). You should instead use `ValidArgsFunction` and `RegisterFlagCompletionFunc()` which are portable to the different shells (`bash`, `zsh`, `fish`, `powershell`). The function `MarkFlagCustom()` is not supported and will be ignored for `powershell`. You should instead use `RegisterFlagCompletionFunc()`. The following flag completion annotations are not supported and will be ignored for `powershell`: `BashCompFilenameExt` (filtering by file extension) `BashCompSubdirsInDir` (filtering by directory) The functions corresponding to the above annotations are consequently not supported and will be ignored for `powershell`: `MarkFlagFilename()` and `MarkPersistentFlagFilename()` (filtering by file extension) `MarkFlagDirname()` and `MarkPersistentFlagDirname()` (filtering by directory) Similarly, the following completion directives are not supported and will be ignored for `powershell`: `ShellCompDirectiveFilterFileExt` (filtering by file extension) `ShellCompDirectiveFilterDirs` (filtering by directory)"
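To tie the generators above together, here is a minimal sketch (not taken from Helm or any particular project) of a hypothetical `completion` subcommand that a Cobra-based CLI could expose; `rootCmd` is assumed to be the application's root command defined elsewhere:

```go
package cli

import (
	"os"

	"github.com/spf13/cobra"
)

// newCompletionCmd wires Cobra's completion generators into a hypothetical
// "completion" subcommand. rootCmd is the application's root *cobra.Command.
func newCompletionCmd(rootCmd *cobra.Command) *cobra.Command {
	return &cobra.Command{
		Use:       "completion [bash|zsh|fish|powershell]",
		Short:     "Generate a shell completion script",
		ValidArgs: []string{"bash", "zsh", "fish", "powershell"},
		Args:      cobra.ExactValidArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			switch args[0] {
			case "bash":
				return rootCmd.GenBashCompletion(os.Stdout)
			case "zsh":
				return rootCmd.GenZshCompletion(os.Stdout)
			case "fish":
				// The second argument controls whether descriptions are included.
				return rootCmd.GenFishCompletion(os.Stdout, true)
			default: // "powershell"
				return rootCmd.GenPowerShellCompletionWithDesc(os.Stdout)
			}
		},
	}
}
```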
}
] |
{
"category": "Runtime",
"file_name": "shell_completions.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The `extension` backend provides an easy way for prototyping new backend types for flannel. It is not recommended for production use, for example it doesn't have a built in retry mechanism. This backend has the following configuration `Type` (string): `extension` `PreStartupCommand` (string): Command to run before allocating a network to this host The stdout of the process is captured and passed to the stdin of the SubnetAdd/Remove commands. `PostStartupCommand` (string): Command to run after allocating a network to this host The following environment variable is set SUBNET - The subnet of the remote host that was added. `SubnetAddCommand` (string): Command to run when a subnet is added stdin - The output from `PreStartupCommand` is passed in. The following environment variables are set SUBNET - The ipv4 subnet of the remote host that was added. IPV6SUBNET - The ipv6 subnet of the remote host that was added. PUBLIC_IP - The public IP of the remote host. PUBLIC_IPV6 - The public IPv6 of the remote host. `SubnetRemoveCommand`(string): Command to run when a subnet is removed stdin - The output from `PreStartupCommand` is passed in. The following environment variables are set SUBNET - The ipv4 subnet of the remote host that was removed. IPV6SUBNET - The ipv6 subnet of the remote host that was removed. PUBLIC_IP - The public IP of the remote host. PUBLIC_IPV6 - The public IPv6 of the remote host. All commands are run through the `sh` shell and are run with the same permissions as the flannel daemon. To replicate the functionality of the host-gw plugin, there's no need for a startup command. The backend just needs to manage the route to subnets when they are added or removed. An example ```json { \"Network\": \"10.0.0.0/16\", \"Backend\": { \"Type\": \"extension\", \"SubnetAddCommand\": \"ip route add $SUBNET via $PUBLIC_IP\", \"SubnetRemoveCommand\": \"ip route del $SUBNET via $PUBLIC_IP\" } } ``` VXLAN is more complex. It needs to store the MAC address of the vxlan device when it's created and to make it available to the flannel daemon running on other hosts. The address of the vxlan device also needs to be set after the subnet has been allocated. An example ```json { \"Network\": \"10.50.0.0/16\", \"Backend\": { \"Type\": \"extension\", \"PreStartupCommand\": \"export VNI=1; export IFNAME=flannel-vxlan; ip link del $IFNAME 2>/dev/null; ip link add $IFNAME type vxlan id $VNI dstport 8472 && cat /sys/class/net/$IFNAME/address\", \"PostStartupCommand\": \"export IFNAME=flannel-vxlan; export SUBNETIP=`echo $SUBNET | cut -d'/' -f 1`; ip addr add $SUBNETIP/32 dev $IFNAME && ip link set $IF_NAME up\", \"SubnetAddCommand\": \"export SUBNETIP=`echo $SUBNET | cut -d'/' -f 1`; export IFNAME=flannel-vxlan; read VTEP; ip route add $SUBNET nexthop via $SUBNETIP dev $IFNAME onlink && ip neigh replace $SUBNETIP dev $IFNAME lladdr $VTEP && bridge fdb add $VTEP dev $IFNAME self dst $PUBLICIP\" } } ```"
}
] |
{
"category": "Runtime",
"file_name": "extension.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "In July 2022, CubeFS was successfully promoted to a CNCF incubation project. In March 2022, an incubation application was submitted to CNCF. It was discovered that ChubaoFS was difficult to pronounce in English, so it was renamed CubeFS. In August 2020, OPPO joined the open-source project and became a core contributor and main promoter. In June 2019, JD.com donated ChubaoFS to the Cloud Native Computing Foundation (CNCF) and entered the CNCF sandbox project in December 2019."
}
] |
{
"category": "Runtime",
"file_name": "development.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Command line flags override the confd . ``` confd -h ``` ```Text Usage of confd: -app-id string Vault app-id to use with the app-id backend (only used with -backend=vault and auth-type=app-id) -auth-token string Auth bearer token to use -auth-type string Vault auth backend type to use (only used with -backend=vault) -backend string backend to use (default \"etcd\") -basic-auth Use Basic Auth to authenticate (only used with -backend=etcd) -client-ca-keys string client ca keys -client-cert string the client cert -client-key string the client key -confdir string confd conf directory (default \"/etc/confd\") -config-file string the confd config file -interval int backend polling interval (default 600) -keep-stage-file keep staged files -log-level string level which confd should log messages -node value list of backend nodes (default []) -noop only show pending changes -onetime run once and exit -password string the password to authenticate with (only used with vault and etcd backends) -prefix string key path prefix (default \"/\") -scheme string the backend URI scheme for nodes retrieved from DNS SRV records (http or https) (default \"http\") -srv-domain string the name of the resource record -srv-record string the SRV record to search for backends nodes. Example: etcd-client.tcp.example.com -sync-only sync without checkcmd and reloadcmd -table string the name of the DynamoDB table (only used with -backend=dynamodb) -user-id string Vault user-id to use with the app-id backend (only used with -backend=value and auth-type=app-id) -username string the username to authenticate as (only used with vault and etcd backends) -version print version and exit -watch enable watch support ``` The -scheme flag is only used to set the URL scheme for nodes retrieved from DNS SRV records."
}
] |
{
"category": "Runtime",
"file_name": "command-line-flags.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The development of Curve aims at implementing a unified distributed storage system that support different scenarios. High performance block storage is our first priority, and scenarios like nearline/online object storage, file storage and big data will be supported step by step in our future works. As a crucial service provider in the I/O path, Chunkserver should provide high I/O performance, storage efficiency, availability and reliability. Chunkserver is designed referring to the structure of GFS, based on and encapsulated ext4 file system, serves as the data node of CURVE and provides interface of data I/O (read, write, snapshot etc.) and nodes management (fetching Chunkserver status, collecting metrical data, modifying runtime parameters of different modules etc.). Interacts with Client and MetaServer, Chunkserver execute I/O requests from client efficiently, and responds to the MetaServer requests like scheduling and allocation, data migration and recovering, I/O Qos and status inquiries or client controlling, and maintain data reliability and the consensus of replicas. As a service provider in the I/O path of unified storage system, especially a high performance distributed block storage, Chunkserver is required to perform well in performance, availability, reliability and maintainability, which can be reflected in characteristics below: High concurrency: High IOPS requests are supported. Low latency: Average client I/O should be lower than 1ms. High fault tolerance: Executing data migration and recovering under MDS's scheduling, during which also maintaining a high I/O performance. Also, Chunkserver should tolerant disk failures and guarantee the reliability of the data in cluster. Snapshot: Support snapshot in chunk level. Hot upgrade: Cluster should be able to upgrade without aborting I/O service of Chunkserver. 2.1.1 Design Concepts Now let's focus on some important conceptsin the design of Chunkserver. Management domain: The management domain of a Chunkserver is a disk, and there are more than one disk on a server. Every disk correnponds to a Chunkserver instance, and every Chunkserver instance can be reflected as a user mode process. We have implemented a process level isolation between Chunkservers, which means the running of the server will not be affected by the failure of single Chunkserver, and the influence of the performance issue caused by single disk unstable or software defect will be limited in that single disk. Copyset: Copyset serves as the unit of replication and consensus management of I/O data blocks. In order to balance the data recovering time and for load balancing and failure isolation, copysets are distributed evenly in every disks. For nodes in a copyset, one of them will be elected as the leader, and the others are followers. When writing data to data nodes, the client will send the data to the leader node according to the copyset info it has, and data replicas will be sent to followers by the leader. After receiving the acknowledgement from followers, leader will return the result to client. Asynchronous I/O: On the I/O path from chunkserver receiving I/O request and returning the result, there are time consuming operations like network data receiving and dispatching, copy replication and writing transaction. For synchronous I/O, these operations will be serialized and thus takes a lot of time. By making it asynchronous, we can improve the performance of chunkserver a lot since the concurrency of the I/O is"
},
{
"data": "Thread pool: A huge amount of threads are required for the concurrency operations on the I/O path since there are lots of copysets and asynchronous I/O operations. By managing the life cycle of these threads using the thread pool, we can free those I/O operations from thread management, and also reuse the life cycles of these threads, offering aCPU architecture friendly environment, thus improve the performance. Replication: Most of the replicas will replicate synchronously, but few of them will do it in an asynchronous way considering the coexistence of quick and slow disk. Figure 1 shows the general structure of Chunkserver. The RPC network layer receive RPC message from Client, MDS and other Chunkservers, including I/O request, controlling request and event notification. Message received will be dispatched to different sub-services in RPC service layer for routing by RPC network layer, and different sub-service corresponds to different sub-systems in Chunkserver, including I/O processing, snapshot, recovering, data verification, configuration, monitoring metrics and Chunkserver controller. In each sub-system there's a set of interfaces, offering services by calling their own functions within. Sub-systems can also coordinate with each other using interfaces of each other. <p align=\"center\"> <img src=\"../images/chunkserverstructure.png\"><br> <font size=3> Figure 1: Chunkserver structure</font> </p> RPC Service Layer Provide RPC service of Chunkserver, including: ChunkService ChunkService is the core service of Chunkserver that process I/O relative tasks, including the reading/writing(create when writing)/deletion of chunks, reading/deleting chunk snapshot, fetching chunk info and the creation and cloning of chunks. CliService Provides some Raft configuration changing RPC services, including AddPeerRemovePeerChangePeerGetLeaderChangeLeader and ResetPeer. CliService is implemented by calling Braft configuration changing interface. CopySetService Creates CopysetNode (Raft node) by calling CopySetNodeManager. During the initialization of Chunkserver, CopySetService is called by MDS to create Raft nodes on each Chunkserver and run Raft node services. RaftService Raft services built in Braft, providing RPC services and will be started during the initialization of Chunkserver. Internal Service Layer CopysetNodeManager Managing the creation and deletion of CopysetNode (RaftNode). HeartBeat Heartbeat module reports Chunkserver status to MDS regularly for liveness probing and error detection. Also, status like disk utilization are included. CloneManager CloneManage is for the cloning service, with a thread pool inside. This module is mainly for completing the data of cloned chunks asynchronously. For more details about cloning, please refer to . CopysetNode An encapsulation of Raft state machine, served as a core component of Chunkserver. ChunkOpRequest ChunkOpRequest encapsulates the process procedures of Chunkserver to arriving I/O requests. ChunkServerMetric ChunkServerMetric is an encapsulation of every metrics of Chunkserver. This module collects internal metrics of Chunkserver for Prometheus and Graphna to display, which help monitoring and diagnosing issues of Chunkserver. ConcurrentApplyModule Concurrency control layer for I/O requests of Chunkserver. This module will do hashing by chunks to requests from upper layer for the concurrent execution of requests from different chunks. 
DataStore DataStore is the encapsulation of the data-writing process, including chunk file creation and deletion, the read/write of chunk data, the COW of chunks and the management of chunk cloning. LocalFileSystermAdaptor LocalFileSystermAdaptor is an abstraction of the file system underneath. This layer adapts to different file systems; currently, an interface for ext4 is provided."
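As a rough illustration of the ConcurrentApplyModule idea described above (Curve's chunkserver is written in C++, so this Go sketch with made-up names is purely conceptual), requests can be hashed by chunk onto per-worker queues so that different chunks proceed in parallel while each chunk's requests stay ordered:

```go
// Conceptual sketch only: Curve's chunkserver is C++; names here are hypothetical.
package applysketch

type chunkRequest struct {
	chunkID uint64
	apply   func()
}

// concurrentApplier hashes requests by chunk ID onto a fixed set of worker
// queues: different chunks run concurrently, the same chunk stays ordered.
type concurrentApplier struct {
	queues []chan chunkRequest
}

func newConcurrentApplier(workers, depth int) *concurrentApplier {
	a := &concurrentApplier{queues: make([]chan chunkRequest, workers)}
	for i := range a.queues {
		q := make(chan chunkRequest, depth)
		a.queues[i] = q
		go func() {
			for req := range q {
				req.apply() // one goroutine per queue => ordered per chunk
			}
		}()
	}
	return a
}

func (a *concurrentApplier) submit(req chunkRequest) {
	a.queues[req.chunkID%uint64(len(a.queues))] <- req
}
```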
},
{
"data": "In order to join the CURVE cluster, a Chunkserver has to get its own ID and token as the only legitimate identifier from MDS by registration and use them as the identifier in their following communications. Chunkserver only register once. At its first time joining the cluster, it sends register request to MDS and as a result, a new info about this Chunkserver and a scheduling target will be added in MDS. After getting the ID and token from MDS on the heartbeat reply, registration is no longer needed even when the Chunkserver restart. Here's what the Chunkserver will do when starting: Detect the Chunkserver ID on the disk corresponding to the local Chunkserver, if it exists, it means that it has been registered, and skip the next steps directly. Construct ChunkServerRegistRequest message and send it to MDS using RegistChunkServer interface. If the response times out, it may mean that the MDS is temporarily unavailable, go back to step 2 and try again; if the statusCode means invalid registration, etc., chunkserver exits. Persist the chunkserverID and token to the disk, and the registration is complete. The data format of persisted data will varies according to the file system it stored in. For ext4, data are stored in file chunkserver.dat under the root data directory. When storing, a checksum for Chunkserver ID, token and data epoch will be calculated and stored in a JSON data structure called ChunkserverPersistentData in chunkserver.dat. When reading, the checksum in ChunkserverPersistentData will first be inspected in case there's any data corruption, and make sure that the version is supported. During restarting, the Chunkserver should reconstruct the copysets allocated before in order to reply to the copyset request from Client and MDS. When copysets are created and initialized, corresponding sub-directory will be created under data directory, which provides us the list of allocated copysets. Braft has already persisted Raft configuration, if same copyset data directory is provided, Braft can resume Raft configuration automatically from persisted data. Thus, persisted copyset data can be fetched from Chunkserver local storage. By scanning sub-directory \"LogicPoolId/CopysetId\" when starting, Chunkserver can resolve the pool ID and copyset ID of every copyset since these data are encoded in the name of the copyset data under the directory. Using empty Raft configuration, we can reconstruct the copyset instances. After the reconstruction, since same directory is used, Braft can recover Raft configurations from snapshot metadata, then load snapshot and log and finish the initialization. Live messages are required by MDS for monitoring the liveness of Chunkservers. Also, MDS create instruction for Chunkserver and copyset according to the status and statistical data reported. Chunkserver completes the above functions in the form of heartbeats. Through periodic heartbeat messages, it updates the information of Chunkserver and Copyset, feeds back the command execution results of the previous heartbeat, and accepts, parses and executes new heartbeat response messages. Chunkserver regularly updates status information of Chunkserver, Copyset and disk, configuration change command execution status and other information to MDS through heartbeat messages. After MDS receives the request, it adds the new copyset configuration change command to the message response according to the scheduling logic. Figure 2 shows the entire procedure of heartbeat. 
<p align=\"center\"> <img src=\"../images/chunkserver_heartbeat.png\"><br> <font size=3> Figure 2: Heartbeat procedure</font> </p> CopysetNode encapsulated the state machine of RaftNode, and is the core module of Chunkserver. Figure 3 shows the architecture of this module. <p align=\"center\"> <img"
},
{
"data": "<font size=3> Figure 3: CopysetNode structure</font> </p> In figure 3, you can see components below: Node Interface Node Interface is the interface of CURVE-Raft that exposed to the users for submitting their requests, and convert the request to the propose for Raft state machine. Raft Layer This layer is the key of this module, including components: 1NodeImpl NodeImpl receives requests from other modules, and submit to state machine using the interface of RawNode. Also, there will be aexecution queue for Propose for driving the Raft state machine and output the result of state machine, a Ready structure, for RaftNode. Also, NodeImpl implements the Node interface of Raft, and deals with event Tick timeout and task InstallSnapshot. 2RaftNode RaftNode receives and processes structure Ready from the state machine using an execute queue inside. From the Ready structure, RaftNode can resolve many data and dispatch the data to different components, including: message to other peers, log entries for logging and committed entries from request before for FSMCaller to apply. 3FSMCaller FSMCaller is for applying the commits when receive committed entries from RaftNode. It includes corresponding batch for processing requests that need to be applied. Persistence and Network Interface 1Snapshot Storage: Responsible for snapshot metadata and provides snapshot read/write interface. 2LogManager: Manager of Op log, provides interface for the reading and writing of Op log, the batch control of logs writing and the caching of Op log. The actual operations of reading and writing log are implemented by LogStorage components below. 3Raft Meta Storage: For the persistence of Raft metadata (term, vote). 4Transport: Managing the connection between peers in a copyset, providing interface for message sending and peers connection managing. ChunkFilePool is located in DataStore Level, using local file system based on ext4. Since there will be considerable I/O amplification during the writing, a certain space for files will be pre-allocated when creating chunk files by calling fallocate. But if only use fallocate metadata will still be change when writing new blocks and thus amplification still exist. What CURVE did is using files already written. Chunk file and snapshot of Chunkserver have fixed size, thus we can use fallocate to pre-allocate some spaces and write empty data on it. When chunk files or snapshot files are created, we can just fetch files on this file 'pool' and rename. When deleting, we do it reversely by renaming. These pre-allocated files are what we call the chunkfilepool. Here we show the directory structure of Chunkserver: ``` chunkserver/ copyset_id/ data/ log/ meta/ snapshot/ ... chunkfilepool/ chunkfilexxxtmp chunkfilexxxtmp ... ``` During the initialization of the system, we pre-allocate some chunk files, of which the amount is adjustable. When the system restart, we traverse the whole chunk file pool directory, and collect data of temporary chunk files that are still not been allocated into a vector for following inquiries. Interface GetChunk will fetch a free chunk from chunk file pool, and if there isn't any, files will be created synchronously. When the number of chunks available is low, an asynchronous task will be initiated for creating files. The file allocation is based on the rename operation of the operation system, of which the atomicity is guaranteed, and thus guarantee the atomicity of file creation. 
When a chunk is deleted, the chunk is recycled and reset, avoiding reallocation."
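A conceptual sketch of the chunk file pool allocation path described above (again in Go with hypothetical names and paths, not Curve's C++ implementation) might rely on the atomicity of rename like this:

```go
// Conceptual sketch of the chunk file pool. Pre-formatted files are claimed
// and recycled with rename(2), whose atomicity makes allocation and deletion safe.
package poolsketch

import (
	"errors"
	"os"
	"path/filepath"
)

// getChunk claims a free pre-allocated file from the pool for a new chunk.
func getChunk(poolDir, chunkPath string, free []string) ([]string, error) {
	if len(free) == 0 {
		// In the real design an asynchronous task refills the pool; a caller
		// could also fall back to creating the file synchronously here.
		return free, errors.New("chunk file pool exhausted")
	}
	candidate := filepath.Join(poolDir, free[len(free)-1])
	if err := os.Rename(candidate, chunkPath); err != nil {
		return free, err
	}
	return free[:len(free)-1], nil
}

// recycleChunk returns a deleted chunk's file to the pool instead of unlinking it.
func recycleChunk(chunkPath, poolDir, freeName string) error {
	return os.Rename(chunkPath, filepath.Join(poolDir, freeName))
}
```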
}
] |
{
"category": "Runtime",
"file_name": "chunkserver_design_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Restic Integration\" layout: docs Velero supports backing up and restoring Kubernetes volumes using a free open-source backup tool called to understand if it fits your use case. Velero allows you to take snapshots of persistent volumes as part of your backups if youre using one of the supported cloud providers block storage offerings (Amazon EBS Volumes, Azure Managed Disks, Google Persistent Disks). It also provides a plugin model that enables anyone to implement additional object and block storage backends, outside the main Velero repository. Velero's Restic integration was added to give you an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume. This integration is an addition to Velero's capabilities, not a replacement for existing functionality. If you're running on AWS, and taking EBS snapshots as part of your regular Velero backups, there's no need to switch to using Restic. However, if you need a volume snapshot plugin for your storage platform, or if you're using EFS, AzureFile, NFS, emptyDir, local, or any other volume type that doesn't have a native snapshot concept, Restic might be for you. Restic is not tied to a specific storage platform, which means that this integration also paves the way for future work to enable cross-volume-type data migrations. NOTE: hostPath volumes are not supported, but the is supported. Understand how Velero performs . the latest Velero release. Kubernetes v1.16.0 and later. Velero's Restic integration requires the Kubernetes . To install Restic, use the `--use-restic` flag in the `velero install` command. See the for more details on other flags for the install command. ``` velero install --use-restic ``` When using Restic on a storage provider that doesn't have Velero support for snapshots, the `--use-volume-snapshots=false` flag prevents an unused `VolumeSnapshotLocation` from being created on installation. Velero handles the creation of the restic repo prefix for Amazon, Azure, and GCP plugins, if you are using a different , then you will need to make sure the `resticRepoPrefix` is set in the . The value for `resticRepoPrefix` should be the cloud storage URL where all namespace restic repos will be created. Velero creates one restic repo per namespace. For example, if backing up 2 namespaces, namespace1 and namespace2, using restic on AWS, the `resticRepoPrefix` would be something like `s3:s3-us-west-2.amazonaws.com/bucket/restic` and the full restic repo path for namespace1 would be `s3:s3-us-west-2.amazonaws.com/bucket/restic/ns1` and for namespace2 would be `s3:s3-us-west-2.amazonaws.com/bucket/restic/ns2`. There may be additional installation steps depending on the cloud provider plugin you are using. You should refer to the for the must up to date information. After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the Restic DaemonSet spec. The steps in this section are only needed if you are installing on RancherOS, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure. RancherOS Update the host path for volumes in the Restic DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`. ```yaml hostPath: path: /var/lib/kubelet/pods ``` to ```yaml hostPath: path: /opt/rke/var/lib/kubelet/pods ``` OpenShift To mount the correct hostpath to pods volumes, run the Restic pod in `privileged` mode. 
Add the `velero` ServiceAccount to the `privileged` SCC: ``` $ oc adm policy add-scc-to-user privileged -z velero -n velero ``` For OpenShift version >= `4.1`,"
},
{
"data": "modify the DaemonSet yaml to request a privileged mode: ```diff @@ -67,3 +67,5 @@ spec: value: /credentials/cloud name: VELEROSCRATCHDIR value: /scratch securityContext: privileged: true ``` or ```shell oc patch ds/restic \\ --namespace velero \\ --type json \\ -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/securityContext\",\"value\": { \"privileged\": true}}]' ``` For OpenShift version < `4.1`, modify the DaemonSet yaml to request a privileged mode and mount the correct hostpath to pods volumes. ```diff @@ -35,7 +35,7 @@ spec: secretName: cloud-credentials name: host-pods hostPath: path: /var/lib/kubelet/pods path: /var/lib/origin/openshift.local.volumes/pods name: scratch emptyDir: {} containers: @@ -67,3 +67,5 @@ spec: value: /credentials/cloud name: VELEROSCRATCHDIR value: /scratch securityContext: privileged: true ``` or ```shell oc patch ds/restic \\ --namespace velero \\ --type json \\ -p '[{\"op\":\"add\",\"path\":\"/spec/template/spec/containers/0/securityContext\",\"value\": { \"privileged\": true}}]' oc patch ds/restic \\ --namespace velero \\ --type json \\ -p '[{\"op\":\"replace\",\"path\":\"/spec/template/spec/volumes/0/hostPath\",\"value\": { \"path\": \"/var/lib/origin/openshift.local.volumes/pods\"}}]' ``` If Restic is not running in a privileged mode, it will not be able to access pods volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can to relax the security in your cluster so that Restic pods are allowed to use the hostPath volume plug-in without granting them access to the `privileged` SCC. By default a userland openshift namespace will not schedule pods on all nodes in the cluster. To schedule on all nodes the namespace needs an annotation: ``` oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" ``` This should be done before velero installation. Or the ds needs to be deleted and recreated: ``` oc get ds restic -o yaml -n <velero namespace> > ds.yaml oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" oc create -n <velero namespace> -f ds.yaml ``` VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS) You need to enable the `Allow Privileged` option in your plan configuration so that Restic is able to mount the hostpath. The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods` ```yaml hostPath: path: /var/vcap/data/kubelet/pods ``` Microsoft Azure If you are using , you need to add `nouser_xattr` to your storage class's `mountOptions`. See for more details. You can use the following command to patch the storage class: ```bash kubectl patch storageclass/<YOURAZUREFILESTORAGECLASS_NAME> \\ --type json \\ --patch '[{\"op\":\"add\",\"path\":\"/mountOptions/-\",\"value\":\"nouser_xattr\"}]' ``` Velero supports two approaches of discovering pod volumes that need to be backed up using Restic: Opt-in approach: Where every pod containing a volume to be backed up using Restic must be annotated with the volume's name. Opt-out approach: Where all pod volumes are backed up using Restic, with the ability to opt-out any volumes that should not be backed up. The following sections provide more details on the two approaches. 
In this approach, Velero will back up all pod volumes using Restic with the exception of: Volumes mounting the default service account token, Kubernetes Secrets, and ConfigMaps Hostpath volumes It is possible to exclude volumes from being backed up using the `backup.velero.io/backup-volumes-excludes` annotation on the pod. Instructions to back up using this approach are as follows: Run the following command on each pod that contains volumes that should not be backed up using Restic ```bash kubectl -n YOURPODNAMESPACE annotate pod/YOURPODNAME backup.velero.io/backup-volumes-excludes=YOURVOLUMENAME1,YOURVOLUMENAME2,... ``` where the volume names are the names of the volumes in the pod spec. For example, in the following pod: ```yaml apiVersion: v1 kind: Pod metadata: name: app1 namespace: sample spec: containers: image:"
},
{
"data": "name: test-webserver volumeMounts: name: pvc1-vm mountPath: /volume-1 name: pvc2-vm mountPath: /volume-2 volumes: name: pvc1-vm persistentVolumeClaim: claimName: pvc1 name: pvc2-vm claimName: pvc2 ``` to exclude Restic backup of volume `pvc1-vm`, you would run: ```bash kubectl -n sample annotate pod/app1 backup.velero.io/backup-volumes-excludes=pvc1-vm ``` Take a Velero backup: ```bash velero backup create BACKUPNAME --default-volumes-to-restic OTHEROPTIONS ``` The above steps uses the opt-out approach on a per backup basis. Alternatively, this behavior may be enabled on all velero backups running the `velero install` command with the `--default-volumes-to-restic` flag. Refer for details. When the backup completes, view information about the backups: ```bash velero backup describe YOURBACKUPNAME ``` ```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOURBACKUPNAME -o yaml ``` Velero, by default, uses this approach to discover pod volumes that need to be backed up using Restic. Every pod containing a volume to be backed up using Restic must be annotated with the volume's name using the `backup.velero.io/backup-volumes` annotation. Instructions to back up using this approach are as follows: Run the following for each pod that contains a volume to back up: ```bash kubectl -n YOURPODNAMESPACE annotate pod/YOURPODNAME backup.velero.io/backup-volumes=YOURVOLUMENAME1,YOURVOLUMENAME2,... ``` where the volume names are the names of the volumes in the pod spec. For example, for the following pod: ```yaml apiVersion: v1 kind: Pod metadata: name: sample namespace: foo spec: containers: image: k8s.gcr.io/test-webserver name: test-webserver volumeMounts: name: pvc-volume mountPath: /volume-1 name: emptydir-volume mountPath: /volume-2 volumes: name: pvc-volume persistentVolumeClaim: claimName: test-volume-claim name: emptydir-volume emptyDir: {} ``` You'd run: ```bash kubectl -n foo annotate pod/sample backup.velero.io/backup-volumes=pvc-volume,emptydir-volume ``` This annotation can also be provided in a pod template spec if you use a controller to manage your pods. Take a Velero backup: ```bash velero backup create NAME OPTIONS... ``` When the backup completes, view information about the backups: ```bash velero backup describe YOURBACKUPNAME ``` ```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=YOURBACKUPNAME -o yaml ``` Regardless of how volumes are discovered for backup using Restic, the process of restoring remains the same. Restore from your Velero backup: ```bash velero restore create --from-backup BACKUP_NAME OPTIONS... ``` When the restore completes, view information about your pod volume restores: ```bash velero restore describe YOURRESTORENAME ``` ```bash kubectl -n velero get podvolumerestores -l velero.io/restore-name=YOURRESTORENAME -o yaml ``` `hostPath` volumes are not supported. are supported. Those of you familiar with may know that it encrypts all of its data. Velero uses a static, common encryption key for all Restic repositories it creates. This means that anyone who has access to your bucket can decrypt your Restic backup data. Make sure that you limit access to the Restic bucket appropriately. An incremental backup chain will be maintained across pod reschedules for PVCs. 
However, for pod volumes that are not PVCs, such as `emptyDir` volumes, when a pod is deleted/recreated (for example, by a ReplicaSet/Deployment), the next backup of those volumes will be full rather than incremental, because the pod volume's lifecycle is assumed to be defined by its pod. Restic scans each file in a single thread. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual difference is small. If you plan to use Velero's Restic integration to back up 100GB of data or more, you may need to adjust the resource limits to make sure backups complete successfully. Velero's Restic integration backs up data from volumes by accessing the node's filesystem, on which the pod is running. For this reason, Velero's Restic integration can only back up volumes that are mounted by a pod and not directly from the"
},
{
"data": "For orphan PVC/PV pairs (without running pods), some Velero users overcame this limitation running a staging pod (i.e. a busybox or alpine container with an infinite sleep) to mount these PVC/PV pairs prior taking a Velero backup. Velero uses a helper init container when performing a Restic restore. By default, the image for this container is `velero/velero-restic-restore-helper:<VERSION>`, where `VERSION` matches the version/tag of the main Velero image. You can customize the image that is used for this helper by creating a ConfigMap in the Velero namespace with the alternate image. In addition, you can customize the resource requirements for the init container, should you need. The ConfigMap must look like the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: restic-restore-action-config namespace: velero labels: velero.io/plugin-config: \"\" velero.io/restic: RestoreItemAction data: image: myregistry.io/my-custom-helper-image[:OPTIONAL_TAG] cpuRequest: 200m memRequest: 128Mi cpuLimit: 200m memLimit: 128Mi secCtxRunAsUser: 1001 secCtxRunAsGroup: 999 secCtxAllowPrivilegeEscalation: false secCtx: | capabilities: drop: ALL add: [] allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 1001 runAsGroup: 999 ``` Run the following checks: Are your Velero server and daemonset pods running? ```bash kubectl get pods -n velero ``` Does your Restic repository exist, and is it ready? ```bash velero restic repo get velero restic repo get REPO_NAME -o yaml ``` Are there any errors in your Velero backup/restore? ```bash velero backup describe BACKUP_NAME velero backup logs BACKUP_NAME velero restore describe RESTORE_NAME velero restore logs RESTORE_NAME ``` What is the status of your pod volume backups/restores? ```bash kubectl -n velero get podvolumebackups -l velero.io/backup-name=BACKUP_NAME -o yaml kubectl -n velero get podvolumerestores -l velero.io/restore-name=RESTORE_NAME -o yaml ``` Is there any useful information in the Velero server or daemon pod logs? ```bash kubectl -n velero logs deploy/velero kubectl -n velero logs DAEMONPODNAME ``` NOTE: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument to the container command in the deployment/daemonset pod template spec. Velero has three custom resource definitions and associated controllers: `ResticRepository` - represents/manages the lifecycle of Velero's . Velero creates a Restic repository per namespace when the first Restic backup for a namespace is requested. The controller for this custom resource executes Restic repository lifecycle commands -- `restic init`, `restic check`, and `restic prune`. You can see information about your Velero's Restic repositories by running `velero restic repo get`. `PodVolumeBackup` - represents a Restic backup of a volume in a pod. The main Velero backup process creates one or more of these when it finds an annotated pod. Each node in the cluster runs a controller for this resource (in a daemonset) that handles the `PodVolumeBackups` for pods on that node. The controller executes `restic backup` commands to backup pod volume data. `PodVolumeRestore` - represents a Restic restore of a pod volume. The main Velero restore process creates one or more of these when it encounters a pod that has associated Restic backups. Each node in the cluster runs a controller for this resource (in the same daemonset as above) that handles the `PodVolumeRestores` for pods on that node. 
The controller executes `restic restore` commands to restore pod volume data. Based on configuration, the main Velero backup process uses the opt-in or opt-out approach to check each pod that it's backing up for the volumes to be backed up using Restic."
},
{
"data": "When found, Velero first ensures a Restic repository exists for the pod's namespace, by: checking if a `ResticRepository` custom resource already exists if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it Velero then creates a `PodVolumeBackup` custom resource per volume listed in the pod annotation The main Velero process now waits for the `PodVolumeBackup` resources to complete or fail Meanwhile, each `PodVolumeBackup` is handled by the controller on the appropriate node, which: has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data finds the pod volume's subdirectory within the above volume runs `restic backup` updates the status of the custom resource to `Completed` or `Failed` As each `PodVolumeBackup` finishes, the main Velero process adds it to the Velero backup in a file named `<backup-name>-podvolumebackups.json.gz`. This file gets uploaded to object storage alongside the backup tarball. It will be used for restores, as seen in the next section. The main Velero restore process checks each existing `PodVolumeBackup` custom resource in the cluster to backup from. For each `PodVolumeBackup` found, Velero first ensures a Restic repository exists for the pod's namespace, by: checking if a `ResticRepository` custom resource already exists if not, creating a new one, and waiting for the `ResticRepository` controller to init/check it (note that in this case, the actual repository should already exist in object storage, so the Velero controller will simply check it for integrity) Velero adds an init container to the pod, whose job is to wait for all Restic restores for the pod to complete (more on this shortly) Velero creates the pod, with the added init container, by submitting it to the Kubernetes API. Then, the Kubernetes scheduler schedules this pod to a worker node, and the pod must be in a running state. If the pod fails to start for some reason (i.e. lack of cluster resources), the Restic restore will not be done. Velero creates a `PodVolumeRestore` custom resource for each volume to be restored in the pod The main Velero process now waits for each `PodVolumeRestore` resource to complete or fail Meanwhile, each `PodVolumeRestore` is handled by the controller on the appropriate node, which: has a hostPath volume mount of `/var/lib/kubelet/pods` to access the pod volume data waits for the pod to be running the init container finds the pod volume's subdirectory within the above volume runs `restic restore` on success, writes a file into the pod volume, in a `.velero` subdirectory, whose name is the UID of the Velero restore that this pod volume restore is for updates the status of the custom resource to `Completed` or `Failed` The init container that was added to the pod is running a process that waits until it finds a file within each restored volume, under `.velero`, whose name is the UID of the Velero restore being run Once all such files are found, the init container's process terminates successfully and the pod moves on to running other init containers/the main containers. Velero won't restore a resource if a that resource is scaled to 0 and already exists in the cluster. If Velero restored the requested pods in this scenario, the Kubernetes reconciliation loops that manage resources would delete the running pods because its scaled to be 0. Velero will be able to restore once the resources is scaled up, and the pods are created and remain running. 
Velero does not provide a mechanism to detect persistent volume claims that are missing the Restic backup annotation. To solve this, a controller was written by Thomann Bits&Beats:"
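Returning to the restore flow above, the init container's wait loop can be pictured with a minimal sketch like the following; it is not the actual velero-restic-restore-helper, and the way the restore UID and volume paths are passed in is assumed:

```go
// Sketch of the wait logic described above; not the actual
// velero-restic-restore-helper. How the restore UID and volume mount paths
// reach the process is an assumption here (env var and args).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	restoreUID := os.Getenv("RESTORE_UID") // assumption: UID passed via env
	volumes := os.Args[1:]                 // assumption: volume mount paths as args

	for _, vol := range volumes {
		marker := filepath.Join(vol, ".velero", restoreUID)
		for {
			if _, err := os.Stat(marker); err == nil {
				break // restic restore for this volume has finished
			}
			time.Sleep(time.Second)
		}
	}
	fmt.Println("all restic restores complete; main containers can start")
}
```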
}
] |
{
"category": "Runtime",
"file_name": "restic.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "To understand in plain terms, let us take an example where we end up writing the same Go packages, repeatedly, to do the same task at different levels in different Go projects under the same organization. We are all familiar with the custom logger package in the different Go projects. What if, the custom logger package is the same across the organization and can be reused by simply importing it, then this custom logger package is the perfect fit for the kit project. The advantages of this approach go beyond avoiding duplicate code, improved readability of the projects in an organization, to savings in terms of time and cost as well :-) If you go through , you will notice that the kit project is characterized by usability, purpose, and portability. In this blog, we will discuss how we have refactored the code to use the Kit Project pattern for . OpenEBS being a container native project is delivered via a set of containers. For instance, with OpenEBS 0.3 release we have the following active maya related projects: openebs/maya aka maya-cli : is the command-line interface like kubectl for interacting with Maya services for performing storage operations. openebs/mayaserver : or m-apiserver abstracts a generic volume API that can be used to provision OpenEBS Disks using containers launched using the CO like K8s, nomad etc., openebs/openebs-k8s-provisioner : is the K8s controller for dynamically creating OpenEBS PVs With these projects, we are already seeing how code gets duplicated when each of these projects are independently developed. For example maya-cli and openebs-k8s-provisioner both need to interact with maya-apiserver, which resulted in maya-apiserver-client code being written in maya-cli and openebs-k8s-provisioner. Similarly, openebs-k8s-provisioner and maya-apiserver have duplicated code w.r.t to accessing the K8s services. To avoid this duplicity of code using the kit project, we are transforming openebs/maya into a Kit project for application projects like maya-apiserver, openebs-k8s-provisioner and many more coming up in the future. openebs/maya contains all the Kubernetes & nomad APIs, common utilities, etc. needed for development of maya-apiserver and maya-storage-bot. Shortly, we are trying to push our custom libraries to maya, so that, it will become a promising Go kit project for OpenEBS"
},
{
"data": "Let us now see, how maya (as a kit project) adheres to the package oriented design principles: Usability We moved common packages such as orchprovider, types, pkg to maya from maya-apiserver. These packages are very generic and can be used in most of the Go projects in the OpenEBS organization. Brief details about new packages in maya are as follows: orchprovider : orchprovider contains packages of different orchestrators such as Kubernetes and nomad. types: types provide all the generic types related to orchestrator. pkg: pkg contains packages like nethelper, util, etc. volumes: volumes contain packages related to volume provisioner and profiles. Purpose While the packages in the kit project are categorized as per the functionality, the naming convention should ideally provide the reader with the information on what the package provides. So, the packages (in a kit project) must provide, not contain. In maya, we have packages like types, orchprovider, volumes, etc. The name of these packages suggests the functionality provided by them. Portability Portability is an important factor for packages in a kit project. Hence, we are making maya in such a way that it will be easy to import and use in any Go project. Packages in maya are not a single point of dependency and all the packages are independent of each other. For example, types directory contains versioned Kubernetes and Nomad packages. These packages are simply importable to any project to use Kubernetes and Nomad APIs. Maya-apiserver uses maya as a kit project. Maya-apiserver exposes OpenEBS operations in form of REST APIs. This allows multiple clients e.g. volume-related plugins to consume OpenEBS storage operations exposed by maya-apiserver. Maya-apiserver will use volume provisioner as well as orchestration provider modules from maya. Maya-apiserver will always have HTTP endpoints to do OpenEBS operations. Similarly, openebs-k8s-provisioner will use the maya-kit project Kubernetes API to query for details about the storage classes, etc. Another usage is of the maya-kit project, maya-apiserver client that is accessed by maya-cli as well as the openebs-k8s-provisioner to talk to maya-apiserver. Go kit project should contain packages that are usable, purposeful and portable. Go Kit projects will improve the efficiency of the organization at both human and code level."
}
] |
{
"category": "Runtime",
"file_name": "gokit.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide is useful if you intend to contribute on containerd. Thanks for your effort. Every contribution is very appreciated. This doc includes: To build the `containerd` daemon, and the `ctr` simple test client, the following build system dependencies are required: Go 1.19.x or above Protoc 3.x compiler and headers (download at the ) Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency. Note: On macOS, you need a third party runtime to run containers on containerd First you need to setup your Go development environment. You can follow this guideline and at the end you have `go` command in your `PATH`. You need `git` to checkout the source code: ```sh git clone https://github.com/containerd/containerd ``` For proper results, install the `protoc` release into `/usr/local` on your build system. For example, the following commands will download and install the 3.11.4 release for a 64-bit Linux host: ```sh wget -c https://github.com/protocolbuffers/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local ``` To enable optional snapshotter, you should have the headers from the Linux kernel 4.12 or later. The dependency on the kernel headers only affects users building containerd from source. Users on older kernels may opt to not compile the btrfs support (see `BUILDTAGS=no_btrfs` below), or to provide headers from a newer kernel. Note The dependency on the Linux kernel headers 4.12 was introduced in containerd 1.7.0-beta.4. containerd 1.6 has different set of dependencies for enabling btrfs. containerd 1.6 users should refer to https://github.com/containerd/containerd/blob/release/1.6/BUILDING.md#build-the-development-environment At this point you are ready to build `containerd` yourself! Runc is the default container runtime used by `containerd` and is required to run containerd. While it is okay to download a `runc` binary and install that on the system, sometimes it is necessary to build runc directly when working with container runtime development. Make sure to follow the guidelines for versioning in for the best results. Note: Runc only supports Linux `containerd` uses `make` to create a repeatable build flow. It means that you can run: ```sh cd containerd make ``` This is going to build all the project binaries in the `./bin/` directory. You can move them in your global path, `/usr/local/bin` with: ```sh sudo make install ``` The install prefix can be changed by passing the `PREFIX` variable (defaults to `/usr/local`). Note: if you set one of these vars, set them to the same values on all make stages (build as well as install). If you want to prepend an additional prefix on actual installation (eg. packaging or chroot install), you can pass it via `DESTDIR` variable: ```sh sudo make install DESTDIR=/tmp/install-x973234/ ``` The above command installs the `containerd` binary to `/tmp/install-x973234/usr/local/bin/containerd` The current `DESTDIR` convention is supported since containerd v1.6. Older releases was using `DESTDIR` for a different purpose that is similar to"
},
{
"data": "When making any changes to the gRPC API, you can use the installed `protoc` compiler to regenerate the API generated code packages with: ```sh make generate ``` Note: Several build tags are currently available: `no_cri`: A build tag disables building Kubernetes support into containerd. See for build tags of CRI plugin. snapshotters (alphabetical order) `no_aufs`: A build tag disables building the aufs snapshot driver. `no_btrfs`: A build tag disables building the Btrfs snapshot driver. `no_devmapper`: A build tag disables building the device mapper snapshot driver. `no_zfs`: A build tag disables building the ZFS snapshot driver. For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the binaries Makefile target will disable the btrfs driver within the containerd Go build. Vendoring of external imports uses the . You need to use `go mod` command to modify the dependencies. After modifition, you should run `go mod tidy` and `go mod vendor` to ensure the `go.mod`, `go.sum` files and `vendor` directory are up to date. Changes to these files should become a single commit for a PR which relies on vendored updates. Please refer to for the currently supported version of `runc` that is used by containerd. Note: On macOS, the containerd daemon can be built and run natively. However, as stated above, runc only supports linux. You can build static binaries by providing a few variables to `make`: ```sh make STATIC=1 ``` Note: static build is discouraged static containerd binary does not support loading shared object plugins (`*.so`) static build binaries are not position-independent The following instructions assume you are at the parent directory of containerd source directory. You can build `containerd` via a Linux-based Docker container. You can build an image from this `Dockerfile`: ```dockerfile FROM golang ``` Let's suppose that you built an image called `containerd/build`. From the containerd source root directory you can run the following command: ```sh docker run -it \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts `containerd` repository You are now ready to : ```sh make && make install ``` To have complete core container runtime, you will need both `containerd` and `runc`. It is possible to build both of these via Docker container. You can use `git` to checkout `runc`: ```sh git clone https://github.com/opencontainers/runc ``` We can build an image from this `Dockerfile`: ```sh FROM golang RUN apt-get update && \\ apt-get install -y libseccomp-dev ``` In our Docker container we will build `runc` build, which includes , , and support. Seccomp support in runc requires `libseccomp-dev` as a dependency (AppArmor and SELinux support do not require external libraries at build time). Refer to in the docs directory to for details about building runc, and to learn about supported versions of `runc` as used by containerd. Let's suppose you build an image called `containerd/build` from the above Dockerfile. You can run the following command: ```sh docker run -it --privileged \\ -v /var/lib/containerd \\ -v ${PWD}/runc:/go/src/github.com/opencontainers/runc \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts both `runc` and `containerd` repositories in our Docker container. 
From within our Docker container, let's build `containerd`: ```sh cd /go/src/github.com/containerd/containerd make && make install ``` These binaries can be found in the `./bin` directory in your host. `make install` will move the binaries in your `$PATH`."
},
{
"data": "Next, let's build `runc`: ```sh cd /go/src/github.com/opencontainers/runc make && make install ``` For further details about building runc, refer to in the docs directory. When working with `ctr`, the simple test client we just built, don't forget to start the daemon! ```sh containerd --config config.toml ``` During the automated CI the unit tests and integration tests are run as part of the PR validation. As a developer you can run these tests locally by using any of the following `Makefile` targets: `make test`: run all non-integration tests that do not require `root` privileges `make root-test`: run all non-integration tests which require `root` `make integration`: run all tests, including integration tests and those which require `root`. `TESTFLAGSPARALLEL` can be used to control parallelism. For example, `TESTFLAGSPARALLEL=1 make integration` will lead a non-parallel execution. The default value of `TESTFLAGS_PARALLEL` is 8. `make cri-integration`: run cri integration tests To execute a specific test or set of tests you can use the `go test` capabilities without using the `Makefile` targets. The following examples show how to specify a test name and also how to use the flag directly against `go test` to run root-requiring tests. ```sh go test -v -run \"<TEST_NAME>\" . go test -v -run . -test.root ``` Example output from directly running `go test` to execute the `TestContainerList` test: ```sh sudo go test -v -run \"TestContainerList\" . -test.root INFO[0000] running tests against containerd revision=f2ae8a020a985a8d9862c9eb5ab66902c2888361 version=v1.0.0-beta.2-49-gf2ae8a0 === RUN TestContainerList PASS: TestContainerList (0.00s) PASS ok github.com/containerd/containerd 4.778s ``` Note: in order to run `sudo go` you need to either keep user PATH environment variable. ex: `sudo \"PATH=$PATH\" env go test <args>` or use `go test -exec` ex: `go test -exec sudo -v -run \"TestTarWithXattr\" ./archive/ -test.root` In addition to `go test`-based testing executed via the `Makefile` targets, the `containerd-stress` tool is available and built with the `all` or `binaries` targets and installed during `make install`. With this tool you can stress a running containerd daemon for a specified period of time, selecting a concurrency level to generate stress against the daemon. The following command is an example of having five workers running for two hours against a default containerd gRPC socket address: ```sh containerd-stress -c 5 -d 120m ``` For more information on this tool's options please run `containerd-stress --help`. is an external tool which can be used to drive load against a container runtime, specifying a particular set of lifecycle operations to run with a specified amount of concurrency. Bucketbench is more focused on generating performance details than simply inducing load against containerd. Bucketbench differs from the `containerd-stress` tool in a few ways: Bucketbench has support for testing the Docker engine, the `runc` binary, and containerd 0.2.x (via `ctr`) and 1.0 (via the client library) branches. Bucketbench is driven via configuration file that allows specifying a list of lifecycle operations to execute. This can be used to generate detailed statistics per-command (e.g. start, stop, pause, delete). Bucketbench generates detailed reports and timing data at the end of the configured test run. More details on how to install and run `bucketbench` are available at the ."
}
] |
{
"category": "Runtime",
"file_name": "BUILDING.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
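As a quick sanity check of the containerd build described in the entry above, the freshly installed `ctr` client can exercise the daemon end to end. This is an illustrative sketch only: it assumes the daemon started with `containerd --config config.toml` is still running, that the host can reach Docker Hub, and the image reference is just an example.

```sh
# Verify the client and daemon are talking (versions should match the build).
sudo ctr version

# Pull a small test image and run a throwaway container in the default namespace.
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm docker.io/library/alpine:latest smoke-test echo "containerd works"

# Confirm the image landed in the content store.
sudo ctr images ls
```

If the daemon was configured with a non-default socket in `config.toml`, point the client at it with `ctr --address <socket path>`.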
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage endpoints ``` -h, --help help for endpoint ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - View & modify endpoint configuration - Disconnect an endpoint from the network - Display endpoint information - View endpoint health - Manage label configuration of endpoint - List all endpoints - View endpoint status log"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_endpoint.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
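The cmdref entry above only enumerates the `cilium-dbg endpoint` subcommands; the short, hypothetical session below ties them together. The endpoint ID `1234` is a placeholder (take real IDs from the `list` output), and the commands are intended to run inside the Cilium agent pod on the node that owns the endpoint.

```sh
# Enumerate endpoints managed by this agent, with identity and state columns.
cilium-dbg endpoint list

# Inspect one endpoint in detail (ID 1234 is a placeholder from the list above).
cilium-dbg endpoint get 1234        # full endpoint model as JSON
cilium-dbg endpoint health 1234     # per-endpoint health summary
cilium-dbg endpoint log 1234        # recent state transitions, useful for debugging
```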
[
{
"data": "This document describes the method to configure image decryption support. Please note that this is still an experimental feature. Encrypted container images are OCI images that contain encrypted blobs. An example of how these encrypted images can be created through the use of . To decrypt these images, `CRI-O` needs to have access to the corresponding private key(s). Encryption ties trust to an entity based on the model in which a key is associated with it. We call this the key model. There are two currently supported key models in which encrypted containers can be used. These are based on two main use cases. Node Key Model - In this model encryption is tied to workers. The use case here revolves around the idea that an image should be only decryptable on the trusted host. Although the granularity of access is more relaxed (per node), it is beneficial because of the various node based technologies that help bootstrap trust in worker nodes and perform secure key distribution (i.e. TPM, host attestation, secure/measured boot). In this scenario, runtimes are capable of fetching the necessary decryption keys. Multitenant Key Model - This model is not yet supported by CRI-O, but will be in the future. In this model,the trust of encryption is tied to the cluster or users within a cluster. This allows multi-tenancy of users and is useful in the case where multiple users of Kubernetes each want to bring their encrypted images. This is based on the that introduces `ImageDecryptSecrets`. In order to set up image decryption support, add an overwrite to `/etc/crio/crio.conf.d/01-decrypt.conf` as follows: ```toml [crio.runtime] decryptionkeyspath = \"/etc/crio/keys/\" ``` `decryptionkeyspath` should be the path where `CRI-O` can find the keys required for image decryption. After modifying this config, you need to restart the `CRI-O` service. Alternatively, if you are starting the `CRI-O` from the command line, the argument `--decryption-keys-path` can be provided pointing to the folder that contains required decryption"
},
{
"data": "Although the latest master branch of the `docker/distribution` registry supports encrypted images, many popular public registries such as Docker hub or `quay.io` don't support encrypted images yet. For the easy verification of the image decryption capabilities, we are hosting a test image at, `docker.io/enccont/encrypted_image:encrypted` Go ahead and try to download this image using the read-only credentials given below, ```shell crictl -r unix:///var/run/crio/crio.sock pull docker.io/enccont/encrypted_image:encrypted ``` Since we haven't provided `CRI-O` the access to the private key required to decrypt this image you should see a failure like this on your console, <!-- markdownlint-disable MD013 --> ```text FATA[0010] pulling image failed: rpc error: code = Unknown desc = Error decrypting layer sha256:ecbef970c60906b9d4249b47273113ef008b91ce8046f6ae9d82761b9ffcc3c0: missing private key needed for decryption ``` <!-- markdownlint-enable MD013 --> This is intended behavior and proof that indeed the encrypted images cannot be used without having access to the correct private key. The image can be decrypted using following key, ```text --BEGIN RSA PRIVATE KEY-- MIICXAIBAAKBgQDoJBuK1hQ5aCbF93uE6jzRm8v5icUNFL5j+DO9hnM5j/8XFTzp 40N2M2/ObLf2qwmWSivwj5LJR/+5ceS8jqVBAcJpckwOXupu3A5o4KgJo15s6v57 4+0wfraNJ/OapqBc7lGFBsj+XwdmegwYYqy41DnYNSzYS4Mov+v7RI014wIDAQAB AoGALCuiqfouAvZUWlrKv/Gp/OA+IY8bVW/bAj6Z6bgJeKxzhzrdSkuZ7IXBAnAh WOgWfOhEEBPhhDcU635GXbJusuD/bLBJPOTxiwCFazffm8zVGSQCndfTVxgCM4hn +5bH6o/cSGQ4E6SLJQeEr8y/J0bMlNMkOco9F1FL1ZgwXGECQQD9/mDwWLJjbdEa jmGtoPspGz80XDb1jRI09jKDXB826/cBUD+X/P50aTkU+XSJXVfa5F6zhzf/O7C8 07bVnn2pAkEA6fmJ+Jx/Cupy7jRHzIdKAN/7T9QJBIXVDZLz5ulFWLjYkNotpkxk f0ZSIOvlD7vv5lOifRFivd680XjxIATWqwJARJ35QFUl9DiRuhPnDYok8Cj9PT8A VfwDhC1S3iv//s1mkINGeuANOhPHKQRvWEDQYEE72FJabWiJyamEhldn6QJAFjLw 3j+q5hQ8d1FKhqNHaDHYHEjX2jAAeNs6fOwhAjv3gDbTIfYZiuHXJPx8rTN9nXLN 9ePSZIVfkNhSuGD9JQJBAI+mobcxj7WkdLHuATdAso+N89Yt7xHoG49c8gz81ufP vvLPtYytL4ftpiVO3fTfPP90ze8qYPiNaFqMHYDkQ+M= --END RSA PRIVATE KEY-- ``` Please save this key in the folder `/etc/crio/keys` with the name of your choice. Now that we have the private key that can be used by `CRI-O`, let's try to download the image again, ```shell crictl -r unix:///var/run/crio/crio.sock pull docker.io/enccont/encrypted_image:encrypted ``` If `CRI-O` was able to read the keys, it would have decrypted the image. You should see something like this on your console, ```text Image is up to date for docker.io/enccont/encrypted_image@sha256:2c3c078642b13e34069e55adfd8b93186950860383e49bdeab4858b4a4bdb1bd ``` Verify that image indeed got downloaded and decrypted using `crictl -r unix:///var/run/crio/crio.sock images` ```text IMAGE TAG IMAGE ID SIZE docker.io/enccont/encrypted_image encrypted 5eb6083c55f01 130MB ``` Please note that the confidentiality provided by the encrypted images could get compromised if the private keys are accessed by unauthorized and/or unintended entities. Also, in case of loss of private key, there is no way to access the contents of the encrypted image rendering it completely unusable. Hence, it's extremely important to keep the private keys securely and safely."
}
] |
{
"category": "Runtime",
"file_name": "decryption.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
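The CRI-O walkthrough above starts from an already-encrypted test image. For completeness, here is a hedged sketch of producing such an image yourself with `openssl` and `skopeo`, and of staging the private key under the configured `decryption_keys_path`. The registry name and file paths are placeholders, and the flags assume a skopeo build with OCI image-encryption support.

```sh
# Generate an RSA keypair; the private half goes where decryption_keys_path points.
openssl genrsa -out /etc/crio/keys/imgcrypt.pem 2048
openssl rsa -in /etc/crio/keys/imgcrypt.pem -pubout -out imgcrypt.pub

# Encrypt the image layers while copying to a registry that accepts OCI artifacts.
skopeo copy --encryption-key jwe:./imgcrypt.pub \
    docker://registry.example.com/myapp:plain \
    docker://registry.example.com/myapp:encrypted

# After restarting CRI-O with decryption_keys_path = "/etc/crio/keys/",
# the encrypted tag should pull and decrypt transparently.
crictl pull registry.example.com/myapp:encrypted
```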
[
{
"data": "Name | Type | Description | Notes | - | - | - BuildVersion | Pointer to string | | [optional] Version | string | | Pid | Pointer to int64 | | [optional] Features | Pointer to []string | | [optional] `func NewVmmPingResponse(version string, ) *VmmPingResponse` NewVmmPingResponse instantiates a new VmmPingResponse object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmmPingResponseWithDefaults() *VmmPingResponse` NewVmmPingResponseWithDefaults instantiates a new VmmPingResponse object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmmPingResponse) GetBuildVersion() string` GetBuildVersion returns the BuildVersion field if non-nil, zero value otherwise. `func (o VmmPingResponse) GetBuildVersionOk() (string, bool)` GetBuildVersionOk returns a tuple with the BuildVersion field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmmPingResponse) SetBuildVersion(v string)` SetBuildVersion sets BuildVersion field to given value. `func (o *VmmPingResponse) HasBuildVersion() bool` HasBuildVersion returns a boolean if a field has been set. `func (o *VmmPingResponse) GetVersion() string` GetVersion returns the Version field if non-nil, zero value otherwise. `func (o VmmPingResponse) GetVersionOk() (string, bool)` GetVersionOk returns a tuple with the Version field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmmPingResponse) SetVersion(v string)` SetVersion sets Version field to given value. `func (o *VmmPingResponse) GetPid() int64` GetPid returns the Pid field if non-nil, zero value otherwise. `func (o VmmPingResponse) GetPidOk() (int64, bool)` GetPidOk returns a tuple with the Pid field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmmPingResponse) SetPid(v int64)` SetPid sets Pid field to given value. `func (o *VmmPingResponse) HasPid() bool` HasPid returns a boolean if a field has been set. `func (o *VmmPingResponse) GetFeatures() []string` GetFeatures returns the Features field if non-nil, zero value otherwise. `func (o VmmPingResponse) GetFeaturesOk() ([]string, bool)` GetFeaturesOk returns a tuple with the Features field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmmPingResponse) SetFeatures(v []string)` SetFeatures sets Features field to given value. `func (o *VmmPingResponse) HasFeatures() bool` HasFeatures returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "VmmPingResponse.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
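The generated Go model above mirrors the `vmm.ping` endpoint of the Cloud Hypervisor HTTP API that this client drives. A rough way to see the same fields without writing Go is to query the API socket directly; the socket path below is only an example (it is whatever was passed via `--api-socket`), and the response shown is illustrative.

```sh
# Ping the VMM over its Unix-socket HTTP API.
curl --unix-socket /tmp/cloud-hypervisor.sock -X GET http://localhost/api/v1/vmm.ping
# Example (illustrative) response body:
# {"build_version":"...","version":"v38.0","pid":12345,"features":["..."]}
```

The JSON keys `build_version`, `version`, `pid`, and `features` are what the generated getters (`GetBuildVersion()`, `GetVersion()`, and so on) expose on the Go side.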
[
{
"data": "layout: global title: Quick Start Guide This quick start guide goes over how to run Alluxio on a local machine. The guide will cover the following tasks: Download and configure Alluxio Start Alluxio locally Perform basic tasks via Alluxio Shell [Bonus]* Mount a public Amazon S3 bucket in Alluxio [Bonus]* Mount HDFS under storage in Alluxio Stop Alluxio This guide contains optional tasks labeled with [Bonus] that use credentials from an . {:target=\"_blank\"} (2:36) Note: This guide is designed to start an Alluxio system with minimal setup on a single machine. If you are trying to speedup SQL analytics, you can try the tutorial. MacOS or Linux Enable remote login: see [Bonus]* AWS account and keys Download Alluxio from . Select the desired release followed by the distribution built for default Hadoop. Unpack the downloaded file with the following commands. ```shell $ tar -xzf alluxio-{{site.ALLUXIOVERSIONSTRING}}-bin.tar.gz $ cd alluxio-{{site.ALLUXIOVERSIONSTRING}} ``` This creates a directory `alluxio-{{site.ALLUXIOVERSIONSTRING}}` with all of the Alluxio source files and Java binaries. Through this tutorial, the path of this directory will be referred to as `${ALLUXIO_HOME}`. In the `${ALLUXIO_HOME}/conf` directory, create the `conf/alluxio-env.sh` configuration file by copying the template file. ```shell $ cp conf/alluxio-env.sh.template conf/alluxio-env.sh ``` In `conf/alluxio-env.sh`, adds configuration for `JAVA_HOME`. For example: ```shell $ echo \"JAVA_HOME=/path/to/java/home\" >> conf/alluxio-env.sh ``` In the `${ALLUXIO_HOME}/conf` directory, create the `conf/alluxio-site.properties` configuration file by copying the template file. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Set `alluxio.master.hostname` in `conf/alluxio-site.properties` to `localhost`. ```shell $ echo \"alluxio.master.hostname=localhost\" >> conf/alluxio-site.properties ``` Set additional parameters in `conf/alluxio-site.properties` ```shell $ echo \"alluxio.dora.client.read.location.policy.enabled=true\" >> conf/alluxio-site.properties $ echo \"alluxio.user.short.circuit.enabled=false\" >> conf/alluxio-site.properties $ echo \"alluxio.master.worker.register.lease.enabled=false\" >> conf/alluxio-site.properties $ echo \"alluxio.worker.block.store.type=PAGE\" >> conf/alluxio-site.properties $ echo \"alluxio.worker.page.store.type=LOCAL\" >> conf/alluxio-site.properties $ echo \"alluxio.worker.page.store.sizes=1GB\" >> conf/alluxio-site.properties $ echo \"alluxio.worker.page.store.page.size=1MB\" >> conf/alluxio-site.properties ``` Set the page store directories to an existing directory which the current user has read/write permissions to. The following uses `/mnt/ramdisk` as an example. ```shell $ echo \"alluxio.worker.page.store.dirs=/mnt/ramdisk\" >> conf/alluxio-site.properties ``` The has more information about how to configure page block store. Configure Alluxio ufs: ```shell $ echo \"alluxio.dora.client.ufs.root=/tmp\" >> conf/alluxio-site.properties ``` `<UFS_URI>` should be a full ufs uri. This can be set to a local folder (e.g. default value `/tmp`) in a single node deployment or a full ufs uri (e.g.`hdfs://namenode:port/path/` or `s3://bucket/path`). To configure Alluxio to interact with Amazon S3, add AWS access information to the Alluxio configuration in `conf/alluxio-site.properties`. 
```shell $ echo \"alluxio.dora.client.ufs.root=s3://<BUCKET_NAME>/<DIR>\" >> conf/alluxio-site.properties $ echo \"s3a.accessKeyId=<AWSACCESSKEY_ID>\" >> conf/alluxio-site.properties $ echo \"s3a.secretKey=<AWSSECRETACCESS_KEY>\" >> conf/alluxio-site.properties ``` Replace `s3://<BUCKETNAME>/<DIR>`, `<AWSACCESSKEYID>` and `<AWSSECRETACCESS_KEY>` with a valid AWS S3 address, AWS access key ID and AWS secret access key respectively. For more information, please refer to the . To configure Alluxio to interact with HDFS, provide the path to HDFS configuration files available locally on each node in `conf/alluxio-site.properties`. ```shell $ echo \"alluxio.dora.client.ufs.root=hdfs://nameservice/<DIR>\" >> conf/alluxio-site.properties $ echo \"alluxio.underfs.hdfs.configuration=/path/to/hdfs/conf/core-site.xml:/path/to/hdfs/conf/hdfs-site.xml\" >> conf/alluxio-site.properties ``` Replace `nameservice/<DIR>` and `/path/to/hdfs/conf` with the actual values. For more information, please refer to the . Alluxio needs to be formatted before starting the process. The following command formats the Alluxio journal and worker storage directories. ```shell $ ./bin/alluxio init format ``` Start the Alluxio services ```shell $"
},
{
"data": "process start local ``` Congratulations! Alluxio is now up and running! The provides command line operations for interacting with Alluxio. To see a list of filesystem operations, run ```shell $ ./bin/alluxio fs ``` List files in Alluxio with the `ls` command. To list all files in the root directory, use the following command: ```shell $ ./bin/alluxio fs ls / ``` At this moment, there are no files in Alluxio. Copy a file into Alluxio by using the `copyFromLocal` shell command. ```shell $ ./bin/alluxio fs copyFromLocal ${ALLUXIO_HOME}/LICENSE /LICENSE Copied file://${ALLUXIO_HOME}/LICENSE to /LICENSE ``` List the files in Alluxio again to see the `LICENSE` file. ```shell $ ./bin/alluxio fs ls / -rw-r--r-- staff staff 27040 02-17-2021 16:21:11:061 0% /LICENSE ``` The output shows the file has been written to Alluxio under storage successfully. Check the directory set as the value of `alluxio.dora.client.ufs.root`, which is `/tmp` by default. ```shell $ ls /tmp LICENSE ``` The `cat` command prints the contents of the file. ```shell $ ./bin/alluxio fs cat /LICENSE Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION ... ``` When the file is read, it will also be cached by Alluxio to speed up future data access. Stop Alluxio with the following command: ```shell $ ./bin/alluxio process stop local ``` Congratulations on getting Alluxio started! This guide covered how to download and install Alluxio locally with examples of basic interactions via the Alluxio shell. There are several next steps available: Learn more about the various features of Alluxio in our documentation, such as and . See how you can You can also with Alluxio K8s Helm Chart or Alluxio K8s Operator Connect a compute engine such as , , or Connect an under file storage such as , , or Check out our if you're interested in becoming a contributor! For the users who are using macOS 11(Big Sur) or later, when running the command ```shell $ ./bin/alluxio init format ``` you might get the error message: ``` alluxio-{{site.ALLUXIOVERSIONSTRING}}/bin/alluxio: Operation not permitted ``` This can be caused by the newly added setting options to macOS. To fix it, open `System Preferences` and open `Sharing`. On the left, check the box next to `Remote Login`. If there is `Allow full access to remote users` as shown in the image, check the box next to it. Besides, click the `+` button and add yourself to the list of users that are allowed for Remote Login if you are not already in it. By default, Dora worker caches metadata and data. Set `alluxio.dora.client.metadata.cache.enabled` to `false` to disable the metadata cache. If disabled, client will always fetch metadata from under storage directly. Set `alluxio.user.netty.data.transmission.enabled` to `true` to enable transmission of data between clients and Dora cache nodes over Netty. This avoids serialization and deserialization cost of gRPC, as well as consumes less resources on the worker side. Only one UFS is supported by Dora. Nested mounts are not supported yet. The Alluxio Master node still needs to be up and running. It is used for Dora worker discovery, cluster configuration updates, as well as handling write I/O operations."
}
] |
{
"category": "Runtime",
"file_name": "Get-Started.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Results: https://www.weave.works/weave-net-performance-fast-datapath/ Two `c3.8xlarge` instances running Ubuntu 16.04 LTS (`ami-d8f4deab`) in `eu-west-1` region of AWS. 10 Gigabit . `iperf 3.0.11`. Encrypted: ``` host1$ WEAVE_MTU=8916 weave launch --password=foobar host1$ docker $(weave config) run --rm -ti --name server networkstatic/iperf3 -s host2$ WEAVE_MTU=8916 weave launch --password=foobar host1 host2$ docker $(weave config) run --rm -ti networkstatic/iperf3 -c server ``` Non-encrypted: ``` host1$ WEAVE_MTU=8950 weave launch host1$ docker $(weave config) run --rm -ti --name server networkstatic/iperf3 -s host2$ WEAVE_MTU=8950 weave launch --password=foobar host1 host2$ docker $(weave config) run --rm -ti networkstatic/iperf3 -c server ``` Encrypted: ``` host1$ WEAVENOFASTDP=1 weave launch --password=foobar host1$ docker $(weave config) run --rm -ti --name server networkstatic/iperf3 -s host2$ WEAVENOFASTDP=1 weave launch --password=foobar host1 host2$ docker $(weave config) run --rm -ti networkstatic/iperf3 -c server ``` Non-encrypted: ``` host1$ WEAVENOFASTDP=1 weave launch host1$ docker $(weave config) run --rm -ti --name server networkstatic/iperf3 -s host2$ WEAVENOFASTDP=1 weave launch --password=foobar host1 host2$ docker $(weave config) run --rm -ti networkstatic/iperf3 -c server ``` Non-encrypted: ``` host1$ iperf -s host2$ iperf -c host1 ``` Encrypted: ``` host1$ export KEY1=\"0x466454f2b1a770f8f872f9afbc35ebeac57e00fc11ac86ed1f82716f010b20f0cf532274\" host1$ export KEY2=\"0x3531151241a770f8f872f9afbc35ebeac57e00fc11ac86ed1f82716f010b20f0cf532274\" host1# ip xfrm state add src ${IP1} dst ${IP2} proto esp spi 0x4c856ffc replay-window 256 flag esn reqid 0 mode transport aead 'rfc4106(gcm(aes))' ${KEY1} 128 host1# ip xfrm state add src ${IP2} dst ${IP1} proto esp spi 0x9b0830bc replay-window 256 flag esn reqid 0 mode transport aead 'rfc4106(gcm(aes))' ${KEY2} 128 host1# ip xfrm policy add src ${IP1}/32 dst ${IP2}/32 dir out tmpl src ${IP1} dst ${IP2} proto esp spi 0x4c856ffc reqid 0 mode transport host1$ iperf -s host2$ export KEY1=\"0x466454f2b1a770f8f872f9afbc35ebeac57e00fc11ac86ed1f82716f010b20f0cf532274\" host2$ export KEY2=\"0x3531151241a770f8f872f9afbc35ebeac57e00fc11ac86ed1f82716f010b20f0cf532274\" host2# ip xfrm state add src ${IP2} dst ${IP1} proto esp spi 0x9b0830bc reqid 0 replay-window 256 flag esn mode transport aead 'rfc4106(gcm(aes))' ${KEY2} 128 host2# ip xfrm state add src ${IP1} dst ${IP2} proto esp spi 0x4c856ffc reqid 0 replay-window 256 flag esn mode transport aead 'rfc4106(gcm(aes))' ${KEY1} 128 host2# ip xfrm policy add src ${IP2}/32 dst ${IP1}/32 dir out tmpl src ${IP1} dst ${IP2} proto esp spi 0x9b0830bc reqid 0 mode transport host2$ iperf -c host1 ```"
}
] |
{
"category": "Runtime",
"file_name": "benchmarks.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "sidebar_position: 3 sidebar_label: \"CSI\" CSI is the abbreviation of Container Storage Interface. To have a better understanding of what we're going to do, the first thing we need to know is what the Container Storage Interface is. Currently, there are still some problems for already existing storage subsystem within Kubernetes. Storage driver code is maintained in the Kubernetes core repository which is difficult to test. But beyond that, Kubernetes needs to give permissions to storage vendors to check code into the Kubernetes core repository. Ideally, that should be implemented externally. CSI is designed to define an industry standard that will enable storage providers who enable CSI to be available across container orchestration systems that support CSI. The figure below shows a kind of high-level Kubernetes archetypes integrated with CSI. Three new external components are introduced to decouple Kubernetes and Storage Provider logic Blue arrows present the conventional way to call against API Server Red arrows present gRPC to call against Volume Driver In order to enable the feature of expanding volume atop Kubernetes, we should extend several components including CSI specification, in-tree volume plugin, external-provisioner and external-attacher. The feature of expanding volume is still undefined in latest CSI 0.2.0. The new 3 RPCs, including `RequiresFSResize`, `ControllerResizeVolume` and `NodeResizeVolume`, should be introduced. ```jade service Controller { rpc CreateVolume (CreateVolumeRequest) returns (CreateVolumeResponse) {} rpc RequiresFSResize (RequiresFSResizeRequest) returns (RequiresFSResizeResponse) {} rpc ControllerResizeVolume (ControllerResizeVolumeRequest) returns (ControllerResizeVolumeResponse) {} } service Node { rpc NodeStageVolume (NodeStageVolumeRequest) returns (NodeStageVolumeResponse) {} rpc NodeResizeVolume (NodeResizeVolumeRequest) returns (NodeResizeVolumeResponse) {} } ``` In addition to the extend CSI specification, the `csiPlugin` interface within Kubernetes should also implement `expandablePlugin`. The `csiPlugin` interface will expand `PersistentVolumeClaim` representing for `ExpanderController`. ```jade type ExpandableVolumePlugin interface { VolumePlugin ExpandVolumeDevice(spec Spec, newSize resource.Quantity, oldSize resource.Quantity) (resource.Quantity, error) RequiresFSResize() bool } ``` Finally, to abstract complexity of the implementation, we should hard code the separate storage provider management logic into the following functions which is well-defined in the CSI specification: CreateVolume DeleteVolume ControllerPublishVolume ControllerUnpublishVolume ValidateVolumeCapabilities ListVolumes GetCapacity ControllerGetCapabilities RequiresFSResize ControllerResizeVolume Lets demonstrate this feature with a concrete user case. Create storage class for CSI storage provisioner ```yaml allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-qcfs parameters: csiProvisionerSecretName: orain-test csiProvisionerSecretNamespace: default provisioner: csi-qcfsplugin reclaimPolicy: Delete volumeBindingMode: Immediate ``` Deploy CSI Volume Driver including storage provisioner `csi-qcfsplugin` across Kubernetes cluster Create PVC `qcfs-pvc` which will be dynamically provisioned by storage class `csi-qcfs` ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: qcfs-pvc namespace: default .... 
accessModes: ReadWriteOnce resources: requests: storage: 300Gi storageClassName: csi-qcfs ``` Create MySQL 5.7 instance to use PVC `qcfs-pvc` In order to mirror the exact same production-level scenario, there are actually two different types of workloads including: Batch insert to make MySQL consuming more file system capacity Surge query request Dynamically expand volume capacity through edit pvc `qcfs-pvc` configuration"
}
] |
{
"category": "Runtime",
"file_name": "csi.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
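To make the expansion step at the end of the walkthrough concrete, here is a hypothetical sketch using `kubectl`. It assumes the `qcfs-pvc` claim from the example and relies on `allowVolumeExpansion: true` in the `csi-qcfs` StorageClass; the new size is arbitrary.

```sh
# Grow the claim from 300Gi to 400Gi; the API server accepts the patch because
# the StorageClass allows expansion.
kubectl patch pvc qcfs-pvc -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"400Gi"}}}}'

# Follow the resize as it flows through ControllerResizeVolume and, when a
# filesystem grow is needed, NodeResizeVolume; progress shows up as PVC conditions.
kubectl get pvc qcfs-pvc -n default -w
kubectl describe pvc qcfs-pvc -n default
```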
[
{
"data": "This document is a walk-through guide describing how to use rkt isolators for . Linux seccomp (short for SECure COMputing) filtering allows one to specify which system calls a process should be allowed to invoke, reducing the kernel surface exposed to applications. This provides a clearly defined mechanism to build sandboxed environments, where processes can run having access only to a specific reduced set of system calls. In the context of containers, seccomp filtering is useful for: Restricting applications from invoking syscalls that can affect the host Reducing kernel attack surface in case of security bugs For more details on how Linux seccomp filtering works, see . By default, rkt comes with a set of predefined filtering groups that can be used to quickly build sandboxed environments for containerized applications. Each set is simply a reference to a group of syscalls, covering a single functional area or kernel subsystem. They can be further combined to build more complex filters, either by blacklisting or by whitelisting specific system calls. To distinguish these predefined groups from real syscall names, wildcard labels are prefixed with a `@` symbols and are namespaced. The App Container Spec (appc) defines : `@appc.io/all` represents the set of all available syscalls. `@appc.io/empty` represents the empty set. rkt provides two default groups for generic usage: `@rkt/default-blacklist` represents a broad-scope filter than can be used for generic blacklisting `@rkt/default-whitelist` represents a broad-scope filter than can be used for generic whitelisting For compatibility reasons, two groups are provided mirroring : `@docker/default-blacklist` `@docker/default-whitelist` When using stage1 images with systemd >= v231, some are also available: `@systemd/clock` for syscalls manipulating the system clock `@systemd/default-whitelist` for a generic set of typically whitelisted syscalls `@systemd/mount` for filesystem mounting and unmounting `@systemd/network-io` for socket I/O operations `@systemd/obsolete` for unusual, obsolete or unimplemented syscalls `@systemd/privileged` for syscalls which need super-user syscalls `@systemd/process` for syscalls acting on process control, execution and namespacing `@systemd/raw-io` for raw I/O port access When no seccomp filtering is specified, by default rkt whitelists all the generic syscalls typically needed by applications for common operations. This is the same set defined by `@rkt/default-whitelist`. The default set is tailored to stop applications from performing a large variety of privileged actions, while not impacting their normal behavior. Operations which are typically not needed in containers and which may impact host state, eg. invoking , are denied in this way. However, this default set is mostly meant as a safety precaution against erratic and misbehaving applications, and will not suffice against tailored attacks. As such, it is recommended to fine-tune seccomp filtering using one of the customizable isolators available in rkt. When running Linux containers, rkt provides two mutually exclusive isolators to define a seccomp filter for an application: `os/linux/seccomp-retain-set` `os/linux/seccomp-remove-set` Those isolators cover different use-cases and employ different techniques to achieve the same goal of limiting available"
},
{
"data": "As such, they cannot be used together at the same time, and recommended usage varies on a case-by-case basis. Seccomp isolators work by defining a set of syscalls than can be either blocked (\"remove-set\") or allowed (\"retain-set\"). Once an application tries to invoke a blocked syscall, the kernel will deny this operation and the application will be notified about the failure. By default, invoking blocked syscalls will result in the application being immediately terminated with a `SIGSYS` signal. This behavior can be tweaked by returning a specific error code (\"errno\") to the application instead of terminating it. For both isolators, this can be customized by specifying an additional `errno` parameter with the desired symbolic errno name. For a list of errno labels, check the at `man 3 errno`. `os/linux/seccomp-retain-set` allows for an additive approach to build a seccomp filter: applications will not able to use any syscalls, except the ones listed in this isolator. This whitelisting approach is useful for completely locking down environments and whenever application requirements (in terms of syscalls) are well-defined in advance. It allows one to ensure that exactly and only the specified syscalls could ever be used. For example, the \"retain-set\" for a typical network application will include entries for generic POSIX operations (available in `@systemd/default-whitelist`), socket operations (`@systemd/network-io`) and reacting to I/O events (`@systemd/io-event`). `os/linux/seccomp-remove-set` tackles syscalls in a subtractive way: starting from all available syscalls, single entries can be forbidden in order to prevent specific actions. This blacklisting approach is useful to somehow limit applications which have broad requirements in terms of syscalls, in order to deny access to some clearly unused but potentially exploitable syscalls. For example, an application that will need to perform multiple operations but is known to never touch mountpoints could have `@systemd/mount` specified in its \"remove-set\". The goal of these examples is to show how to build ACI images with , where some syscalls are either explicitly blocked or allowed. For simplicity, the starting point will be a bare Alpine Linux image which ships with `ping` and `umount` commands (from busybox). Those commands respectively requires and syscalls in order to perform privileged operations. To block their usage, a syscalls filter can be installed via `os/linux/seccomp-remove-set` or `os/linux/seccomp-retain-set`; both approaches are shown here. This example shows how to block socket operation (e.g. with `ping`), by removing `socket()` from the set of allowed syscalls. First, a local image is built with an explicit \"remove-set\" isolator. This set contains the syscalls that need to be forbidden in order to block socket setup: ``` $ acbuild begin $ acbuild set-name localhost/seccomp-remove-set-example $ acbuild dependency add quay.io/coreos/alpine-sh $ acbuild set-exec -- /bin/sh $ echo '{ \"set\": [\"@rkt/default-blacklist\", \"socket\"] }' | acbuild isolator add \"os/linux/seccomp-remove-set\" - $ acbuild write"
},
{
"data": "$ acbuild end ``` Once properly built, this image can be run in order to check that `ping` usage is now blocked by the seccomp filter. At the same time, the default blacklist will also block other dangerous syscalls like `umount(2)`: ``` $ sudo rkt run --interactive --insecure-options=image seccomp-remove-set-example.aci image: using image from file stage1-coreos.aci image: using image from file seccomp-remove-set-example.aci image: using image from local store for image name quay.io/coreos/alpine-sh / # whoami root / # ping -c1 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes Bad system call / # umount /proc/bus/ Bad system call ``` This means that `socket(2)` and `umount(2)` have been both effectively disabled inside the container. In contrast to the example above, this one shows how to allow some operations only (e.g. network communication via `ping`), by whitelisting all required syscalls. This means that syscalls outside of this set will be blocked. First, a local image is built with an explicit \"retain-set\" isolator. This set contains the rkt wildcard \"default-whitelist\" (which already provides all socket-related entries), plus some custom syscalls (e.g. `umount(2)`) which are typically not allowed: ``` $ acbuild begin $ acbuild set-name localhost/seccomp-retain-set-example $ acbuild dependency add quay.io/coreos/alpine-sh $ acbuild set-exec -- /bin/sh $ echo '{ \"set\": [\"@rkt/default-whitelist\", \"umount\", \"umount2\"] }' | acbuild isolator add \"os/linux/seccomp-retain-set\" - $ acbuild write seccomp-retain-set-example.aci $ acbuild end ``` Once run, it can be easily verified that both `ping` and `umount` are now functional inside the container. These operations also require [additional capabilities][capabilities-guide] to be retained in order to work: ``` $ sudo rkt run --interactive --insecure-options=image seccomp-retain-set-example.aci --caps-retain=CAPSYSADMIN,CAPNETRAW image: using image from file stage1-coreos.aci image: using image from file seccomp-retain-set-example.aci image: using image from local store for image name quay.io/coreos/alpine-sh / # whoami root / # ping -c 1 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes 64 bytes from 8.8.8.8: seq=0 ttl=41 time=24.910 ms 8.8.8.8 ping statistics 1 packets transmitted, 1 packets received, 0% packet loss round-trip min/avg/max = 24.910/24.910/24.910 ms / # mount | grep /proc/bus proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime) / # umount /proc/bus / # mount | grep /proc/bus ``` However, others syscalls are still not available to the application. For example, trying to set the time will result in a failure due to invoking non-whitelisted syscalls: ``` $ sudo rkt run --interactive --insecure-options=image seccomp-retain-set-example.aci image: using image from file stage1-coreos.aci image: using image from file seccomp-retain-set-example.aci image: using image from local store for image name quay.io/coreos/alpine-sh / # whoami root / # adjtimex -f 0 Bad system call ``` Seccomp filters are typically defined when creating images, as they are tightly linked to specific app requirements. However, image consumers may need to further tweak/restrict the set of available syscalls in specific local scenarios. This can be done either by permanently patching the manifest of specific images, or by overriding seccomp isolators with command line"
},
{
"data": "Image manifests can be manipulated manually, by unpacking the image and editing the manifest file, or with helper tools like . To override an image's pre-defined syscalls set, just replace the existing seccomp isolators in the image with new isolators defining the desired syscalls. The `patch-manifest` subcommand to `actool` manipulates the syscalls sets defined in an image. `actool patch-manifest -seccomp-mode=... -seccomp-set=...` options can be used together to override any seccomp filters by specifying a new mode (retain or reset), an optional custom errno, and a set of syscalls to filter. These commands take an input image, modify any existing seccomp isolators, and write the changes to an output image, as shown in the example: ``` $ actool cat-manifest seccomp-retain-set-example.aci ... \"isolators\": [ { \"name\": \"os/linux/seccomp-retain-set\", \"value\": { \"set\": [ \"@rkt/default-whitelist\", \"umount\", \"umount2\" ] } } ] ... $ actool patch-manifest -seccomp-mode=retain,errno=ENOSYS -seccomp-set=@rkt/default-whitelist seccomp-retain-set-example.aci seccomp-retain-set-patched.aci $ actool cat-manifest seccomp-retain-set-patched.aci ... \"isolators\": [ { \"name\": \"os/linux/seccomp-retain-set\", \"value\": { \"set\": [ \"@rkt/default-whitelist\", ], \"errno\": \"ENOSYS\" } } ] ... ``` Now run the image to verify that the `umount(2)` syscall is no longer allowed, and a custom error is returned: ``` $ sudo rkt run --interactive --insecure-options=image seccomp-retain-set-patched.aci image: using image from file stage1-coreos.aci image: using image from file seccomp-retain-set-patched.aci image: using image from local store for image name quay.io/coreos/alpine-sh / # mount | grep /proc/bus proc on /proc/bus type proc (ro,nosuid,nodev,noexec,relatime) / # umount /proc/bus/ umount: can't umount /proc/bus: Function not implemented ``` Seccomp filters can be directly overridden at run time from the command-line, without changing the executed images. The `--seccomp` option to `rkt run` can manipulate both the \"retain\" and the \"remove\" isolators. Isolator overridden from the command-line will replace all seccomp settings in the image manifest, and can be specified as shown in this example: ``` $ sudo rkt run --interactive quay.io/coreos/alpine-sh --seccomp mode=remove,errno=ENOTSUP,socket image: using image from file /usr/local/bin/stage1-coreos.aci image: using image from local store for image name quay.io/coreos/alpine-sh / # whoami root / # ping -c 1 8.8.8.8 PING 8.8.8.8 (8.8.8.8): 56 data bytes ping: can't create raw socket: Not supported ``` Seccomp isolators are application-specific configuration entries, and in a `rkt run` command line they must follow the application container image to which they apply. Each application within a pod can have different seccomp filters. As with most security features, seccomp isolators may require some application-specific tuning in order to be maximally effective. For this reason, for security-sensitive environments it is recommended to have a well-specified set of syscalls requirements and follow best practices: Only allow syscalls needed by an application, according to its typical usage. While it is possible to completely disable seccomp, it is rarely needed and should be generally avoided. Tweaking the syscalls set is a better approach instead. Avoid granting access to dangerous syscalls. For example, and are typically abused to escape containers. 
Prefer a whitelisting approach, trying to keep the \"retain-set\" as small as possible."
}
] |
{
"category": "Runtime",
"file_name": "seccomp-guide.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Manta uses a REST API to read, write, and delete objects. This document assumes that you are familiar with HTTP-based REST systems, including HTTP requests, responses, status codes, and headers. If you want to start with basic information on Manta, see the tutorial. Unless otherwise specified, the semantics described here are stable, which means that you can expect that future updates will not change the documented behavior. You should avoid relying on behavior not specified here. The storage service is based on two concepts: object and directories. Objects* consist of data and metadata you can read, write, and delete from the storage service. The data portion is opaque. The metadata is a set of HTTP headers that describe the object, such as `Content-Type` and `Content-MD5`. An object is identified by a name. Directories* are named groups of objects, as on traditional file systems. Every object belongs to a directory. The private storage directory, `/:login/stor` functions as the top level, or root directory. Objects are the primary entity you store in Manta. Objects can be of any size, including zero bytes. Objects consist of your raw, uninterpreted data, as well as the metadata (HTTP headers) returned when you retrieve an object. There are several headers for objects that control HTTP semantics in Manta. When you write an object, you must use one of two headers: Use `Content-Length` if you can specify the object size in bytes. Use `transfer-encoding: chunked` to upload objects using . Chunked encoding lets you stream an object for storage without knowing the size of the object ahead of time. By default, the maximum amount of data you can send this way is 5GB. You can use the optional `max-content-length` header to specify how much space you estimate the object requires. This estimate is only an upper bound. The system will record how much data you actually transferred and record that. Subsequent GET requests will return the actual size of the object. You can, but are not required to, use the header in your write requests. Using this header saves network bandwidth. If the write request would fail, the system returns an error without transferring any data. The node-manta CLI use this feature. You should always specify a `Content-Type` header, which will be stored and returned back (HTTP content-negotiation will be handled). If you do not specify a content type, the default is `application/octet-stream`. If you specify a `Content-MD5` header, the system validates that the content uploaded matches the value of the header. You must encode MD5 headers in Base64, as described in . The `durability-level` header is a value from 1 to 6 that specifies how many copies of an object the system stores. If you do not specify a durability level, the default is"
},
{
"data": "While you can set a durability-level of 1, doing so would put your data at a higher risk of loss. The system honors the standard HTTP conditional requests such as , , etc. Cross-Origin Resource Sharing headers are supported on a per object basis. If `access-control-allow-origin` is sent on a `PUT`, it can be a comma separated list of `origin` values. When a request is sent with the `origin` header, the list of values of the stored `access-control-allow-origin` header is processed and only the matching value is returned, if any. For example: $ echo \"foo\" | \\ mput -q -H 'access-control-allow-origin: foo.com,bar.com' /:login/public/foo $ curl -is -X HEAD -H 'origin: foo.com' http://10.2.121.5/:login/public/foo HTTP/1.1 200 OK Connection: close Etag: f7c79088-d70d-4725-b716-7b85a40ede6a Last-Modified: Fri, 17 May 2013 20:04:51 GMT access-control-allow-origin: foo.com Content-MD5: 07BzhNET7exJ6qYjitX/AA== Durability-Level: 2 Content-Length: 4 Content-Type: application/octet-stream Date: Fri, 17 May 2013 20:05:58 GMT Server: Manta x-request-id: 30afb030-bf2d-11e2-be7d-99e967737d07 x-response-time: 7 x-server-name: fef8c5b8-3483-458f-95dc-7d9172ecefd1 If no `origin` header is sent, the system assumes that the request did not originate from a browser and the original list of values is echoed back. While this behavior does not conform to the CORS specification, it does allow you to administratively see what is stored on your object. `access-control-expose-headers` is supported as a list of HTTP headers that a browser will expose. This list is not interpreted by the system. `access-control-allow-methods` is supported as a list of HTTP methods that the system will honor for this request. You can only specify HTTP operations the system supports: HEAD, GET, PUT, DELETE. `access-control-max-age` is supported and uninterpreted by the system. The HTTP `cache-control` header is stored and returned by the system. This is useful for controlling how long CDNs or a web caching agent caches a version of an object. You can store custom headers with your object by prefixing them with `m-`. For example, you might use the header `m-local-user: jill` to tag an object with the name of the user who created it. You can use up to 4 KB of header data. Directories contain objects and other directories. All objects are stored at the top level or subdirectory one of the following directories: | Directory | Description | | - | -- | | `/:login/stor` | private object storage | | `/:login/public` | public object storage | As noted above, `/:login/stor` functions as the top level, or root, directory where you store objects and create directories. Only you can read, write, and delete data here. You can create any number of directories and objects in this directory. While the system does not yet support discretionary access controls on objects or directories, you can grant access to individual objects in this namespace by using signed URLs, which are explained"
},
{
"data": "With the exception of signed URL requests, all traffic to `/:login/stor` must be made over a secure channel (TLS). `/:login/public` is a world-readable namespace. Only you can create and delete objects in this directory. Read access to objects in this namespace is available through HTTP and HTTPS without authorization headers. Deletions and writes to this directory must made over a secure channel. You create a directory the same way that you create an object, but you use the special header `Content-Type: application/json; type=directory`. When you retrieve a directory, the response has the `Content-Type: application/x-json-stream; type=directory` header. The body consists of a set of JSON objects separated by newlines (`\\n`). Each object has a `type` field that indicates whether the JSON object specifies a directory or a storage object. Here is an example with additional newlines added for clarity. { \"name\": \"1c1bf695-230d-490e-aec7-3b11dff8ef32\", \"type\": \"directory\", \"mtime\": \"2012-09-11T20:28:30Z\" } { \"name\": \"695d5de6-45f4-4156-b6b7-3a8d4af89391\", \"etag\": \"bdf0aa96e3bb87148be084252a059736\", \"size\": 44, \"type\": \"object\", \"mtime\": \"2012-09-11T20:28:31Z\" } | Field | Description | | - | -- | | `type` | Either `object` or `directory`. | | `name` | The name of the object or directory. | | `mtime` | An of the last update time of the object or directory. | | `size` | Present only if `type` is `object`. The size of the object in bytes. | | `etag` | Present only if `type` is `object`. Used for conditional requests. | When you use an HTTP GET request to list a directory, the `result-set-size` header in the response contains the total number of entries in the directory. However, you will get 256 entries per request, so you will have to paginate through the result sets. You can increase the number of entries per request to 1024. Results are sorted lexicographically. To get the next page of a listing, pass in the last name returned in the set until the total number of entries you have processed matches `result-set-size`. You can store CORS, `cache-control` and `m-` headers on directories, as you can on objects. Currently, no data is supported on directories. This section describes some of the design principles that guide the operation of Manta. Several principles guide the design of the service: From the perspective of the , the system is strongly consistent. It chooses to be strongly consistent, at the risk of more HTTP 500 errors than an eventually consistent system. This system is engineered to minimize errors in the event of network or system failures and to recover as quickly as possible, but more errors will occur than in an eventually consistent system. However, it is possible to read the writes immediately. The distinction between a HTTP 404 response and a HTTP 500 response is very clear: A 404 response really means your data isn't"
},
{
"data": "A 500 response means that it might be, but there is some sort of outage. When the system responds with an HTTP 200, you can be certain your data is durably stored on the number of servers you requested. The system is designed to never allow data loss or corruption. The system is designed to be secure. All writes must be performed over a secure channel (TLS). Most reads will be as well, unless you are specifically requesting to bypass TLS for browser/web channels. Manta is designed to support an arbitrarily large number of objects and an arbitrarily large number of directories. However, it bounds the number of objects in a single directory so that list operations can be performed efficiently. The system does not have any limit on the size of a single object, but it may return a \"no space\" error if the requested object size is larger than a single physical server has space for. In practice, this number will be in tens of terabytes, but network transfer times make object sizes of that magnitude unreasonable anyway. There is no default API rate limit imposed upon you, however the system reserves the right to throttle requests if necessary to protect the system. For high-volume web assets, you should use it as a content delivery network (CDN) origin. All REST APIs are modeled as streams. They are designed to let you iterate through result sets without consuming too much memory. For example, listing a directory returns newline separated JSON objects as opposed to an array or large XML document. At a simple level, durability is a system's ability to tolerate failures without a loss of data. By default, the system stores two copies of your object and these two copies are placed in two different data centers. Distributing copies of your objects reduces the risk of data loss in the event of a failure. The system relies on ZFS RAID-Z to store your objects, so the durability is actually greater than two would imply. You can store anywhere from 1 to 6 copies. You are billed for exactly the number of bytes you consume in the system. For example, if you write a 1MB object with the default number of copies (2), you will be billed for 2MB of storage each month. When the number of copies requested is greater than one, the system ensures that at least two copies are placed in two different data centers, and then stripes the other copies across data centers. If any given data center is down at the time, you may have copies unbalanced with extra replicas in fewer data centers, but there will always be at least two data centers with your copy of data. This allows you to still access your data in the event of any one data center failure."
}
] |
{
"category": "Runtime",
"file_name": "storage-reference.md",
"project_name": "Triton Object Storage",
"subcategory": "Cloud Native Storage"
}
|
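The reference above describes the directory and object headers abstractly; the hedged sketch below shows what the corresponding HTTP requests might look like with `curl`. The endpoint hostname and login are placeholders, and every request against `/:login/stor` additionally needs an `Authorization` header carrying an http-signature (or a signed URL), which is omitted here for brevity, so treat these as header illustrations rather than copy-paste commands.

```sh
MANTA_URL=https://manta.example.com        # placeholder endpoint
LOGIN=jill                                 # placeholder account login

# Create a directory: a PUT with the special directory content type.
curl -X PUT -H "Content-Type: application/json; type=directory" \
     "$MANTA_URL/$LOGIN/stor/backups"

# Write an object with two copies and a custom m- metadata header.
curl -X PUT -H "Content-Type: text/plain" \
     -H "durability-level: 2" \
     -H "m-local-user: jill" \
     --data-binary @notes.txt \
     "$MANTA_URL/$LOGIN/stor/backups/notes.txt"

# List the directory; entries come back as newline-separated JSON
# (Content-Type: application/x-json-stream; type=directory).
curl "$MANTA_URL/$LOGIN/stor/backups"
```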