[ { "data": "(storage-btrfs)= {abbr}`Btrfs (B-tree file system)` is a local file system based on the {abbr}`COW (copy-on-write)` principle. COW means that data is stored to a different block after it has been modified instead of overwriting the existing data, reducing the risk of data corruption. Unlike other file systems, Btrfs is extent-based, which means that it stores data in contiguous areas of memory. In addition to basic file system features, Btrfs offers RAID and volume management, pooling, snapshots, checksums, compression and other features. To use Btrfs, make sure you have `btrfs-progs` installed on your machine. A Btrfs file system can have subvolumes, which are named binary subtrees of the main tree of the file system with their own independent file and directory hierarchy. A Btrfs snapshot is a special type of subvolume that captures a specific state of another subvolume. Snapshots can be read-write or read-only. The `btrfs` driver in Incus uses a subvolume per instance, image and snapshot. When creating a new entity (for example, launching a new instance), it creates a Btrfs snapshot. Btrfs doesn't natively support storing block devices. Therefore, when using Btrfs for VMs, Incus creates a big file on disk to store the VM. This approach is not very efficient and might cause issues when creating snapshots. Btrfs can be used as a storage backend inside a container in a nested Incus environment. In this case, the parent container itself must use Btrfs. Note, however, that the nested Incus setup does not inherit the Btrfs quotas from the parent (see {ref}`storage-btrfs-quotas` below). (storage-btrfs-quotas)= Btrfs supports storage quotas via qgroups. Btrfs qgroups are hierarchical, but new subvolumes will not automatically be added to the qgroups of their parent subvolumes. This means that users can trivially escape any quotas that are set. Therefore, if strict quotas are needed, you should consider using a different storage driver (for example, ZFS with `refquota` or LVM with Btrfs on top). When using quotas, you must take into account that Btrfs extents are immutable. When blocks are written, they end up in new extents. The old extents remain until all their data is dereferenced or rewritten. This means that a quota can be reached even if the total amount of space used by the current files in the subvolume is smaller than the quota. ```{note} This issue is seen most often when using VMs on Btrfs, due to the random I/O nature of using raw disk image files on top of a Btrfs" }, { "data": "Therefore, you should never use VMs with Btrfs storage pools. If you really need to use VMs with Btrfs storage pools, set the instance root disk's property to twice the size of the root disk's size. This configuration allows all blocks in the disk image file to be rewritten without reaching the qgroup quota. The storage pool option can also avoid this scenario, because a side effect of enabling compression is to reduce the maximum extent size such that block rewrites don't cause as much storage to be double-tracked. However, this is a storage pool option, and it therefore affects all volumes on the pool. ``` The following configuration options are available for storage pools that use the `btrfs` driver and for storage volumes in these pools. 
## Configuration options

The following configuration options are available for storage pools that use the `btrfs` driver and for storage volumes in these pools.

(storage-btrfs-pool-config)=
### Storage pool configuration

Key                   | Type   | Default                                                | Description
:--                   | :--    | :--                                                    | :--
`btrfs.mount_options` | string | `user_subvol_rm_allowed`                               | Mount options for block devices
`size`                | string | auto (20% of free disk space, >= 5 GiB and <= 30 GiB)  | Size of the storage pool when creating loop-based pools (in bytes, suffixes supported, can be increased to grow storage pool)
`source`              | string | -                                                      | Path to an existing block device, loop file or Btrfs subvolume
`source.wipe`         | bool   | `false`                                                | Wipe the block device specified in `source` prior to creating the storage pool

### Storage volume configuration

{{volume_configuration}}

Key                  | Type   | Condition           | Default                                        | Description
:--                  | :--    | :--                 | :--                                            | :--
`security.shared`    | bool   | custom block volume | same as `volume.security.shared` or `false`    | Enable sharing the volume across multiple instances
`security.shifted`   | bool   | custom volume       | same as `volume.security.shifted` or `false`   | {{enable_ID_shifting}}
`security.unmapped`  | bool   | custom volume       | same as `volume.security.unmapped` or `false`  | Disable ID mapping for the volume
`size`               | string | appropriate driver  | same as `volume.size`                          | Size/quota of the storage volume
`snapshots.expiry`   | string | custom volume       | same as `volume.snapshots.expiry`              | {{snapshot_expiry_format}}
`snapshots.pattern`  | string | custom volume       | same as `volume.snapshots.pattern` or `snap%d` | {{snapshot_pattern_format}} [^*]
`snapshots.schedule` | string | custom volume       | same as `volume.snapshots.schedule`            | {{snapshot_schedule_format}}

### Storage bucket configuration

To enable storage buckets for local storage pool drivers and allow applications to access the buckets via the S3 protocol, you must configure the {config:option}`server-core:core.storage_buckets_address` server setting.

Key    | Type   | Condition          | Default               | Description
:--    | :--    | :--                | :--                   | :--
`size` | string | appropriate driver | same as `volume.size` | Size/quota of the storage bucket
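To tie the volume options above to day-to-day usage, here is a short sketch of creating a custom volume with a size quota on a Btrfs pool. The pool and volume names are examples chosen for illustration.

```shell
# Create a custom volume with a 10GiB quota on an existing Btrfs pool
incus storage volume create demo-btrfs backups size=10GiB

# Inspect the volume to confirm the configured size/quota
incus storage volume show demo-btrfs backups
```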
{ "category": "Runtime", "file_name": "storage_btrfs.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "KNOWN LIMITATION: E2E tests are designed to run in the CI and currently only support running on linux platform. Install , , . ```shell make teste2e ``` If the tests fails with errors like `ginkgo: not found`, use below command to add GOPATH into the PATH variable ```shell PATH+=:$(go env GOPATH)/bin ``` You may use the prepare script to setup an E2E test environment before developing E2E tests: ```shell REPOROOT=$(git rev-parse --show-toplevel) # REPOROOT is root folder of oras CLI code $REPOROOT/test/e2e/scripts/prepare.sh $REPOROOT ``` Since E2E test suites are added as an nested module, the module file and checksum file are separated from oras CLI. To develop E2E tests, it's better to set the working directory to `$REPO_ROOT/test/e2e/` or open your IDE at it. By default, Gomega builds a temp binary every time before running e2e tests, which makes sure that latest code changes in the working directory are covered. If you are making changes to E2E test code only, set `ORAS_PATH` towards your pre-built ORAS binary to skip building and speed up the test. E2E specs can be ran natively without `ginkgo`: ```shell go test oras.land/oras/test/e2e/suite/${suite_name} ``` This is super handy when you want to do step-by-step debugging from command-line or via an IDE. If you need to debug certain specs, use but don't check it in. The backend of E2E tests are three registry services: : registry service supports artifact and image types in and referrer API. Will be deprecated when is not supported by oras CLI. : registry service supports image media type with subject and provide referrers via . : registry service supports artifact and image types in and referrer API You can run scenario test suite against your own registry services via setting `ORASREGISTRYHOST`, `ORASREGISTRYFALLBACKHOST` and `ZOTREGISTRY_HOST` environmental variables. This is a good choice if you want to debug certain re-runnable specs: ```shell cd $REPO_ROOT/test/e2e ginkgo watch -r ``` The executed commands should be shown in the ginkgo logs after `[It]`, with full execution output in the E2E log. Three suites will be maintained for E2E testing: command: contains test specs for single oras command execution auth: contains test specs similar to command specs but specific to" }, { "data": "It cannot be ran in parallel with command suite specs scenario: contains featured scenarios with several oras commands execution Inside a suite, please follow below model when building the hierarchical collections of specs: ``` Describe: <Role> When: Scenario or command specific description It: <Action> By: <Result> (per-command execution) Expect: <Result> (detailed checks for execution results) ``` Command suite uses two kinds of pre-baked test data: Layered distribution archive files: test data compressed from registry runtime storage directly and stored in `$REPOROOT/test/e2e/testdata/distribution/`. ORAS distribution uses sub-folder `mount` and upstream distribution uses sub-folder `mountfallback`. For both registries, the repository name should follow the convention of `command/$reposuffix`. To add a new layer to the test data, use the below command to compress the `docker` folder from the root directory of the registry storage and copy it to the corresponding subfolder in `$REPOROOT/test/e2e/testdata/distribution/mount`. ```shell tar -cvzf ${repo_suffix}.tar.gz --owner=0 --group=0 docker/ ``` OCI layout files: test data stored in `$REPOROOT/test/e2e/testdata/zot/` and used by ZOT registry service. 
You may use stable release of ORAS CLI to build it. When adding new artifacts in, please make sure the repository folder is excluded in `$REPOROOT/.gitignore`. ```mermaid graph TD; subgraph \"repository: command/images\" subgraph \"file: images.tar.gz\" direction TB A0>tag: multi]-..->A1[oci index] A1--linux/amd64-->A2[oci image] A1--linux/arm64-->A3[oci image] A1--linux/arm/v7-->A4[oci image] A2-->A5(config1) A3-->A6(config2) A4-->A7(config3) A2-- hello.tar -->A8(blob) A3-- hello.tar -->A8(blob) A4-- hello.tar -->A8(blob) B0>tag: foobar]-..->B1[oci image] B1-- foo1 -->B2(blob1) B1-- foo2 -->B2(blob1) B1-- bar -->B3(blob2) end end ``` ```mermaid graph TD; subgraph \"repository: command/artifacts\" subgraph \"file: artifacts.tar.gz\" direction TB C0>tag: foobar]-..->C1[oci image] direction TB E1[\"test.sbom.file(artifact)\"] -- subject --> C1 E2[\"test.signature.file(artifact)\"] -- subject --> E1 direction TB D1[\"test/sbom.file(image)\"] -- subject --> C1 D2[\"test/signature.file(image)\"] -- subject --> D1 end subgraph \"file: artifacts_index.tar.gz\" direction TB F0>tag: multi]-..->F1[oci index] F1--linux/amd64-->F2[oci image] F1--linux/arm64-->F3[oci image] F1--linux/arm/v7-->F4[oci image] G1[\"referrer.index(image)\"] -- subject --> F1 G2[\"referrer.image(image)\"] -- subject --> F2 end end ``` ```mermaid graph TD; subgraph \"repository: command/artifacts\" subgraph \"file: artifacts_fallback.tar.gz\" direction TB A0>tag: foobar]-..->A1[oci image] A1-- foo1 -->A2(blob1) A1-- foo2 -->A2(blob1) A1-- bar -->A3(blob2) E1[\"test/sbom.file(image)\"] -- subject --> A1 E2[\"test/signature.file(image)\"] -- subject --> E1 end end ``` ```mermaid graph TD; subgraph \"repository: command/images\" direction TB A0>tag: multi]-..->A1[oci index] A1--linux/amd64-->A2[oci image] A1--linux/arm64-->A3[oci image] A1--linux/arm/v7-->A4[oci image] A2-->A5(config1) A3-->A6(config2) A4-->A7(config3) A2-- hello.tar -->A8(blob) A3-- hello.tar -->A8(blob) A4-- hello.tar -->A8(blob) B0>tag: foobar]-..->B1[oci image] B1-- foo1 -->B2(blob1) B1-- foo2 -->B2(blob1) B1-- bar -->B3(blob2) end ``` ```mermaid graph TD; subgraph \"repository: command/artifacts\" direction TB C0>tag: foobar]-..->C1[oci image] direction TB direction TB D1[\"test.sbom.file(image)\"] -- subject --> C1 D2[\"test.signature.file(image)\"] -- subject --> D1 direction TB F0>tag: multi]-..->F1[oci index] F1--linux/amd64-->F2[oci image] F1--linux/arm64-->F3[oci image] F1--linux/arm/v7-->F4[oci image] G1[\"referrer.index(image)\"] -- subject --> F1 G2[\"referrer.image(image)\"] -- subject --> F2 G3[\"index\"] -- subject --> F1 H0>tag: unnamed]-..->H1[\"artifact contains unnamed layer\"] I0>tag: empty]-..->I1[\"artifact contains only one empty layer\"] end ``` Test files used by scenario-based specs are placed in `$REPO_ROOT/test/e2e/testdata/files`." } ]
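As a sketch of running the scenario suite against your own registries (as described above), the registry host variables can be exported before invoking ginkgo. The host addresses below are placeholders, and the `suite/scenario` path follows the suite layout referenced earlier.

```shell
# Point the E2E suites at your own registry endpoints (addresses are examples)
export ORAS_REGISTRY_HOST=localhost:5000
export ORAS_REGISTRY_FALLBACK_HOST=localhost:6000
export ZOT_REGISTRY_HOST=localhost:7000

# Run the scenario suite from the nested E2E module
cd $REPO_ROOT/test/e2e
ginkgo -r suite/scenario
```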
{ "category": "Runtime", "file_name": "README.md", "project_name": "ORAS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CSI Common Issues Issues when provisioning volumes with the Ceph CSI driver can happen for many reasons such as: Network connectivity between CSI pods and ceph Cluster health issues Slow operations Kubernetes issues Ceph-CSI configuration or bugs The following troubleshooting steps can help identify a number of issues. If you are mounting block volumes (usually RWO), these are referred to as `RBD` volumes in Ceph. See the sections below for RBD if you are having block volume issues. If you are mounting shared filesystem volumes (usually RWX), these are referred to as `CephFS` volumes in Ceph. See the sections below for CephFS if you are having filesystem volume issues. The Ceph monitors are the most critical component of the cluster to check first. Retrieve the mon endpoints from the services: ```console $ kubectl -n rook-ceph get svc -l app=rook-ceph-mon NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-mon-a ClusterIP 10.104.165.31 <none> 6789/TCP,3300/TCP 18h rook-ceph-mon-b ClusterIP 10.97.244.93 <none> 6789/TCP,3300/TCP 21s rook-ceph-mon-c ClusterIP 10.99.248.163 <none> 6789/TCP,3300/TCP 8s ``` If host networking is enabled in the CephCluster CR, you will instead need to find the node IPs for the hosts where the mons are running. The `clusterIP` is the mon IP and `3300` is the port that will be used by Ceph-CSI to connect to the ceph cluster. These endpoints must be accessible by all clients in the cluster, including the CSI driver. If you are seeing issues provisioning the PVC then you need to check the network connectivity from the provisioner pods. For CephFS PVCs, check network connectivity from the `csi-cephfsplugin` container of the `csi-cephfsplugin-provisioner` pods For Block PVCs, check network connectivity from the `csi-rbdplugin` container of the `csi-rbdplugin-provisioner` pods For redundancy, there are two provisioner pods for each type. Make sure to test connectivity from all provisioner pods. Connect to the provisioner pods and verify the connection to the mon endpoints such as the following: ```console kubectl -n rook-ceph exec -ti deploy/csi-cephfsplugin-provisioner -c csi-cephfsplugin -- bash curl 10.104.165.31:3300 2>/dev/null ceph v2 ``` If you see the response \"ceph v2\", the connection succeeded. If there is no response then there is a network issue connecting to the ceph cluster. Check network connectivity for all monitor IPs and ports which are passed to ceph-csi. Sometimes an unhealthy Ceph cluster can contribute to the issues in creating or mounting the PVC. Check that your Ceph cluster is healthy by connecting to the and running the `ceph` commands: ```console ceph health detail ``` ```console HEALTH_OK ``` Even slow ops in the ceph cluster can contribute to the issues. In the toolbox, make sure that no slow ops are present and the ceph cluster is healthy ```console $ ceph -s cluster: id: ba41ac93-3b55-4f32-9e06-d3d8c6ff7334 health: HEALTH_WARN 30 slow ops, oldest one blocked for 10624 sec, mon.a has slow ops [...] ``` If Ceph is not healthy, check the following health for more clues: The Ceph monitor logs for errors The OSD logs for errors Disk Health Network Health Make sure the pool you have specified in the `storageclass.yaml` exists in the ceph cluster. Suppose the pool name mentioned in the `storageclass.yaml` is `replicapool`. It can be verified to exist in the toolbox: ```console $ ceph osd lspools 1 .mgr 2 replicapool ``` If the pool is not in the list, create the `CephBlockPool` CR for the pool if you have not already. 
If you have already created the pool, check the Rook operator log for errors creating the" }, { "data": "For the shared filesystem (CephFS), check that the filesystem and pools you have specified in the `storageclass.yaml` exist in the Ceph cluster. Suppose the `fsName` name mentioned in the `storageclass.yaml` is `myfs`. It can be verified in the toolbox: ```console $ ceph fs ls name: myfs, metadata pool: myfs-metadata, data pools: [myfs-data0 ] ``` Now verify the `pool` mentioned in the `storageclass.yaml` exists, such as the example `myfs-data0`. ```console ceph osd lspools 1 .mgr 2 replicapool 3 myfs-metadata0 4 myfs-data0 ``` The pool for the filesystem will have the suffix `-data0` compared the filesystem name that is created by the CephFilesystem CR. If the subvolumegroup is not specified in the ceph-csi configmap (where you have passed the ceph monitor information), Ceph-CSI creates the default subvolumegroup with the name csi. Verify that the subvolumegroup exists: ```console $ ceph fs subvolumegroup ls myfs [ { \"name\": \"csi\" } ] ``` If you dont see any issues with your Ceph cluster, the following sections will start debugging the issue from the CSI side. At times the issue can also exist in the Ceph-CSI or the sidecar containers used in Ceph-CSI. Ceph-CSI has included number of sidecar containers in the provisioner pods such as: `csi-attacher`, `csi-resizer`, `csi-provisioner`, `csi-cephfsplugin`, `csi-snapshotter`, and `liveness-prometheus`. The CephFS provisioner core CSI driver container name is `csi-cephfsplugin` as one of the container names. For the RBD (Block) provisioner you will see `csi-rbdplugin` as the container name. Here is a summary of the sidecar containers: The external-provisioner is a sidecar container that dynamically provisions volumes by calling `ControllerCreateVolume()` and `ControllerDeleteVolume()` functions of CSI drivers. More details about external-provisioner can be found here. If there is an issue with PVC Create or Delete, check the logs of the `csi-provisioner` sidecar container. ```console kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-provisioner ``` The CSI `external-resizer` is a sidecar container that watches the Kubernetes API server for PersistentVolumeClaim updates and triggers `ControllerExpandVolume` operations against a CSI endpoint if the user requested more storage on the PersistentVolumeClaim object. More details about external-provisioner can be found here. If any issue exists in PVC expansion you can check the logs of the `csi-resizer` sidecar container. ```console kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-resizer ``` The CSI external-snapshotter sidecar only watches for `VolumeSnapshotContent` create/update/delete events. It will talk to ceph-csi containers to create or delete snapshots. More details about external-snapshotter can be found . In Kubernetes 1.17 the volume snapshot feature was promoted to beta. In Kubernetes 1.20, the feature gate is enabled by default on standard Kubernetes deployments and cannot be turned off. Make sure you have installed the correct snapshotter CRD version. If you have not installed the snapshotter controller, see the . 
```console $ kubectl get crd | grep snapshot volumesnapshotclasses.snapshot.storage.k8s.io 2021-01-25T11:19:38Z volumesnapshotcontents.snapshot.storage.k8s.io 2021-01-25T11:19:39Z volumesnapshots.snapshot.storage.k8s.io 2021-01-25T11:19:40Z ``` The above CRDs must have the matching version in your `snapshotclass.yaml` or `snapshot.yaml`. Otherwise, the `VolumeSnapshot` and `VolumesnapshotContent` will not be created. The snapshot controller is responsible for creating both `VolumeSnapshot` and `VolumesnapshotContent` object. If the objects are not getting created, you may need to check the logs of the snapshot-controller container. Rook only installs the snapshotter sidecar container, not the controller. It is recommended that Kubernetes distributors bundle and deploy the controller and CRDs as part of their Kubernetes cluster management process (independent of any CSI Driver). If your Kubernetes distribution does not bundle the snapshot controller, you may manually install these" }, { "data": "If any issue exists in the snapshot Create/Delete operation you can check the logs of the csi-snapshotter sidecar container. ```console kubectl -n rook-ceph logs deploy/csi-rbdplugin-provisioner -c csi-snapshotter ``` If you see an error about a volume already existing such as: ```console GRPC error: rpc error: code = Aborted desc = an operation with the given Volume ID 0001-0009-rook-ceph-0000000000000001-8d0ba728-0e17-11eb-a680-ce6eecc894de already exists. ``` The issue typically is in the Ceph cluster or network connectivity. If the issue is in Provisioning the PVC Restarting the Provisioner pods help(for CephFS issue restart `csi-cephfsplugin-provisioner-xxxxxx` CephFS Provisioner. For RBD, restart the `csi-rbdplugin-provisioner-xxxxxx` pod. If the issue is in mounting the PVC, restart the `csi-rbdplugin-xxxxx` pod (for RBD) and the `csi-cephfsplugin-xxxxx` pod for CephFS issue. When a user requests to create the application pod with PVC, there is a three-step process CSI driver registration Create volume attachment object Stage and publish the volume `csi-cephfsplugin-xxxx` or `csi-rbdplugin-xxxx` is a daemonset pod running on all the nodes where your application gets scheduled. If the plugin pods are not running on the node where your application is scheduled might cause the issue, make sure plugin pods are always running. Each plugin pod has two important containers: one is `driver-registrar` and `csi-rbdplugin` or `csi-cephfsplugin`. Sometimes there is also a `liveness-prometheus` container. The node-driver-registrar is a sidecar container that registers the CSI driver with Kubelet. More details can be found . If any issue exists in attaching the PVC to the application pod check logs from driver-registrar sidecar container in plugin pod where your application pod is scheduled. ```console $ kubectl -n rook-ceph logs deploy/csi-rbdplugin -c driver-registrar [...] 
I0120 12:28:34.231761 124018 main.go:112] Version: v2.0.1 I0120 12:28:34.233910 124018 connection.go:151] Connecting to unix:///csi/csi.sock I0120 12:28:35.242469 124018 node_register.go:55] Starting Registration Server at: /registration/rook-ceph.rbd.csi.ceph.com-reg.sock I0120 12:28:35.243364 124018 node_register.go:64] Registration Server started at: /registration/rook-ceph.rbd.csi.ceph.com-reg.sock I0120 12:28:35.243673 124018 node_register.go:86] Skipping healthz server because port set to: 0 I0120 12:28:36.318482 124018 main.go:79] Received GetInfo call: &InfoRequest{} I0120 12:28:37.455211 124018 main.go:89] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:true,Error:,} E0121 05:19:28.658390 124018 connection.go:129] Lost connection to unix:///csi/csi.sock. E0125 07:11:42.926133 124018 connection.go:129] Lost connection to unix:///csi/csi.sock. [...] ``` You should see the response `RegistrationStatus{PluginRegistered:true,Error:,}` in the logs to confirm that plugin is registered with kubelet. If you see a driver not found an error in the application pod describe output. Restarting the `csi-xxxxplugin-xxx` pod on the node may help. Each provisioner pod also has a sidecar container called `csi-attacher`. The external-attacher is a sidecar container that attaches volumes to nodes by calling `ControllerPublish` and `ControllerUnpublish` functions of CSI drivers. It is necessary because the internal Attach/Detach controller running in Kubernetes controller-manager does not have any direct interfaces to CSI drivers. More details can be found . If any issue exists in attaching the PVC to the application pod first check the volumeattachment object created and also log from csi-attacher sidecar container in provisioner pod. ```console $ kubectl get volumeattachment NAME ATTACHER PV NODE ATTACHED AGE csi-75903d8a902744853900d188f12137ea1cafb6c6f922ebc1c116fd58e950fc92 rook-ceph.cephfs.csi.ceph.com pvc-5c547d2a-fdb8-4cb2-b7fe-e0f30b88d454 minikube true 4m26s ``` ```console kubectl logs po/csi-rbdplugin-provisioner-d857bfb5f-ddctl -c csi-attacher ``` Check for any stale mount commands on the `csi-cephfsplugin-xxxx` pod on the node where your application pod is scheduled. You need to exec in the `csi-cephfsplugin-xxxx` pod and grep for stale mount operators. Identify the `csi-cephfsplugin-xxxx` pod running on the node where your application is scheduled with `kubectl get po -o wide` and match the node names. ```console $ kubectl exec -it csi-cephfsplugin-tfk2g -c csi-cephfsplugin -- sh $ ps -ef |grep mount [...] root 67 60 0 11:55 pts/0 00:00:00 grep mount ``` ```console ps -ef |grep ceph" }, { "data": "root 1 0 0 Jan20 ? 00:00:26 /usr/local/bin/cephcsi --nodeid=minikube --type=cephfs --endpoint=unix:///csi/csi.sock --v=0 --nodeserver=true --drivername=rook-ceph.cephfs.csi.ceph.com --pidlimit=-1 --metricsport=9091 --forcecephkernelclient=true --metricspath=/metrics --enablegrpcmetrics=true root 69 60 0 11:55 pts/0 00:00:00 grep ceph ``` If any commands are stuck check the dmesg logs from the node. Restarting the `csi-cephfsplugin` pod may also help sometimes. If you dont see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. Check for any stale `map/mkfs/mount` commands on the `csi-rbdplugin-xxxx` pod on the node where your application pod is scheduled. You need to exec in the `csi-rbdplugin-xxxx` pod and grep for stale operators like (`rbd map, rbd unmap, mkfs, mount` and `umount`). 
Identify the `csi-rbdplugin-xxxx` pod running on the node where your application is scheduled with `kubectl get po -o wide` and match the node names. ```console $ kubectl exec -it csi-rbdplugin-vh8d5 -c csi-rbdplugin -- sh $ ps -ef |grep map [...] root 1297024 1296907 0 12:00 pts/0 00:00:00 grep map ``` ```console $ ps -ef |grep mount [...] root 1824 1 0 Jan19 ? 00:00:00 /usr/sbin/rpc.mountd ceph 1041020 1040955 1 07:11 ? 00:03:43 ceph-mgr --fsid=ba41ac93-3b55-4f32-9e06-d3d8c6ff7334 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=[v2:10.111.136.166:3300,v1:10.111.136.166:6789] --mon-initial-members=a --id=a --setuser=ceph --setgroup=ceph --client-mount-uid=0 --client-mount-gid=0 --foreground --public-addr=172.17.0.6 root 1297115 1296907 0 12:00 pts/0 00:00:00 grep mount ``` ```console $ ps -ef |grep mkfs [...] root 1297291 1296907 0 12:00 pts/0 00:00:00 grep mkfs ``` ```console $ ps -ef |grep umount [...] root 1298500 1296907 0 12:01 pts/0 00:00:00 grep umount ``` ```console $ ps -ef |grep unmap [...] root 1298578 1296907 0 12:01 pts/0 00:00:00 grep unmap ``` If any commands are stuck check the dmesg logs from the node. Restarting the `csi-rbdplugin` pod also may help sometimes. If you dont see any stuck messages, confirm the network connectivity, Ceph health, and slow ops. Check the dmesg logs on the node where pvc mounting is failing or the `csi-rbdplugin` container of the `csi-rbdplugin-xxxx` pod on that node. ```console dmesg ``` If nothing else helps, get the last executed command from the ceph-csi pod logs and run it manually inside the provisioner or plugin pod to see if there are errors returned even if they couldn't be seen in the logs. ```console rbd ls --id=csi-rbd-node -m=10.111.136.166:6789 --key=AQDpIQhg+v83EhAAgLboWIbl+FL/nThJzoI3Fg== ``` Where `-m` is one of the mon endpoints and the `--key` is the key used by the CSI driver for accessing the Ceph cluster. When a node is lost, you will see application pods on the node stuck in the `Terminating` state while another pod is rescheduled and is in the `ContainerCreating` state. !!! important For clusters with Kubernetes version 1.26 or greater, see the to recover from the node loss. If using K8s 1.25 or older, continue with these instructions. To force delete the pod stuck in the `Terminating` state: ```console kubectl -n rook-ceph delete pod my-app-69cd495f9b-nl6hf --grace-period 0 --force ``` After the force delete, wait for a timeout of about 8-10 minutes. If the pod still not in the running state, continue with the next section to blocklist the node. To shorten the timeout, you can mark the node as \"blocklisted\" from the so Rook can safely failover the pod sooner. ```console $ ceph osd blocklist add <NODE_IP> # get the node IP you want to blocklist blocklisting <NODE_IP> ``` After running the above command within a few minutes the pod will be running. After you are absolutely sure the node is permanently offline and that the node no longer needs to be blocklisted, remove the node from the blocklist. ```console $ ceph osd blocklist rm <NODE_IP> un-blocklisting <NODE_IP> ```" } ]
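Relating to the snapshot troubleshooting earlier in this page, a minimal `VolumeSnapshotClass` for the RBD driver might look like the following sketch. The class name and secret names are assumptions based on a default Rook install and may differ in your cluster.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com   # the RBD driver name registered by the plugin pods
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
deletionPolicy: Delete
```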
{ "category": "Runtime", "file_name": "ceph-csi-common-issues.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CephCluster CRD Rook allows creation and customization of storage clusters through the custom resource definitions (CRDs). There are primarily four different modes in which to create your cluster. : Consume storage from host paths and raw devices : Dynamically provision storage underneath Rook by specifying the storage class Rook should use to consume storage (via PVCs) : Distribute Ceph mons across three zones, while storage (OSDs) is only configured in two zones : Connect your K8s applications to an external Ceph cluster See the separate topics for a description and examples of each of these scenarios. Settings can be specified at the global level to apply to the cluster as a whole, while other settings can be specified at more fine-grained levels. If any setting is unspecified, a suitable default will be used automatically. `name`: The name that will be used internally for the Ceph cluster. Most commonly the name is the same as the namespace since multiple clusters are not supported in the same namespace. `namespace`: The Kubernetes namespace that will be created for the Rook cluster. The services, pods, and other resources created by the operator will be added to this namespace. The common scenario is to create a single Rook cluster. If multiple clusters are created, they must not have conflicting devices or host paths. `external`: `enable`: if `true`, the cluster will not be managed by Rook but via an external entity. This mode is intended to connect to an existing cluster. In this case, Rook will only consume the external cluster. However, Rook will be able to deploy various daemons in Kubernetes such as object gateways, mds and nfs if an image is provided and will refuse otherwise. If this setting is enabled all* the other options will be ignored except `cephVersion.image` and `dataDirHostPath`. See . If `cephVersion.image` is left blank, Rook will refuse the creation of extra CRs like object, file and nfs. `cephVersion`: The version information for launching the ceph daemons. `image`: The image used for running the ceph daemons. For example, `quay.io/ceph/ceph:v18.2.2`. For more details read the . For the latest ceph images, see the . To ensure a consistent version of the image is running across all nodes in the cluster, it is recommended to use a very specific image version. Tags also exist that would give the latest version, but they are only recommended for test environments. For example, the tag `v17` will be updated each time a new Quincy build is released. Using the `v17` tag is not recommended in production because it may lead to inconsistent versions of the image running across different nodes in the cluster. `allowUnsupported`: If `true`, allow an unsupported major version of the Ceph release. Currently `quincy` and `reef` are supported. Future versions such as `squid` (v19) would require this to be set to `true`. Should be set to `false` in production. `imagePullPolicy`: The image pull policy for the ceph daemon pods. Possible values are `Always`, `IfNotPresent`, and `Never`. The default is `IfNotPresent`. `dataDirHostPath`: The path on the host () where config and data should be stored for each of the services. If the directory does not exist, it will be created. Because this directory persists on the host, it will remain after pods are deleted. 
Following paths and any of their subpaths must not be used*: `/etc/ceph`, `/rook` or" }, { "data": "WARNING*: For test scenarios, if you delete a cluster and start a new cluster on the same hosts, the path used by `dataDirHostPath` must be deleted. Otherwise, stale keys and other config will remain from the previous cluster and the new mons will fail to start. If this value is empty, each pod will get an ephemeral directory to store their config files that is tied to the lifetime of the pod running on that node. More details can be found in the Kubernetes . `skipUpgradeChecks`: if set to true Rook won't perform any upgrade checks on Ceph daemons during an upgrade. Use this at YOUR OWN RISK*, only if you know what you're doing. To understand Rook's upgrade process of Ceph, read the . `continueUpgradeAfterChecksEvenIfNotHealthy`: if set to true Rook will continue the OSD daemon upgrade process even if the PGs are not clean, or continue with the MDS upgrade even the file system is not healthy. `upgradeOSDRequiresHealthyPGs`: if set to true OSD upgrade process won't start until PGs are healthy. `dashboard`: Settings for the Ceph dashboard. To view the dashboard in your browser see the . `enabled`: Whether to enable the dashboard to view cluster status `urlPrefix`: Allows to serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy) `port`: Allows to change the default port where the dashboard is served `ssl`: Whether to serve the dashboard via SSL, ignored on Ceph versions older than `13.2.2` `monitoring`: Settings for monitoring Ceph using Prometheus. To enable monitoring on your cluster see the . `enabled`: Whether to enable the prometheus service monitor for an internal cluster. For an external cluster, whether to create an endpoint port for the metrics. Default is false. `metricsDisabled`: Whether to disable the metrics reported by Ceph. If false, the prometheus mgr module and Ceph exporter are enabled. If true, the prometheus mgr module and Ceph exporter are both disabled. Default is false. `externalMgrEndpoints`: external cluster manager endpoints `externalMgrPrometheusPort`: external prometheus manager module port. See for more details. `port`: The internal prometheus manager module port where the prometheus mgr module listens. The port may need to be configured when host networking is enabled. `interval`: The interval for the prometheus module to to scrape targets. `network`: For the network settings for the cluster, refer to the `mon`: contains mon related options For more details on the mons and when to choose a number other than `3`, see the . `mgr`: manager top level section `count`: set number of ceph managers between `1` to `2`. The default value is 2. If there are two managers, it is important for all mgr services point to the active mgr and not the standby mgr. Rook automatically updates the label `mgr_role` on the mgr pods to be either `active` or `standby`. Therefore, services need just to add the label `mgr_role=active` to their selector to point to the active mgr. This applies to all services that rely on the ceph mgr such as the dashboard or the prometheus metrics collector. `modules`: A list of Ceph manager modules to enable or disable. Note the \"dashboard\" and \"monitoring\" modules are already configured by other settings. 
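To make the `cephVersion`, `dashboard`, `monitoring`, and `mgr` settings just described more concrete, here is a partial `CephCluster` spec sketch. The values shown are illustrative, not recommendations.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.2   # pin a specific release for consistency across nodes
    allowUnsupported: false
  dataDirHostPath: /var/lib/rook
  mgr:
    count: 2                           # services should select the active mgr via the mgr_role=active label
  dashboard:
    enabled: true
    ssl: true
    port: 8443                         # example port; omit to use the default
  monitoring:
    enabled: true                      # requires the Prometheus ServiceMonitor CRDs to be present
```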
`crashCollector`: The settings for crash collector" }, { "data": "`disable`: is set to `true`, the crash collector will not run on any node where a Ceph daemon runs `daysToRetain`: specifies the number of days to keep crash entries in the Ceph cluster. By default the entries are kept indefinitely. `logCollector`: The settings for log collector daemon. `enabled`: if set to `true`, the log collector will run as a side-car next to each Ceph daemon. The Ceph configuration option `logtofile` will be turned on, meaning Ceph daemons will log on files in addition to still logging to container's stdout. These logs will be rotated. In case a daemon terminates with a segfault, the coredump files will be commonly be generated in `/var/lib/systemd/coredump` directory on the host, depending on the underlying OS location. (default: `true`) `periodicity`: how often to rotate daemon's log. (default: 24h). Specified with a time suffix which may be `h` for hours or `d` for days. Rotating too often will slightly impact the daemon's performance since the signal briefly interrupts the program.* `annotations`: `labels`: `placement`: `resources`: `priorityClassNames`: `storage`: Storage selection and configuration that will be used across the cluster. Note that these settings can be overridden for specific nodes. `useAllNodes`: `true` or `false`, indicating if all nodes in the cluster should be used for storage according to the cluster level storage selection and configuration values. If individual nodes are specified under the `nodes` field, then `useAllNodes` must be set to `false`. `nodes`: Names of individual nodes in the cluster that should have their storage included in accordance with either the cluster level configuration specified above or any node specific overrides described in the next section below. `useAllNodes` must be set to `false` to use specific nodes and their config. See below. `config`: Config settings applied to all OSDs on the node unless overridden by `devices`. See the below. * `onlyApplyOSDPlacement`: Whether the placement specific for OSDs is merged with the `all` placement. If `false`, the OSD placement will be merged with the `all` placement. If true, the `OSD placement will be applied` and the `all` placement will be ignored. The placement for OSDs is computed from several different places depending on the type of OSD: For non-PVCs: `placement.all` and `placement.osd` For PVCs: `placement.all` and inside the storageClassDeviceSets from the `placement` or `preparePlacement` `flappingRestartIntervalHours`: Defines the time for which an OSD pod will sleep before restarting, if it stopped due to flapping. Flapping occurs where OSDs are marked `down` by Ceph more than 5 times in 600 seconds. The OSDs will stay down when flapping since they likely have a bad disk or other issue that needs investigation. If the issue with the OSD is fixed manually, the OSD pod can be manually restarted. The sleep is disabled if this interval is set to 0. `disruptionManagement`: The section for configuring management of daemon disruptions `managePodBudgets`: if `true`, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically via the strategy outlined in the . The operator will block eviction of OSDs by default and unblock them safely when drains are detected. 
`osdMaintenanceTimeout`: is a duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the default DOWN/OUT interval) when it is draining. The default value is `30` minutes. `pgHealthCheckTimeout`: A duration in minutes that the operator will wait for the placement groups to become healthy (see `pgHealthyRegex`) after a drain was completed and OSDs came back up. Operator will continue with the next drain if the timeout" }, { "data": "No values or `0` means that the operator will wait until the placement groups are healthy before unblocking the next drain. `pgHealthyRegex`: The regular expression that is used to determine which PG states should be considered healthy. The default is `^(active\\+clean|active\\+clean\\+scrubbing|active\\+clean\\+scrubbing\\+deep)$`. `removeOSDsIfOutAndSafeToRemove`: If `true` the operator will remove the OSDs that are down and whose data has been restored to other OSDs. In Ceph terms, the OSDs are `out` and `safe-to-destroy` when they are removed. `cleanupPolicy`: `security`: `cephConfig`: `csi`: Official releases of Ceph Container images are available from [Docker Hub](https://hub.docker.com/r/ceph ). These are general purpose Ceph container with all necessary daemons and dependencies installed. | TAG | MEANING | | -- | | | vRELNUM | Latest release in this series (e.g., v17 = Quincy) | | vRELNUM.Y | Latest stable release in this stable series (e.g., v17.2) | | vRELNUM.Y.Z | A specific release (e.g., v18.2.2) | | vRELNUM.Y.Z-YYYYMMDD | A specific build (e.g., v18.2.2-20240311) | A specific will contain a specific release of Ceph as well as security fixes from the Operating System. `count`: Set the number of mons to be started. The number must be between `1` and `9`. The recommended value is most commonly `3`. For highest availability, an odd number of mons should be specified. For higher durability in case of mon loss, an even number can be specified although availability may be lower. To maintain quorum a majority of mons must be up. For example, if there are three mons, two must be up. If there are four mons, three must be up. If there are two mons, both must be up. If quorum is lost, see the to restore quorum from a single mon. `allowMultiplePerNode`: Whether to allow the placement of multiple mons on a single node. Default is `false` for production. Should only be set to `true` in test environments. `volumeClaimTemplate`: A `PersistentVolumeSpec` used by Rook to create PVCs for monitor storage. This field is optional, and when not provided, HostPath volume mounts are used. The current set of fields from template that are used are `storageClassName` and the `storage` resource request and limit. The default storage size request for new PVCs is `10Gi`. Ensure that associated storage class is configured to use `volumeBindingMode: WaitForFirstConsumer`. This setting only applies to new monitors that are created when the requested number of monitors increases, or when a monitor fails and is recreated. An . `failureDomainLabel`: The label that is expected on each node where the mons are expected to be deployed. The labels must be found in the list of well-known . `zones`: The failure domain names where the Mons are expected to be deployed. There must be at least three zones specified in the list. Each zone can be backed by a different storage class by specifying the `volumeClaimTemplate`. `name`: The name of the zone, which is the value of the domain label. 
`volumeClaimTemplate`: A `PersistentVolumeSpec` used by Rook to create PVCs for monitor storage. This field is optional, and when not provided, HostPath volume mounts are used. The current set of fields from template that are used are `storageClassName` and the `storage` resource request and limit. The default storage size request for new PVCs is `10Gi`. Ensure that associated storage class is configured to use `volumeBindingMode:" }, { "data": "This setting only applies to new monitors that are created when the requested number of monitors increases, or when a monitor fails and is recreated. An . `stretchCluster`: The stretch cluster settings that define the zones (or other failure domain labels) across which to configure the cluster. `failureDomainLabel`: The label that is expected on each node where the cluster is expected to be deployed. The labels must be found in the list of well-known . `subFailureDomain`: With a zone, the data replicas must be spread across OSDs in the subFailureDomain. The default is `host`. `zones`: The failure domain names where the Mons and OSDs are expected to be deployed. There must be three zones* specified in the list. This element is always named `zone` even if a non-default `failureDomainLabel` is specified. The elements have two values: `name`: The name of the zone, which is the value of the domain label. `arbiter`: Whether the zone is expected to be the arbiter zone which only runs a single mon. Exactly one zone must be labeled `true`. `volumeClaimTemplate`: A `PersistentVolumeSpec` used by Rook to create PVCs for monitor storage. This field is optional, and when not provided, HostPath volume mounts are used. The current set of fields from template that are used are `storageClassName` and the `storage` resource request and limit. The default storage size request for new PVCs is `10Gi`. Ensure that associated storage class is configured to use `volumeBindingMode: WaitForFirstConsumer`. This setting only applies to new monitors that are created when the requested number of monitors increases, or when a monitor fails and is recreated. An . The two zones that are not the arbiter zone are expected to have OSDs deployed. If these settings are changed in the CRD the operator will update the number of mons during a periodic check of the mon health, which by default is every 45 seconds. To change the defaults that the operator uses to determine the mon health and whether to failover a mon, refer to the . The intervals should be small enough that you have confidence the mons will maintain quorum, while also being long enough to ignore network blips where mons are failed over too often. You can use the cluster CR to enable or disable any manager module. This can be configured like so: ```yaml mgr: modules: name: <name of the module> enabled: true ``` Some modules will have special configuration to ensure the module is fully functional after being enabled. Specifically: `pgautoscaler`: Rook will configure all new pools with PG autoscaling by setting: `osdpooldefaultpgautoscalemode = on` If not specified, the default SDN will be used. Configure the network that will be enabled for the cluster and services. `provider`: Specifies the network provider that will be used to connect the network interface. You can choose between `host`, and `multus`. `selectors`: Used for `multus` provider only. Select NetworkAttachmentDefinitions to use for Ceph networks. `public`: Select the NetworkAttachmentDefinition to use for the public network. 
`cluster`: Select the NetworkAttachmentDefinition to use for the cluster network. `addressRanges`: Used for `host` or `multus` providers only. Allows overriding the address ranges (CIDRs) that Ceph will listen on. `public`: A list of individual network ranges in CIDR format to use for Ceph's public network. `cluster`: A list of individual network ranges in CIDR format to use for Ceph's cluster network. `ipFamily`: Specifies the network stack Ceph daemons should listen" }, { "data": "`dualStack`: Specifies that Ceph daemon should listen on both IPv4 and IPv6 network stacks. `connections`: Settings for network connections using Ceph's msgr2 protocol `requireMsgr2`: Whether to require communication over msgr2. If true, the msgr v1 port (6789) will be disabled and clients will be required to connect to the Ceph cluster with the v2 port (3300). Requires a kernel that supports msgr2 (kernel 5.11 or CentOS 8.4 or newer). Default is false. `encryption`: Settings for encryption on the wire to Ceph daemons `enabled`: Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network. The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted. When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check. IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only, set \"mounter: rbd-nbd\" in the rbd storage class, or \"mounter: fuse\" in the cephfs storage class. The nbd and fuse drivers are not recommended in production since restarting the csi driver pod will disconnect the volumes. If this setting is enabled, CephFS volumes also require setting `CSICEPHFSKERNELMOUNTOPTIONS` to `\"ms_mode=secure\"` in operator.yaml. `compression`: `enabled`: Whether to compress the data in transit across the wire. The default is false. See the kernel requirements above for encryption. !!! caution Changing networking configuration after a Ceph cluster has been deployed is only supported for the network encryption settings. Changing other network settings is NOT supported and will likely result in a non-functioning cluster. Selecting a non-default network provider is an advanced topic. Read more in the documentation. Provide single-stack IPv4 or IPv6 protocol to assign corresponding addresses to pods and services. This field is optional. Possible inputs are IPv6 and IPv4. Empty value will be treated as IPv4. To enable dual stack see the . In addition to the cluster level settings specified above, each individual node can also specify configuration to override the cluster level settings and defaults. If a node does not specify any configuration then it will inherit the cluster level settings. `name`: The name of the node, which should match its `kubernetes.io/hostname` label. `config`: Config settings applied to all OSDs on the node unless overridden by `devices`. See the below. When `useAllNodes` is set to `true`, Rook attempts to make Ceph cluster management as hands-off as possible while still maintaining reasonable data safety. If a usable node comes online, Rook will begin to use it automatically. 
To maintain a balance between hands-off usability and data safety, Nodes are removed from Ceph as OSD hosts only (1) if the node is deleted from Kubernetes itself or (2) if the node has its taints or affinities modified in such a way that the node is no longer usable by Rook. Any changes to taints or affinities, intentional or unintentional, may affect the data reliability of the Ceph cluster. In order to help protect against this somewhat, deletion of nodes by taint or affinity modifications must be \"confirmed\" by deleting the Rook Ceph operator pod and allowing the operator deployment to restart the pod. For production clusters, we recommend that `useAllNodes` is set to `false` to prevent the Ceph cluster from suffering reduced data reliability unintentionally due to a user" }, { "data": "When `useAllNodes` is set to `false`, Rook relies on the user to be explicit about when nodes are added to or removed from the Ceph cluster. Nodes are only added to the Ceph cluster if the node is added to the Ceph cluster resource. Similarly, nodes are only removed if the node is removed from the Ceph cluster resource. Nodes can be added and removed over time by updating the Cluster CRD, for example with `kubectl -n rook-ceph edit cephcluster rook-ceph`. This will bring up your default text editor and allow you to add and remove storage nodes from the cluster. This feature is only available when `useAllNodes` has been set to `false`. Below are the settings for host-based cluster. This type of cluster can specify devices for OSDs, both at the cluster and individual node level, for selecting which storage resources will be included in the cluster. `useAllDevices`: `true` or `false`, indicating whether all devices found on nodes in the cluster should be automatically consumed by OSDs. Not recommended* unless you have a very controlled environment where you will not risk formatting of devices with existing data. When `true`, all devices and partitions will be used. Is overridden by `deviceFilter` if specified. LVM logical volumes are not picked by `useAllDevices`. `deviceFilter`: A regular expression for short kernel names of devices (e.g. `sda`) that allows selection of devices and partitions to be consumed by OSDs. LVM logical volumes are not picked by `deviceFilter`.If individual devices have been specified for a node then this filter will be ignored. This field uses . For example: `sdb`: Only selects the `sdb` device if found `^sd.`: Selects all devices starting with `sd` `^sd[a-d]`: Selects devices starting with `sda`, `sdb`, `sdc`, and `sdd` if found `^s`: Selects all devices that start with `s` `^[^r]`: Selects all devices that do not* start with `r` `devicePathFilter`: A regular expression for device paths (e.g. `/dev/disk/by-path/pci-0:1:2:3-scsi-1`) that allows selection of devices and partitions to be consumed by OSDs. LVM logical volumes are not picked by `devicePathFilter`.If individual devices or `deviceFilter` have been specified for a node then this filter will be ignored. This field uses . For example: `^/dev/sd.`: Selects all devices starting with `sd` `^/dev/disk/by-path/pci-.`: Selects all devices which are connected to PCI bus `devices`: A list of individual device names belonging to this node to include in the storage cluster. `name`: The name of the devices and partitions (e.g., `sda`). The full udev path can also be specified for devices, partitions, and logical volumes (e.g. `/dev/disk/by-id/ata-ST4000DM004-XXXX` - this will not change after reboots). 
`config`: Device-specific config settings. See the below Host-based cluster supports raw devices, partitions, logical volumes, encrypted devices, and multipath devices. Be sure to see the for additional considerations. Below are the settings for a PVC-based cluster. `storageClassDeviceSets`: Explained in The following are the settings for Storage Class Device Sets which can be configured to create OSDs that are backed by block mode PVs. `name`: A name for the set. `count`: The number of devices in the set. `resources`: The CPU and RAM requests/limits for the devices. (Optional) `placement`: The placement criteria for the devices. (Optional) Default is no placement criteria. The syntax is the same as for . It supports `nodeAffinity`, `podAffinity`, `podAntiAffinity` and `tolerations` keys. It is recommended to configure the placement such that the OSDs will be as evenly spread across nodes as" }, { "data": "At a minimum, anti-affinity should be added so at least one OSD will be placed on each available nodes. However, if there are more OSDs than nodes, this anti-affinity will not be effective. Another placement scheme to consider is to add labels to the nodes in such a way that the OSDs can be grouped on those nodes, create multiple storageClassDeviceSets, and add node affinity to each of the device sets that will place the OSDs in those sets of nodes. Rook will automatically add required nodeAffinity to the OSD daemons to match the topology labels that are found on the nodes where the OSD prepare jobs ran. To ensure data durability, the OSDs are required to run in the same topology that the Ceph CRUSH map expects. For example, if the nodes are labeled with rack topology labels, the OSDs will be constrained to a certain rack. Without the topology labels, Rook will not constrain the OSDs beyond what is required by the PVs, for example to run in the zone where provisioned. See the section for the related labels. `preparePlacement`: The placement criteria for the preparation of the OSD devices. Creating OSDs is a two-step process and the prepare job may require different placement than the OSD daemons. If the `preparePlacement` is not specified, the `placement` will instead be applied for consistent placement for the OSD prepare jobs and OSD deployments. The `preparePlacement` is only useful for `portable` OSDs in the device sets. OSDs that are not portable will be tied to the host where the OSD prepare job initially runs. For example, provisioning may require topology spread constraints across zones, but the OSD daemons may require constraints across hosts within the zones. `portable`: If `true`, the OSDs will be allowed to move between nodes during failover. This requires a storage class that supports portability (e.g. `aws-ebs`, but not the local storage provisioner). If `false`, the OSDs will be assigned to a node permanently. Rook will configure Ceph's CRUSH map to support the portability. `tuneDeviceClass`: For example, Ceph cannot detect AWS volumes as HDDs from the storage class \"gp2\", so you can improve Ceph performance by setting this to true. `tuneFastDeviceClass`: For example, Ceph cannot detect Azure disks as SSDs from the storage class \"managed-premium\", so you can improve Ceph performance by setting this to true.. `volumeClaimTemplates`: A list of PVC templates to use for provisioning the underlying storage devices. `metadata.name`: \"data\", \"metadata\", or \"wal\". If a single template is provided, the name must be \"data\". 
If the name is \"metadata\" or \"wal\", the devices are used to store the Ceph metadata or WAL respectively. In both cases, the devices must be raw devices or LVM logical volumes. `resources.requests.storage`: The desired capacity for the underlying storage devices. `storageClassName`: The StorageClass to provision PVCs from. Default would be to use the cluster-default StorageClass. `volumeMode`: The volume mode to be set for the PVC. Which should be Block `accessModes`: The access mode for the PVC to be bound by OSD. `schedulerName`: Scheduler name for OSD pod placement. (Optional) `encrypted`: whether to encrypt all the OSDs in a given storageClassDeviceSet See the table in to know the allowed configurations. The following storage selection settings are specific to Ceph and do not apply to other backends. All variables are key-value pairs represented as strings. `metadataDevice`: Name of a device, or lvm to use for the metadata of OSDs on each" }, { "data": "Performance can be improved by using a low latency device (such as SSD or NVMe) as the metadata device, while other spinning platter (HDD) devices on a node are used to store data. Provisioning will fail if the user specifies a `metadataDevice` but that device is not used as a metadata device by Ceph. Notably, `ceph-volume` will not use a device of the same device class (HDD, SSD, NVMe) as OSD devices for metadata, resulting in this failure. `databaseSizeMB`: The size in MB of a bluestore database. Include quotes around the size. `walSizeMB`: The size in MB of a bluestore write ahead log (WAL). Include quotes around the size. `deviceClass`: The to use for this selection of storage devices. (By default, if a device's class has not already been set, OSDs will automatically set a device's class to either `hdd`, `ssd`, or `nvme` based on the hardware properties exposed by the Linux kernel.) These storage classes can then be used to select the devices backing a storage pool by specifying them as the value of . `initialWeight`: The initial OSD weight in TiB units. By default, this value is derived from OSD's capacity. `primaryAffinity`: The value of an OSD, within range `[0, 1]` (default: `1`). `osdsPerDevice`*: The number of OSDs to create on each device. High performance devices such as NVMe can handle running multiple OSDs. If desired, this can be overridden for each node and each device. `encryptedDevice`*: Encrypt OSD volumes using dmcrypt (\"true\" or \"false\"). By default this option is disabled. See for more information on encryption in Ceph. (Resizing is not supported for host-based clusters.) `crushRoot`: The value of the `root` CRUSH map label. The default is `default`. Generally, you should not need to change this. However, if any of your topology labels may have the value `default`, you need to change `crushRoot` to avoid conflicts, since CRUSH map values need to be unique. Allowed configurations are: | block device type | host-based cluster | PVC-based cluster | | :- | : | : | | disk | | | | part | `encryptedDevice` must be `false` | `encrypted` must be `false` | | lvm | `metadataDevice` must be `\"\"`, `osdsPerDevice` must be `1`, and `encryptedDevice` must be `false` | `metadata.name` must not be `metadata` or `wal` and `encrypted` must be `false` | | crypt | | | | mpath | | | If `metadataDevice` is specified in the global OSD configuration or in the node level OSD configuration, the metadata device will be shared between all OSDs on the same node. In other words, OSDs will be initialized by `lvm batch`. 
In this case, we can't use partition device. If `metadataDevice` is specified in the device local configuration, we can use partition as metadata device. In other words, OSDs are initialized by `lvm prepare`. Annotations and Labels can be specified so that the Rook components will have those annotations / labels added to them. You can set annotations / labels for Rook components for the list of key value pairs: `all`: Set annotations / labels for all components except" }, { "data": "`mgr`: Set annotations / labels for MGRs `mon`: Set annotations / labels for mons `osd`: Set annotations / labels for OSDs `dashboard`: Set annotations / labels for the dashboard service `prepareosd`: Set annotations / labels for OSD Prepare Jobs `monitoring`: Set annotations / labels for service monitor `crashcollector`: Set annotations / labels for crash collectors `clusterMetadata`: Set annotations only to `rook-ceph-mon-endpoints` configmap and the `rook-ceph-mon` and `rook-ceph-admin-keyring` secrets. These annotations will not be merged with the `all` annotations. The common usage is for backing up these critical resources with `kubed`. Note the clusterMetadata annotation will not be merged with the `all` annotation. When other keys are set, `all` will be merged together with the specific component. Placement configuration for the cluster services. It includes the following keys: `mgr`, `mon`, `arbiter`, `osd`, `prepareosd`, `cleanup`, and `all`. Each service will have its placement configuration generated by merging the generic configuration under `all` with the most specific one (which will override any attributes). In stretch clusters, if the `arbiter` placement is specified, that placement will only be applied to the arbiter. Neither will the `arbiter` placement be merged with the `all` placement to allow the arbiter to be fully independent of other daemon placement. The remaining mons will still use the `mon` and/or `all` sections. !!! note Placement of OSD pods is controlled using the , not the general `placement` configuration. A Placement configuration is specified (according to the kubernetes PodSpec) as: `nodeAffinity`: kubernetes `podAffinity`: kubernetes `podAntiAffinity`: kubernetes `tolerations`: list of kubernetes `topologySpreadConstraints`: kubernetes If you use `labelSelector` for `osd` pods, you must write two rules both for `rook-ceph-osd` and `rook-ceph-osd-prepare` like . It comes from the design that there are these two pods for an OSD. For more detail, see the and . The Rook Ceph operator creates a Job called `rook-ceph-detect-version` to detect the full Ceph version used by the given `cephVersion.image`. The placement from the `mon` section is used for the Job except for the `PodAntiAffinity` field. To control where various services will be scheduled by kubernetes, use the placement configuration sections below. The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node`. Specific node affinity and tolerations that only apply to the`mon`daemons in this example require the label `role=storage-mon-node` and also tolerate the control plane taint. 
```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18.2.2
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-node
    mon:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-mon-node
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/control-plane
        operator: Exists
```

Resources should be specified so that the Rook components are handled after . This helps keep Rook components running when, for example, a node runs out of memory, because components that declare their resources get a better Quality of Service class and are not killed first. You can set resource requests/limits for Rook components through the structure in the following keys (a combined sketch of these keys is shown after the note below):

* `mon`: Set resource requests/limits for mons.
* `osd`: Set resource requests/limits for OSDs. This key applies to all OSDs regardless of their device class. To apply resource requests/limits to OSDs of a particular device class, use the device-class-specific keys below. If the memory resource is declared, Rook will automatically set the OSD configuration `osd_memory_target` to the same value. This aims to ensure that the actual OSD memory consumption is consistent with the OSD pods' resource declaration.
* `osd-<deviceClass>`: Set resource requests/limits for OSDs on a specific device class. Rook will automatically detect `hdd`, `ssd`, or `nvme` device classes. Custom device classes can also be used.
* `mgr`: Set resource requests/limits for MGRs.
* `mgr-sidecar`: Set resource requests/limits for the MGR sidecar, which is only created when `mgr.count: 2`. The sidecar requires very few resources since it only executes every 15 seconds to query Ceph for the active mgr and update the mgr services if the active mgr changed.
* `prepareosd`: Set resource requests/limits for the OSD prepare job.
* `crashcollector`: Set resource requests/limits for the crash collector. This pod runs wherever there is a Ceph pod running. It scrapes for Ceph daemon core dumps and sends them to the Ceph manager crash module so that core dumps are centralized and can be easily listed/accessed. You can read more about the .
* `logcollector`: Set resource requests/limits for the log collector. When enabled, this container runs as a side-car to each Ceph daemon.
* `cleanup`: Set resource requests/limits for the cleanup job, responsible for wiping the cluster's data after uninstall.
* `exporter`: Set resource requests/limits for the Ceph exporter.

In order to provide the best possible experience running Ceph in containers, Rook internally recommends minimum memory limits if resource limits are passed. If a user configures a limit or request value that is too low, Rook will still run the pod(s) and print a warning to the operator log.

* `mon`: 1024MB
* `mgr`: 512MB
* `osd`: 2048MB
* `crashcollector`: 60MB
* `mgr-sidecar`: 100MB limit, 40MB requests
* `prepareosd`: no limits (see the note)
* `exporter`: 128MB limit, 50MB requests

!!! note We recommend not setting memory limits on the OSD prepare job to prevent OSD provisioning failure due to memory constraints. The OSD prepare job bursts memory usage during the OSD provisioning depending on the size of the device, typically 1-2Gi for large disks. The OSD prepare job only bursts a single time per OSD.
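As referenced above, here is a minimal sketch showing how the resource keys could be combined in a `CephCluster` spec. The figures are illustrative placeholders loosely based on the recommended minimums listed above, not sizing guidance for a real cluster.

```yaml
spec:
  resources:
    mon:
      requests:
        cpu: "500m"
        memory: "1Gi"
      limits:
        memory: "2Gi"
    osd:
      requests:
        cpu: "1"
        memory: "4Gi"
      limits:
        memory: "4Gi"
    osd-hdd:                 # overrides the generic osd key for OSDs of the hdd device class
      requests:
        cpu: "500m"
        memory: "2Gi"
    mgr:
      requests:
        cpu: "500m"
        memory: "512Mi"
      limits:
        memory: "1Gi"
    crashcollector:
      limits:
        memory: "60Mi"
    # prepareosd is deliberately left without a memory limit, as recommended in the note above
```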
All future runs of the OSD prepare job will detect the OSD is already provisioned and skip the provisioning. !!! hint The resources for MDS daemons are not configured in the Cluster. Refer to the instead. For more information on resource requests/limits see the official Kubernetes documentation: `requests`: Requests for cpu or memory. `cpu`: Request for CPU (example: one CPU core `1`, 50% of one CPU core `500m`). `memory`: Limit for Memory (example: one gigabyte of memory `1Gi`, half a gigabyte of memory `512Mi`). `limits`: Limits for cpu or memory. `cpu`: Limit for CPU (example: one CPU core `1`, 50% of one CPU core `500m`). `memory`: Limit for Memory (example: one gigabyte of memory `1Gi`, half a gigabyte of memory `512Mi`). !!! warning Before setting resource requests/limits, please take a look at the Ceph documentation for recommendations for each component: . This example shows that you can override these requests/limits for OSDs per node when using `useAllNodes: false` in the `node` item in the `nodes` list. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph spec: cephVersion: image: quay.io/ceph/ceph:v18.2.2 dataDirHostPath: /var/lib/rook mon: count: 3 allowMultiplePerNode: false storage: useAllNodes: false nodes: name: \"172.17.4.201\" resources: limits: memory: \"4096Mi\" requests: cpu: \"2\" memory: \"4096Mi\" ``` Priority class names can be specified so that the Rook components will have those priority class names added to them. You can set priority class names for Rook components for the list of key value pairs: `all`: Set priority class names for MGRs, Mons, OSDs, and crashcollectors. `mgr`: Set priority class names for MGRs. Examples default to system-cluster-critical. `mon`: Set priority class names for Mons. Examples default to system-node-critical. `osd`: Set priority class names for OSDs. Examples default to" }, { "data": "`crashcollector`: Set priority class names for crashcollectors. The specific component keys will act as overrides to `all`. The Rook Ceph operator will monitor the state of the CephCluster on various components by default. The following CRD settings are available: `healthCheck`: main ceph cluster health monitoring section Currently three health checks are implemented: `mon`: health check on the ceph monitors, basically check whether monitors are members of the quorum. If after a certain timeout a given monitor has not joined the quorum back it will be failed over and replace by a new monitor. `osd`: health check on the ceph osds `status`: ceph health status check, periodically check the Ceph health state and reflects it in the CephCluster CR status field. The liveness probe and startup probe of each daemon can also be controlled via `livenessProbe` and `startupProbe` respectively. The settings are valid for `mon`, `mgr` and `osd`. Here is a complete example for both `daemonHealth`, `livenessProbe`, and `startupProbe`: ```yaml healthCheck: daemonHealth: mon: disabled: false interval: 45s timeout: 600s osd: disabled: false interval: 60s status: disabled: false livenessProbe: mon: disabled: false mgr: disabled: false osd: disabled: false startupProbe: mon: disabled: false mgr: disabled: false osd: disabled: false ``` The probe's timing values and thresholds (but not the probe itself) can also be overridden. For more info, refer to the . 
For example, you could change the `mgr` probe by applying: ```yaml healthCheck: startupProbe: mgr: disabled: false probe: initialDelaySeconds: 3 periodSeconds: 3 failureThreshold: 30 livenessProbe: mgr: disabled: false probe: initialDelaySeconds: 3 periodSeconds: 3 ``` Changing the liveness probe is an advanced operation and should rarely be necessary. If you want to change these settings then modify the desired settings. The operator is regularly configuring and checking the health of the cluster. The results of the configuration and health checks can be seen in the `status` section of the CephCluster CR. ```console kubectl -n rook-ceph get CephCluster -o yaml ``` ```yaml [...] status: ceph: health: HEALTH_OK lastChecked: \"2021-03-02T21:22:11Z\" capacity: bytesAvailable: 22530293760 bytesTotal: 25757220864 bytesUsed: 3226927104 lastUpdated: \"2021-03-02T21:22:11Z\" message: Cluster created successfully phase: Ready state: Created storage: deviceClasses: name: hdd version: image: quay.io/ceph/ceph:v18.2.2 version: 16.2.6-0 conditions: lastHeartbeatTime: \"2021-03-02T21:22:11Z\" lastTransitionTime: \"2021-03-02T21:21:09Z\" message: Cluster created successfully reason: ClusterCreated status: \"True\" type: Ready ``` Ceph is constantly monitoring the health of the data plane and reporting back if there are any warnings or errors. If everything is healthy from Ceph's perspective, you will see `HEALTH_OK`. If Ceph reports any warnings or errors, the details will be printed to the status. If further troubleshooting is needed to resolve these issues, the toolbox will likely be needed where you can run `ceph` commands to find more details. The `capacity` of the cluster is reported, including bytes available, total, and used. The available space will be less that you may expect due to overhead in the OSDs. The `conditions` represent the status of the Rook operator. If the cluster is fully configured and the operator is stable, the `Ready` condition is raised with `ClusterCreated` reason and no other conditions. The cluster will remain in the `Ready` condition after the first successful configuration since it is expected the storage is consumable from this point on. If there are issues preventing the storage layer from working, they are expected to show as Ceph health errors. If the cluster is externally connected successfully, the `Ready` condition will have the reason" }, { "data": "If the operator is currently being configured or the operator is checking for update, there will be a `Progressing` condition. If there was a failure, the condition(s) status will be `false` and the `message` will give a summary of the error. See the operator log for more details. There are several other properties for the overall status including: `message`, `phase`, and `state`: A summary of the overall current state of the cluster, which is somewhat duplicated from the conditions for backward compatibility. `storage.deviceClasses`: The names of the types of storage devices that Ceph discovered in the cluster. These types will be `ssd` or `hdd` unless they have been overridden with the `crushDeviceClass` in the `storageClassDeviceSets`. `version`: The version of the Ceph image currently deployed. The topology of the cluster is important in production environments where you want your data spread across failure domains. The topology can be controlled by adding labels to the nodes. When the labels are found on a node at first OSD deployment, Rook will add them to the desired level in the . 
The complete list of labels in hierarchy order from highest to lowest is: ```text topology.kubernetes.io/region topology.kubernetes.io/zone topology.rook.io/datacenter topology.rook.io/room topology.rook.io/pod topology.rook.io/pdu topology.rook.io/row topology.rook.io/rack topology.rook.io/chassis ``` For example, if the following labels were added to a node: ```console kubectl label node mynode topology.kubernetes.io/zone=zone1 kubectl label node mynode topology.rook.io/rack=zone1-rack1 ``` These labels would result in the following hierarchy for OSDs on that node (this command can be run in the Rook toolbox): ```console $ ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.01358 root default -5 0.01358 zone zone1 -4 0.01358 rack rack1 -3 0.01358 host mynode 0 hdd 0.00679 osd.0 up 1.00000 1.00000 1 hdd 0.00679 osd.1 up 1.00000 1.00000 ``` Ceph requires unique names at every level in the hierarchy (CRUSH map). For example, you cannot have two racks with the same name that are in different zones. Racks in different zones must be named uniquely. Note that the `host` is added automatically to the hierarchy by Rook. The host cannot be specified with a topology label. All topology labels are optional. !!! hint When setting the node labels prior to `CephCluster` creation, these settings take immediate effect. However, applying this to an already deployed `CephCluster` requires removing each node from the cluster first and then re-adding it with new configuration to take effect. Do this node by node to keep your data safe! Check the result with `ceph osd tree` from the . The OSD tree should display the hierarchy for the nodes that already have been re-added. To utilize the `failureDomain` based on the node labels, specify the corresponding option in the ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: failureDomain: rack # this matches the topology labels on nodes replicated: size: 3 ``` This configuration will split the replication of volumes across unique racks in the data center setup. During deletion of a CephCluster resource, Rook protects against accidental or premature destruction of user data by blocking deletion if there are any other Rook Ceph Custom Resources that reference the CephCluster being" }, { "data": "Rook will warn about which other resources are blocking deletion in three ways until all blocking resources are deleted: An event will be registered on the CephCluster resource A status condition will be added to the CephCluster resource An error will be added to the Rook Ceph operator log Rook has the ability to cleanup resources and data that were deployed when a CephCluster is removed. The policy settings indicate which data should be forcibly deleted and in what way the data should be wiped. The `cleanupPolicy` has several fields: `confirmation`: Only an empty string and `yes-really-destroy-data` are valid values for this field. If this setting is empty, the `cleanupPolicy` settings will be ignored and Rook will not cleanup any resources during cluster removal. To reinstall the cluster, the admin would then be required to follow the to delete the data on hosts. If this setting is `yes-really-destroy-data`, the operator will automatically delete the data on hosts. Because this cleanup policy is destructive, after the confirmation is set to `yes-really-destroy-data` Rook will stop configuring the cluster as if the cluster is about to be destroyed. 
* `sanitizeDisks`: represents advanced settings that can be used to delete data on drives.
  * `method`: indicates if the entire disk should be sanitized or simply Ceph's metadata. Possible choices are `quick` (default) or `complete`.
  * `dataSource`: indicates where to get random bytes from to write on the disk. Possible choices are `zero` (default) or `random`. Using random sources will consume entropy from the system and will take much more time than the zero source.
  * `iteration`: overwrite N times instead of the default (1). Takes an integer value.
* `allowUninstallWithVolumes`: If set to true, then the cephCluster deletion doesn't wait for the PVCs to be deleted. Default is `false`.

To automate activation of the cleanup, you can use the following command. WARNING: DATA WILL BE PERMANENTLY DELETED:

```console
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
```

Nothing will happen until the deletion of the CR is requested, so this can still be reverted. However, all new configuration by the operator will be blocked with this cleanup policy enabled. Rook waits for the deletion of PVs provisioned using the cephCluster before proceeding to delete the cephCluster. To force deletion of the cephCluster without waiting for the PVs to be deleted, you can set `allowUninstallWithVolumes` to true under `spec.cleanupPolicy` (a combined sketch of these fields is shown at the end of this section).

!!! attention This feature is experimental.

The Ceph config options are applied after the MONs are all in quorum and running. To set Ceph config options, you can add them to your `CephCluster` spec as shown below. See the for detailed information about how to configure Ceph.

```yaml
spec:
  cephConfig:
    global:
      osd_pool_default_size: "3"
      mon_warn_on_pool_no_redundancy: "false"
      osd_crush_update_on_start: "false"
    "osd.*":
      osd_max_scrubs: "10"
```

!!! warning Rook performs no direct validation on these config options, so the validity of the settings is the user's responsibility. The operator does not unset any removed config options; it is the user's responsibility to unset them, or to reset each removed option to its default value manually using the Ceph CLI.

The CSI driver options mentioned here are applied per Ceph cluster. The following options are available:

* `readAffinity`: RBD and CephFS volumes allow serving reads from an OSD in proximity to the client. Refer to the read affinity section in the for more details.
  * `enabled`: Whether to enable read affinity for the CSI driver. Default is `false`.
  * `crushLocationLabels`: Node labels to use as CRUSH location, corresponding to the values set in the CRUSH map. Defaults to the labels mentioned in the topic.
* `cephfs`:
  * `kernelMountOptions`: Mount options for kernel mounter. Refer to the for more details.
  * `fuseMountOptions`:
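Returning to the cleanup policy described above, its fields combine into a `cleanupPolicy` section like the following sketch; the values simply restate the documented choices and defaults.

```yaml
spec:
  cleanupPolicy:
    # Leave empty to disable cleanup; set to the exact string below to wipe data on deletion.
    confirmation: "yes-really-destroy-data"
    sanitizeDisks:
      method: quick        # or "complete" to sanitize the entire disk instead of only Ceph's metadata
      dataSource: zero     # or "random" (slower, consumes entropy)
      iteration: 1         # number of overwrite passes
    allowUninstallWithVolumes: false   # true: don't wait for PVC/PV deletion during uninstall
```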
{ "category": "Runtime", "file_name": "ceph-cluster-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "`oio-sds` allows you to alter a lot of configuration at compile-time as well at runtime. We use a minimal generation tool for a set of variables that can be modified at runtime, and whose default value can be changed at the compile-time. Those variables are described Some variables, though, are not configurable yet, and still require a value to be fixed once for all when compiling the code. Please find below the list of the `cmake` directives to control such variables. | Macro | Default | Description | | -- | - | -- | | OIOSDS_RELEASE | \"master\" | Global release name | | OIOSDSPROJECTVERSION_SHORT | \"1.0\" | Minor version number | Used by `gcc` | Macro | Default | Description | | -- | - | -- | | DAEMONDEFAULTTIMEOUT_READ | 1000 | How long a gridd will block on a recv() (in milliseconds) | | DAEMONDEFAULTTIMEOUT_ACCEPT | 1000 | How long a gridd will block on a accept() (in milliseconds) | | Macro | Default | Description | | -- | - | -- | | OIOEVTBEANSTALKDDEFAULTTUBE | \"oio\" | | | Macro | Default | Description | | -- | - | -- | | GCLUSTERRUNDIR | \"/var/run\" | Prefix to spool. | | GCLUSTERCONFIGFILE_PATH | \"/etc/oio/sds.conf\" | System-wide configuration file | | GCLUSTERCONFIGDIR_PATH | \"/etc/oio/sds.conf.d\" | System-wide configuration directory for additional files. | | GCLUSTERCONFIGLOCAL_PATH | \".oio/sds.conf\" | Local configuration directory. | | GCLUSTERAGENTSOCK_PATH | \"/var/run/oio-sds-agent.sock\" | Default path for agent's socket. | | Macro | Default | Description | | -- | - | -- | | PROXYD_PREFIX | \"v3.0\" | Prefix applied to proxyd's URL, second version (with accounts) | | PROXYDHEADERPREFIX | \"X-oio-\" | Prefix for all the headers sent to the proxy | | PROXYDHEADERREQID | PROXYDHEADERPREFIX \"req-id\" | Header whose value is printed in access log, destined to aggregate several requests belonging to the same session. | | PROXYDHEADERNOEMPTY | PROXYDHEADERPREFIX \"no-empty-list\" | Flag sent to the proxy to turn empty list (results) into 404 not found. | | Macro | Default | Description | | -- | - | -- | | OIOSTATPREFIX_REQ | \"counter req.hits\" | | | OIOSTATPREFIX_TIME | \"counter req.time\" | | | Macro | Default | Description | | -- | - | -- | | SQLXDIRSCHEMAS | NULL | Default directory used to gather applicative schema of SQLX bases. NULL by default, meaning that no directory is set, so that there is no attempt to load a schema. 
| | SQLXADMINPREFIX_SYS | \"sys.\" | Prefix used for keys used in admin table of sqlite bases | | SQLXADMINPREFIX_USER | \"user.\" | Prefix used for keys used in admin table of sqlite bases | | SQLXADMININITFLAG | SQLXADMINPREFIX_SYS \"sqlx.init\" | Key used in admin table of sqlite bases | | SQLXADMINSTATUS | SQLXADMINPREFIX_SYS \"sqlx.flags\" | Key used in admin table of sqlite bases | | SQLXADMINREFERENCE | SQLXADMINPREFIX_SYS \"sqlx.reference\" | Key used in admin table of sqlite bases | | SQLXADMINBASENAME | SQLXADMINPREFIX_SYS \"sqlx.name\" | Key used in admin table of sqlite bases | | SQLXADMINBASETYPE | SQLXADMINPREFIX_SYS \"sqlx.type\" | Key used in admin table of sqlite bases | | SQLXADMINNAMESPACE | SQLXADMINPREFIX_SYS \"sqlx.ns\" | Key used in admin table of sqlite bases | | Macro | Default | Description | | -- | - | -- | | M2V2ADMINPREFIXSYS | SQLXADMINPREFIXSYS" }, { "data": "| | | M2V2ADMINPREFIXUSER | SQLXADMINPREFIXUSER \"m2.\" | | | M2V2ADMINVERSION | M2V2ADMINPREFIX_SYS \"version\" | | | M2V2ADMINQUOTA | M2V2ADMINPREFIX_SYS \"quota\" | | | M2V2ADMINSIZE | M2V2ADMINPREFIX_SYS \"usage\" | | | M2V2ADMINCTIME | M2V2ADMINPREFIX_SYS \"ctime\" | | | M2V2ADMINVERSIONINGPOLICY | M2V2ADMINPREFIXSYS \"policy.version\" | | | M2V2ADMINSTORAGEPOLICY | M2V2ADMINPREFIXSYS \"policy.storage\" | | | M2V2ADMINKEEPDELETEDDELAY | M2V2ADMINPREFIXSYS \"keepdeleted_delay\" | | | META2INITFLAG | M2V2ADMINPREFIX_SYS \"init\" | | | Macro | Default | Description | | -- | - | -- | | RAWXHEADERPREFIX | \"X-oio-chunk-meta-\" | Prefix for all the headers sent/received with the rawx | | Macro | Default | Description | | -- | - | -- | |OIOUSEOLDFMEMOPEN|undefined_|Use the old implementation of glibc's `fmemopen`. Starting from glibc 2.22 the new implementation lacks the binary mode which made things work.| | Name | Type | Default | Description | | - | - | - | -- | | docroot | string | MANDATORY | Chunks root directory | | namespace | string | MANDATORY | Namespace name | | hash_width | number | 3 | How many hexdigits must be used to name the indirection directories | | hash_depth | number | 1 | How many levels of directories are used to store chunks | | fsync | boolean | disabled | At the end of an upload, perform a fsync() on the chunk file itself | | fsync_dir | boolean | enabled | At the end of an upload, perform a fsync() on the directory holding the chunk | | fallocate | boolean | enabled | Preallocate space for the chunk file | | checksum | string (enabled,disabled,smart) | enabled | Enable checksumming the body of PUT | Should an error be raised when the peer is marked down, instead of trying to contact the peer. default: TRUE* type: gboolean cmake directive: OIOCLIENTDOWNCACHEAVOID* Should the connection timeout be dramatically shortened when talking to a peer that has been reported down. Set to false by default, this is evaluated after the avoidance of those peers. default: FALSE* type: gboolean cmake directive: OIOCLIENTDOWNCACHESHORTEN* Should the client feed a cache with the network errors it encounters, and should those errors be used to prevent RPC to be performed toward 'too-faulty' peers. default: FALSE* type: gboolean cmake directive: OIOCLIENTERRORSCACHEENABLED* Sets the number of faults (on the period given by client.errors_cache.period) beyond which a peer is considered as too faulty to try a new RPC. default: 60* type: guint64 cmake directive: OIOCLIENTERRORSCACHEMAX* range: 1 -> 4294967296 Sets the size of the time window used to count the number of network errors. 
default: 60* type: gint64 cmake directive: OIOCLIENTERRORSCACHEPERIOD* range: 1 -> 3600 Tells how long the verbosity remains higher before being reset to the default, after a SIGUSR1 has been received. default: 5 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOCOMMONVERBOSITYRESETDELAY* range: 1 GTIMESPANSECOND -> 1 * GTIMESPANHOUR Should the C API adjust the chunk size when over this ceiling. Set to 0 for no action. default: 0* type: gint64 cmake directive: OIOCORECHUNKSIZEMAX* range: 0 -> G_MAXINT64 Should the C API adjust the chunk size when below this" }, { "data": "Set to 0 for no action default: 10000000* type: gint64 cmake directive: OIOCORECHUNKSIZEMIN* range: 0 -> G_MAXINT64 HTTP User-Agent to be used between any C client and the proxy default: * type: string cmake directive: OIOCOREHTTPUSERAGENT* Compare the number of items to select to the number of items available at the current location level, and decide if it is desirable to bypass or slacken the time-consuming distance checks. Disable this if you detect too many situations where distance between selected items could have been bigger. default: TRUE* type: gboolean cmake directive: OIOCORELBALLOWDISTANCE_BYPASS* Generate random, 64 bytes long chunk IDs. If set to false, the chunk IDs will be generated from the container and object names, object version and storage policy, and chunk position. default: FALSE* type: gboolean cmake directive: OIOCORELBGENERATERANDOMCHUNKIDS* How many times shall we try to select a service using a weighted random algorithm, before switching to the shuffled selection. Increase this if you observe too many choices of low-score services while high-score services are available. default: 8* type: guint cmake directive: OIOCORELBWEIGHTEDRANDOM_ATTEMPTS* range: 1 -> 64000 Dump the time spent while holding the global writer lock, when the lock is held for longer than this threshold (in microseconds). default: 5000* type: gint64 cmake directive: OIOCORELBWRITERLOCKALERTDELAY* range: 1 -> 60 GTIMESPAN_SECOND Sets the minimal amount of time between two refreshes of the known CPU-idle counters for the current host. Keep this value small. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOCOREPERIODREFRESHCPU_IDLE* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Sets the minimal amount of time between two refreshes of the known IO-idle counters for the current host. Keep this small. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOCOREPERIODREFRESHIO_IDLE* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Sets the minimal amount of time between two refreshes of the list of the major/minor numbers of the known devices, currently mounted on the current host. If the set of mounted file systems doesn't change, keep this value high. default: 30 GTIMESPAN_SECOND type: gint64 cmake directive: OIOCOREPERIODREFRESHMAJOR_MINOR* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR TODO: to be documented default: TRUE* type: gboolean cmake directive: OIOCORERESOLVERDIRSHUFFLE* TODO: to be documented default: TRUE* type: gboolean cmake directive: OIOCORERESOLVERSRVSHUFFLE* Should the client adapt metachunk size to EC policy parameters? Letting this on will make bigger metachunks, but chunks on storage will stay at normal chunk size. Disabling this option allows clients to do write alignment. default: TRUE* type: gboolean cmake directive: OIOCORESDSADAPTMETACHUNK_SIZE* In the current oio-sds client SDK, should the entities be autocreated while accessed for the first time. 
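Runtime variables such as the autocreation flag described above can be overridden per namespace. The snippet below is a sketch only: it assumes the INI-style namespace section of the local configuration (for example `/etc/oio/sds.conf.d/OPENIO`), and the dotted option names are reconstructed from the cmake directives on this page, so verify them against your build before relying on them.

```
# /etc/oio/sds.conf.d/OPENIO  (hypothetical namespace file)
[OPENIO]
# Let the client SDK autocreate users/containers on first access
# (described above; compile-time default is FALSE)
core.sds.autocreate=true
# Keep client-side shuffling of rawx services enabled for reads
core.sds.noshuffle=false
```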
So, when pushing a content in a container, when this option is set to 'true', the USER and the CONTAINER will be created and configured to the namespace's defaults. default: FALSE* type: gboolean cmake directive: OIOCORESDS_AUTOCREATE* In the current oio-sds client SDK, should the rawx services be shuffled before accessed. This helps ensuring a little load-balancing on the client side. default: FALSE* type: gboolean cmake directive: OIOCORESDS_NOSHUFFLE* Should the object URLs be checked for non-UTF-8 characters? Disable this only if you have trouble reading old objects, uploaded before we check for invalid names. default: TRUE* type: gboolean cmake directive: OIOCORESDSSTRICTUTF8* Sets the connection timeout for requests issued to rawx services. default: 5.0* type: gdouble cmake directive: OIOCORESDSTIMEOUTCNX_RAWX* range: 0.001 -> 300.0 Sets the global timeout when uploading a chunk to a rawx service. default: 60.0* type: gdouble cmake directive: OIOCORESDSTIMEOUTREQ_RAWX* range: 0.001 -> 600.0 The version of the sds. It's used to know the expected metadata of a chunk default: 4.2* type: string cmake directive: OIOCORESDS_VERSION* Parameters to pass when binding a queue to an exchange, separated by" }, { "data": "default: * type: string cmake directive: OIOEVENTSAMQPBINDARGS* Routing key used when binding a queue to an exchange (should not be confused with the routing key used when sending an event). default: #* type: string cmake directive: OIOEVENTSAMQPBINDROUTING_KEY* Parameters to pass when declaring the exchange, separated by ','. default: * type: string cmake directive: OIOEVENTSAMQPEXCHANGEARGS* Name of the exchange to declare. default: oio* type: string cmake directive: OIOEVENTSAMQPEXCHANGENAME* Type of the exchange to declare. default: topic* type: string cmake directive: OIOEVENTSAMQPEXCHANGETYPE* Parameters to pass when declaring a queue, separated by ','. default: x-queue-type=quorum* type: string cmake directive: OIOEVENTSAMQPQUEUEARGS* Name of the queue to declare. default: oio* type: string cmake directive: OIOEVENTSAMQPQUEUENAME* Set a threshold for the number of items in the beanstalkd, so that the service will alert past that value. Set to 0 for no alert sent. default: 0* type: gint64 cmake directive: OIOEVENTSBEANSTALKDCHECKLEVEL_ALERT* range: 0 -> G_MAXINT64 Set the maximum number of items in beanstalkd before considering it full default: 512000* type: gint64 cmake directive: OIOEVENTSBEANSTALKDCHECKLEVEL_DENY* range: 0 -> G_MAXINT64 Set the interval between each check of the beanstalkd availability. Set to 0 to never check. default: 0* type: gint64 cmake directive: OIOEVENTSBEANSTALKDCHECKPERIOD* range: 0 -> 1 GTIMESPAN_DAY Sets the delay on each notification sent to the BEANSTALK endpoint default: 0* type: gint64 cmake directive: OIOEVENTSBEANSTALKD_DELAY* range: 0 -> 86400 Sets the priority of each notification sent to the BEANSTALK endpoint default: 2147483648* type: guint cmake directive: OIOEVENTSBEANSTALKD_PRIO* range: 0 -> 2147483648 Set the interval between each check of the beanstalkd availability. Set to 0 to never check. 
default: 4 GTIMESPAN_SECOND type: gint64 cmake directive: OIOEVENTSBEANSTALKD_TIMEOUT* range: 100 GTIMESPANMILLISECOND -> 90 * GTIMESPANSECOND Sets the TTR (time to run) allow on the treatment of the notificatio sent to the beanstalkd default: 120* type: gint64 cmake directive: OIOEVENTSBEANSTALKD_TTR* range: 0 -> 86400 Sets the buffering delay of the events emitted by the application default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOEVENTSCOMMONPENDINGDELAY* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Sets the maximum number of pending events, not received yet by the endpoint default: 10000* type: guint32 cmake directive: OIOEVENTSCOMMONPENDINGMAX* range: 1 -> 1048576 Set the acknowledgement policy. Allowed values: all, -1, 0, 1 default: all* type: string cmake directive: OIOEVENTSKAFKA_ACKS* Flush message after produce default: TRUE* type: gboolean cmake directive: OIOEVENTSKAFKA_FLUSH* Set the Kafka client options default: * type: string cmake directive: OIOEVENTSKAFKA_OPTIONS* Set the Kafka client flush timeout default: 0* type: gint64 cmake directive: OIOEVENTSKAFKATIMEOUTSFLUSH* range: 0 -> 86400 Sets the maximum number of ACK managed by the ZMQ notification client default: 32* type: guint cmake directive: OIOEVENTSZMQMAXRECV* range: 1 -> 1073741824 Sets the connection timeout, involved in any RPC to a 'meta' service. default: 4.0* type: gdouble cmake directive: OIOGRIDDTIMEOUTCONNECTCOMMON* range: 0.1 -> 30.0 Sets the default timeout for unitary (request/response) RPC, without considering the possible redirection. default: 30.0* type: gdouble cmake directive: OIOGRIDDTIMEOUTSINGLECOMMON* range: 0.01 -> 120.0 Sets the global timeout of a RPC to e 'meta' service, considering all the possible redirections. default: 30.0* type: gdouble cmake directive: OIOGRIDDTIMEOUTWHOLECOMMON* range: 0.1 -> 120.0 Anti-DDoS counter-measure. In the current server, sets the maximum amount of time a queued TCP event may remain in the queue. If an event is polled and the thread sees the event stayed longer than that delay, A '503 Unavailable' error is" }, { "data": "default: 40 GTIMESPAN_SECOND type: gint64 cmake directive: OIOMETAQUEUEMAXDELAY* range: 10 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Sets the timeout to the set of (quick) RPC that query a meta0 service default: 10.0* type: gdouble cmake directive: OIOMETA0OUTGOINGTIMEOUTCOMMON_REQ* range: 0.01 -> 60.0 Sets the timeout to the set of (quick) RPC that query a meta1 service default: 10.0* type: gdouble cmake directive: OIOMETA1OUTGOINGTIMEOUTCOMMON_REQ* range: 0.01 -> 60.0 TODO: to be documented default: oio* type: string cmake directive: OIOMETA1TUBE_SERVICES* When listing a container, limits the number of items to that value. default: 1000* type: guint cmake directive: OIOMETA2BATCH_MAXLEN* range: 1 -> 100000 How many bytes may be stored in each container. default: 0* type: gint64 cmake directive: OIOMETA2CONTAINERMAXSIZE* range: 0 -> G_MAXINT64 When adding alias with versioning, deletes exceeding versions. default: FALSE* type: gboolean cmake directive: OIOMETA2DELETEEXCEEDINGVERSIONS* When draining a container, limits the number of drained objects per call. default: 1000* type: gint64 cmake directive: OIOMETA2DRAIN_LIMIT* range: 0 -> 100000 When flushing a container, limits the number of deleted objects. default: 1000* type: gint64 cmake directive: OIOMETA2FLUSH_LIMIT* range: 0 -> G_MAXINT64 Should the meta2 check the container state (quota, etc) before generating chunks. 
default: FALSE* type: gboolean cmake directive: OIOMETA2GENERATE_PRECHECK* Namespace configuration of the max number of versions for a single alias, in a container. default: 1* type: gint64 cmake directive: OIOMETA2MAX_VERSIONS* range: -1 -> G_MAXINT64 DEPRECATED. Sets the period of the periodical reloading of the Load-balancing state, in the current meta2 service. default: 10* type: gint64 cmake directive: OIOMETA2RELOADLBPERIOD* range: 1 -> 3600 Sets the period of the periodical reloading of the namespace configuration, in the current meta2 service. default: 5* type: gint64 cmake directive: OIOMETA2RELOADNSINFOPERIOD* range: 1 -> 3600 How long should deleted content be kept. default: 604800* type: gint64 cmake directive: OIOMETA2RETENTION_PERIOD* range: 1 -> 2592000 Maximum number of entries cleaned in meta2 database. Of course, the higher this number, the longer the cleaning request will be. default: 10000* type: gint64 cmake directive: OIOMETA2SHARDINGMAXENTRIES_CLEANED* range: 1 -> 1000000 Maximum number of entries merged in meta2 database. Of course, the higher this number, the longer the merging request will be. default: 10000* type: gint64 cmake directive: OIOMETA2SHARDINGMAXENTRIES_MERGED* range: 10000 -> 1000000 Maximum time to clean a shard (in replicated mode) from the moment the lock is taken. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOMETA2SHARDINGREPLICATEDCLEAN_TIMEOUT* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANMINUTE Maximum time allowed between the preparation phase and the locking phase to shard a container. default: 12 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOMETA2SHARDING_TIMEOUT* range: 1 GTIMESPANSECOND -> 1 * GTIMESPANHOUR Should the meta2 store complete chunk IDs (URL) or just store service IDs. If this is set to false, core.lb.generaterandomchunk_ids should be set to false also. default: FALSE* type: gboolean cmake directive: OIOMETA2STORECHUNKIDS* Tube name (or routing key) for 'storage.container.deleted' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTAINERDELETED* Tube name (or routing key) for 'storage.container.new' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTAINERNEW* Tube name (or routing key) for 'storage.container.state' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTAINERSTATE* Tube name (or routing key) for 'storage.container.update' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTAINERUPDATED* Tube name (or routing key) for 'storage.content.append' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTENTAPPENDED* Tube name (or routing key) for 'storage.content.broken' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTENTBROKEN* Tube name (or routing key) for 'storage.content.new' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTENTCREATED* Tube name (or routing key) for 'storage.content.deleted'" }, { "data": "default: oio* type: string cmake directive: OIOMETA2TUBECONTENTDELETED* Tube name (or routing key) for 'storage.content.drained' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTENTDRAINED* Tube name (or routing key) for 'storage.content.update' events. default: oio* type: string cmake directive: OIOMETA2TUBECONTENTUPDATED* Tube name (or routing key) for 'storage.meta2.deleted' events. default: oio* type: string cmake directive: OIOMETA2TUBEMETA2DELETED* Default chunk size for the given namespace. 
default: 10485760* type: gint64 cmake directive: OIONSCHUNK_SIZE* range: 1 -> G_MAXINT64 Default number of bits with flat-NS computation. default: 17* type: guint cmake directive: OIONSFLAT_BITS* range: 0 -> 64 TODO: to be documented default: TRUE* type: gboolean cmake directive: OIONSMASTER* Default number of digits to aggregate meta1 databases. default: 4* type: guint cmake directive: OIONSMETA1_DIGITS* range: 0 -> 4 Geographical region where the cluster has been deployed (case insensitive). default: localhost* type: string cmake directive: OIONSREGION* TODO: to be documented default: meta2=KEEP|3|1;sqlx=KEEP|1|1|;rdir=KEEP|1|1|* type: string cmake directive: OIONSSERVICEUPDATEPOLICY* TODO: to be documented default: NONE* type: string cmake directive: OIONSSTORAGE_POLICY* Is the NS in a WORM (for Write Once, Read Many --but never delete). default: FALSE* type: gboolean cmake directive: OIONSWORM* In a proxy, sets how many containers can be created at once. default: 100* type: guint cmake directive: OIOPROXYBULKMAXCREATE_MANY* range: 0 -> 10000 In a proxy, sets how many objects can be deleted at once. default: 100* type: guint cmake directive: OIOPROXYBULKMAXDELETE_MANY* range: 0 -> 10000 In a proxy, sets if any form of caching is allowed. Supersedes the value of resolver.cache.enabled. default: TRUE* type: gboolean cmake directive: OIOPROXYCACHE_ENABLED* Should the proxy shuffle the meta1 addresses before contacting them, thus trying to perform a better fanout of the requests. default: TRUE* type: gboolean cmake directive: OIOPROXYDIR_SHUFFLE* In a proxy, should the process ask the target service (with the help of an option in each RPC) to accept the RPC only if it is MASTER on that DB. default: FALSE* type: gboolean cmake directive: OIOPROXYFORCE_MASTER* Specify the OpenIO SDS location of the service. default: * type: string cmake directive: OIOPROXYLOCATION* In a proxy, sets the global timeout for all the other RPC issued (not conscience, not stats-related) default: 30.0* type: gdouble cmake directive: OIOPROXYOUTGOINGTIMEOUTCOMMON* range: 0.1 -> 60.0 In a proxy, sets the global timeout for 'config' requests issued default: 10.0* type: gdouble cmake directive: OIOPROXYOUTGOINGTIMEOUTCONFIG* range: 0.1 -> 60.0 In a proxy, sets the global timeout for the RPC to the central cosnience service. default: 10.0* type: gdouble cmake directive: OIOPROXYOUTGOINGTIMEOUTCONSCIENCE* range: 0.1 -> 60.0 In a proxy, sets the global timeout for 'info' requests issued default: 5.0* type: gdouble cmake directive: OIOPROXYOUTGOINGTIMEOUTINFO* range: 0.01 -> 60.0 In a proxy, sets the global timeout for 'stat' requests issued (mostly forwarded for the event-agent) default: 10.0* type: gdouble cmake directive: OIOPROXYOUTGOINGTIMEOUTSTAT* range: 0.1 -> 60.0 In a proxy, sets the period between the refreshes of the load-balancing state from the central conscience. 
default: 5* type: gint64 cmake directive: OIOPROXYPERIODCSDOWNSTREAM* range: 0 -> 60 In a proxy, sets the period between two sendings of services states to the" }, { "data": "default: 1* type: gint64 cmake directive: OIOPROXYPERIODCSUPSTREAM* range: 1 -> 60 In the proxy, tells the period between the reloadings of the conscience URL, known from the local configuration default: 30* type: gint64 cmake directive: OIOPROXYPERIODREFRESHCSURL* range: 0 -> 86400 In the proxy, tells the period between two refreshes of the known service types, from the conscience default: 30* type: gint64 cmake directive: OIOPROXYPERIODREFRESHSRVTYPES* range: 1 -> 86400 In the proxy, tells the period between two refreshes of the namespace configuration, from the conscience default: 30* type: gint64 cmake directive: OIOPROXYPERIODRELOADNSINFO* range: 1 -> 3600 In a proxy, upon a read request, should the proxy prefer a service known to host a MASTER copy of the DB. Supersedes proxy.prefer.slaveforread default: FALSE* type: gboolean cmake directive: OIOPROXYPREFERMASTERFOR_READ* In a proxy, upon a write request, should the proxy prefer services known to host the MASTER copy of the DB default: TRUE* type: gboolean cmake directive: OIOPROXYPREFERMASTERFOR_WRITE* In a proxy, upon a read request, should the proxy prefer a service known to host a SLAVE copy of the DB. default: TRUE* type: gboolean cmake directive: OIOPROXYPREFERSLAVEFOR_READ* In a proxy, tells if the (ugly-as-hell) quirk that sets the score known from the conscience on the corresponding entries in the cache of services 'known to be local' default: FALSE* type: gboolean cmake directive: OIOPROXYQUIRKLOCALSCORES* How long a request might take to execute, when no specific deadline has been received. Used to compute a deadline transmitted to backend services, when no timeout is present in the request. default: 1 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOPROXYREQUESTMAXDELAY* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Should the proxy patch the services descriptions to let the caller prefer local services. default: FALSE* type: gboolean cmake directive: OIOPROXYSRVLOCALPATCH* Should the proxy allocate services with a local preference. 0 for no, 1 for only one local service and 2 for a maximum of services in spite of the location constraints. The value 2 is a quirk that should be avoided unless upon exceptional condition wherein you accept the risk. default: 0* type: gint cmake directive: OIOPROXYSRVLOCALPREPARE* range: 0 -> 2 Should the proxy shuffle the meta2 addresses before the query, to do a better load-balancing of the requests. default: TRUE* type: gboolean cmake directive: OIOPROXYSRV_SHUFFLE* In the proxy cache, sets the TTL of a service known to be down default: 5 GTIMESPAN_SECOND type: gint64 cmake directive: OIOPROXYTTLSERVICESDOWN* range: 0 -> 1 GTIMESPAN_DAY In a proxy, sets the TTL of each service already encountered default: 5 GTIMESPAN_DAY type: gint64 cmake directive: OIOPROXYTTLSERVICESKNOWN* range: 0 -> 7 GTIMESPAN_DAY In the proxy cache, sets the TTL of a local service default: 30 GTIMESPAN_SECOND type: gint64 cmake directive: OIOPROXYTTLSERVICESLOCAL* range: 0 -> 1 GTIMESPAN_DAY In a proxy, sets the TTL on each 'known master' entry. That cache is filled each time a redirection to a MASTER occurs, so that we can immediately direct write operation to the service that owns the MASTER copy. 
default: 30 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOPROXYTTLSERVICESMASTER* range: 0 -> 7 GTIMESPAN_DAY In a proxy, sets the maximum length for the URL it receives. This options protects stack allocation for that URL. default: 2048* type: guint cmake directive: OIOPROXYURLPATHMAXLEN* range: 32 -> 65536 Is the RAWX allowed to emit events in the beansatlkd? default: TRUE* type: gboolean cmake directive: OIORAWXEVENTS_ALLOWED* Is the RAWX allowed to run if its base directory is hosted in the root partition, a.k.a. '/'. default: TRUE* type: gboolean cmake directive: OIORAWXSLASH_ALLOWED* TODO: to be documented default: oio* type: string cmake directive: OIORAWXTUBECHUNKCREATED* TODO: to be documented default: oio* type: string cmake directive: OIORAWXTUBECHUNKDELETED* Configure the maximum number of file descriptors allowed to each leveldb database. Set to 0 to autodetermine the value (cf. rdir.fd_reserve). The real value will be clamped at least to" }, { "data": "Will only be applied on bases opened after the configuration change. default: 0* type: guint cmake directive: OIORDIRFDPERBASE* range: 0 -> 16384 Configure the total number of file descriptors the leveldb backend may use. Set to 0 to autodetermine the value. Will only be applied on bases opened after the configuration change. default: 0* type: guint cmake directive: OIORDIRFD_RESERVE* range: 0 -> 32768 Configure the size of rdir Leveldb data blocks. See Leveldb documentation. default: 4096* type: guint cmake directive: OIORDIRLEVELDBBLOCKSIZE* range: 1024 -> 1048576 Configure the size of rdir Leveldb files. Leveldb will write up to this amount of bytes to a file before switching to a new one. See Leveldb documentation. default: 2097152* type: guint cmake directive: OIORDIRLEVELDBMAXFILE_SIZE* range: 16384 -> 1073741824 In any service resolver instantiated, sets the maximum number of entries related to meta0 (meta1 addresses) and conscience (meta0 address) default: 4194304* type: guint cmake directive: OIORESOLVERCACHECSM0MAX_DEFAULT* range: 0 -> G_MAXUINT In any service resolver instantiated, sets the default TTL on the entries related meta0 (meta1 addresses) and conscience (meta0 address) default: 0* type: gint64 cmake directive: OIORESOLVERCACHECSM0TTL_DEFAULT* range: 0 -> G_MAXINT64 Allows the resolver instances to cache entries default: FALSE* type: gboolean cmake directive: OIORESOLVERCACHE_ENABLED* In any shard resolver instantiated, sets the maximum number of root entries default: 4096* type: guint cmake directive: OIORESOLVERCACHEROOTMAX_DEFAULT* range: 0 -> G_MAXUINT In any shard resolver instantiated, sets the default TTL on the root entries default: 0* type: gint64 cmake directive: OIORESOLVERCACHEROOTTTL_DEFAULT* range: 0 -> G_MAXINT64 In any shard resolver instantiated, sets the maximum number of shards entries per root default: 16* type: guint cmake directive: OIORESOLVERCACHESHARDSMAX_DEFAULT* range: 1 -> 128 In any service resolver instantiated, sets the maximum number of meta1 entries (data-bound services) default: 4194304* type: guint cmake directive: OIORESOLVERCACHESRVMAX_DEFAULT* range: 0 -> G_MAXUINT In any service resolver instantiated, sets the default TTL on the meta1 entries (data-bound services) default: 0* type: gint64 cmake directive: OIORESOLVERCACHESRVTTL_DEFAULT* range: 0 -> G_MAXINT64 In the network core, when the server socket wakes the call to epoll_wait(), that value sets the number of subsequent calls to accept(). 
Setting it to a low value allows to quickly switch to other events (established connection) and can lead to a starvation on the new connections. Setting to a high value might spend too much time in accepting and ease denials of service (with established but idle cnx). default: 64* type: guint cmake directive: OIOSERVERBATCH_ACCEPT* range: 1 -> 4096 In the network core of a server, how many events do you manage in each call to epoll_wait(). Set to a low value to quickly react on new connections, to an higher value to rather treat established connections. The value is bound to a stack-allocated buffer, keep it rather small. default: 128* type: guint cmake directive: OIOSERVERBATCH_EVENTS* range: 1 -> 4096 In the current server, sets the maximum amount of time a connection may live without activity since the last activity (i.e. the last reply sent) default: 5 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSERVERCNXTIMEOUTIDLE* range: 0 -> 1 GTIMESPAN_DAY In the current server, sets the maximum amount of time an established connection is allowed to live when it has no activity at" }, { "data": "default: 30 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERCNXTIMEOUTNEVER* range: 0 -> 1 GTIMESPAN_DAY In the current server, sets the maximum amount of time a connection is allowed to live, since its creation by the accept() call, whether it presents activity or not. default: 2 GTIMESPAN_HOUR type: gint64 cmake directive: OIOSERVERCNXTIMEOUTPERSIST* range: 0 -> 1 GTIMESPAN_DAY Enable by default, unnecessary access logs are ignored. Set to false to log every access default: TRUE* type: gboolean cmake directive: OIOSERVERDISABLENOISYACCESS_LOGS* Maximum number of simultaneous incoming connections. Set to 0 for an automatic detection (50% of available file descriptors). default: 0* type: guint cmake directive: OIOSERVERFDMAXPASSIVE* range: 0 -> 65536 TODO: to be documented default: FALSE* type: gboolean cmake directive: OIOSERVERLOG_OUTGOING* Sets how many bytes bytes are released when the LEAN request is received by the current 'meta' service. default: 0* type: guint cmake directive: OIOSERVERMALLOCTRIMSIZE_ONDEMAND* range: 0 -> 2147483648 Sets how many bytes bytes are released when the LEAN request is received by the current 'meta' service. default: 0* type: guint cmake directive: OIOSERVERMALLOCTRIMSIZE_PERIODIC* range: 0 -> 2147483648 Enable server-side performance data collection. default: FALSE* type: gboolean cmake directive: OIOSERVERPERFDATA_ENABLED* How many bases may be decached each time the background task performs its Dance of Death default: 1* type: guint cmake directive: OIOSERVERPERIODICDECACHEMAX_BASES* range: 1 -> 4194304 How long may the decache routine take default: 500 GTIMESPAN_MILLISECOND type: gint64 cmake directive: OIOSERVERPERIODICDECACHEMAX_DELAY* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANMINUTE In ticks / jiffies, with approx. 1 tick per second. 0 means never default: 0* type: guint cmake directive: OIOSERVERPERIODICDECACHEPERIOD* range: 0 -> 1048576 In the current server, sets how long a thread can remain unused before considered as idle (and thus to be stopped) default: 30 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERPOOLMAXIDLE* range: 1 -> 1 GTIMESPAN_HOUR In the current server, sets the maximum number of threads for the pool responsible for the TCP connections (threading model is one thread per request being managed, and one request at once per TCP connection). Set to 0 for no limit. 
default: 0* type: gint cmake directive: OIOSERVERPOOLMAXTCP* range: 0 -> 1073741824 In the current server, sets the maximum number of threads for pool responsible for the UDP messages handling. UDP is only used for quick synchronisation messages during MASTER elections. Set ot 0 for no limit. default: 4* type: gint cmake directive: OIOSERVERPOOLMAXUDP* range: 0 -> 1073741824 In the current server, sets how many threads may remain unused. This value is, in the GLib, common to all the threadpools. default: 20* type: gint cmake directive: OIOSERVERPOOLMAXUNUSED* range: 0 -> 1073741824 Anti-DDoS counter-measure. In the current server, sets the maximum amount of time a queued TCP event may remain in the queue. If an event is polled and the thread sees the event stayed longer than that delay, the connection is immediately closed. Keep this value rather high because the connection closing doesn't involve a reply that will help the client to retry with an exponential back-off. default: 60 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERQUEUEMAXDELAY* range: 10 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR In the current server, set the time threshold after which a warning is sent when a file descriptor stays longer than that in the queue of the Thread Pool. default: 4 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERQUEUEWARNDELAY* range: 10 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Maximum amount of memory used to decode ASN.1 requests. This MUST be more than server.request.max_size, or big requests will always be denied. default: 4294967296* type: guint64 cmake directive: OIOSERVERREQUESTMAXMEMORY* range: 1048576 -> 68719476736 How long a request might take to run on the server" }, { "data": "This value is used to compute a deadline for several waitings (DB cache, manager of elections, etc). Common to all sqliterepo-based services, it might be overriden. default: 300 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERREQUESTMAXRUN_TIME* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Maximum size of an ASN.1 request to a 'meta' service. A service will refuse to serve a request bigger than this. Be careful to have enough memory on the system. default: 1073741824* type: guint cmake directive: OIOSERVERREQUESTMAXSIZE* range: 1048576 -> 4294966272 Default statsd host address. default: * type: string cmake directive: OIOSERVERSTATSD_HOST* Default statsd port. default: 8125* type: guint cmake directive: OIOSERVERSTATSD_PORT* range: 0 -> 100000 In jiffies, how often the periodic task that calls malloc_trim() is fired. default: 3600* type: guint cmake directive: OIOSERVERTASKMALLOCTRIM_PERIOD* range: 0 -> 86400 In the current server, sets the maximum length of the queue for UDP messages. When that number has been reached and a new message arrives, the message will be dropped. default: 512* type: guint cmake directive: OIOSERVERUDPQUEUEMAX* range: 0 -> 2147483648 In the current server, sets the maximum amount of time a queued UDP frame may remain in the queue. When unqueued, if the message was queued for too long, it will be dropped. The purpose of such a mechanism is to avoid clogging the queue and the whole election/cache mechanisms with old messages, those messages having already been resent. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSERVERUDPQUEUETTL* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANDAY Should the socket to meta~ services use TCP_FASTOPEN flag. 
default: TRUE* type: gboolean cmake directive: OIOSOCKETFASTOPEN_ENABLED* Set to a non-zero value to explicitely force a RCVBUF option on client sockets to gridd services. Set to 0 to keep the OS default. default: 0* type: guint cmake directive: OIOSOCKETGRIDD_RCVBUF* range: 0 -> 16777216 Set to a non-zero value to explicitely force a SNDBUF option on client sockets to gridd services. Set to 0 to keep the OS default. default: 0* type: guint cmake directive: OIOSOCKETGRIDD_SNDBUF* range: 0 -> 16777216 When socket.linger.enabled is set to TRUE, socket.linger.delat tells how the socket remains in the TIME_WAIT state after the close() has been called. default: 1* type: gint64 cmake directive: OIOSOCKETLINGER_DELAY* range: 0 -> 60 Set to TRUE to allow the LINGER behavior of TCP sockets, as a default. The connections then end with a normal FIN packet, and go in the TIMEWAIT state for a given delay. Setting to FALSE causes connections to be closed with a RST packet, then avoiding a lot of TCP sockets in the TIMEWAIT state. default: FALSE* type: gboolean cmake directive: OIOSOCKETLINGER_ENABLED* Should the socket to meta~ services receive the TCP_NODELAY flag. When TRUE, it disables the Naggle's algorithm. default: TRUE* type: gboolean cmake directive: OIOSOCKETNODELAY_ENABLED* Advice the libcurl to use that buffer size for the interactions with the proxy. libcurl gives no guaranty to take the advice into account. Set to 0 to let the default. libcurl applies its own range, usually between 1k and 512k. default: 0* type: guint cmake directive: OIOSOCKETPROXY_BUFLEN* range: 0 -> 512000 Should the sockets opened by the application receive the TCP_QUICKACK flag. default: TRUE* type: gboolean cmake directive: OIOSOCKETQUICKACK_ENABLED* Advice the libcurl to use that buffer size for the interactions with the rawx services. libcurl gives no guaranty to take the advice into account. Set to 0 to let the" }, { "data": "libcurl applies its own range, usually between 1k and 512k. default: 0* type: guint cmake directive: OIOSOCKETRAWX_BUFLEN* range: 0 -> 512000 Sets the heat value over which a database is considered hot default: 1* type: guint32 cmake directive: OIOSQLITEREPOCACHEHEATTHRESHOLD* range: 1 -> 2147483648 Triggers an alert when a thread tries to wait for an overloaded database. default: TRUE* type: gboolean cmake directive: OIOSQLITEREPOCACHEHEAVYLOADALERT* Triggers an error when a thread waits for an overloaded database. default: FALSE* type: gboolean cmake directive: OIOSQLITEREPOCACHEHEAVYLOADFAIL* Minimum number of requests per second opening the database to consider the database overloaded. default: 64* type: guint32 cmake directive: OIOSQLITEREPOCACHEHEAVYLOADMIN_LOAD* range: 1 -> 10000 Number of kibibytes (kiB) of cache per open DB. default: 0* type: guint cmake directive: OIOSQLITEREPOCACHEKBYTESPER_DB* range: 0 -> 1048576 Sets how long we (unit)wait on the lock around the databases. Keep it small. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOCACHETIMEOUTLOCK* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Sets how long a worker thread accepts for a DB to become available. default: 20 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOCACHETIMEOUTOPEN* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANMINUTE Sets the period after the return to the IDLE/COLD state, during which the recycling is forbidden. 0 means the base won't be decached. 
default: 1 GTIMESPAN_MILLISECOND type: gint64 cmake directive: OIOSQLITEREPOCACHETTLCOOL* range: 0 -> 1 GTIMESPAN_DAY Sets the period after the return to the IDLE/HOT state, during which the recycling is forbidden. 0 means the base won't be decached. default: 1 GTIMESPAN_MILLISECOND type: gint64 cmake directive: OIOSQLITEREPOCACHETTLHOT* range: 0 -> 1 GTIMESPAN_DAY In the current sqliterepo repository, sets the maximum amount of time a periodical task may take, while checking for the timeouts on the outbound connections. default: 5 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOCLIENTTIMEOUTALERTIFLONGER* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR How to check the database before executing a DBDUMP request. 0: no check. 1: quickcheck (the default). 2: integrity_check. default: 1* type: gint cmake directive: OIOSQLITEREPODUMPCHECKTYPE* range: 0 -> 2 Size of data chunks when copying a database using the chunked DBPIPEFROM/DBDUMP mechanism. Also used as block size for internal database copies. default: 8388608* type: gint64 cmake directive: OIOSQLITEREPODUMPCHUNKSIZE* range: 4096 -> 2146435072 Maximum size of a database dump. If a base is bigger than this size, it will be refused the synchronous DBRESTORE mechanism, and will be ansynchronously restored with the DBDUMP/DBPIPEFROM mechanism. This value will be clamped to server.request.maxsize - 1024. default: 1072693248* type: gint64 cmake directive: OIOSQLITEREPODUMPMAXSIZE* range: 0 -> 4293918720 How many concurrent DB dumps may happen in a single process. default: 1024* type: gint cmake directive: OIOSQLITEREPODUMPS_MAX* range: 1 -> G_MAXINT How long to wait for a concurrent DB dump to finish. Should be set accordingly with sqliterepo.outgoing.timeout.req.resync. default: 4 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSQLITEREPODUMPS_TIMEOUT* range: 1 GTIMESPANSECOND -> 1 * GTIMESPANDAY Allow the role of MASTER in any election. default: TRUE* type: gboolean cmake directive: OIOSQLITEREPOELECTIONALLOWMASTER* In the current sqliterepo repository, sets the amount of time after which a MASTER election will drop its status and return to the NONE status. This helps recycling established-but-unused elections, and save Zookeeper nodes. Keep this value greater than sqliterepo.election.delay.expire_slave to avoid rotating the master" }, { "data": "default: 240 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSQLITEREPOELECTIONDELAYEXPIRE_MASTER* range: 1 GTIMESPANMILLISECOND -> 7 * GTIMESPANDAY In the current sqliterepo repository, sets the amount of time an election without status will be forgotten default: 30 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSQLITEREPOELECTIONDELAYEXPIRE_NONE* range: 1 GTIMESPANSECOND -> 1 * GTIMESPANDAY Sets the amount of time after which a pending election (without any status change) will be reset and return to the NONE status. This helps recovering after a ZooKeeper failure. Should be set between sqliterepo.zk.timeout and sqliterepo.election.wait.delay. default: 12 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOELECTIONDELAYEXPIRE_PENDING* range: 1 GTIMESPANSECOND -> 7 * GTIMESPANDAY In the current sqliterepo repository, sets the amount of time after which a SLAVE election will drop its status and return to the NONE status. This helps recycling established-but-unused elections, and save Zookeeper nodes. 
default: 210 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSQLITEREPOELECTIONDELAYEXPIRE_SLAVE* range: 1 GTIMESPANSECOND -> 7 * GTIMESPANDAY In the current sqliterepo repository, sets the amount of time after which a failed election leaves its FAILED status and returns to the NONE status. default: 250 GTIMESPAN_MILLISECOND type: gint64 cmake directive: OIOSQLITEREPOELECTIONDELAYRETRY_FAILED* range: 1 GTIMESPANMILLISECOND -> 7 * GTIMESPANDAY Should the election mecanism try to recreate missing DB? default: FALSE* type: gboolean cmake directive: OIOSQLITEREPOELECTIONLAZYRECOVER* Only effective when built in DEBUG mode. Dump the long critical sections around the elections lock, when the lock is held for longer than this threshold (in microseconds). default: 500* type: gint64 cmake directive: OIOSQLITEREPOELECTIONLOCKALERT_DELAY* range: 1 -> 60 GTIMESPAN_SECOND In the current sqliterepo repository, sets the amount of time spent in an election resolution that will make a worker thread won't wait at all and consider that election is stalled. default: 15 GTIMESPAN_MINUTE type: gint64 cmake directive: OIOSQLITEREPOELECTIONNOWAITAFTER* range: 1 GTIMESPANMILLISECOND -> GMAXINT64 Check of the election is pending since too long. If it is, don't way for it. default: FALSE* type: gboolean cmake directive: OIOSQLITEREPOELECTIONNOWAITENABLE* In the current sqliterepo repository, sets the maximum amount of time a worker thread is allowed to wait for an election to get its final status. default: 20 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOELECTIONWAITDELAY* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR In the current sqliterepo repository, while loop-waiting for a final election status to be reached, this value sets the unit amount of time of eacch unit wait on the lock. Keep this value rather small to avoid waiting for too long, but not too small to avoid dumping CPU cycles in active waiting. default: 4 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOELECTIONWAITQUANTUM* range: 100 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Set the 'journal_mode' sqlite pragma when opening a database. 0 = DELETE, 1 = TRUNCATE, 2 = PERSIST, 3 = MEMORY. default: 3* type: guint cmake directive: OIOSQLITEREPOJOURNAL_MODE* range: 0 -> 3 Sets the connection timeout when exchanging versions between databases replicas. default: 5.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTCNX_GETVERS* range: 0.01 -> 30.0 Sets the connection timeout sending a replication request. default: 5.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTCNX_REPLICATE* range: 0.01 -> 30.0 Set the connection timeout during RPC to ask for a SLAVE database to be resync on its MASTER default: 5.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTCNX_RESYNC* range: 0.01 -> 30.0 Sets the connection timeout when pinging a peer database. Keep it small. Only used when UDP is disabled. default: 1.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTCNX_USE* range: 0.01 -> 30.0 Sets the global timeout when performing a version exchange RPC. Keep it rather small, to let election quickly fail on network troubles. Only used when UDP is disabled. 
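The `journal_mode` values listed above (0 = DELETE, 1 = TRUNCATE, 2 = PERSIST, 3 = MEMORY) correspond to standard SQLite journal modes. Below is a hedged sketch of how such an integer setting could be translated into the pragma; the 0..3 mapping comes from this reference, while the function name and driver choice are purely illustrative.

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/mattn/go-sqlite3"
)

// journalModes mirrors the documented 0..3 mapping for the journal_mode setting.
var journalModes = [...]string{"DELETE", "TRUNCATE", "PERSIST", "MEMORY"}

func applyJournalMode(db *sql.DB, mode uint) (string, error) {
	if int(mode) >= len(journalModes) {
		return "", fmt.Errorf("journal_mode %d out of range 0..3", mode)
	}
	var effective string
	// "PRAGMA journal_mode = X" returns the mode actually in effect.
	err := db.QueryRow("PRAGMA journal_mode = " + journalModes[mode]).Scan(&effective)
	return effective, err
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	mode, err := applyJournalMode(db, 3) // 3 = MEMORY, the documented default
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("journal_mode now:", mode)
}
```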
default: 10.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTREQ_GETVERS* range: 0.01 ->" }, { "data": "Sets the global timeout when sending a replication RPC, from the current MASTER to a SLAVE default: 10.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTREQ_REPLICATE* range: 0.01 -> 30.0 Sets the global timeout of a RESYNC request sent to a 'meta' service. Sent to a SLAVE DB, the RESYNC operation involves a RPC from the SLAVE to the MASTER, then a DB dump on the MASTER and restoration on the SLAVE. Thus that operation might be rather long, due to the possibility of network/disk latency/bandwidth, etc. Should be set accordingly with sqliterepo.dumps.timeout default: 241.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTREQ_RESYNC* range: 0.01 -> 300.0 Sets the global timeout when pinging a peer database. Keep it small. default: 10.0* type: gdouble cmake directive: OIOSQLITEREPOOUTGOINGTIMEOUTREQ_USE* range: 0.01 -> 30.0 In the current sqliterepo repository, sets the page size of all the databases used. This value only has effects on databases created with that value. default: 4096* type: guint cmake directive: OIOSQLITEREPOPAGE_SIZE* range: 512 -> 1048576 Sets how many bytes bytes are released when the LEAN request is received by the current 'meta' service. default: 67108864* type: guint cmake directive: OIOSQLITEREPORELEASE_SIZE* range: 1 -> 2147483648 In the current server, sets the maximum amount of time a queued DBUSE, DBGETVERS or DB_PIPEFROM request may remain in the queue. If the message was queued for too long before being sent, it will be dropped. The purpose of such a mechanism is to avoid clogging the queue and the whole election/cache mechanisms with old messages, those messages having already been resent. default: 4 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOREPOACTIVEQUEUE_TTL* range: 100 GTIMESPANMILLISECOND -> 60 * GTIMESPANSECOND Maximum number of simultaneous outgoing connections. Set to 0 for an automatic detection (2% of available file descriptors). default: 512* type: guint cmake directive: OIOSQLITEREPOREPOFDMAX_ACTIVE* range: 0 -> 65536 Minimum number of simultaneous outgoing connections. default: 32* type: guint cmake directive: OIOSQLITEREPOREPOFDMIN_ACTIVE* range: 0 -> 65536 Sets how many versions exchanges are allowed during the journey in the election FSM. default: 5* type: guint cmake directive: OIOSQLITEREPOREPOGETVERSATTEMPTS* range: 1 -> 64 . default: 100 GTIMESPAN_MILLISECOND type: gint64 cmake directive: OIOSQLITEREPOREPOGETVERSDELAY* range: 10 GTIMESPANMILLISECOND -> 1 * GTIMESPANMINUTE Sets how many databases can be kept simultaneously open (in use or idle) in the current service. If defined to 0, it is set to 48% of available file descriptors. default: 0* type: guint cmake directive: OIOSQLITEREPOREPOHARDMAX* range: 0 -> 131072 Sets how many databases can be in use at the same moment in the current service. If defined to 0, it is set to sqliterepo.repo.hard_max. default: 0* type: guint cmake directive: OIOSQLITEREPOREPOSOFTMAX* range: 0 -> 131072 Memory size ceiling we try to honor. The check is performed while closing databases. Set to -1 to autodetect the max RSS from the resource limits (see `man getrlimit` for more information) or 0 to disable. The autodetection considers the environment and is not aware of all the processes that could share that environment. default: -1* type: gint64 cmake directive: OIOSQLITEREPORSS_MAX* range: -1 -> G_MAXINT64 . 
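The automatic sizing rules quoted above — 2% of the available file descriptors for outgoing connections, 48% for open databases, and RSS autodetection via the resource limits — can be illustrated with a small, generic sketch that reads the process limit the same way `getrlimit` would. The percentages come from this reference; the code itself is illustrative and not OpenIO's.

```go
package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	var nofile syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &nofile); err != nil {
		log.Fatal(err)
	}

	// Mirrors the documented auto-detection: 2% of the available file
	// descriptors for outgoing connections, 48% for open databases.
	fdMaxActive := nofile.Cur * 2 / 100
	repoHardMax := nofile.Cur * 48 / 100
	fmt.Printf("RLIMIT_NOFILE=%d -> fd.max_active~%d, repo.hard_max~%d\n",
		nofile.Cur, fdMaxActive, repoHardMax)
}
```

The RSS ceiling autodetection works in the same spirit, consulting the process resource limits (`man getrlimit`) rather than a fixed value.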
default: 10 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOSERVICEEXITTTL* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR Should the sendto() of DBUSE be deferred to a thread-pool. Only effective when `oioudp_allowed` is set. Set to 0 to keep the OS default. default: TRUE* type: gboolean cmake directive: OIOSQLITEREPOUDP_DEFERRED* For testing purposes. The value simulates ZK sharding on different connection to the same" }, { "data": "default: 1* type: guint cmake directive: OIOSQLITEREPOZKMUXFACTOR* range: 1 -> 64 Sets the maximum number of reconnections to the ZK that remains acceptable. Beyond that limit, we consider the current service has been disconnected, and that it lost all its nodes. default: 5* type: guint cmake directive: OIOSQLITEREPOZKRRDTHRESHOLD* range: 1 -> 2147483648 Sets the time window to remember the reconnection events, on a ZK connection. default: 30* type: guint cmake directive: OIOSQLITEREPOZKRRDWINDOW* range: 1 -> 4095 Should the synchronism mechanism shuffle the set of URL in the ZK connection string? Set to yes as an attempt to a better balancing of the connections to the nodes of the ZK cluster. default: TRUE* type: gboolean cmake directive: OIOSQLITEREPOZK_SHUFFLE* Sets the timeout of the zookeeper handle (in the meaning of the zookeeper client library) default: 10 GTIMESPAN_SECOND type: gint64 cmake directive: OIOSQLITEREPOZK_TIMEOUT* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANHOUR In the current sqlx-based service, tells the period (in seconds) at which the service will refresh its load-balancing information. default: 1* type: gint64 cmake directive: OIOSQLXLBREFRESHPERIOD* range: 1 -> 60 Sets the timeout for the requests issued to the SQLX services. default: 30.0* type: gdouble cmake directive: OIOSQLXOUTGOINGTIMEOUTREQ* range: 0.01 -> 60.0 Allow the sqlx client DB_USE RPC to be sent via UDP instead of the default TCP channel. default: TRUE* type: gboolean cmake directive: OIOUDPALLOWED* Allow the services to self-assign to the volume they find in their configuration. This was the default behaviour in pre 5.2 versions of oio-sds. default: TRUE* type: gboolean cmake directive: OIOVOLUMELAZY_LOCK* These variables are only active when the ENBUG option has been specified on the cmake command line. Set the probability of fake timeout failures, in any client RPC to a 'meta' service default: 0* type: gint32 cmake directive: OIOENBUGCLIENTFAKETIMEOUT_THRESHOLD* range: 0 -> 100 In testing situations, sets a delay to add to any service listing request. default: 0* type: gint64 cmake directive: OIOENBUGCSLISTDELAY* range: 0 -> 60 GTIMESPAN_MINUTE In testing situations, sets a delay to add to service pre-serialize operations. default: 0* type: gint64 cmake directive: OIOENBUGCSSERIALIZEDELAY* range: 0 -> 60 GTIMESPAN_MINUTE CID of the base to force a double master condition on. default: * type: string cmake directive: OIOENBUGELECTIONDOUBLEMASTER_DB* Proxy probability to fail with 503 on a /cs route default: 0* type: gint32 cmake directive: OIOENBUGPROXYCSFAILURE_RATE* range: 0 -> 100 Really, do not use this! default: 0* type: gint32 cmake directive: OIOENBUGPROXYREQUESTFAILURE_ALONE* range: 0 -> 100 Really, do not use this! default: 0* type: gint32 cmake directive: OIOENBUGPROXYREQUESTFAILURE_FIRST* range: 0 -> 100 Really, do not use this! default: 0* type: gint32 cmake directive: OIOENBUGPROXYREQUESTFAILURE_LAST* range: 0 -> 100 Really, do not use this! 
default: 0* type: gint32 cmake directive: OIOENBUGPROXYREQUESTFAILURE_MIDDLE* range: 0 -> 100 In testing situations, sets the average ratio of requests failing for a fake reason (from the peer). This helps testing the retrial mechanisms. default: 0* type: gint32 cmake directive: OIOENBUGSERVERREQUESTFAILURE_THRESHOLD* range: 0 -> 100 In testing situations, sets the average ratio of requests failing for a fake reason (from the peer). This helps testing the retrial mechanisms. default: 0* type: gint32 cmake directive: OIOENBUGSQLITEREPOCLIENTFAILURE_THRESHOLD* range: 0 -> 100 In testing situations, sets the average ratio of requests failing for a fake reason (connection timeout). This helps testing the retrial mechanisms and the behavior under strong network split-brain. default: 1 GTIMESPAN_SECOND type: gint64 cmake directive: OIOENBUGSQLITEREPOCLIENTTIMEOUT_PERIOD* range: 1 GTIMESPANMILLISECOND -> 1 * GTIMESPANDAY Fake Error rate on synchronism RPC (a.k.a. ZK) default: 0* type: gint32 cmake directive: OIOENBUGSQLITEREPOSYNCHROFAILURE* range: 0 -> 100" } ]
{ "category": "Runtime", "file_name": "Variables.md", "project_name": "OpenIO", "subcategory": "Cloud Native Storage" }
[ { "data": "The Container Storage Interface (CSI) . It will reach beta support in Kubernetes v1.17, scheduled for release in December 2019. This proposal documents an approach for integrating support for this snapshot API within Velero, augmenting its existing capabilities. Enable Velero to backup and restore CSI-backed volumes using the Kubernetes CSI CustomResourceDefinition API Replacing Velero's existing API Replacing Velero's Restic support Velero has had support for performing persistent volume snapshots since its inception. However, support has been limited to a handful of providers. The plugin API introduced in Velero v0.7 enabled the community to expand the number of supported providers. In the meantime, the Kubernetes sig-storage advanced the CSI spec to allow for a generic storage interface, opening up the possibility of moving storage code out of the core Kubernetes code base. The CSI working group has also developed a generic snapshotting API that any CSI driver developer may implement, giving users the ability to snapshot volumes from a standard interface. By supporting the CSI snapshot API, Velero can extend its support to any CSI driver, without requiring a Velero-specific plugin be written, easing the development burden on providers while also reaching more end users. In order to support CSI's snapshot API, Velero must interact with the and CRDs. These act as requests to the CSI driver to perform a snapshot on the underlying provider's volume. This can largely be accomplished with Velero `BackupItemAction` and `RestoreItemAction` plugins that operate on these CRDs. Additionally, changes to the Velero server and client code are necessary to track `VolumeSnapshot`s that are associated with a given backup, similarly to how Velero tracks its own type. Tracking these is important for allowing users to see what is in their backup, and provides parity for the existing `volume.Snapshot` and types. This is also done to retain the object store as Velero's source of truth, without having to query the Kubernetes API server for associated `VolumeSnapshot`s. `velero backup describe --details` will use the stored VolumeSnapshots to list CSI snapshots included in the backup to the user. A set of plugins was developed that informed this design. The plugins will be as follows: This plugin will act directly on PVCs, since an implementation of Velero's VolumeSnapshotter does not have enough information about the StorageClass to properly create the `VolumeSnapshot` objects. The associated PV will be queried and checked for the presence of `PersistentVolume.Spec.PersistentVolumeSource.CSI`. (See the \"Snapshot Mechanism Selection\" section below). If this field is `nil`, then the plugin will return early without taking action. If the `Backup.Spec.SnapshotVolumes` value is `false`, the plugin will return early without taking action. Additionally, to prevent creating CSI snapshots for volumes backed up by restic, the plugin will query for all pods in the `PersistentVolumeClaim`'s namespace. It will then filter out the pods that have the PVC mounted, and inspect the `backup.velero.io/backup-volumes` annotation for the associated volume's" }, { "data": "If the name is found in the list, then the plugin will return early without taking further action. Create a `VolumeSnapshot.snapshot.storage.k8s.io` object from the PVC. Label the `VolumeSnapshot` object with the label for ease of lookup later. 
Also set an ownerRef on the `VolumeSnapshot` so that cascading deletion of the Velero `Backup` will delete associated `VolumeSnapshots`. The CSI controllers will create a `VolumeSnapshotContent.snapshot.storage.k8s.io` object associated with the `VolumeSnapshot`. Associated `VolumeSnapshotContent` objects will be retrieved and updated with the label for ease of lookup later. `velero.io/volume-snapshot-name` will be applied as a label to the PVC so that the `VolumeSnapshot` can be found easily for restore. `VolumeSnapshot`, `VolumeSnapshotContent`, and `VolumeSnapshotClass` objects would be returned as additional items to be backed up. GitHub issue represents this work. The `VolumeSnapshotContent.Spec.VolumeSnapshotSource.SnapshotHandle` field is the link to the underlying platform's on-disk snapshot, and must be preserved for restoration. The plugin will not wait for the `VolumeSnapshot.Status.readyToUse` field to be `true` before returning. This field indicates that the snapshot is ready to use for restoration, and for different vendors can indicate that the snapshot has been made durable. However, the applications can proceed as soon as `VolumeSnapshot.Status.CreationTime` is set. This also maintains current Velero behavior, which allows applications to quiesce and resume quickly, with minimal interruption. Any sort of monitoring or waiting for durable snapshots, either Velero-native or CSI snapshots, are not covered by this proposal. ``` K8s object relationships inside of the backup tarball +--+ +--+ | PersistentVolumeClaim +-->+ PersistentVolume | +--+--+ +--+--+ ^ ^ | | | | | | +--+--+ +--+--+ | VolumeSnapshot +<->+ VolumeSnapshotContent | +--+ +--+ ``` On restore, `VolumeSnapshotContent` objects are cleaned so that they may be properly associated with IDs assigned by the target cluster. Only `VolumeSnapshotContent` objects with the `velero.io/backup-name` label will be processed, using the plugin's `AppliesTo` function. The metadata (excluding labels), `PersistentVolumeClaim.UUID`, and `VolumeSnapshotRef.UUID` fields will be cleared. The reference fields are cleared because the associated objects will get new UUIDs in the cluster. This also maps to the \"import\" case of . This means the relationship between the `VolumeSnapshot` and `VolumeSnapshotContent` is one way until the CSI controllers rebind them. ``` K8s objects after the velero.io/csi-vsc plugin has run +--+ +--+ | PersistentVolumeClaim +-->+ PersistentVolume | +--+ +--+ +--+ +--+ | VolumeSnapshot +-->+ VolumeSnapshotContent | +--+ +--+ ``` `VolumeSnapshot` objects must be prepared for importing into the target cluster by removing IDs and metadata associated with their origin cluster. Only `VolumeSnapshot` objects with the `velero.io/backup-name` label will be processed, using the plugin's `AppliesTo` function. Metadata (excluding labels) and `Source` (that is, the pointer to the `PersistentVolumeClaim`) fields on the object will be cleared. The `VolumeSnapshot.Spec.SnapshotContentName` is the link back to the `VolumeSnapshotContent` object, and thus the actual snapshot. The `Source` field indicates that a new CSI snapshot operation should be performed, which isn't relevant on restore. This follows the \"import\" case of . 
The `Backup` associated with the `VolumeSnapshot` will be queried, and set as an ownerRef on the `VolumeSnapshot` so that deletion can" }, { "data": "``` +--+ +--+ | PersistentVolumeClaim +-->+ PersistentVolume | +--+ +--+ +--+ +--+ | VolumeSnapshot +-->+ VolumeSnapshotContent | +--+ +--+ ``` On restore, `PersistentVolumeClaims` will need to be created from the snapshot, and thus will require editing before submission. Only `PersistentVolumeClaim` objects with the `velero.io/volume-snapshot-name` label will be processed, using the plugin's `AppliesTo` function. Metadata (excluding labels) will be cleared, and the `velero.io/volume-snapshot-name` label will be used to find the relevant `VolumeSnapshot`. A reference to the `VolumeSnapshot` will be added to the `PersistentVolumeClaim.DataSource` field. ``` +--+ | PersistentVolumeClaim | +--+ +--+ +--+ | VolumeSnapshot +-->+ VolumeSnapshotContent | +--+ +--+ ``` No special logic is required to restore `VolumeSnapshotClass` objects. These plugins should be provided with Velero, as there will also be some changes to core Velero code to enable association of a `Backup` to the included `VolumeSnapshot`s. Any non-plugin code changes must be behind a `EnableCSI` feature flag and the behavior will be opt-in until it's exited beta status. This will allow the development to continue on the feature while it's in pre-production state, while also reducing the need for long-lived feature branches. will be extended to query for all `VolumeSnapshot`s associated with the backup, and persist the list to JSON. will receive an additional argument, `volumeSnapshots io.Reader`, that contains the JSON representation of `VolumeSnapshots`. This will be written to a file named `csi-snapshots.json.gz`. should be rewritten to the following to accommodate proper association between the CSI objects and PVCs. `CustomResourceDefinition`s are moved up because they're necessary for creating the CSI CRDs. The CSI CRDs are created before `PersistentVolume`s and `PersistentVolumeClaim`s so that they may be used as data sources. GitHub issue represents this work. ```go var defaultRestorePriorities = []string{ \"namespaces\", \"storageclasses\", \"customresourcedefinitions\", \"volumesnapshotclass.snapshot.storage.k8s.io\", \"volumesnapshotcontents.snapshot.storage.k8s.io\", \"volumesnapshots.snapshot.storage.k8s.io\", \"persistentvolumes\", \"persistentvolumeclaims\", \"secrets\", \"configmaps\", \"serviceaccounts\", \"limitranges\", \"pods\", \"replicaset\", } ``` Volumes found in a `Pod`'s `backup.velero.io/backup-volumes` list will use Velero's current Restic code path. This also means Velero will continue to offer Restic as an option for CSI volumes. The `velero.io/csi-pvc` BackupItemAction plugin will inspect pods in the namespace to ensure that it does not act on PVCs already being backed up by restic. This is preferred to modifying the PVC due to the fact that Velero's current backup process backs up PVCs and PVs mounted to pods at the same time as the pod. A drawback to this approach is that we're querying all pods in the namespace per PVC, which could be a large number. In the future, the plugin interface could be improved to have some sort of context argument, so that additional data such as our existing `resticSnapshotTracker` could be passed to plugins and reduce work. To ensure that all created resources are deleted when a backup expires or is deleted, `VolumeSnapshot`s will have an `ownerRef` defined pointing to the Velero backup that created them. 
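To make the labeling and ownership wiring described here concrete, a `VolumeSnapshot` produced by the backup plugin could end up looking roughly like the sketch below. The `velero.io/backup-name` label and the ownerRef to the Velero `Backup` come from this proposal; the exact `apiVersion` and `spec` field names depend on which version of the snapshot CRDs is installed (the alpha and beta APIs differ), and the object names and UID are placeholders.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1   # v1alpha1 clusters use slightly different spec fields
kind: VolumeSnapshot
metadata:
  name: velero-mysql-data-abc12
  namespace: demo
  labels:
    velero.io/backup-name: nightly-backup     # label Velero uses to find the snapshot later
  ownerReferences:                            # cascading deletion of the Backup removes the snapshot
  - apiVersion: velero.io/v1
    kind: Backup
    name: nightly-backup
    uid: 00000000-0000-0000-0000-000000000000 # placeholder
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass  # placeholder class name
  source:
    persistentVolumeClaimName: mysql-data          # the PVC the BackupItemAction acted on
```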
In order to fully delete these objects, each `VolumeSnapshotContent`s object will need to be edited to ensure the associated provider snapshot is" }, { "data": "This will be done by editing the object and setting `VolumeSnapshotContent.Spec.DeletionPolicy` to `Delete`, regardless of whether or not the default policy for the class is `Retain`. See the Deletion Policies section below. The edit will happen before making Kubernetes API deletion calls to ensure that the cascade works as expected. Deleting a Velero `Backup` or any associated CSI object via `kubectl` is unsupported; data will be lost or orphaned if this is done. Since `VolumeSnapshot` and `VolumeSnapshotContent` objects are contained within a Velero backup tarball, it is possible that all CRDs and on-disk provider snapshots have been deleted, yet the CRDs are still within other Velero backup tarballs. Thus, when a Velero backup that contains these CRDs is restored, the `VolumeSnapshot` and `VolumeSnapshotContent` objects are restored into the cluster, the CSI controllers will attempt to reconcile their state, and there are two possible states when the on-disk snapshot has been deleted: 1) If the driver does not support the `ListSnapshots` gRPC method, then the CSI controllers have no way of knowing how to find it, and sets the `VolumeSnapshot.Status.readyToUse` field to `true`. 2) If the driver does support the `ListSnapshots` gRPC method, then the CSI controllers will query the state of the on-disk snapshot, see it is missing, and set `VolumeSnapshot.Status.readyToUse` and `VolumeSnapshotContent.Status.readyToUse` fields to `false`. To use CSI features, the Velero client must use the `EnableCSI` feature flag. will be extended to download the `csi-snapshots.json.gz` file for processing. GitHub Issue captures this work. A new `describeCSIVolumeSnapshots` function should be added to the package that knows how to render the included `VolumeSnapshot` names referenced in the `csi-snapshots.json.gz` file. The most accurate, reliable way to detect if a PersistentVolume is a CSI volume is to check for a non-`nil` field. Using the is not viable, since the usage is for any PVC that should be dynamically provisioned, and is not limited to CSI implementations. It was in 2016, predating CSI. In the `BackupItemAction` for PVCs, the associated PV will be queried and checked for the presence of `PersistentVolume.Spec.PersistentVolumeSource.CSI`. Volumes with any other `PersistentVolumeSource` set will use Velero's current VolumeSnapshotter plugin code path. Velero uses its own `VolumeSnapshotLocation` CRDs to specify configuration options for a given storage system. In Velero, this often includes topology information such as regions or availability zones, as well as credential information. CSI volume snapshotting has a `VolumeSnapshotClass` CRD which also contains configuration options for a given storage system, but these options are not the same as those that Velero would use. Since CSI volume snapshotting is operating within the same storage system that manages the volumes already, it does not need the same topology or credential information that Velero does. As such, when used with CSI volumes, Velero's `VolumeSnapshotLocation` CRDs are not relevant, and could be omitted. This will create a separate path in our documentation for the time being, and should be called out explicitly. 
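The CSI-detection rule described above — check the `PersistentVolume` for a non-`nil` CSI source — is small enough to show directly. A minimal sketch using the standard Kubernetes API types follows; the helper name is made up for illustration.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isCSIVolume is a hypothetical helper: it reports whether the PV is backed by
// a CSI driver, which is the check the PVC BackupItemAction is described as doing.
func isCSIVolume(pv *corev1.PersistentVolume) bool {
	return pv != nil && pv.Spec.PersistentVolumeSource.CSI != nil
}

func main() {
	pv := &corev1.PersistentVolume{
		Spec: corev1.PersistentVolumeSpec{
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				CSI: &corev1.CSIPersistentVolumeSource{Driver: "hostpath.csi.k8s.io"},
			},
		},
	}
	fmt.Println("use CSI snapshot path:", isCSIVolume(pv)) // true
}
```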
Implementing similar logic in a Velero VolumeSnapshotter plugin was" }, { "data": "However, this is inappropriate given CSI's data model, which requires a PVC/PV's StorageClass. Given the arguments to the VolumeSnapshotter interface, the plugin would have to instantiate its own client and do queries against the Kubernetes API server to get the necessary information. This is unnecessary given the fact that the `BackupItemAction` and `RestoreItemAction` APIs can act directly on the appropriate objects. Additionally, the VolumeSnapshotter plugins and CSI volume snapshot drivers overlap - both produce a snapshot on backup and a PersistentVolume on restore. Thus, there's not a logical place to fit the creation of VolumeSnapshot creation in the VolumeSnapshotter interface. Implement CSI logic directly in Velero core code. The plugins could be packaged separately, but that doesn't necessarily make sense with server and client changes being made to accommodate CSI snapshot lookup. Implementing the CSI logic entirely in external plugins. As mentioned above, the necessary plugins for `PersistentVolumeClaim`, `VolumeSnapshot`, and `VolumeSnapshotContent` could be hosted out-out-of-tree from Velero. In fact, much of the logic for creating the CSI objects will be driven entirely inside of the plugin implementation. However, Velero currently has no way for plugins to communicate that some arbitrary data should be stored in or retrieved from object storage, such as list of all `VolumeSnapshot` objects associated with a given `Backup`. This is important, because to display snapshots included in a backup, whether as native snapshots or Restic backups, separate JSON-encoded lists are stored within the backup on object storage. Snapshots are not listed directly on the `Backup` to fit within the etcd size limitations. Additionally, there are no client-side Velero plugin mechanisms, which means that the `velero describe backup --details` command would have no way of displaying the objects to the user, even if they were stored. In order for underlying, provider-level snapshots to be retained similarly to Velero's current functionality, the `VolumeSnapshotContent.Spec.DeletionPolicy` field must be set to `Retain`. This is most easily accomplished by setting the `VolumeSnapshotClass.DeletionPolicy` field to `Retain`, which will be inherited by all `VolumeSnapshotContent` objects associated with the `VolumeSnapshotClass`. The current default for dynamically provisioned `VolumeSnapshotContent` objects is `Delete`, which will delete the provider-level snapshot when the `VolumeSnapshotContent` object representing it is deleted. Additionally, the `Delete` policy will cascade a deletion of a `VolumeSnapshot`, removing the associated `VolumeSnapshotContent` object. It is not currently possible to define a deletion policy on a `VolumeSnapshot` that gets passed to a `VolumeSnapshotContent` object on an individual basis. This proposal does not significantly change Velero's security implications within a cluster. If a deployment is using solely CSI volumes, Velero will no longer need privileges to interact with volumes or snapshots, as these will be handled by the CSI driver. This reduces the provider permissions footprint of Velero. Velero must still be able to access cluster-scoped resources in order to back up `VolumeSnapshotContent` objects. Without these objects, the provider-level snapshots cannot be located in order to re-associate them with volumes in the event of a restore." } ]
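As a concrete companion to the deletion-policy discussion above, a `VolumeSnapshotClass` that retains provider-level snapshots might look like the following sketch. The `Retain`/`Delete` semantics come from this document; the driver name is a placeholder, and the exact `apiVersion` and field spelling depend on the snapshot CRD version installed in the cluster.

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: retain-on-delete
driver: hostpath.csi.k8s.io   # placeholder CSI driver
deletionPolicy: Retain        # dynamically provisioned VolumeSnapshotContents inherit this,
                              # so deleting a VolumeSnapshot keeps the on-disk snapshot
```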
{ "category": "Runtime", "file_name": "csi-snapshots.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The SUSHI Project is an umbrella project for all the NorthBound Plugins, for OpenStack, Kubernetes, Mesos, VMware and more. The plugins themselves could have multiple hosting locations, however the OpenSDS Sushi Project repo should always have the most up-to-date version. Sushi will also seek to collaborate with other upstream open source communities such as the Cloud Native Computing Foundation, Docker, OpenStack, and the Open Container Initiative. The OpenSDS community welcomes anyone who is interested in software defined storage and in shaping the future of cloud-era storage. If you are a company, you should consider joining the . If you are a developer who wants to be part of the code development that is happening now, please refer to the Contributing sections below. Mailing list: slack: # Ideas/Bugs:" } ]
{ "category": "Runtime", "file_name": "nbp.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, the Multus team supports the Kubernetes versions that the Kubernetes community maintains. See for the details. The latest Multus uses a container image base that ships without a shell, hence there is no shell command available. If you want to execute a shell in the Multus pod, please use the `-debug` image (e.g. ghcr.io/k8snetworkplumbingwg/multus-cni:snapshot-debug), which has a shell. Multus now uses to expose its code as a library. You can use the following command to import our code into your Go code. ``` go get gopkg.in/k8snetworkplumbingwg/multus-cni.v4 ``` Use GitHub as normal; you'll be presented with an option to submit an issue or enhancement request. Issues are considered stale after 90 days, after which the maintainers reserve the right to close them. Typically, we'll tag the submitter and ask for more information if necessary before closing. If an issue is closed that you don't feel is sufficiently resolved, please feel free to re-open the issue and provide any necessary information. You can use the built-in `./hack/build-go.sh` script! ``` git clone https://github.com/k8snetworkplumbingwg/multus-cni.git cd multus-cni ./hack/build-go.sh ``` Multus has Go unit tests (based on the Ginkgo framework). The following command drives the CI tests manually in your environment: ``` sudo ./hack/test-go.sh ``` The following are the best practices for Multus logging: add `logging.Debugf()` at the beginning of functions; in case of error handling, use `logging.Errorf()` with the given error info; `logging.Panicf()` should only be used for critical errors (it should NOT normally be used). A short sketch of these conventions appears at the end of this document. At the first maintainers' meeting, held twice yearly after January 1st and July 1st, a new version will be tagged if one has not already been tagged." } ]
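Below is the short sketch of the logging conventions referenced above. Only the module path `gopkg.in/k8snetworkplumbingwg/multus-cni.v4` is given in this document; the exact sub-package path of the logging package, and the assumption that `logging.Errorf` also returns the error value, are illustrative and should be verified against the source.

```go
package example

import (
	"fmt"

	"gopkg.in/k8snetworkplumbingwg/multus-cni.v4/pkg/logging" // assumed package path
)

// attachNetwork is a hypothetical function showing the documented conventions:
// Debugf at the start of the function, Errorf when handling an error.
func attachNetwork(name string) error {
	logging.Debugf("attachNetwork: begin, name=%q", name)

	if name == "" {
		// Errors are reported through logging.Errorf, which (assumption)
		// also returns the error value so it can be propagated directly.
		return logging.Errorf("attachNetwork: empty network name")
	}

	fmt.Println("attaching", name)
	return nil
}
```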
{ "category": "Runtime", "file_name": "development.md", "project_name": "Multus", "subcategory": "Cloud Native Network" }
[ { "data": "CRI-O builds for native package managers using Below is a compatibility matrix between versions of CRI-O (y-axis) and distributions (x-axis) <!-- markdownlint-disable MD013 --> | | Fedora 31+ | openSUSE | CentOS8 | CentOS8Stream | CentOS7 | DebianUnstable | DebianTesting | Debian 10 | Rasbian10 | xUbuntu20.04 | xUbuntu19.10 | xUbuntu19.04 | xUbuntu_18.04 | | - | - | -- | -- | | -- | | -- | | - | - | - | - | - | | 1.18 | | | | | | | | | | | | | | | 1.17 | | | | | | | | | | | | | | | 1.16 | | | | | | | | | | | | | | <!-- markdownlint-enable MD013 --> To install, choose a supported version for your operating system, and export it as a variable, like so: `export VERSION=1.18` We also save releases as subprojects. If you'd, for instance, like to use `1.18.3` you can set `export VERSION=1.18:1.18.3` ```shell sudo zypper install cri-o ``` ```shell sudo dnf module enable cri-o:$VERSION sudo dnf install cri-o ``` For Fedora, we only support setting minor versions. i.e: `VERSION=1.18`, and do not support pinning patch versions: `VERSION=1.18.3` To install on the following operating systems, set the environment variable ```$OS``` to the appropriate value from the following table: | Operating system | $OS | | - | -- | | Centos 8 | `CentOS_8` | | Centos 8 Stream | `CentOS8Stream` | | Centos 7 | `CentOS_7` | And then run the following as root: <!-- markdownlint-disable MD013 --> ```shell curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo yum install cri-o ``` <!-- markdownlint-enable MD013 --> Note: this tutorial assumes you have curl and gnupg installed To install on the following operating systems, set the environment variable ```$OS``` to the appropriate value from the following table: | Operating system | $OS | | - | -- | | Debian Unstable | `Debian_Unstable` | | Debian Testing | `Debian_Testing` | | Ubuntu 20.04 | `xUbuntu_20.04` | | Ubuntu 19.10 | `xUbuntu_19.10` | | Ubuntu 19.04 | `xUbuntu_19.04` | | Ubuntu 18.04 | `xUbuntu_18.04` | And then run the following as root: <!-- markdownlint-disable MD013 --> ```shell echo \"deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /\" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list echo \"deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /\" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list mkdir -p /usr/share/keyrings curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg apt-get update apt-get install cri-o ``` <!-- markdownlint-enable MD013 -->" } ]
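The repository setup above only installs the package. On systemd-based distributions, a typical follow-up looks like the snippet below; treat it as a hedged illustration — the unit name `crio` matches the upstream packaging, but confirm it for your distribution.

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now crio
sudo crio --version                      # confirm the installed version matches $VERSION
sudo systemctl status crio --no-pager
```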
{ "category": "Runtime", "file_name": "install-distro.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "The `yaml` Project is released on an as-needed basis. The process is as follows: 1. An issue is filed proposing a new release, with a changelog since the last release. 2. All must LGTM this release. 3. An OWNER runs `git tag -s $VERSION`, inserts the changelog into the tag message, and pushes the tag with `git push $VERSION`. 4. The release issue is closed. 5. An announcement email is sent to `[email protected]` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`." } ]
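As a hedged illustration of the tag-and-push step above (not part of the official process text), the commands usually look like this; the remote name `origin` and the example version are assumptions:

```shell
VERSION=v1.2.3
git tag -s "$VERSION"        # opens an editor: paste the changelog into the tag message
git push origin "$VERSION"   # one common way to push just the release tag
```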
{ "category": "Runtime", "file_name": "RELEASE.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<BR> OpenEBS is an \"umbrella project\". Every project, repository and file in the OpenEBS organization adopts and follows the policies found in the Community repo umbrella project files. <BR> This project follows the" } ]
{ "category": "Runtime", "file_name": "GOVERNANCE.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Work in progress. Please contribute if you see an area that needs more detail. rkt runs applications packaged according to the open-source specification. ACIs consist of the root filesystem of the application container, a manifest, and an optional signature. ACIs are named with a URL-like structure. This naming scheme allows for a decentralized discovery of ACIs, related signatures and public keys. rkt uses these hints to execute . rkt can execute ACIs identified by name, hash, local file path, or URL. If an ACI hasn't been cached on disk, rkt will attempt to find and download it. To use rkt's , enable registration with the `--mds-register` flag when . rkt provides subcommands to list, get status, and clean its pods. rkt provides subcommands to list, inspect and export images in its local store. The metadata service helps running apps introspect their execution environment and assert their pod identity. The API service allows clients to list and inspect pods and images running under rkt. In addition to the flags used by individual `rkt` commands, `rkt` has a set of global options that are applicable to all commands. | Flag | Default | Options | Description | | | | | | | `--cpuprofile (hidden flag)` | '' | A file path | Write CPU profile to the file | | `--debug` | `false` | `true` or `false` | Prints out more debug information to `stderr` | | `--dir` | `/var/lib/rkt` | A directory path | Path to the `rkt` data directory | | `--insecure-options` | none | none, http, image, tls, pubkey, capabilities, paths, seccomp, all-fetch, all-run, all <br/> More information below. | Comma-separated list of security features to disable | | `--local-config` | `/etc/rkt` | A directory path | Path to the local configuration directory | | `--memprofile (hidden flag)` | '' | A file path | Write memory profile to the file | | `--system-config` | `/usr/lib/rkt` | A directory path | Path to the system configuration directory | | `--trust-keys-from-https` | `false` | `true` or `false` | Automatically trust gpg keys fetched from HTTPS (or HTTP if the insecure `pubkey` option is also specified) | | `--user-config` | '' | A directory path | Path to the user configuration directory | none: All security features are enabled http: Allow HTTP connections. Be warned that this will send any credentials as clear text, allowing anybody with access to your network to obtain them. It will also perform no validation of the remote server, making it possible for an attacker to impersonate the remote server. This applies specifically to fetching images, signatures, and gpg pubkeys. image: Disables verifying image signatures. If someone is able to replace the image on the server with a modified one or is in a position to impersonate the server, they will be able to force you to run arbitrary code. tls: Accept any certificate from the server and any host name in that certificate. This will make it possible for attackers to spoof the remote server and provide malicious images. pubkey: Allow fetching pubkeys via insecure connections (via HTTP connections or from servers with unverified" }, { "data": "This slightly extends the meaning of the `--trust-keys-from-https` flag. This will make it possible for an attacker to spoof the remote server, potentially providing fake keys and allowing them to provide container images that have been tampered with. capabilities: Gives all to apps. This allows an attacker that is able to execute code in the container to trivially escalate to root privileges on the host. 
paths: Disables inaccessible and read-only paths. This makes it easier for an attacker who can gain control over a single container to execute code in the host system, potentially allowing them to escape from the container. This also leaks additional information. seccomp: Disables . This increases the attack surface available to an attacker who can gain control over a single container, potentially making it easier for them to escape from the container. all-fetch: Disables the following security checks: image, tls, http all-run: Disables the following security checks: capabilities, paths, seccomp all: Disables all security checks By default, rkt will send logs directly to stdout/stderr, allowing them to be captured by the invoking process. On host systems running systemd, rkt will attempt to integrate with journald on the host. In this case, the logs can be accessed directly via journalctl. To read the logs of a running pod, get the pod's machine name from `machinectl`: ``` $ machinectl MACHINE CLASS SERVICE rkt-bc3c1451-2e81-45c6-aeb0-807db44e31b4 container rkt 1 machines listed. ``` or `rkt list --full` ``` $ rkt list --full UUID APP IMAGE NAME IMAGE ID STATE CREATED STARTED NETWORKS bc3c1451-2e81-45c6-aeb0-807db44e31b4 etcd coreos.com/etcd:v2.3.4 sha512-7f05a10f6d2c running 2016-05-18 10:07:35.312 +0200 CEST 2016-05-18 10:07:35.405 +0200 CEST default:ip4=172.16.28.83 redis registry-1.docker.io/library/redis:3.2 sha512-6eaaf936bc76 ``` The pod's machine name will be the pod's UUID prefixed with `rkt-`. Given this machine name, logs can be retrieved by `journalctl`: ``` $ journalctl -M rkt-bc3c1451-2e81-45c6-aeb0-807db44e31b4 [...] ``` To get logs from one app in the pod: ``` $ journalctl -M rkt-bc3c1451-2e81-45c6-aeb0-807db44e31b4 -t etcd [...] $ journalctl -M rkt-bc3c1451-2e81-45c6-aeb0-807db44e31b4 -t redis [...] ``` Additionally, logs can be programmatically accessed via the . Currently there are two known main issues with logging in rkt: In some rare situations when an application inside the pod is writing to `/dev/stdout` and `/dev/stderr` (i.e. nginx) there is no way to obtain logs. The app should be modified so it will write to `stdout` or `syslog`. In the case of nginx the following snippet should be added to ```/etc/nginx/nginx.conf```: ``` error_log stderr; http { access_log syslog:server=unix:/dev/log main; [...] } ``` Some applications, like etcd 3.0, write directly to journald. Such log entries will not be written to stdout or stderr. These logs can be retrieved by passing the machine ID to journalctl: ``` $ journalctl -M rkt-bc3c1451-2e81-45c6-aeb0-807db44e31b4 ``` For the specific etcd case, since release 3.1.0-rc.1 it is possible to force emitting logs to stdout via a `--log-output=stdout` command-line option. To read the logs of a stopped pod, use: ``` journalctl -m MACHINEID=132f9d560e3f4d1eba8668efd488bb62 [...] ``` On some distributions such as Ubuntu, persistent journal storage is not enabled by default. In this case, it is not possible to get the logs of a stopped pod. Persistent journal storage can be enabled with `sudo mkdir /var/log/journal` before starting the pods." } ]
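Tying the global options and the journald integration above together, a typical interactive session might look like the sketch below. The image reference is only an example, and fetching Docker images requires relaxing image signature verification, as described in the insecure options list above.

```shell
# Run with debug output, accepting an unsigned (Docker) image.
sudo rkt --debug --insecure-options=image run docker://redis

# Find the pod and read its logs through journald.
sudo rkt list --full            # note the pod UUID
machinectl                      # machine name is the UUID prefixed with "rkt-"
journalctl -M rkt-<uuid> -t redis
```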
{ "category": "Runtime", "file_name": "commands.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete a service ``` cilium-dbg service delete { <service id> | --all } [flags] ``` ``` --all Delete all services -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage services & loadbalancers" } ]
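A brief usage sketch for this subcommand; the numeric ID is a placeholder, and the `list` companion command shown here belongs to the parent "Manage services & loadbalancers" command referenced above:

```shell
cilium-dbg service list           # look up the numeric ID of the service to remove
cilium-dbg service delete 42      # delete a single service by ID (placeholder ID)
cilium-dbg service delete --all   # or remove every service entry
```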
{ "category": "Runtime", "file_name": "cilium-dbg_service_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Firecracker is running serverless workloads at scale within AWS, but it's still day 1 on the journey guided by our . There's a lot more to build and we welcome all contributions. There's a lot to contribute to in Firecracker. We've opened issues for all the features we want to build and improvements we want to make. Good first issues are labeled accordingly. We're also keen to hearing about your use cases and how we can support them, your ideas, and your feedback for what's already here. If you're just looking for quick feedback for an idea or proposal, open an or chat with us on the . Follow the for submitting your changes to the Firecracker codebase. If you want to receive high-level but still commit-based feedback for a contribution, follow the steps instead. Firecracker uses the fork-and-pull development model. Follow these steps if you want to merge your changes to Firecracker: Within your fork of , create a branch for your contribution. Use a meaningful name. Create your contribution, meeting all against the main branch of the Firecracker repository. Add two reviewers to your pull request (a maintainer will do that for you if you're new). Work with your reviewers to address any comments and obtain a minimum of 2 approvals, at least one of which must be provided by . To update your pull request amend existing commits whenever applicable and then push the new changes to your pull request branch. Once the pull request is approved, one of the maintainers will merge it. If you just want to receive feedback for a contribution proposal, open an RFC (Request for Comments) pull request: On your fork of , create a branch for the contribution you want feedback on. Use a meaningful name. Create your proposal based on the existing codebase. against the main branch of the Firecracker repository. Prefix your pull request name with `[RFC]`. Discuss your proposal with the community on the pull request page (or on any other channel). Add the conclusion(s) of this discussion to the pull request page. Most quality and style standards are enforced automatically during integration testing. For ease of use you can setup a git pre-commit hook by running the following in the Firecracker root directory: ``` cargo install rusty-hook rusty-hook init ``` Your contribution needs to meet the following standards: Separate each logical change into its own commit. Each commit must pass all unit & code style tests, and the full pull request must pass all integration tests. See for information on how to run tests. Unit test coverage must increase the overall project code coverage. Include integration tests for any new functionality in your pull request. Document all your public functions. Add a descriptive message for each commit. Follow . A good commit message may look like ``` A descriptive title of 72 characters or fewer A concise description where each line is 72 characters or fewer. Signed-off-by: <A full name> <A email> Co-authored-by: <B full name> <B email> ``` Usage of `unsafe` is heavily discouraged. If `unsafe` is required, it should be accompanied by a comment detailing its... Justification, potentially including quantifiable reasons why safe alternatives were not used" }, { "data": "via a benchmark showing a valuable[^1] performance improvements). Safety, as per . This comment must list all invariants of the called function, and explain why there are upheld. If relevant, it must also prove that is not possible. E.g. ```rust // Test creating a resource. 
// JUSTIFICATION: This cannot be accomplished without unsafe as // `external_function()` returns `RawFd`. An alternative here still uses // unsafe e.g. `drop(unsafe { OwnedFd::fromrawfd(external_function()) });`. // SAFETY: `external_function()` returns a valid file descriptor. unsafe { libc::close(external_function()); } ``` Document your pull requests. Include the reasoning behind each change, and the testing done. Acknowledge Firecracker's and certify that no part of your contribution contravenes this license by signing off on all your commits with `git -s`. Ensure that every file in your pull request has a header referring to the repository license file. Firecracker is an open source product released under the . We respect intellectual property rights of others and we want to make sure all incoming contributions are correctly attributed and licensed. A Developer Certificate of Origin (DCO) is a lightweight mechanism to do that. The DCO is a declaration attached to every contribution made by every developer. In the commit message of the contribution, the developer simply adds a `Signed-off-by` statement and thereby agrees to the DCO, which you can find below or at DeveloperCertificate.org (<http://developercertificate.org/>). ``` Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as Indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. ``` We require that every contribution to Firecracker is signed with a Developer Certificate of Origin. DCO checks are enabled via <https://github.com/apps/dco>, and your PR will fail CI without it. Additionally, we kindly ask you to use your real name. We do not accept anonymous contributors, nor those utilizing pseudonyms. Each commit must include a DCO which looks like this: ``` Signed-off-by: Jane Smith <[email protected]> ``` You may type this line on your own when writing your commit messages. However, if your `user.name` and `user.email` are set in your git config, you can use `-s` or `--signoff` to add the `Signed-off-by` line to the end of the commit message automatically. Forgot to add DCO to a commit? Amend it with `git commit --amend -s`. valuable." } ]
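For convenience, the sign-off requirement above usually boils down to the following git workflow, shown here only as an illustration (the name and email are placeholders):

```shell
git config user.name  "Jane Smith"
git config user.email "[email protected]"

git commit -s -m "A descriptive title of 72 characters or fewer"  # -s appends Signed-off-by
git commit --amend -s --no-edit                                   # add a forgotten sign-off
```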
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "This document tracks people and use cases for rkt in production. , and help us keep the list up-to-date. BlaBlaCar is a trusted car-pooling service based in France. They've cited the stability of rkt as a big appeal. Media report from : \"rkt has become the container technology of choice for the French carpool specialist Blablacar. The company, which has adopted early container technologies, now relies on rkt and CoreOS for 90% of its applications.\" Media report from , \"Rocket tackled the limitations we had identified with Docker. In particular, the solution eliminated the use of a daemon (process running in the background, Editor's note) and approached the network part in a very modular way\" rkt is used by Container Linux to execute the Kubernetes on-node agent called the \"kubelet\". This enables users to have cluster-controlled versioning of this critical component. There are many documented production users of Kubernetes and Container Linux, including all of the users of . Kumulus Technologies offers classes and services to optimize your Cloud. They've cited rkt's pod-native features and Kubernetes support as a reason for their adoption. Kinvolk are a professional consulting team and active contributors to systemd, rkt, and the Linux kernel. rkt helps them easily . Per Milosz Tanski, : \"The same experiences we switched to using rkt, supervised by upstart (and now systemd). We have an \"application\" state template in our salt config and every docker update something would cause all of them to fail. Thankful the \"application\" state template abstracted running container enough were we switched from docker -> rkt under the covers without anybody noticing, except now we no longer fearing of container software updates.\" These are blog posts and stories from people evaluating rkt. Although most aren't (yet) production use cases, the links offer useful information and shared experience in deploying rkt: moving from docker to rkt; using rkt and systemd; what happens when you run a rkt container; writing his thesis on the security model of rkt; a beginners guide to rkt containers; rkt support is on the roadmap (see the docker vs rkt comparison); getting started with containers; an overview of the rkt container engine." } ]
{ "category": "Runtime", "file_name": "production-users.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "A primary goal of OpenSDS is to be inclusive to the largest number of contributors, with the most varied and diverse backgrounds possible. As such, we are committed to providing a friendly, safe and welcoming environment for all, regardless of gender, sexual orientation, ability, ethnicity, socioeconomic status, and religion (or lack thereof). This code of conduct outlines our expectations for all those who participate in our community, as well as the consequences for unacceptable behavior. We invite all those who participate in OpenSDS to help us create safe and positive experiences for everyone. A supplemental goal of this Code of Conduct is to increase open source citizenship by encouraging participants to recognize and strengthen the relationships between our actions and their effects on our community. Communities mirror the societies in which they exist and positive action is essential to counteract the many forms of inequality and abuses of power that exist in society. If you see someone who is making an extra effort to ensure our community is welcoming, friendly, and encourages all participants to contribute to the fullest extent, we want to know. The following behaviors are expected and requested of all community members: Participate in an authentic and active way. In doing so, you contribute to the health and longevity of this community. Exercise consideration and respect in your speech and actions. Attempt collaboration before conflict. Refrain from demeaning, discriminatory, or harassing behavior and speech. Be mindful of your surroundings and of your fellow participants. Alert community leaders if you notice a dangerous situation, someone in distress, or violations of this Code of Conduct, even if they seem inconsequential. Remember that community event venues may be shared with members of the public; please be respectful to all patrons of these locations. The following behaviors are considered harassment and are unacceptable within our community: Violence, threats of violence or violent language directed against another person. Sexist, racist, homophobic, transphobic, ableist or otherwise discriminatory jokes and language. Posting or displaying sexually explicit or violent material. Posting or threatening to post other peoples personally identifying information" }, { "data": "Personal insults, particularly those related to gender, sexual orientation, race, religion, or disability. Inappropriate photography or recording. Inappropriate physical contact. You should have someones consent before touching them. Unwelcome sexual attention. This includes, sexualized comments or jokes; inappropriate touching, groping, and unwelcome sexual advances. Deliberate intimidation, stalking or following (online or in person). Advocating for, or encouraging, any of the above behavior. Sustained disruption of community events, including talks and presentations. Unacceptable behavior from any community member, including sponsors and those with decision-making authority, will not be tolerated. Anyone asked to stop unacceptable behavior is expected to comply immediately. If a community member engages in unacceptable behavior, the community organizers may take any action they deem appropriate, up to and including a temporary ban or permanent expulsion from the community without warning (and without refund in the case of a paid event). If you are subject to or witness unacceptable behavior, or have any other concerns, please notify a community organizer as soon as possible. 
[email protected]. Additionally, community organizers are available to help community members engage with local law enforcement or to otherwise help those experiencing unacceptable behavior feel safe. In the context of in-person events, organizers will also provide escorts as desired by the person experiencing distress. If you feel you have been falsely or unfairly accused of violating this Code of Conduct, you should notify OpenSDS with a concise description of your grievance. Your grievance will be handled in accordance with our existing governing policies. We expect all community participants (contributors, paid or otherwise; sponsors; and other guests) to abide by this Code of Conduct in all community venues, online and in-person, as well as in all one-on-one communications pertaining to community business. This code of conduct and its related procedures also apply to unacceptable behavior occurring outside the scope of community activities when such behavior has the potential to adversely affect the safety and well-being of community members. [email protected] This Code of Conduct is distributed under a . Portions of text derived from the and the . Retrieved on November 22, 2016 from" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "`rkt` reads configuration from two or three directories - a system directory, a local directory and, if provided, a user directory. The system directory defaults to `/usr/lib/rkt`, the local directory to `/etc/rkt`, and the user directory to an empty string. These locations can be changed with command line flags described below. The system directory should contain a configuration created by a vendor (e.g. distribution). The contents of this directory should not be modified - it is meant to be read only. The local directory keeps configuration local to the machine. It can be modified by the admin. The user directory may hold some user specific configuration. It may be useful for specifying credentials used for fetching images without spilling them to some directory readable by everyone. `rkt` looks for configuration files with the `.json` file name extension in subdirectories beneath the system and local directories. `rkt` does not recurse down the directory tree to search for these files. Users may therefore put additional appropriate files (e.g., documentation) alongside `rkt` configuration in these directories, provided such files are not named with the `.json` extension. Every configuration file has two common fields: `rktKind` and `rktVersion`. Both fields' values are strings, and the subsequent fields are specified by this pair. The currently supported kinds and versions are described below. These fields must be specified and cannot be empty. `rktKind` describes the type of the configuration. This is to avoid putting unrelated values into a single monolithic file. `rktVersion` allows configuration versioning for each kind of configuration. A new version should be introduced when doing some backward-incompatible changes: for example, when removing a field or incompatibly changing its semantics. When a new field is added, a default value should be specified for it, documented, and used when the field is absent in any configuration file. This way, an older version of `rkt` can work with newer-but-compatible versions of configuration files, and newer versions of `rkt` can still work with older versions of configuration files. Configuration values in the system directory are superseded by the value of the same field if it exists in the local directory. The same relationship exists between the local directory and the user directory if the user directory is provided. The semantics of overriding configuration in this manner are specific to the `kind` and `version` of the configuration, and are described below. File names are not examined in determining local overrides. Only the fields inside configuration files need to match. To change the system configuration directory, use `--system-config` flag. To change the local configuration directory, use `--local-config` flag. To change the user configuration directory, use `--user-config` flag. The `auth` configuration kind is used to set up necessary credentials when downloading images and signatures. The configuration files should be placed inside the `auth.d` subdirectory (e.g., in the case of the default system/local directories, in `/usr/lib/rkt/auth.d` and/or `/etc/rkt/auth.d`). This version of the `auth` configuration specifies three additional fields: `domains`, `type` and `credentials`. The `domains` field is an array of strings describing hosts for which the following credentials should be used. Each entry must consist of a host/port combination in a URL as specified by RFC 3986. 
This field must be specified and cannot be empty. The `type` field describes the type of credentials to be sent. This field must be specified and cannot be empty. The `credentials` field is defined by the `type`" }, { "data": "It should hold all the data that are needed for successful authentication with the given hosts. This version of auth configuration supports three methods - basic HTTP authentication, OAuth Bearer Token, and AWS v4 authentication. Basic HTTP authentication requires two things - a user and a password. To use this type, define `type` as `basic` and the `credentials` field as a map with two keys - `user` and `password`. These fields must be specified and cannot be empty. For example: `/etc/rkt/auth.d/coreos-basic.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"coreos.com\", \"tectonic.com\"], \"type\": \"basic\", \"credentials\": { \"user\": \"foo\", \"password\": \"bar\" } } ``` OAuth Bearer Token authentication requires only a token. To use this type, define `type` as `oauth` and the `credentials` field as a map with only one key - `token`. This field must be specified and cannot be empty. For example: `/etc/rkt/auth.d/coreos-oauth.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"coreos.com\", \"tectonic.com\"], \"type\": \"oauth\", \"credentials\": { \"token\": \"sometoken\" } } ``` AWS v4 authentication requires three things - an access key ID, a secret access key and an AWS region. If the region is left empty, it will be determined automatically from the URL/domain. To use this type, define `type` as `aws` and the `credentials` field as a map with two or three keys - `accessKeyID` and `secretAccessKey` are mandatory, whilst `awsRegion` is optional and can be left empty. For example: `/etc/rkt/auth.d/coreos-aws.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"my-s3-bucket.s3.amazonaws.com\"], \"type\": \"aws\", \"credentials\": { \"accessKeyID\": \"foo\", \"secretAccessKey\": \"bar\", \"awsRegion\": \"us-east-1\" } } ``` Overriding is done for each domain. That means that the user can override authentication type and/or credentials used for each domain. As an example, consider this system configuration: `/usr/lib/rkt/auth.d/coreos.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"coreos.com\", \"tectonic.com\", \"kubernetes.io\"], \"type\": \"oauth\", \"credentials\": { \"token\": \"common-token\" } } ``` If only this configuration file is provided, then when downloading data from either `coreos.com`, `tectonic.com` or `kubernetes.io`, `rkt` would send an HTTP header of: `Authorization: Bearer common-token`. But with additional configuration provided in the local configuration directory, this can be overridden. For example, given the above system configuration and the following local configuration: `/etc/rkt/auth.d/specific-coreos.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"coreos.com\"], \"type\": \"basic\", \"credentials\": { \"user\": \"foo\", \"password\": \"bar\" } } ``` `/etc/rkt/auth.d/specific-tectonic.json`: ```json { \"rktKind\": \"auth\", \"rktVersion\": \"v1\", \"domains\": [\"tectonic.com\"], \"type\": \"oauth\", \"credentials\": { \"token\": \"tectonic-token\" } } ``` The result is that when downloading data from `kubernetes.io`, `rkt` still sends `Authorization: Bearer common-token`, but when downloading from `coreos.com`, it sends `Authorization: Basic Zm9vOmJhcg==` (i.e. 
`foo:bar` encoded in base64). For `tectonic.com`, it will send `Authorization: Bearer tectonic-token`. Note that within a particular configuration directory (either system or local), it is a syntax error for the same domain to be defined in multiple files. There are no command line flags for specifying or overriding the auth configuration. The `dockerAuth` configuration kind is used to set up necessary credentials when downloading data from Docker registries. The configuration files should be placed inside `auth.d` subdirectory (e.g. in `/usr/lib/rkt/auth.d` or `/etc/rkt/auth.d`). This version of `dockerAuth` configuration specifies two additional fields: `registries` and `credentials`. The `registries` field is an array of strings describing Docker registries for which the associated credentials should be used. This field must be specified and cannot be empty. A short list of popular Docker registries is given below. The `credentials` field holds the necessary data to authenticate against the Docker registry. This field must be specified and cannot be" }, { "data": "Currently, Docker registries only support basic HTTP authentication, so `credentials` has two subfields - `user` and `password`. These fields must be specified and cannot be empty. For registries like tools may be used to obtain credentials and registry endpoints. For example, use `aws ecr get-login` to fetch login credentials when using AWS. Please keep in mind that when using ECR the credentials will expire and will need to be refreshed. Some popular Docker registries: registry-1.docker.io (Assumed as the default when no specific registry is named on the rkt command line, as in `docker:///redis`.) quay.io gcr.io `<awsaccountid>`.dkr.ecr.`<region>`.amazonaws.com (AWS ECR) Example `dockerAuth` configuration: `/etc/rkt/auth.d/docker.json`: ```json { \"rktKind\": \"dockerAuth\", \"rktVersion\": \"v1\", \"registries\": [\"registry-1.docker.io\", \"quay.io\"], \"credentials\": { \"user\": \"foo\", \"password\": \"bar\" } } ``` Overriding is done for each registry. That means that the user can override credentials used for each registry. For example, given this system configuration: `/usr/lib/rkt/auth.d/docker.json`: ```json { \"rktKind\": \"dockerAuth\", \"rktVersion\": \"v1\", \"registries\": [\"registry-1.docker.io\", \"gcr.io\", \"quay.io\"], \"credentials\": { \"user\": \"foo\", \"password\": \"bar\" } } ``` If only this configuration file is provided, then when downloading images from either `registry-1.docker.io`, `gcr.io`, or `quay.io`, `rkt` would use user `foo` and password `bar`. But with additional configuration provided in the local configuration directory, this can be overridden. For example, given the above system configuration and the following local configuration: `/etc/rkt/auth.d/specific-quay.json`: ```json { \"rktKind\": \"dockerAuth\", \"rktVersion\": \"v1\", \"registries\": [\"quay.io\"], \"credentials\": { \"user\": \"baz\", \"password\": \"quux\" } } ``` `/etc/rkt/auth.d/specific-gcr.json`: ```json { \"rktKind\": \"dockerAuth\", \"rktVersion\": \"v1\", \"registries\": [\"gcr.io\"], \"credentials\": { \"user\": \"goo\", \"password\": \"gle\" } } ``` The result is that when downloading images from `registry-1.docker.io`, `rkt` still sends user `foo` and password `bar`, but when downloading from `quay.io`, it uses user `baz` and password `quux`; and for `gcr.io` it will use user `goo` and password `gle`. 
Note that within a particular configuration directory (either system or local), it is a syntax error for the same Docker registry to be defined in multiple files. There are no command line flags for specifying or overriding the docker auth configuration. The `paths` configuration kind is used to customize the various paths that rkt uses. The configuration files should be placed inside the `paths.d` subdirectory (e.g., in the case of the default system/local directories, in `/usr/lib/rkt/paths.d` and/or `/etc/rkt/paths.d`). This version of the `paths` configuration specifies two additional fields: `data` and `stage1-images`. The `data` field is a string that defines where image data and running pods are stored. This field is optional. The `stage1-images` field is a string that defines where are the stage1 images are stored, so rkt can search for them when using the `--stage1-from-dir` flag. This field is optional. Example `paths` configuration: `/etc/rkt/paths.d/paths.json`: ```json { \"rktKind\": \"paths\", \"rktVersion\": \"v1\", \"data\": \"/home/me/rkt/data\", \"stage1-images\": \"/home/me/rkt/stage1-images\" } ``` Overriding is done for each path. For example, given this system configuration: `/usr/lib/rkt/paths.d/data.json`: ```json { \"rktKind\": \"paths\", \"rktVersion\": \"v1\", \"data\": \"/opt/rkt-stuff/data\" } ``` If only this configuration file is provided, then rkt will store images and pods in the `/opt/rkt-stuff/data` directory. Also, when user passes `--stage1-from-dir=stage1.aci` to rkt, rkt will search for this file in the directory specified at build time (usually `/usr/lib/rkt/stage1-images`). But with additional configuration provided in the local configuration directory, this can be overridden. For example, given the above system configuration and the following local configuration: `/etc/rkt/paths.d/paths.json`: ```json { \"rktKind\": \"paths\", \"rktVersion\": \"v1\", \"data\": \"/home/me/rkt\" } ``` Now rkt will store the images and pods in the `/home/me/rkt` directory. It will not know about any other data" }, { "data": "Also, rkt will still search for the stage1 images in the directory specified at build time for the `--stage1-from-dir` flag. To override the stage1 images directory: `/etc/rkt/paths.d/stage1.json`: ```json { \"rktKind\": \"paths\", \"rktVersion\": \"v1\", \"stage1-images\": \"/home/me/stage1-images\" } ``` Now rkt will search in the `/home/me/stage1/images` directory, not in the directory specified at build time. The `data` field can be overridden with the `--dir` flag. The `stage1-images` field cannot be overridden with a command line flag. The `stage1` configuration kind is used to set up a default stage1 image. The configuration files should be placed inside the `stage1.d` subdirectory (e.g., in the case of the default system/local directories, in `/usr/lib/rkt/stage1.d` and/or `/etc/rkt/stage1.d`). This version of the `stage1` configuration specifies three additional fields: `name`, `version` and `location`. The `name` field is a string specifying a name of a default stage1 image. This field is optional. If specified, the `version` field must be specified too. The `version` field is a string specifying a version of a default stage1 image. This field is optional. If specified, the `name` field must be specified too. The `location` field is a string describing the location of a stage1 image file. This field is optional. 
The `name` and `version` fields are used by `rkt` (unless overridden with a run-time flag or left empty) to search for the stage1 image in the image store. If it is not found there then `rkt` will use a value from the `location` field (again, unless overridden or empty) to fetch the stage1 image. If the `name`, `version` and `location` fields are specified then it is expected that the file in `location` is a stage1 image with the same name and version in manifest as values of the `name` and `version` fields, respectively. Note that this is not enforced in any way. The `location` field can be: a `file://` URL a `http://` URL a `https://` URL a `docker://` URL an absolute path (basically the same as a `file://` URL) An example: ```json { \"rktKind\": \"stage1\", \"rktVersion\": \"v1\", \"name\": \"example.com/rkt/stage1\", \"version\": \"1.2.3\", \"location\": \"https://example.com/download/stage1-1.2.3.aci\" } ``` Overriding is done separately for the name-and-version pairs and for the locations. That means that the user can override either both a name and a version or a location. As an example, consider this system configuration: `/usr/lib/rkt/stage1.d/coreos.json`: ```json { \"rktKind\": \"stage1\", \"rktVersion\": \"v1\", \"name\": \"coreos.com/rkt/stage1-coreos\", \"version\": \"0.15.0+git\", \"location\": \"/usr/libexec/rkt/stage1-coreos.aci\" } ``` If only this configuration file is provided then `rkt` will check if `coreos.com/rkt/stage1-coreos` with version `0.15.0+git` is in image store. If it is absent then it would fetch it from `/usr/libexec/rkt/stage1-coreos.aci`. But with additional configuration provided in the local configuration directory, this can be overridden. For example, given the above system configuration and the following local configurations: `/etc/rkt/stage1.d/specific-coreos.json`: ```json { \"rktKind\": \"stage1\", \"rktVersion\": \"v1\", \"location\": \"https://example.com/coreos-stage1.aci\" } ``` The result is that `rkt` will still look for `coreos.com/rkt/stage1-coreos` with version `0.15.0+git` in the image store, but if it is not found, it will fetch it from `https://example.com/coreos-stage1.aci`. To continue the example, we can also override name and version with an additional configuration file like this: `/etc/rkt/stage1.d/other-name-and-version.json`: ```json { \"rktKind\": \"stage1\", \"rktVersion\": \"v1\", \"name\": \"example.com/rkt/stage1\", \"version\": \"1.2.3\" } ``` Now `rkt` will search for `example.com/rkt/stage1` with version `1.2.3` in the image store before trying to fetch the image from `https://example.com/coreos-stage1.aci`. Note that within a particular configuration directory (either system or local), it is a syntax error for the name, version or location to be defined in multiple files. The `name`, `version` and `location` fields are ignored in favor of a value coming from `--stage1-url`, `--stage1-path`, `--stage1-name`, `--stage1-hash`, or `--stage1-from-dir` flags." } ]
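For completeness, the user directory (selected with `--user-config`) follows the same override rules, although none of the examples above show it. The sketch below is hypothetical: the path, domain, and token are placeholders rather than values taken from this document. Given the system and local `auth` configurations above, a user-level entry such as the following would take precedence for `coreos.com` only.

`~/.config/rkt/auth.d/user-coreos.json` (assuming the directory is passed via `--user-config=$HOME/.config/rkt`):

```json
{
    "rktKind": "auth",
    "rktVersion": "v1",
    "domains": ["coreos.com"],
    "type": "oauth",
    "credentials": {
        "token": "my-personal-token"
    }
}
```

With this in place, downloads from `coreos.com` would send `Authorization: Bearer my-personal-token`, while the local and system entries continue to apply to the other domains.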
{ "category": "Runtime", "file_name": "configuration.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Name | Type | Description | Notes | - | - | - ReceiverUrl | string | | `func NewReceiveMigrationData(receiverUrl string, ) *ReceiveMigrationData` NewReceiveMigrationData instantiates a new ReceiveMigrationData object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewReceiveMigrationDataWithDefaults() *ReceiveMigrationData` NewReceiveMigrationDataWithDefaults instantiates a new ReceiveMigrationData object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *ReceiveMigrationData) GetReceiverUrl() string` GetReceiverUrl returns the ReceiverUrl field if non-nil, zero value otherwise. `func (o *ReceiveMigrationData) GetReceiverUrlOk() (*string, bool)` GetReceiverUrlOk returns a tuple with the ReceiverUrl field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *ReceiveMigrationData) SetReceiverUrl(v string)` SetReceiverUrl sets ReceiverUrl field to given value." } ]
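A minimal Go sketch of how these generated accessors are typically used follows. The import path and socket URLs are assumptions for illustration (the reference above does not name the client package); only the constructor, getter, and setter listed above are exercised.

```go
package main

import (
	"fmt"

	client "example.com/generated/client" // assumed import path for the generated client package
)

func main() {
	// NewReceiveMigrationData requires the receiver URL, its only required property.
	data := client.NewReceiveMigrationData("unix:/tmp/migration.sock") // placeholder URL

	// GetReceiverUrlOk reports whether the field has been set before it is dereferenced.
	if url, ok := data.GetReceiverUrlOk(); ok {
		fmt.Println("receiver URL:", *url)
	}

	// SetReceiverUrl updates the field; GetReceiverUrl reads it back directly.
	data.SetReceiverUrl("unix:/run/migration.sock") // placeholder URL
	fmt.Println(data.GetReceiverUrl())
}
```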
{ "category": "Runtime", "file_name": "ReceiveMigrationData.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "StratoVirt supports taking a snapshot of a paused VM as a VM template. This template can be used to warm start a new VM. Warm start skips the kernel boot stage and userspace initialization stage to boot the VM in a very short time. First, we create a StratoVirt VM: ```shell $ ./stratovirt \\ -machine microvm \\ -kernel path/to/vmlinux.bin \\ -append \"console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-device,drive=rootfs,id=rootfs \\ -qmp unix:path/to/socket,server,nowait \\ -serial stdio ``` After the VM boots up, pause the VM with QMP: ```shell $ ncat -U path/to/socket {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} {\"execute\":\"stop\"} {\"event\":\"STOP\",\"data\":{},\"timestamp\":{\"seconds\":1583908726,\"microseconds\":162739}} {\"return\":{}} ``` When the VM is in the paused state, it is safe to take a snapshot of the VM into the specified directory with QMP. ```shell $ ncat -U path/to/socket {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} {\"execute\":\"migrate\", \"arguments\":{\"uri\":\"file:path/to/template\"}} {\"return\":{}} ``` Two files will be created in the given directory on the system. ```shell $ ls path/to/template memory state ``` File `state` contains the device state data of VM devices. File `memory` contains the guest memory data of the VM. The file size is determined by the size of the VM's guest memory. Restore from the VM template with the command below: ```shell $ ./stratovirt \\ -machine microvm \\ -kernel path/to/vmlinux.bin \\ -append \"console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-device,drive=rootfs,id=rootfs \\ -qmp unix:path/to/socket,server,nowait \\ -serial stdio \\ -incoming file:path/to/template ``` The device configuration must be the same as the template VM's: its cpu number, guest memory size, device number and type can't be changed. For the drive file, only the previous file or its backups are supported. After that, the VM is created from the template successfully. Use the QMP command `query-migrate` to check the snapshot state: ```shell $ ncat -U path/to/socket {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} {\"execute\":\"query-migrate\"} {\"return\":{\"status\":\"completed\"}} ``` There are 5 states during snapshot: `None`: Resources are not prepared at all. `Setup`: Resources are set up, ready to take a snapshot. `Active`: In snapshot. `Completed`: Snapshot succeeded. `Failed`: Snapshot failed. Snapshot-restore supported machine types: `microvm` `q35` (on x86_64 platform) `virt` (on aarch64 platform) Some devices and features don't support snapshot yet: `vhost-net` `vfio` devices `balloon` `hugepage`,`mem-shared`,`backend file of memory` `pmu` `sve` `gic-version=2` Some device attributes can't be changed: `virtio-net`: mac `virtio-blk`: file(only ordinary file or copy file), serial_num `device`: bus, addr `smp` `m` For machine type `microvm`, if `hot-replace` was used before the snapshot, add the newly replaced device to the restore command." } ]
{ "category": "Runtime", "file_name": "snapshot.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "The most common reason for this issue is improper use of struct tags (eg. `yaml` or `json`). Viper uses under the hood for unmarshaling values which uses `mapstructure` tags by default. Please refer to the library's documentation for using other struct tags. Viper installation seems to fail a lot lately with the following (or a similar) error: ``` cannot find package \"github.com/hashicorp/hcl/tree/hcl1\" in any of: /usr/local/Cellar/go/1.15.7_1/libexec/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOROOT) /Users/user/go/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOPATH) ``` As the error message suggests, Go tries to look up dependencies in `GOPATH` mode (as it's commonly called) from the `GOPATH`. Viper opted to use to manage its dependencies. While in many cases the two methods are interchangeable, once a dependency releases new (major) versions, `GOPATH` mode is no longer able to decide which version to use, so it'll either use one that's already present or pick a version (usually the `master` branch). The solution is easy: switch to using Go Modules. Please refer to the on how to do that. tl;dr* `export GO111MODULE=on` This is a YAML 1.1 feature according to . Potential solutions are: Quoting values resolved as boolean Upgrading to YAML v3 (for the time being this is possible by passing the `viper_yaml3` tag to your build)" } ]
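To make the struct-tag point above concrete, here is a small sketch showing `mapstructure` tags, which `viper.Unmarshal` reads by default; tagging the same fields only with `yaml` or `json` is the usual cause of values silently not being decoded. The configuration file name and keys below are made up for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/spf13/viper"
)

// Config uses `mapstructure` tags: this is what viper.Unmarshal looks for by default.
type Config struct {
	Host string `mapstructure:"host"`
	Port int    `mapstructure:"port"`
}

func main() {
	viper.SetConfigName("config") // hypothetical config.yaml with "host" and "port" keys
	viper.SetConfigType("yaml")
	viper.AddConfigPath(".")
	if err := viper.ReadInConfig(); err != nil {
		log.Fatal(err)
	}

	var c Config
	if err := viper.Unmarshal(&c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", c)
}
```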
{ "category": "Runtime", "file_name": "TROUBLESHOOTING.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-operator-generic completion zsh) To load completions for every new session, execute once: cilium-operator-generic completion zsh > \"${fpath[1]}/_cilium-operator-generic\" cilium-operator-generic completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-operator-generic You will need to start a new shell for this setup to take effect. ``` cilium-operator-generic completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-generic_completion_zsh.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "CubeFS is compatible with the Hadoop FileSystem interface protocol, and users can use CubeFS to replace the Hadoop file system (HDFS). This chapter describes the installation and configuration process of CubeFS in the Hadoop storage ecosystem. Set up an accessible CubeFS cluster and need to in advance. The SDK dynamic library provided by CubeFS for Java calling. The CubeFS plugin for Hadoop. The third-party dependency package jna-5.4.0.jar (minimum supported version 4.0, recommended 5.4 or above) for the cfs-hadoop.jar plugin. ::: warning Note The current CubeFS Hadoop does not support file permission management of HDFS. ::: ```shell git clone https://github.com/cubefs/cubefs.git cd cubefs make libsdk ``` ::: warning Note Since the compiled package depends on glibc, the glibc version of the compilation environment and the runtime environment must be consistent. ::: ```shell git clone https://github.com/cubefs/cubefs-hadoop.git mvn package -Dmaven.test.skip=true ``` The above dependency packages must be installed on each node of the Hadoop cluster and must be found in the `CLASSPATH`. Each participating node in the Hadoop cluster must install the native CubeFS Hadoop client. | Resource Package Name | Installation Path | |--|--| | cfs-hadoop.jar | $HADOOP_HOME/share/hadoop/common/lib | | jna-5.4.0.jar | $HADOOP_HOME/share/hadoop/common/lib | | libcfs.so | $HADOOP_HOME/lib/native | After correctly placing the above resource packages, you need to make simple modifications to the core-site.xml configuration file, whose path is: `$HADOOP_HOME/etc/hadoop/core-site.xml`. Add the following configuration content to `core-site.xml`: ```yuml <property> <name>fs.cfs.impl</name> <value>io.cubefs.CubefsFileSystem</value> </property> <property> <name>cfs.master.address</name> <value>your.master.address[ip:port,ip:port,ip:port]</value> </property> <property> <name>cfs.log.dir</name> <value>your.log.dir[/tmp/cfs-access-log]</value> </property> <property> <name>cfs.log.level</name> <value>INFO</value> </property> <property> <name>cfs.access.key</name> <value>your.access.key</value> </property> <property> <name>cfs.secret.key</name> <value>your.secret.key</value> </property> <property> <name>cfs.min.buffersize</name> <value>67108864</value> </property> <property> <name>cfs.min.read.buffersize</name> <value>4194304</value> </property> <property> ``` Parameter Description: | Property | Value | Notes | |:|:|:--| | fs.cfs.impl | io.cubefs.CubefsFileSystem | Specify the storage implementation class with scheme `cfs://` | | cfs.master.address | | CubeFS master address, can be `ip+port` format, `ip:port`, `ip:port`, `ip:port`, or domain name | | cfs.log.dir | /tmp/cfs-access-log | Log path | | cfs.log.level | INFO | Log level | | cfs.access.key | | AccessKey of the user to which the CubeFS file system belongs | | cfs.secret.key | | SecretKey of the user to which the CubeFS file system belongs | | cfs.min.buffersize | 8MB | Write buffer" }, { "data": "The default value is recommended for replica volumes, and 64MB is recommended for EC volumes. | | cfs.min.read.buffersize | 128KB | Read buffer size. The default value is recommended for replica volumes, and 4MB is recommended for EC volumes. | After the configuration is completed, you can use the `ls` command to verify whether the configuration is successful: ```shell hadoop fs -ls cfs://volumename/ ``` If there is no error message, the configuration is successful. 
Hive scenario: Copy the jar package and modify the configuration on all nodemanagers, hive servers, and metastores in the Yarn cluster. Spark scenario: Copy the jar package and modify the configuration on all execution nodes (Yarn nodemanagers) in the Spark computing cluster and the Spark client. Presto scenario: Copy the jar package and modify the configuration on all worker nodes and coordinator nodes in Presto. Flink scenario: Copy the jar package and modify the configuration on all JobManager nodes in Flink. ```shell cp cfs-hadoop.jar $HADOOP_HOME/share/hadoop/common/lib cp jna-5.4.0 $HADOOP_HOME/share/hadoop/common/lib cp libcfs.so $HADOOP_HOME/lib/native ``` ::: tip Note After the configuration is changed for hive server, hive metastore, presto worker, and coordinator, the service process needs to be restarted on the server to take effect. ::: ```shell cp cfs-hadoop.jar $SPARK_HOME/jars/ cp libcfs.so $SPARK_HOME/jars/ cp jna-5.4.0 $SPARK_HOME/jars/ ``` ```shell cp cfs-hadoop.jar $PRESTO_HOME/plugin/hive-hadoop2 cp libcfs.so $PRESTO_HOME/plugin/hive-hadoop2 cp jna-5.4.0.jar $PRESTO_HOME/plugin/hive-hadoop2 ln -s $PRESTO_HOME/plugin/hive-hadoop2/libcfs.so /usr/lib sudo ldconfig ``` ```shell cp cfs-hadoop.jar $FLINK_HOME/lib cp jna-5.4.0.jar $FLINK_HOME/lib cp libcfs.so $FLINK_HOME/lib ln -s $FLINK_HOME/lib/libcfs.so /usr/lib sudo ldconfig ``` ```shell cp cfs-hadoop.jar $TRINO_HOME/plugin/iceberg cp jna-5.4.0.jar $TRINO_HOME/plugin/iceberg ``` The most common problem after deployment is the lack of packages. Check whether the resource package is copied to the corresponding location according to the installation steps. The common error messages are as follows: ```java java.lang.RuntimeException: java.lang.ClassNotFoundException: Class io.chubaofs.CubeFSFileSystem not found at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2349) at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2790) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387) ``` ```java Suppressed: java.lang.UnsatisfiedLinkError: libcfs.so: cannot open shared object file: No such file or directory at com.sun.jna.Native.open(Native Method) at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:191) ... 21 more Suppressed: java.lang.UnsatisfiedLinkError: libcfs.so: cannot open shared object file: No such file or directory at com.sun.jna.Native.open(Native Method) at com.sun.jna.NativeLibrary.loadLibrary(NativeLibrary.java:204) ... 21 more ``` ```java Exception in thread \"main\" java.lang.NoClassDefFoundError: com/sun/jna/Library at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:763) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:468) at java.net.URLClassLoader.access$100(URLClassLoader.java:74)` ``` he volume name cannot contain underscores." } ]
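Beyond the `ls` check described earlier, a slightly fuller smoke test can confirm reads and writes through the `cfs://` scheme. The volume name and paths below are placeholders (recall that volume names cannot contain underscores):

```shell
# Placeholders: replace "mycfsvolume" and the paths with your own volume and directories.
echo "hello cubefs" > /tmp/hello.txt
hadoop fs -mkdir -p cfs://mycfsvolume/smoke-test
hadoop fs -put /tmp/hello.txt cfs://mycfsvolume/smoke-test/
hadoop fs -cat cfs://mycfsvolume/smoke-test/hello.txt
hadoop fs -rm -r cfs://mycfsvolume/smoke-test
```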
{ "category": "Runtime", "file_name": "hadoop.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "While you have gone through 'how to contribute' guides, if you are not sure what to work on, but really want to help the project, you have now landed on the right document :-) Instead of planning to fix all the below issues in one patch, we recommend you to have a a constant, continuous flow of improvements for the project. We recommend you to pick 1 file (or just few files) at a time to address below issues. Pick any `.c` (or `.h`) file, and you can send a patch which fixes any of the below themes. Ideally, fix all such occurrences in the file, even though, the reviewers would review even a single line change patch from you. Check for variable definitions, and if there is an array definition, which is very large at the top of the function, see if you can re-scope the variable to relevant sections (if it helps). Most of the time, some of these arrays may be used for 'error' handling, and it is possible to use them only in that scope. Reference: https://review.gluster.org/20846/ Check for complete string initialization at the beginning of a function. Ideally, there is no reason to initialize a string. Fix it across the file. Example: `char newpathname[PATHMAX] = {0};` to `char newpathname[PATHMAX];` Change `calloc()` to `malloc()` wherever it makes sense. In a case of allocating a structures, where you expect certain (or most of) variables to be 0 (or NULL), it makes sense to use calloc(). But otherwise, there is an extra cost to `memset()` the whole object after allocating it. While it is not a significant improvement in performance, code which gets hit 1000s of times in a second, it would add some value. Reference: https://review.gluster.org/20878/ You can consider using `snprintf()`, instead of `strncpy()` while dealing with strings. strncpy() won't null terminate if the dest buffer isn't big enough; snprintf() does. While most of the string operations in the code is on array, and larger size than required, strncpy() does an extra copy of 0s at the end of string till the size of the array. It makes sense to use `snprintf()`, which doesn't suffer from that behavior. Also check the return value from snprintf() for buffer overflow and handle accordingly Reference: https://review.gluster.org/20925/ Now, pick a `.h` file, and see if a structure is very large, and see if re-aligning them as per gives any size benefit, if yes, go ahead and change it. Make sure you check all the structures in the file for similar pattern. Reference: [Check this section](https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/coding-standard.md#structure-members-should-be-aligned-based-on-the-padding-requirements Good progress! Glad you are interested to know more. We are surely interested in next level of contributions from you! Visit . Now, if the number of defect is not 0, you have an opportunity to contribute. You get all the detail on why the particular defect is mentioned there, and most probable hint on how to fix it. Do it! Reference: https://review.gluster.org/21394/ Use the same reference Id (789278) as the patch, so we can capture it is in single bugzilla. Clang-Scan is a tool which scans the .c files and reports the possible issues, similar to coverity, but a different tool. Over the years we have seen, they both report very different set of issues, and hence there is a value in fixing" }, { "data": "GlusterFS project gets tested with clang-scan job every night, and the report is posted in the . 
As long as the number is not 0 in the report here, you have an opportunity to contribute! Similar to coverity dashboard, click on 'Details' to find out the reason behind that report, and send a patch. Reference: https://review.gluster.org/21025 Again, you can use reference Id (1622665) for these patches! In the file you open, see if the lock is taken only to increment or decrement a flag, counter etc. If yes, then recommend you to convert it to ATOMIC locks. It is simple activity, but, if you know programing, you would know the benefit here. NOTE: There may not always a possibility to do this! You may have to check with developers first before going ahead. Reference: https://review.gluster.org/21221/ runs regression with asan builds, and you can also run glusterfs with asan on your workload to identify the leaks. If there are any leaks reported, feel free to check it, and send us patch. You can also run `valgrind` and let us know what it reports. Reference: https://review.gluster.org/21397 This is something which we are not focusing right now, happy to collaborate! Reference: https://review.gluster.org/21276 There are few cases of pending features, or pending validations, which are pending from sometime. You can pick them in the given file, and choose to fix it. You are most welcome! Our community is open for your contribution! First thing which comes to our mind is documentation. Next is, testing or validation. If you have some hardware, and want to run some performance comparisons with different version, or options, and help us to tune better is also a great help. We have some documentation in , go through these, and see if you can help us to keep up-to-date. The https://docs.gluster.org is powered by https://github.com/gluster/glusterdocs repo. You can check out the repo, and help in keeping that up-to-date. is maintained by https://github.com/gluster/glusterweb repo. Help us to keep this up-to-date, and add content there. Write blogs about Gluster, and your experience, and make world know little more about Gluster, and your use-case, and how it helped to solve the problem. There is a regression test suite in glusterfs, which runs with every patch, and is triggered by just running `./run-tests.sh` from the root of the project repo. You can add more test case to match your use-case, and send it as a patch, so you can make sure all future patches in glusterfs would keep your usecase intact. : This is another testing framework written for gluster, and makes use of clustered setup to test different use-cases, and helps to validate many bugs. Gluster Organization has rich set of ansible roles, which are actively maintained. Feel free to check them out here - https://github.com/gluster/gluster-ansible We have prometheus repo, and are actively working on adding more metrics. Add what you need @ https://github.com/gluster/gluster-prometheus This is a project, where at any given point in time, you want to run some set of commands locally, and get an output to analyze the status, it can be added. Contribute @ https://github.com/gluster/gluster-health-report We have something for you too :-) Please visit our https://github.com/gluster/gcs repo for checking how you can help, and how gluster can help you in container world. For any queries, best way is to contact us through mailing-list, <mailto:[email protected]>" } ]
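As an illustration of the string-handling recommendation earlier in this document (replacing `strncpy()` with `snprintf()` and checking its return value for overflow), a hedged sketch follows; the function and buffer names are invented for the example and do not come from the GlusterFS tree:

```c
#include <stdio.h>

/* Illustrative only: shows the strncpy() -> snprintf() conversion recommended
 * above, with the return value checked for truncation. */
int
copy_volname(char *dest, size_t dest_size, const char *src)
{
        int ret;

        /* Old pattern: strncpy(dest, src, dest_size);
         * - does not NUL-terminate when src is too long
         * - zero-fills the remainder of dest, wasting cycles on large buffers
         */
        ret = snprintf(dest, dest_size, "%s", src);
        if (ret < 0 || (size_t)ret >= dest_size) {
                /* truncation or encoding error: handle it instead of ignoring it */
                return -1;
        }

        return 0;
}
```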
{ "category": "Runtime", "file_name": "options-to-contribute.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. This library adapts the stdlib `encoding/json` decoder to be compatible with Kubernetes JSON decoding, and is not expected to actively add new features. It may be updated with changes from the stdlib `encoding/json` decoder. Any code that is added must: Have full unit test and benchmark coverage Be backward compatible with the existing exposed go API Have zero external dependencies Preserve existing benchmark performance Preserve compatibility with existing decoding behavior of `UnmarshalCaseSensitivePreserveInts()` or `UnmarshalStrict()` Avoid use of `unsafe` We have full documentation on how to get started contributing here: Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers You can reach the maintainers of this project via the . - We have a diverse set of mentorship programs available that are always looking for volunteers!" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "title: NFS Storage Overview NFS storage can be mounted with read/write permission from multiple pods. NFS storage may be especially useful for leveraging an existing Rook cluster to provide NFS storage for legacy applications that assume an NFS client connection. Such applications may not have been migrated to Kubernetes or might not yet support PVCs. Rook NFS storage can provide access to the same network filesystem storage from within the Kubernetes cluster via PVC while simultaneously providing access via direct client connection from within or outside of the Kubernetes cluster. !!! warning Simultaneous access to NFS storage from Pods and from external clients complicates NFS user ID mapping significantly. Client IDs mapped from external clients will not be the same as the IDs associated with the NFS CSI driver, which mount exports for Kubernetes pods. !!! warning Due to a number of Ceph issues and changes, Rook officially only supports Ceph v16.2.7 or higher for CephNFS. If you are using an earlier version, upgrade your Ceph version following the advice given in Rook's . !!! note CephNFSes support NFSv4.1+ access only. Serving earlier protocols inhibits responsiveness after a server restart. This guide assumes you have created a Rook cluster as explained in the main as well as a which will act as the backing storage for NFS. Many samples reference the CephNFS and CephFilesystem example manifests and . Create the NFS cluster by specifying the desired settings documented for the . When a CephNFS is first created, all NFS daemons within the CephNFS cluster will share a configuration with no exports defined. When creating an export, it is necessary to specify the CephFilesystem which will act as the backing storage for the NFS export. RADOS Gateways (RGWs), provided by , can also be used as backing storage for NFS exports if desired. Exports can be created via the as well. To enable and use the Ceph dashboard in Rook, see . The Ceph CLI can be used from the Rook toolbox pod to create and manage NFS exports. To do so, first ensure the necessary Ceph mgr modules are enabled, if necessary, and that the Ceph orchestrator backend is set to Rook. Required for Ceph v16.2.7 and below Optional for Ceph v16.2.8 and above Must be disabled for Ceph" }, { "data": "due to a ```console ceph mgr module enable rook ceph mgr module enable nfs ceph orch set backend rook ``` can create NFS exports that are backed by (a CephFilesystem) or (a CephObjectStore). `cluster_id` or `cluster-name` in the Ceph NFS docs normally refers to the name of the NFS cluster, which is the CephNFS name in the Rook context. For creating an NFS export for the CephNFS and CephFilesystem example manifests, the below command can be used. This creates an export for the `/test` pseudo path. ```console ceph nfs export create cephfs my-nfs /test myfs ``` The below command will list the current NFS exports for the example CephNFS cluster, which will give the output shown for the current example. ```console $ ceph nfs export ls my-nfs [ \"/test\" ] ``` The simple `/test` export's info can be listed as well. Notice from the example that only NFS protocol v4 via TCP is supported. 
```console $ ceph nfs export info my-nfs /test { \"export_id\": 1, \"path\": \"/\", \"cluster_id\": \"my-nfs\", \"pseudo\": \"/test\", \"access_type\": \"RW\", \"squash\": \"none\", \"security_label\": true, \"protocols\": [ 4 ], \"transports\": [ \"TCP\" ], \"fsal\": { \"name\": \"CEPH\", \"user_id\": \"nfs.my-nfs.1\", \"fs_name\": \"myfs\" }, \"clients\": [] } ``` If you are done managing NFS exports and don't need the Ceph orchestrator module enabled for anything else, it may be preferable to disable the Rook and NFS mgr modules to free up a small amount of RAM in the Ceph mgr Pod. ```console ceph orch set backend \"\" ceph mgr module disable rook ``` Each CephNFS server has a unique Kubernetes Service. This is because NFS clients can't readily handle NFS failover. CephNFS services are named with the pattern `rook-ceph-nfs-<cephnfs-name>-<id>` `<id>` is a unique letter ID (e.g., a, b, c, etc.) for a given NFS server. For example, `rook-ceph-nfs-my-nfs-a`. For each NFS client, choose an NFS service to use for the connection. With NFS v4, you can mount an export by its path using a mount command like below. You can mount all exports at once by omitting the export path and leaving the directory as just `/`. ```console mount -t nfs4 -o proto=tcp <nfs-service-address>:/<export-path> <mount-location> ``` Use a LoadBalancer Service to expose an NFS server (and its exports) outside of the Kubernetes cluster. The Service's endpoint can be used as the NFS service address when . We provide an example Service here: . Security options for NFS are documented . The NFS CSI provisioner and driver are documented Advanced NFS configuration is documented Known issues are documented on the ." } ]
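Putting the example pieces together, the following sketch mounts the `/test` export created above through the per-server Service. It assumes the example CephNFS is named `my-nfs`, that Rook runs in a namespace called `rook-ceph`, and that the client can resolve and reach the in-cluster Service address; adjust these for your environment.

```console
# Look up the Service for NFS server "a" of the example CephNFS (assumed namespace: rook-ceph).
kubectl -n rook-ceph get service rook-ceph-nfs-my-nfs-a

# From a suitably privileged client on the cluster network:
mkdir -p /mnt/rook-nfs
mount -t nfs4 -o proto=tcp rook-ceph-nfs-my-nfs-a.rook-ceph.svc.cluster.local:/test /mnt/rook-nfs
```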
{ "category": "Runtime", "file_name": "nfs.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "To run this site in a Docker container, you can use `make serve-docs` from the root directory. Install the following for an easy to use dev environment: `brew install hugo` If you are running a build on Ubuntu you will need the following packages: hugo Clone down your own fork, or clone the main repo `git clone https://github.com/vmware-tanzu/velero` and add your own remote. `cd velero/site` Serve the site and watch for markup/sass changes `hugo serve`. View your website at http://127.0.0.1:1313/ Commit any changes and push everything to your fork. Once you're ready, submit a PR of your changes. Netlify will automatically generate a preview of your changes. Install the `Hugo Integration` plugin: https://plugins.jetbrains.com/plugin/13215-hugo-integration Under `Preferences...` -> `Plugins` Create a new configuration: Click `Edit Configurations...` Click the `+` button to create a new configuration and select `Hugo` Select `hugo serve` and make sure it is running under the `site` directory Save and run the new Configuration View your website at http://127.0.0.1:1313/ Any changes in `site` will reload the website automatically To add a new set of versioned docs to go with a new Velero release: In the root of the repository, run: ```bash NEWDOCSVERSION=vX.Y VELERO_VERSION=vX.Y.Z make gen-docs ``` [Pre-release only] In `site/config.yaml`, revert the change to the `latest` field, so the pre-release docs do not become the default." } ]
{ "category": "Runtime", "file_name": "README-HUGO.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "In the hosts file `/etc/hosts` map the ip of the kube node you are testing with to `zenko.local`. For example, by adding the line: ``` <kubernetes-IP> zenko.local ``` If you are ssh-ed into one of the kube servers or doing a port-forward to a remote kube instance, the `kube-node-ip` value will be `127.0.0.1` Find the Zenko service endpoint for S3 calls to Cloudserver by running: ``` kubectl get services ``` Look for the service named `zenko-cloudserver`, the value under `NAME` should be set as the Zenko endpoint for testing: ``` export CLOUDSERVER_ENDPOINT=http://<zenko-cloudserver-IP> export CLOUDSERVER_HOST=<zenko-cloudserver-IP> ``` Create an account using Orbit. Export the access key and secret key of that account (for example, in `.secrets.env`): ``` export ZENKOACCESSKEY=<zenko-access-key> export ZENKOSECRETKEY=<zenko-secret-key> ``` Install node and npm. Navigate to `Zenko/tests/node_tests/backbeat`. Install node modules: `npm i`. Create a bucket on AWS `<destination-aws-bucket-name>` with versioning enabled. In Orbit, create an AWS storage location `<destination-aws-location-name>` with an AWS bucket `<destination-aws-bucket-name>`. In Orbit, create an AWS location `<source-aws-location-name>`. Create a container on Azure `<destination-azure-container-name>`. In Orbit, create an Azure storage location `<destination-azure-location-name>` with an Azure container `<destination-azure-container-name>`. Create a bucket on GCP `<destination-gcp-bucket-name>`. In Orbit, create a GCP storage location `<destination-gcp-location-name>` with an GCP bucket `<destination-gcp-bucket-name>`. Export the keys, bucket name, container name, and storage location names (for example, in `.env` and `.secrets.env`): ``` export AWSACCESSKEY=<aws-access-key> export AWSSECRETKEY=<aws-secret-key> export AWSCRRBUCKET_NAME=<destination-aws-bucket-name> export AWSBACKENDDESTINATION_LOCATION=<destination-aws-location-name> export AWSBACKENDSOURCE_LOCATION=<source-aws-location-name> export AZUREACCOUNTNAME=<azure-account-name> export AZURESECRETKEY=<azure-access-key> export AZUREBACKENDENDPOINT=<azure-endpoint> export AZURECRRBUCKET_NAME=<destination-azure-container-name> export AZUREBACKENDDESTINATION_LOCATION=<destination-azure-location-name> export GCPCRRBUCKET_NAME=<destination-gcp-bucket-name> export GCPBACKENDDESTINATION_LOCATION=<destination-gcp-location-name> export GCPBACKENDSERVICE_KEY=<gcp-private-key> export GCPBACKENDSERVICE_EMAIL=<gcp-client-email> ``` If using `*.env` files, source the files: ``` source .env && source .secrets.env ``` Create the GCP credential file in `Zenko/tests/zenko_e2e/backbeat`: ``` cat >gcp_key.json <<EOF { \"privatekey\": \"${GCPBACKENDSERVICEKEY}\", \"clientemail\": \"${GCPBACKENDSERVICEEMAIL}\" } EOF ``` Run the test suite: `npm run test_crr`. Create a bucket on AWS `<destination-aws-bucket-name>` with versioning enabled. In Orbit, create an AWS location `<destination-aws-location-name>` with an AWS bucket `<destination-aws-bucket-name>`. Create a container on Azure `<destination-azure-container-name>`. In Orbit, create an Azure storage location `<destination-azure-location-name>` with an Azure container `<destination-azure-container-name>`. 
Export the keys, AWS bucket name, and AWS location (for example, in `.env` and `.secrets.env`): ``` export CLOUDSERVER_HOST=<zenko-cloudserver-name> export AWSACCESSKEY=<aws-access-key> export AWSSECRETKEY=<aws-secret-key> export AWSCRRBUCKET_NAME=<destination-aws-bucket-name> export AWSBACKENDDESTINATION_LOCATION=<destination-aws-location-name> export AZUREACCOUNTNAME=<azure-account-name> export AZURESECRETKEY=<azure-access-key> export AZUREBACKENDENDPOINT=<azure-endpoint> export AZURECRRBUCKET_NAME=<destination-azure-container-name> export AZUREBACKENDDESTINATION_LOCATION=<destination-azure-location-name> ``` If using `*.env` files, source the files: ``` source .env && source .secrets.env ``` Run the test suite: `npm run test_api` for API tests, or `npm run testcrrpause_resume` for CRR pause and resume tests. Create a bucket on AWS `<destination-fail-aws-bucket-name>` with versioning enabled. In Orbit, create an AWS location `<destination-fail-aws-location-name>` with an AWS bucket `<destination-fail-aws-bucket-name>`. Export the keys, AWS bucket name, and AWS location (for example, in `.env` and `.secrets.env`): ``` export AWSACCESSKEY=<aws-access-key> export AWSSECRETKEY=<aws-secret-key> export AWSS3FAILBACKBEATBUCKET_NAME=<destination-fail-aws-bucket-name> export AWSS3FAILBACKENDDESTINATION_LOCATION=<destination-fail-aws-bucket-name> ``` If using `*.env` files, source the files: ``` source .env && source .secrets.env ``` Run the test suite: `npm run test_retry`." } ]
{ "category": "Runtime", "file_name": "Using.md", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "\\[!WARNING\\] Support is currently in developer preview. See for more info. As an alternative to `Sync` and `Async` engines, Firecracker supports a vhost-user block device. There is a good introduction of how a vhost-user block device works in general at . is a userspace protocol that allows to delegate Virtio queue processing to another userspace process on the host, as opposed to performing this task within Firecracker's VMM thread. In the vhost-user architecture, the VMM acts as a vhost-user frontend and it is responsible for: connecting to the backend via a Unix domain socket (UDS) feature negotiation with the backend and the guest handling device configuration requests from the guest sharing sufficient information about the guest memory and Virtio queues with the backend The vhost-user backend receives the information from the frontend and performs handling of IO requests from the guest. The UDS socket is only used for control plane purposes and does not participate in the data plane. Firecracker only implements a vhost-user frontend. Users are free to choose from or implement their own. Each vhost-user device connects to its own UDS socket. There is no way for multiple devices to share a single socket, as there is no way to differentiate messages related to devices at the vhost-user protocol level. Each device can be served by a separate backend or a single backend can serve multiple devices. There are three points when the vhost-user frontend communicates with the backend: Device initialisation. When a vhost-user device is created, Firecracker connects to the corresponding UDS socket and negotiates Virtio and Vhost features with backend and retrieves device configuration. Device activation. When the guest driver finishes setting up the device, Firecracker shares memory tables and Virtio queue information with the backend. As a part of this, Firecracker shares file descriptors for guest's memory regions, as well as file descriptors for queue notifications. Config update. When receving a on a vhost-user backed drive, Firecracker rerequests the device config from the backend in order to make the new config available to the guest. While vhost-user block is considered an optimisation to Firecracker IO, a naive implementation of the backend is not going to improve performance. The major advantage of using a vhost-user device is that the backend can implement custom processing logic. It can use intelligent algorithms to serve block requests, eg by fetching the block device data over the network or using sophisticated readahead logic. In such cases, the performance improvement will be coming from the fact that the custom logic is implemented in the same process that handles Virtio queues, which reduces the number of required context" }, { "data": "In order for the backend to be able to process virtio requests, guest memory needs to be shared by the frontend to the backend. This means, a shared memory mapping is required to back guest memory. When a vhost-user device is configured, Firecracker uses `memfd_create` instead of creating an anonymous private mapping to achieve that. It was observed that page faults to a shared memory mapping take significantly longer (up to 24% in our testing), because Linux memory subsystem has to use atomic memory operations to update page status, which is an expensive operation under specific conditions. We advise users to profile performance on their workloads when considering to use vhost-user devices. 
Compared to virtio block device where Firecracker interacts with a drive file on the host, vhost-user block device is handled by the backend directly. Some workloads may benefit from caching and readahead that the host pagecache offers for the backing file. This benefit is not available in vhost-user block case. Users may need to implement internal caching within the backend if they find it appropriate. There are a number of open source implementations of a vhost-user backend available for reference that can help developing a custom backend: By design, a vhost-user frontend must share file descriptors of all guest memory regions to the backend. In order to achive that, guest memory is created as a and mapped as `MAP_SHARED`. An open `memfd` is reflected in `procfs` as any other open file descriptor: ```shell $ ls -l /proc/{pid}/fd | grep memfd lrwx 1 1234 1234 64 Nov 2 13:39 32 -> /memfd:guest_mem (deleted) ``` Any process on the host that has access to this file in `procfs` will be able to map the file descriptor and observe runtime behaviour of the guest. At the moment, Firecracker does not close the `memfd`, because it must remain open until all the configured vhost-user devices have been activated and their info shared with the backends. This kind of tracking is not implemented in Firecracker, but may be implemented in the future. Meanwhile, users need to make sure that the access to the Firecracker's `procfs` tree is restricted to trusted processes on the host. On the backend side, it is advised that the backend closes the guest memory region file descriptors after mapping them into its own address space. The Firecracker allows to configure resource limits for the Firecracker process. Specifically, it allows to set the maximum file size. Since `memfd` that is used to back the guest memory is considered a file, the file size resource limit cannot be less than the biggest guest memory region. This does not require any special action from a user, but needs to be taken into consideration. It is recommended to run Firecracker using the" }, { "data": "Since the vhost-user backend interacts with the guest via a Virtio queue, there is a potential for the guest to exercise issues in the backend codebase to trigger undesired behaviours. Users should consider running their backend in a jailer or applying other adequate security measures to restrict it. Note is currently only capable of running Firecracker as the binary. Vhost-user block device users are expected to use another jailer to run the backend. It is also recommended to use proactive security measures like running a Virtio-level fuzzer in the guest during testing to make sure that the backend correctly handles all possible classes of inputs (including invalid ones) from the guest. Virtio block device in Firecracker has a . In the vhost-user case, Firecracker does not participate in handling requests from the guest, so rate limiting becomes backend's responsibility. As an additional indirect measure, users can make use of `cgroups` settings (either via Firecracker jailer or independently) in order to restrict host CPU consumption of the guest, which would transitively limit guest's IO activity. Due to potential defects in the backend (eg mislocating Virtio queues or writes to a wrong location in the guest memory), the guest execution may be affected. It is advised that customers monitor guest's health periodically. 
Additionally, in order to avoid orphaned Firecracker processes if the backend crashes, the backend may need to send a signal, such as `SIGBUS`, to the Firecracker process for it to exit as well. In order to correctly handle the case where the Firecracker process exits before it exchanges all the expected data with the backend, the backend may need to implement a timeout for how long it waits for Firecracker to connect and/or to exchange the data via the vhost-user protocol, and exit when that timeout expires, to avoid resource exhaustion. At the moment, snapshotting is not supported for microVMs that have vhost-user devices configured. An attempt to take a snapshot of such a microVM will fail. It is planned to add support for that in the future. Run a vhost-user backend, e.g. the Qemu backend: ```bash vhost-user-blk --socket-path=${backend_socket} --blk-file=${drive_path} ``` Firecracker API request to add a vhost-user block device: ```bash curl --unix-socket ${fc_socket} -i \\ -X PUT \"http://localhost/drives/scratch\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"scratch\\\", \\\"socket\\\": \\\"${backend_socket}\\\", \\\"is_root_device\\\": false }\" ``` Note: Unlike the Virtio block device, there is no way to configure a `readonly` vhost-user drive on the Firecracker side. Instead, this configuration belongs to the backend. Whenever the backend advertises the `VIRTIO_BLK_F_RO` feature, Firecracker will accept it, and the device will act as readonly. Note: Whenever a `PUT` request is sent to the `/drives` endpoint for a vhost-user device with an `id` that already exists, Firecracker will close the existing connection to the backend and will open a new one. Users may need to restart their backend if they do so." } ]
{ "category": "Runtime", "file_name": "block-vhost-user.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Some Pull Requests (PRs) which fix bugs in the main branch of Antrea can be identified as good candidates for backporting to currently maintained release branches (using a Git ), so that they can be included in subsequent patch releases. If you have authored such a PR (thank you!!!), one of the Antrea maintainers may comment on your PR to ask for your assistance with that process. This document provides the steps you can use to cherry-pick your change to one or more release branches, with the help of the . For information about which changes are good candidates for cherry-picking, please refer to our [versioning policy](../versioning.md#minor-releases-and-patch-releases). A PR which was approved and merged into the main branch. The PR was identified as a good candidate for backporting by an Antrea maintainer: they will label the PR with `action/backport` and comment a list of release branches to which the patch should be backported (example: ). Have the installed (version >= 1.3) and make sure you authenticate yourself by running `gh auth`. Your own fork of the Antrea repository, and a clone of this fork with two remotes: the `origin` remote tracking your fork and the `upstream` remote tracking the upstream Antrea repository. If you followed our recommended [Github Workflow], this should already be the case. Set the GITHUB_USER environment variable. Optional If your remote names do not match our recommended [Github Workflow], you must set the `UPSTREAMREMOTE` and `FORKREMOTE` environment variables. Run the This example applies a main branch PR #2134 to the remote branch `upstream/release-1.0`: ```shell hack/cherry-pick-pull.sh upstream/release-1.0 2134 ``` If the cherry-picked PR does not apply cleanly against an old release branch, the script will let you resolve conflicts manually. This is one of the reasons why we ask contributors to backport their own bug fixes, as their participation is critical in case of such a conflict. The script will create a PR on Github for you, which will automatically be labelled with `kind/cherry-pick`. This PR will go through the normal testing process, although it should be very quick given that the original PR was already approved and merged into the main branch. The PR should also go through normal CI testing. In some cases, a few CI tests may fail because we do not have dedicated CI infrastructure for past Antrea releases. If this happens, the PR will be merged despite the presence of CI test failures. You will need to run the cherry pick script separately for each release branch you need to cherry-pick to. Typically, cherry-picks should be applied to all release branches for which the fix is applicable." } ]
{ "category": "Runtime", "file_name": "cherry-picks.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Direct access to the kvstore ``` -h, --help help for kvstore --kvstore string Key-Value Store type --kvstore-opt map Key-Value Store options ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Delete a key - Retrieve a key - Set a key and value" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_kvstore.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Setup swap device in guest kernel can help to increase memory capacity, handle some memory issues and increase file access speed sometimes. Kata Containers can insert a raw file to the guest as the swap device. The swap config of the containers should be set by . So . Kata Containers just supports setup swap device in guest kernel with QEMU. Install and setup Kata Containers as shown . Enable setup swap device in guest kernel as follows: ``` $ sudo sed -i -e 's/^#enableguestswap.*$/enableguestswap = true/g' /etc/kata-containers/configuration.toml ``` Use following command to start a Kata Containers with swappiness 60 and 1GB swap device (swapinbytes - memorylimitin_bytes). ``` $ pod_yaml=pod.yaml $ container_yaml=container.yaml $ image=\"quay.io/prometheus/busybox:latest\" $ cat << EOF > \"${pod_yaml}\" metadata: name: busybox-sandbox1 uid: $(uuidgen) namespace: default EOF $ cat << EOF > \"${container_yaml}\" metadata: name: busybox-test-swap annotations: io.katacontainers.container.resource.swappiness: \"60\" io.katacontainers.container.resource.swapinbytes: \"2147483648\" linux: resources: memorylimitin_bytes: 1073741824 image: image: \"$image\" command: top EOF $ sudo crictl pull $image $ podid=$(sudo crictl runp --runtime kata $pod_yaml) $ cid=$(sudo crictl create $podid $containeryaml $podyaml) $ sudo crictl start $cid ``` Kata Containers setups swap device for this container only when `io.katacontainers.container.resource.swappiness` is set. The following table shows the swap size how to decide if `io.katacontainers.container.resource.swappiness` is set. |`io.katacontainers.container.resource.swapinbytes`|`memorylimitin_bytes`|swap size| |||| |set|set| `io.katacontainers.container.resource.swapinbytes` - `memorylimitin_bytes`| |not set|set| `memorylimitin_bytes`| |not set|not set| `io.katacontainers.config.hypervisor.default_memory`| |set|not set|cgroup doesn't support this usage|" } ]
{ "category": "Runtime", "file_name": "how-to-setup-swap-devices-in-guest-kernel.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark restore describe\" layout: docs Describe restores Describe restores ``` ark restore describe [NAME1] [NAME2] [NAME...] [flags] ``` ``` -h, --help help for describe -l, --selector string only show items matching this label selector --volume-details display details of restic volume restores ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restores" } ]
{ "category": "Runtime", "file_name": "ark_restore_describe.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This manual is mainly about how to use native network for iSulad community developers and users. The code of native network code is only exists in the master branch of lcr and iSulad. It is isolated by compilation macro `ENABLENATIVENETWORK`, and it is enabled by default. For the installation of the dependent environment of iSulad, please refer to the document ``docs/builddocs/guide/buildguide_zh.md``, and it will not be repeated here. The following only describes the compilation of lcr and iSulad. ```bash $ git clone https://gitee.com/openeuler/lcr.git $ cd lcr $ mkdir build $ cd build $ cmake .. $ make -j $(nproc) $ make install $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad $ mkdir build $ cd build $ cmake -DENABLENATIVENETWORK=ON .. $ make -j $(nproc) $ make install ``` The natvie netwrok needs to install CNI plugin binary. The open source repository address is `https://github.com/containernetworking/plugins`. It is recommended to install the CNI plugin `v0.9.0` version and above. Here is an example of the latest v1.3.0 version when the manual was released. ```bash $ wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz $ mkdir -p /opt/cni/bin/ $ tar -zxvf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin/ ``` The following table lists the iSulad version, supported CNI specification, and recommended network plugin version: iSulad version|CNI spec version|cni-plugins version || v2.1.3 and earlier|spec-v0.4.0|v0.9.1 v2.1.4|spec-v1.0.0|v1.3.0 Modify `isulad daemon.json` and config cni ```bash $ vim /etc/isulad/daemon.json ... \"cni-bin-dir\": \"/opt/cni/bin\", \"cni-conf-dir\": \"/etc/cni/net.d\", ... $ isulad ``` `cni-bin-dir` is the cni binary directory. If it is not configured, the default value is `/opt/cni/bin`. `cni-conf-dir` is the network conflist directory. If it is not configured, the default value is `/etc/cni/net.d`. If you want to use default value, you can start isulad directly without config the `daemon.json`. The use of native network is similar to that of docker. Here are some simple operation. 
```bash $ isula network create cni0 cni0 $ isula network ls NAME VERSION PLUGIN cni0 1.0.0 bridge,portmap,firewall $ isula network inspect cni0 [ { \"cniVersion\": 1.0.0, \"name\": cni0, \"plugins\": [ { \"type\": bridge, \"bridge\": isula-br0, \"isGateway\": true, \"ipMasq\": true, \"hairpinMode\": true, \"ipam\": { \"type\": host-local, \"routes\": [ { \"dst\": 0.0.0.0/0 } ], \"ranges\": [ [ { \"subnet\": 192.168.0.0/24, \"gateway\": 192.168.0.1 } ] ] } }, { \"type\": portmap, \"capabilities\": { \"portMappings\": true } }, { \"type\": firewall } ] } ] ``` ```bash $ isula run -tid --net cni0 --name test busybox sh 3a933b6107114fe684393441ead8addc8994258dab4c982aedb1ea203f0df7d9 $ isula ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 3a933b610711 busybox \"sh\" 9 seconds ago Up 9 seconds test $ isula inspect test" }, { "data": "\"NetworkSettings\": { \"Bridge\": \"\", \"SandboxID\": \"\", \"LinkLocalIPv6Address\": \"\", \"LinkLocalIPv6PrefixLen\": 0, \"Ports\": {}, \"CNIPorts\": [], \"SandboxKey\": \"/var/run/netns/isulacni-e93b9ac71757d204\", \"EndpointID\": \"\", \"Gateway\": \"\", \"GlobalIPv6Address\": \"\", \"GlobalIPv6PrefixLen\": 0, \"IPAddress\": \"\", \"IPPrefixLen\": 0, \"IPv6Gateway\": \"\", \"MacAddress\": \"\", \"Activation\": true, \"Networks\": { \"cni0\": { \"Links\": [], \"Alias\": [], \"NetworkID\": \"\", \"EndpointID\": \"\", \"Gateway\": \"192.168.0.1\", \"IPAddress\": \"192.168.0.4\", \"IPPrefixLen\": 24, \"IPv6Gateway\": \"\", \"GlobalIPv6Address\": \"\", \"GlobalIPv6PrefixLen\": 0, \"MacAddress\": \"d2:74:53:c5:9c:be\", \"IFName\": \"eth0\", \"DriverOpts\": {} } } } ... $ ping 192.168.0.4 PING 192.168.0.4 (192.168.0.4) 56(84) bytes of data. 64 bytes from 192.168.0.4: icmp_seq=1 ttl=64 time=0.080 ms 64 bytes from 192.168.0.4: icmp_seq=2 ttl=64 time=0.038 ms 64 bytes from 192.168.0.4: icmp_seq=3 ttl=64 time=0.038 ms ^C 192.168.0.4 ping statistics 3 packets transmitted, 3 received, 0% packet loss, time 2084ms rtt min/avg/max/mdev = 0.038/0.052/0.080/0.019 ms $ isula rm -f test 3a933b6107114fe684393441ead8addc8994258dab4c982aedb1ea203f0df7d9 ``` ```bash $ isula network rm cni0 cni0 $ isula network ls NAME VERSION PLUGIN ``` Create a native network. isulad will create a network configuration file that conforms to the cni standard and store it in the `cni-conf-dir` directory. ```bash isula network create [OPTIONS] [NETWORK] ``` | Options | Description | | - | - | | -d, --driver | Driver to manager the network (default \"bridge\"), and only support bridge mode | | --gateway | IPv4 or IPv6 gateway for the subnet. When specifying the gateway parameter, you must specify the subnet parameter. If no gateway is specified, the first IP in the subnet is used as the gateway | | --internal | Restrict external access from this network | | --subnet | Subnet in CIDR format | Query one or more native networks that have been created. ```bash isula network inspect [OPTIONS] NETWORK [NETWORK...] ``` | Options | Description | | - | - | | -f, --format | Format the output using the given go template | List all created native networks. ```bash isula network ls [OPTIONS] ``` | Options | Description | | - | - | | -f, --filter | Filter output based on conditions provided (specify string matching name or plugin) | | -q, --quiet | Only display network names | Deleting one or more native networks that have been created, and it will also delete the corresponding bridge devices and network configuration files. ```bash isula network rm [OPTIONS] NETWORK [NETWORK...] 
``` None Add the corresponding network parameters to give the container network capabilities when creating/starting the container. ```bash isula run [OPTIONS] ROOTFS|IMAGE [COMMAND] [ARG...] ``` Only network-related parameters are shown. | Options | Description | | - | - | | --expose | Expose a port or a range of ports | | --net, --network | Connect a container to a network | | -p, --publish | Publish a container's port(s) to the host with the format `<host port>:<container port>` | | -P, --publish-all | Publish all exposed ports to random ports |" } ]
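For example, combining the options above with the `cni0` network created earlier (the port numbers are arbitrary):

```bash
$ isula run -tid --net cni0 -p 8080:80 --name web busybox sh
```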
{ "category": "Runtime", "file_name": "native_network.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "title: Block Storage Overview Block storage allows a single pod to mount storage. This guide shows how to create a simple, multi-tier web application on Kubernetes using persistent volumes enabled by Rook. This guide assumes a Rook cluster as explained in the . Before Rook can provision storage, a and need to be created. This will allow Kubernetes to interoperate with Rook when provisioning persistent volumes. !!! note This sample requires at least 1 OSD per node, with each OSD located on 3 different nodes. Each OSD must be located on a different node, because the is set to `host` and the `replicated.size` is set to `3`. Save this `StorageClass` definition as `storageclass.yaml`: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: replicapool namespace: rook-ceph spec: failureDomain: host replicated: size: 3 apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-block provisioner: rook-ceph.rbd.csi.ceph.com parameters: clusterID: rook-ceph pool: replicapool imageFormat: \"2\" imageFeatures: layering csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph csi.storage.k8s.io/fstype: ext4 reclaimPolicy: Delete allowVolumeExpansion: true ``` If you've deployed the Rook operator in a namespace other than \"rook-ceph\", change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace \"my-namespace\" the provisioner value should be \"my-namespace.rbd.csi.ceph.com\". Create the storage class. ```console kubectl create -f deploy/examples/csi/rbd/storageclass.yaml ``` !!! note As , when using the `Retain` reclaim policy, any Ceph RBD image that is backed by a `PersistentVolume` will continue to exist even after the `PersistentVolume` has been deleted. These Ceph RBD images will need to be cleaned up manually using `rbd rm`. We create a sample app to consume the block storage provisioned by Rook with the classic wordpress and mysql apps. Both of these apps will make use of block volumes provisioned by Rook. Start mysql and wordpress from the `deploy/examples` folder: ```console kubectl create -f mysql.yaml kubectl create -f wordpress.yaml ``` Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following: ```console kubectl get pvc ``` !!! example \"Example Output: `kubectl get pvc`\" ```console NAME STATUS VOLUME CAPACITY ACCESSMODES AGE mysql-pv-claim Bound pvc-95402dbc-efc0-11e6-bc9a-0cc47a3459ee 20Gi RWO 1m wp-pv-claim Bound pvc-39e43169-efc1-11e6-bc9a-0cc47a3459ee 20Gi RWO 1m ``` Once the wordpress and mysql pods are in the `Running` state, get the cluster IP of the wordpress app and enter it in your browser: ```console kubectl get svc wordpress ``` !!! example \"Example Output: `kubectl get svc wordpress`\" ```console NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE wordpress 10.3.0.155 <pending> 80:30841/TCP 2m ``` You should see the wordpress app running. If you are using Minikube, the Wordpress URL can be retrieved with this one-line command: ```console echo http://$(minikube ip):$(kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}') ``` !!! 
note When running in a vagrant environment, there will be no external IP address to reach wordpress" }, { "data": "You will only be able to reach wordpress via the `CLUSTER-IP` from inside the Kubernetes cluster. With the pool that was created above, we can also create a block image and mount it directly in a pod. See the topic for more details. To clean up all the artifacts created by the block demo: ```console kubectl delete -f wordpress.yaml kubectl delete -f mysql.yaml kubectl delete -n rook-ceph cephblockpools.ceph.rook.io replicapool kubectl delete storageclass rook-ceph-block ``` If you want to use erasure coded pool with RBD, your OSDs must use `bluestore` as their `storeType`. Additionally the nodes that are going to mount the erasure coded RBD block storage must have Linux kernel >= `4.11`. !!! attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). To be able to use an erasure coded pool you need to create two pools (as seen below in the definitions): one erasure coded and one replicated. !!! attention This example requires at least 3 bluestore OSDs, with each OSD located on a different node. The OSDs must be located on different nodes, because the is set to `host` and the `erasureCoded` chunk settings require at least 3 different OSDs (2 `dataChunks` + 1 `codingChunks`). The erasure coded pool must be set as the `dataPool` parameter in It is used for the data of the RBD images. If a node goes down where a pod is running where a RBD RWO volume is mounted, the volume cannot automatically be mounted on another node. The node must be guaranteed to be offline before the volume can be mounted on another node. !!! Note These instructions are for clusters with Kubernetes version 1.26 or greater. For K8s 1.25 or older, see the to recover from the node loss. Deploy the csi-addons manifests: ```console kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/crds.yaml kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/rbac.yaml kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/setup-controller.yaml ``` Enable the `csi-addons` sidecar in the Rook operator configuration. ```console kubectl patch cm rook-ceph-operator-config -n<namespace> -p $'data:\\n \"CSIENABLECSIADDONS\": \"true\"' ``` When a node is confirmed to be down, add the following taints to the node: ```console kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoSchedule ``` After the taint is added to the node, Rook will automatically blocklist the node to prevent connections to Ceph from the RBD volume on that node. To verify a node is blocklisted: ```console kubectl get networkfences.csiaddons.openshift.io NAME DRIVER CIDRS FENCESTATE AGE RESULT minikube-m02 rook-ceph.rbd.csi.ceph.com [\"192.168.39.187:0/32\"] Fenced 20s Succeeded ``` The node is blocklisted if the state is `Fenced` and the result is `Succeeded` as seen above. 
If the node comes back online, the network fence can be removed from the node by removing the node taints: ```console kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoSchedule- ```" } ]
{ "category": "Runtime", "file_name": "block-storage.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "MIT License Copyright (c) 2019 Josh Bleecher Snyder Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." } ]
{ "category": "Runtime", "file_name": "LICENSE.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark describe schedules\" layout: docs Describe schedules Describe schedules ``` ark describe schedules [NAME1] [NAME2] [NAME...] [flags] ``` ``` -h, --help help for schedules -l, --selector string only show items matching this label selector ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Describe ark resources" } ]
{ "category": "Runtime", "file_name": "ark_describe_schedules.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- toc --> - - - - - <!-- /toc --> This document proposes all the high-level changes required within Kanister to use as the primary backup and restore tool. Kanister offers an in-house capability to perform backup and restore to and from object stores using some operation-specific Functions like BackupData, RestoreData, etc. Although they are useful and simple to use, these Functions can be significantly improved to provide better reliability, security, and performance. The improvements would include: Encryption of data during transfers and at rest Efficient content-based data deduplication Configurable data compression Reduced memory consumption Increased variety of backend storage target for backups These improvements can be achieved by using Kopia as the primary data movement tool in these Kanister Functions. Kanister also provides a command line utility `kando` that can be used to move data to and from object stores. This tool internally executes Kopia commands to move the backup data. The v2 version of the example Kanister Blueprints supports this. However, there are a few caveats to using these Blueprints. `kando` uses Kopia only when a Kanister Profile of type `kopia` is provided A Kanister Profile of type `kopia` requires a running in the same namespace as the Kanister controller A Repository Server requires a to be initialized on a backend storage target Kanister currently lacks documentation and automation to use these features. Kopia is a powerful, cross-platform tool for managing encrypted backups in the cloud. It provides fast and secure backups, using compression, data deduplication, and client-side end-to-end encryption. It supports a variety of backup storage targets, including object stores, which allows users to choose the storage provider that better addresses their needs. In Kopia, these storage locations are called repositories. It is a lock-free system that allows concurrent multi-client operations including garbage collection. To explore other features of Kopia, see its . Design and automate the lifecycle of the required Kopia Repository Server. Add new versions of Kanister Data Functions like BackupData, RestoreData, etc. with Kopia as the primary data mover tool. All the new features mentioned in this document will be opt-in only. Existing users will not see any changes in the Kanister controller's behavior. Users will be able to continue using their current Blueprints, switch to the v2 version of the example Blueprints, or use Blueprints with the new version of the Kanister Data Functions. Users opting to use the v2 Blueprints and Blueprints with Kopia-based Kanister Data Functions will be required to follow instructions to set up the required Kopia repository and the repository server before executing the actions. After setting up the repository and the repository server, users can follow the normal workflow to execute actions from the v2 Blueprints. To use the new versions of the Kanister Data Functions, users must specify the version of the function via the ActionSet Action's `preferredVersion` field. This field in an Action is applied to all the Phases in" }, { "data": "If a particular Kanister function is not registered with this version, Kanister will fall back to the default version of the function. Kanister allows mutliple versions of Functions to be registered with the controller. Existing Functions are registered with the default `v0.0.0` version. Find more information . 
The following Data Functions will be registered with a second version `v1.0.0-alpha`: BackupData BackupDataAll BackupDataStats CopyVolumeData DeleteData DeleteDataAll RestoreData RestoreDataAll The purpose, signature and output of these functions will remain intact i.e. their usage in Blueprints will remain unchanged. However, their internal implementation will leverage Kopia to connect to the Repository Server to perform the required data operations. As noted above, users will execute these functions by specifying `v1.0.0-alpha` as the `preferredVersion` during the creation of an ActionSet. The version management scheme for these functions is out of scope of this document and will be discussed separately. Please note an important update here The design for implementing these Kanister Functions is still a work in-progress. The above-mentioned versioning of Kanister Functions may not be the final design. We plan to submit a new Pull Request stating a more detailed design of these Kanister Functions, shortly. As mentioned above, the backup storage location is called a \"Repository\" in Kopia. It is important to note that the Kanister users are responsible for the lifecycle management of the repository, i.e., the creation, deletion, upgrades, garbage collection, etc. The Kanister controller will not override any user configuration or policies set on the repository. Such direct changes might impact Kanister's ability to interact with the repository. Kanister documentation will provide instructions for initializing a new repository. Kanister users can initialize repositories with boundaries defined based on their needs. The repository domain can include a single workload, groups of workloads, namespaces, or groups of namespaces, etc. Only a single repository can exist at a particular path in the backend storage location. Users opting to use separate repositories are recommended to use unique path prefixes for each repository. For example, a repository for a namespace called `monitoring` on S3 storage bucket called `test-bucket` could be created at the location `s3://test-bucket/<UUID of monitoring namespace>/repo/`. Accessing the repository requires the storage location and credential information similar to a Kanister Profile CR and a unique password used by Kopia during encryption, along with a unique path prefix mentioned above. . In the first iteration, users will be required to provide the location and repository information to the controller during the creation of the repository server in the form of Kubernetes Secrets. Future iterations will allow users to use a Key Management Service of choice. A Kopia Repository Server allows Kopia clients proxy access to the backend storage location through it. At any time, a repository server can only connect to a single repository. Due to this a separate instance of the server will be used for each repository. In Kanister, the server will comprise a K8s `Pod`, `Service` and a" }, { "data": "The pod will execute the Kopia server process exposed to the application via the K8s service and the network policy. Accessing the server requires the service name, a server username, and a password without any knowledge of the backend storage location. To authorize access, a list of server usernames and passwords must be added prior to starting the server. The server also uses TLS certificates to secure incoming connections to it. 
Kanister users can configure a repository server via a newly added Custom Resource called `RepositoryServer` as described in the following section. This design proposes a new Custom Resource Definition (CRD) named `RepositoryServer`, to represent a Kopia repository server. It is a namespace-scoped resource, owned and managed by the controller in the `kanister` namespace. As mentioned above, the CRD offers a set of parameters to configure a Kopia repository server. To limit the controller's RBAC scope, all the `RepositoryServer` resources are created in the `kanister` namespace. A sample `RepositoryServer` resource created to interact with the Kopia repository of workloads running in the `monitoring` namespace looks like this: ```yaml apiVersion: cr.kanister.io/v1alpha1 kind: RepositoryServer metadata: name: repository-monitoring namespace: kanister labels: repo.kanister.io/target-namespace: monitoring spec: storage: secretRef: name: location namespace: kanister credentialSecretRef: name: loc-creds namespace: kanister repository: rootPath: /repo/monitoring/ passwordSecretRef: name: repository-monitoring-password namespace: kanister username: kanisterAdmin hostname: monitoring server: adminSecretRef: name: repository-monitoring-server-admin namespace: kanister tlsSecretRef: name: repository-monitoring-server-tls namespace: kanister userAccessSecretRef: name: repository-monitoring-server-access namespace: kanister networkPolicy: namespaceSelector: matchLabels: app: monitoring podSelector: matchLabels: pod: kopia-client app: monitoring status: conditions: lastTransitionTime: \"2022-08-20T09:48:36Z\" lastUpdateTime: \"2022-08-20T09:48:36Z\" status: \"True\" type: RepositoryServerReady serverInfo: podName: \"repository-monitoring-pod-a1b2c3\" networkPolicyName: \"repository-monitoring-np-d4e5f6\" serviceName: \"repository-monitoring-svc-g7h8i9\" tlsFingerprint: \"48537CCE585FED39FB26C639EB8EF38143592BA4B4E7677A84A31916398D40F7\" ``` The required `spec.storage.secretRef` refers to a `Secret` resource storing location-related sensitive data. This secret is provided by the user. For example, ```yaml apiVersion: v1 kind: Secret metadata: name: location namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: type: s3 bucket: my-bucket endpoint: https://foo.example.com path: kanister/backups region: us-west-1 skipSSLVerify: false claimName: store-pvc ``` The credentials required to access the location above are provided by the user in a separate secret referenced by `spec.location.credentialSecretRef`. The example below shows the credentials required to access AWS S3 and S3-compatible locations. ```yaml apiVersion: v1 kind: Secret metadata: name: s3-loc-creds namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: secrets.kanister.io/aws data: access-key: <redacted> secret-acccess-key: <redacted> role: <redacted> ``` The credentials secret will follow a different format for different providers. This secret is optional when using a file store location. 
Example secrets for Google Cloud Storage (GCS) and Azure Blob Storage will be as follows: GCS: ```yaml apiVersion: v1 kind: Secret metadata: name: gcs-loc-creds namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: project-id: <redacted> service-account.json: <base64 encoded SA json file> ``` Azure: ```yaml apiVersion: v1 kind: Secret metadata: name: az-loc-creds namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: azurestorageaccount_id: <redacted> azurestoragekey: <redacted> azurestorageenvironment: <redacted> ``` Kopia identifies users by `username@hostname` and uses the values specified when establishing connection to the repository to identify backups created in the session. If the username and hostname values are specified in the repository section, Kanister controller will override defaults when establishing connection to the repository from the repository" }, { "data": "Users will be required to specify the same values when they need to restore or delete the backups created. By default, the controller will use generated defaults when connecting to the repository. The password used while creating the Kopia repository is provided by the user in the `Secret` resource referenced by `spec.repository.passwordSecretRef`. ```yaml apiVersion: v1 kind: Secret metadata: name: repository-monitoring-password namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: repo-password: <redacted> ``` The server admin credentials, and TLS sensitive data are stored in the `Secret` resources referenced by the `spec.server.adminSecretRef` and `spec.server.tlsSecretRef` properties. For example, ```yaml apiVersion: v1 kind: Secret metadata: name: repository-monitoring-server-admin namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: username: <redacted> password: <redacted> apiVersion: v1 kind: Secret metadata: name: repository-monitoring-server-tls namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: kubernetes.io/tls data: tls.crt: | <redacted> tls.key: | <redacted> ``` The `spec.server.accessSecretRef` property provides a list of access credentials used by data mover clients to authenticate with the Kopia repository server. This is discussed in more detail in the section. The Kopia repository server `Pod` resource can be accessed through a K8s `Service`. The `spec.server.networkPolicy` property is used to determine the selector used in the `namespaceSelector` or `podSelector` of the `NetworkPolicy` resource that controls the ingress traffic to the repository server from a namespace other than the `kanister` namespace. The `status` subresource provides conditions and status information to ensure that the controller does not attempt to re-create the repository server during restart. When a `RepositoryServer` resource is created, the controller responds by creating a `Pod`, `Service`, and a `NetworkPolicy` as mentioned above. These resources are cleaned up when the `RepositoryServer` resource is deleted. Once the pod is running, the controller executes a set of Kopia CLI commands as follows. The examples in this section use S3 for illustration purposes. Establish a connection to the Kopia repository. 
This is equivalent to running the following command: ```sh kopia repository connect s3 \\ --bucket=my-bucket \\ --access-key=<redacted> \\ --secret-access-key=<redacted> \\ [--endpoint=https://foo.example.com \\] [--prefix=my-prefix \\] [--region=us-west-1 \\] --password=<redacted> \\ --override-hostname=<hostname> \\ --override-username=<username> ``` The bucket, endpoint, and region are read from the secret referenced by the `spec.storage.secret` property while the access-key and secret-access-key are read from the secret referenced by `spec.location.credentialSecretRef`. The prefix, username, and hostname values are read from the `spec.repository` section, and the repository password is derived from the secret referenced by the `spec.repository.passwordSecretRef` property. Start the Kopia repository server as a background process ```sh kopia server start --address=0.0.0.0:51515 \\ --config-file=/run/kopia/repo.config \\ --log-directory=/run/kopia/log \\ --tls-cert-file=/run/kopia/tls.crt \\ --tls-key-file=/run/kopia/tls.key \\ --server-username=<redacted> \\ --server-password=<redacted> \\ --server-control-username=<redacted> \\ --server-control-password=<redacted> \\ > /dev/null 2>&1 & ``` The `/run/kopia/repo.config` configuration file is generated from the secret referenced by the `spec.storage.secretRef` and `spec.repository` properties. See documentation and GitHub source for more information on the configuration file format, and supported configuration. The `/run/kopia/tls.crt` and `/run/kopia/tls.key` files contain the TLS x509 certificate and private key read from the secret referenced by the `spec.server.tlsSecretRef` property. The credentials for the `--server-username`, `--server-password`, `--server-control-username` and `--server-control-password` options are read from the secret referenced by the" }, { "data": "property. Register the set of users that have access to the server ```sh kopia server user add <username>@<hostname> --user-password=<redacted> ``` The username, hostname, and password will be picked up from the `spec.server.accessSecretRef` property. Refresh the server process to enable the newly added users ```sh kopia server refresh --server-cert-fingerprint=<redacted> \\ --address=0.0.0.0:51515 \\ --server-username=<redacted> \\ --server-password=<redacted> ``` The secrets provided in the `RepositoryServer` resource are mounted via the pod's `spec.volumes` API. The Kopia server is fronted by a K8s `Service` resource. Data mover clients will connect to it using the equivalence of: ```sh kopia repository connect server \\ --url=https://<service-name> \\ --config-file=/run/kopia/repo.config \\ --server-cert-fingerprint=<redacted> \\ --override-username=<username> \\ --override-hostname=<hostname> \\ --password=<redacted> ``` The `<service-name>` is the Kopia server's `Service` resource name. This will be auto-generated by the repository controller and provided via the `status` subresource of the `RepositoryServer` resource. The `server-cert-fingerprint` is derived from the TLS certificates provided during the creation of the server resource and provided via the `status` subresource. The username, hostname, and password must match one of the users registered with the server through the `spec.server.accessSecretRef` property. Once connected to the server, the data mover clients can utilize the family of `kopia snapshot` subcommands to manage snapshots. In order for a data mover client to connect to the Kopia server, it needs to provide for authentication purposes. 
This section describes the approach to add these access credentials to the Kopia server. As mentioned above, when a Kopia server starts, it registers the set of users defined in the `spec.server.accessSecretRef` property of the `RepositoryServer` resource. The permissions of these access users are governed by the Kopia server . The secret referenced by the `spec.server.accessSecretRef` property must contain at least one username/password pair. This secret is mounted to the Kopia server via the pod's `spec.volumes` API. The following YAML shows an example of user access credentials that can be used during the creation of the server resource. ```yaml apiVersion: v1 kind: Secret metadata: name: repository-monitoring-server-access namespace: kanister labels: repo.kanister.io/target-namespace: monitoring type: Opaque data: <username1>@<hostname1>: <password1> <username2>@<hostname2>: <password2> ``` The server also establishes a watch on its access users file. When this file is updated (due to changes to the underlying secret), the server will also rebuild its access user list. Instead of assuming full responsibility over the management of different Kopia credentials, this design proposes the adoption of a shared responsibility model, where users are responsible for the long-term safekeeping of their credentials. This model ensures Kanister remains free from a hard dependency on any crypto packages, and vault-like functionalities. If misplaced, Kanister will not be able to recover these credentials. Currently, we are using Kopia CLI to perform the repository and kopia repository server operations in Kanister. The repository controller creates a pod, executes commands through `kube.exec` on the pod to perform repository operations. The commands include: repo connect start server add users refresh server Kopia provides an SDK to perform repository operations which can be used instead of CLI. The detailed design is explained in the document ." } ]
{ "category": "Runtime", "file_name": "kanister-kopia-integration.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "When contributing to this repository, please first discuss the change you wish to make via an . Create an outlining the fix or feature. Fork the rekor repository to your own github account and clone it locally. Hack on your changes. Update the README.md with details of changes to any interface, this includes new environment variables, exposed ports, useful file locations, CLI parameters and new or changed configuration values. Correctly format your commit message see below. Ensure that CI passes, if it fails, fix the failures. Every pull request requires a review from the before merging. If your pull request consists of more than one commit, please squash your commits as described in We follow the commit formatting recommendations found on . Well formed commit messages not only help reviewers understand the nature of the Pull Request, but also assists the release process where commit messages are used to generate release notes. A good example of a commit message would be as follows: ``` Summarize changes in around 50 characters or less More detailed explanatory text, if necessary. Wrap it to about 72 characters or so. In some contexts, the first line is treated as the subject of the commit and the rest of the text as the body. The blank line separating the summary from the body is critical (unless you omit the body entirely); various tools like `log`, `shortlog` and `rebase` can get confused if you run the two together. Explain the problem that this commit is solving. Focus on why you are making this change as opposed to how (the code explains that). Are there side effects or other unintuitive consequences of this change? Here's the place to explain them. Further paragraphs come after blank lines. Bullet points are okay, too Typically a hyphen or asterisk is used for the bullet, preceded by a single space, with blank lines in between, but conventions vary here If you use an issue tracker, put references to them at the bottom, like this: Resolves: #123 See also: #456, #789 ``` Note the `Resolves #123` tag, this references the issue raised and allows us to ensure issues are associated and closed when a pull request is merged. Please refer to for a complete list of issue references. Should your pull request consist of more than one commit (perhaps due to a change being requested during the review cycle), please perform a git squash once a reviewer has approved your pull request. A squash can be performed as follows. Let's say you have the following commits: initial commit second commit final commit Run the command below with the number set to the total commits you wish to squash (in our case 3 commits): git rebase -i HEAD~3 You default text editor will then open up and you will see the following:: pick eb36612 initial commit pick 9ac8968 second commit pick a760569 final commit We want to rebase on top of our first commit, so we change the other two commits to `squash`: pick eb36612 initial commit squash 9ac8968 second commit squash a760569 final commit After this, should you wish to update your commit message to better summarise all of your pull request, run: git commit --amend You will then need to force push (assuming your initial commit(s) were posted to github): git push origin your-branch --force Alternatively, a core member can squash your commits within Github. Rekor adheres to and enforces the Code of Conduct. Please take a moment to read the document." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTORS.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "layout: global title: Running Apache Hive with Alluxio This guide describes how to run with Alluxio, so that you can easily store Hive tables in Alluxio's tiered storage. Setup Java for Java 8 Update 60 or higher (8u60+), 64-bit. . If you are using Hive2.1+, make sure to before starting Hive. `$HIVE_HOME/bin/schematool -dbType derby -initSchema` Make sure that the Alluxio client jar is available. This Alluxio client jar file can be found at `{{site.ALLUXIOCLIENTJAR_PATH}}` in the tarball downloaded from Alluxio . To run Hive on Hadoop MapReduce, please also follow the instructions in to make sure Hadoop MapReduce can work with Alluxio. In the following sections of this documentation, Hive is running on Hadoop MapReduce. Distribute Alluxio client jar on all Hive nodes and include the Alluxio client jar to Hive classpath so Hive can query and access data on Alluxio. Within Hive installation directory , set `HIVEAUXJARS_PATH` in `conf/hive-env.sh`: ```shell $ export HIVEAUXJARSPATH={{site.ALLUXIOCLIENTJARPATH}}:${HIVEAUXJARS_PATH} ``` This section talks about how to use Hive to create new either tables from files stored on Alluxio. In this way, Alluxio is used as one of the filesystems to store Hive tables similar to HDFS. The advantage of this setup is that it is fairly straightforward and each Hive table is isolated from other tables. One typical use case is to store frequently used Hive tables in Alluxio for high throughput and low latency by serving these files from memory storage. TipsAll the following Hive CLI examples are also applicable to Hive Beeline. You can try these commands out in Beeline shell. Here is an example to create a table in Hive backed by files in Alluxio. You can download a data file (e.g., `ml-100k.zip`) from . Unzip this file and upload the file `u.user` into `ml-100k/` on Alluxio: ```shell $ ./bin/alluxio fs mkdir /ml-100k $ ./bin/alluxio fs cp file:///path/to/ml-100k/u.user alluxio://master_hostname:port/ml-100k ``` View Alluxio WebUI at `http://master_hostname:19999` and you can see the directory and file Hive creates: Then create a new internal table: ```sql hive> CREATE TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION 'alluxio://master_hostname:port/ml-100k'; ``` Make the same setup as the previous example, and create a new external table: ```sql hive> CREATE EXTERNAL TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE LOCATION 'alluxio://master_hostname:port/ml-100k'; ``` The difference is that Hive will manage the lifecycle of internal tables. When you drop an internal table, Hive deletes both the table metadata and the data file from" }, { "data": "Now you can query the created table. For example: ```sql hive> select * from u_user; ``` And you can see the query results from shell: When Hive is already serving and managing the tables stored in HDFS, Alluxio can also serve them for Hive if HDFS is mounted as the under storage of Alluxio. In this example, we assume an HDFS cluster is mounted as the under storage of Alluxio root directory (i.e., property `alluxio.dora.client.ufs.root=hdfs://namenode:port/` is set in `conf/alluxio-site.properties`). 
We assume that the `hive.metastore.warehouse.dir` property (within your Hive installation `conf/hive-default.xml`) is set to `/user/hive/warehouse` which is the default value, and the internal table is already created like this: ```sql hive> CREATE TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'; hive> LOAD DATA LOCAL INPATH '/path/to/ml-100k/u.user' OVERWRITE INTO TABLE u_user; ``` The following HiveQL statement will change the table data location from HDFS to Alluxio ```sql hive> alter table uuser set location \"alluxio://masterhostname:port/user/hive/warehouse/u_user\"; ``` Verify whether the table location is set correctly: ```sql hive> desc formatted u_user; ``` Note that, accessing files in `alluxio://masterhostname:port/user/hive/warehouse/uuser` for the first time will be translated to access corresponding files in `hdfs://namenode:port/user/hive/warehouse/u_user` (the default Hive internal data storage); once the data is cached in Alluxio, Alluxio will serve them for follow-up queries without loading data again from HDFS. The entire process is transparent to Hive and users. Assume there is an existing external table `u_user` in Hive with location set to `hdfs://namenode_hostname:port/ml-100k`. You can use the following HiveQL statement to check its \"Location\" attribute: ```sql hive> desc formatted u_user; ``` Then use the following HiveQL statement to change the table data location from HDFS to Alluxio ```sql hive> alter table uuser set location \"alluxio://masterhostname:port/ml-100k\"; ``` In both cases above about changing table data location to Alluxio, you can also change the table location back to HDFS: ```sql hive> alter table TABLE_NAME set location \"hdfs://namenode:port/table/path/in/HDFS\"; ``` Instructions and examples till here illustrate how to use Alluxio as one of the filesystems to store tables in Hive, together with other filesystems like HDFS. They do not require to change the global setting in Hive such as the default filesystem which is covered in the next section. The process of moving a partitioned table is quite similar to moving a non-partitioned table, with one caveat. In addition to altering the table location, we also need to modify the partition location for all the partitions. See the following for an" }, { "data": "```sql hive> alter table TABLENAME partition(PARTITIONCOLUMN = VALUE) set location \"hdfs://namenode:port/table/path/partitionpath\"; ``` There are two ways to specify any Alluxio client properties for Hive queries when connecting to Alluxio service: Specify the Alluxio client properties in `alluxio-site.properties` and ensure that this file is on the classpath of Hive service on each node. Add the Alluxio site properties to `conf/hive-site.xml` configuration file on each node. For example, change `alluxio.user.file.writetype.default` from default `ASYNCTHROUGH` to `CACHETHROUGH`. One can specify the property in `alluxio-site.properties` and distribute this file to the classpath of each Hive node: ```properties alluxio.user.file.writetype.default=CACHE_THROUGH ``` Alternatively, modify `conf/hive-site.xml` to have: ```xml <property> <name>alluxio.user.file.writetype.default</name> <value>CACHE_THROUGH</value> </property> ``` If you are running Alluxio in HA mode with internal leader election, set the Alluxio property `alluxio.master.rpc.addresses` in `alluxio-site.properties`. Ensure that this file is on the classpath of Hive. 
```properties alluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Alternatively one can add the properties to the Hive `conf/hive-site.xml`: ```xml <configuration> <property> <name>alluxio.master.rpc.addresses</name> <value>masterhostname1:19998,masterhostname2:19998,masterhostname3:19998</value> </property> </configuration> ``` For information about how to connect to Alluxio HA cluster using Zookeeper-based leader election, please refer to . If the master RPC addresses are specified in one of the configuration files listed above, you can omit the authority part in Alluxio URIs: ```sql hive> alter table u_user set location \"alluxio:///ml-100k\"; ``` Since Alluxio 2.0, one can directly use Alluxio HA-style authorities in Hive queries without any configuration setup. See for more details. This section talks about how to use Alluxio as the default file system for Hive. Apache Hive can also use Alluxio through a generic file system interface to replace the Hadoop file system. In this way, Hive uses Alluxio as the default file system and its internal metadata and intermediate results will be stored in Alluxio by default. Add the following property to `hive-site.xml` in your Hive installation `conf` directory ```xml <property> <name>fs.defaultFS</name> <value>alluxio://master_hostname:port</value> </property> ``` Create directories in Alluxio for Hive: ```shell $ ./bin/alluxio fs mkdir /tmp $ ./bin/alluxio fs mkdir /user/hive/warehouse $ ./bin/alluxio fs chmod 775 /tmp $ ./bin/alluxio fs chmod 775 /user/hive/warehouse ``` Then you can follow the to use Hive. Create a table in Hive and load a file in local path into Hive: Again use the data file in `ml-100k.zip` from as an example. ```sql hive> CREATE TABLE u_user ( userid INT, age INT, gender CHAR(1), occupation STRING, zipcode STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS TEXTFILE; hive> LOAD DATA LOCAL INPATH '/path/to/ml-100k/u.user' OVERWRITE INTO TABLE u_user; ``` View Alluxio Web UI at `http://master_hostname:19999` and you can see the directory and file Hive creates: Using a single query: ```sql hive> select * from u_user; ``` And you can see the query results from shell: If you wish to modify how your Hive client logs information, see the detailed page within the Hive documentation that ." } ]
{ "category": "Runtime", "file_name": "Hive.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "A Helm chart that collects CustomResourceDefinitions (CRDs) from OpenEBS. | Key | Type | Default | Description | |--|||-| | csi.volumeSnapshots.enabled | bool | `true` | Install Volume Snapshot CRDs | | csi.volumeSnapshots.keep | bool | `true` | Keep CRDs on chart uninstall |" } ]
{ "category": "Runtime", "file_name": "helm.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "The policy controller maintains a number of ipsets which are subsequently referred to by the iptables rules used to effect network policy specifications. These ipsets are created, modified and destroyed automatically in response to Pod, Namespace and NetworkPolicy object updates from the k8s API server: A `hash:ip` set per namespace, containing the IP addresses of all pods in that namespace for which default ingress is allowed A `hash:ip` set per namespace, containing the IP addresses of all pods in that namespace for which default egress is allowed A `list:set` per distinct (across all network policies in all namespaces) namespace selector mentioned in a network policy, containing the names of any of the above hash:ip sets whose corresponding namespace labels match the selector A `hash:ip` set for each distinct (within the scope of the containing network policy's namespace) pod selector mentioned in a network policy, containing the IP addresses of all pods in the namespace whose labels match that selector A `hash:net` set for each distinct (within the scope of the containing network policy's namespace) `except` list of CIDR's mentioned in the network policies IPsets are implemented by the kernel module `xt_set`, without which weave-npc will not work. ipset names are generated deterministically from a string representation of the corresponding label selector. Because ipset names are limited to 31 characters in length, this is done by taking a SHA hash of the selector string and then printing that out as a base 85 string with a \"weave-\" prefix e.g.: weave-k?Z;25^M}|1s7P3|H9i;*;MhG Because pod selectors are scoped to a namespace, we need to make sure that if the same selector definition is used in different namespaces that we maintain distinct ipsets. Consequently, for such selectors the namespace name is prepended to the label selector string before hashing to avoid clashes. The policy controller maintains several iptables chains in response to changes to pods, namespaces and network policies. `WEAVE-NPC` chain contains static rules to ACCEPT traffic that is `RELATED,ESTABLISHED` and run `NEW` traffic through `WEAVE-NPC-DEFAULT` followed by `WEAVE-NPC-INGRESS` chains Static configuration: ``` iptables -A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT iptables -A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS ``` The policy controller maintains a rule in this chain for every namespace whose ingress isolation policy is" }, { "data": "The purpose of this rule is simply to ACCEPT any traffic destined for such namespaces before it reaches the ingress chain. ``` iptables -A WEAVE-NPC-DEFAULT -m set --match-set $NSIPSET dst -j ACCEPT ``` For each namespace network policy ingress rule peer/port combination: ``` iptables -A WEAVE-NPC-INGRESS -p $PROTO [-m set --match-set $SRCSET] -m set --match-set $DSTSET --dport $DPORT -j ACCEPT ``` `WEAVE-NPC-EGRESS` chain contains static rules to ACCEPT traffic that is `RELATED,ESTABLISHED` and run `NEW` traffic through `WEAVE-NPC-EGRESS-DEFAULT` followed by `WEAVE-NPC-EGRESS-CUSTOM` chains Static configuration: ``` iptables -A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT iptables -A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT iptables -A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM iptables -A WEAVE-NPC-EGRESS -m mark ! 
--mark 0x40000/0x40000 -j DROP ``` The policy controller maintains a rule in this chain for every namespace whose egress isolation policy is `DefaultAllow`. The purpose of this rule is simply to ACCEPT any traffic originating from such namespace before it reaches the egress chain. ``` iptables -A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set $NSIPSET src -j WEAVE-NPC-EGRESS-ACCEPT iptables -A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set $NSIPSET src -j RETURN ``` `WEAVE-NPC-EGRESS-ACCEPTS` chain contains static rules to mark traffic Static configuration: ``` iptables -A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000 ``` For each namespace network policy egress rule peer/port combination: ``` iptables -A WEAVE-NPC-EGRESS-CUSTOM -p $PROTO [-m set --match-set $SRCSET] -m set --match-set $DSTSET --dport $DPORT -j ACCEPT ``` To direct traffic into the policy engine: ``` iptables -A INPUT -i weave -j WEAVE-NPC-EGRESS iptables -A FORWARD -i weave -j WEAVE-NPC-EGRESS iptables -A FORWARD -o weave -j WEAVE-NPC ``` Note this only affects traffic which egresses the bridge on a physical port which is not the Weave Net router - in other words, it is destined for an application container veth. The following traffic is affected: Traffic bridged between local application containers Traffic bridged from the router to a local application container Traffic originating from the internet destined for nodeports - this is routed via the FORWARD chain to a container pod IP after DNAT The following traffic is NOT affected: Traffic bridged from a local application container to the router Traffic originating from processes in the host network namespace (e.g. kubelet health checks) Traffic routed from an application container to the internet The above mechanism relies on the kernel module `br_netfilter` being loaded and enabled via `/proc/sys/net/bridge/bridge-nf-call-iptables`. See these resources for helpful context: http://ebtables.netfilter.org/brfwia/brfwia.html https://commons.wikimedia.org/wiki/File:Netfilter-packet-flow.svg" } ]
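To make the ipset naming scheme described earlier more concrete, here is a small illustrative Go sketch. It is not the actual weave-npc implementation (which has its own hashing and encoding details); it only demonstrates the idea of hashing a namespace-scoped selector string and truncating the encoded result to fit ipset's 31-character limit:

```go
package main

import (
	"crypto/sha256"
	"encoding/ascii85"
	"fmt"
)

// ipsetName derives a deterministic ipset name from a label selector.
// The namespace is prepended because pod selectors are namespace-scoped,
// so identical selectors in different namespaces must map to distinct sets.
func ipsetName(namespace, selector string) string {
	sum := sha256.Sum256([]byte(namespace + ":" + selector))
	enc := make([]byte, ascii85.MaxEncodedLen(len(sum)))
	n := ascii85.Encode(enc, sum[:])
	name := "weave-" + string(enc[:n])
	if len(name) > 31 { // ipset names are limited to 31 characters
		name = name[:31]
	}
	return name
}

func main() {
	fmt.Println(ipsetName("default", "role=frontend"))
}
```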
{ "category": "Runtime", "file_name": "weavenpc-design.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "layout: global title: Presto SDK with Local Cache Presto provides an SDK way to combined with Alluxio. With the SDK, hot data that need to be scanned frequently can be cached locally on Presto Workers that execute the TableScan operator. Setup Java for Java 8 Update 161 or higher (8u161+), 64-bit. {:target=\"_blank\"}. Alluxio has been set up and is running following . Make sure that the Alluxio client jar that provides the SDK is available. This Alluxio client jar file can be found at `/<PATHTOALLUXIO>/client/alluxio-${VERSION}-client.jar` in the tarball downloaded from Alluxio download page. Make sure that Hive Metastore is running to serve metadata information of Hive tables. The default port of Hive Metastore is `9083`. Executing `lsof -i:9083` can check whether the Hive Metastore process exists or not. Presto gets the database and table metadata information (including file system locations) from the Hive Metastore, via Presto's Hive Connector. Here is an example Presto configuration file `${PRESTO_HOME}/etc/hive.properties`, for a catalog using the Hive connector, where the metastore is located on `localhost`. ```properties connector.name=hive-hadoop2 hive.metastore.uri=thrift://localhost:9083 ``` To enable local caching, add the following configurations in `${PRESTO_HOME}/etc/hive.properties`: ```properties hive.node-selection-strategy=SOFT_AFFINITY cache.enabled=true cache.type=ALLUXIO cache.base-directory=file:///tmp/alluxio cache.alluxio.max-cache-size=100MB ``` Here `cache.enabled=true` and `cache.type=ALLUXIO` are to enable the local caching feature in Presto. `cache.base-directory` is used for specifying the path for local caching. `cache.alluxio.max-cache-size` is to allocate the space for local caching. As Presto communicates with Alluxio servers by the SDK provided in the Alluxio client jar, the Alluxio client jar must be in the classpath of Presto servers. Put the Alluxio client jar `/<PATHTOALLUXIO>/client/alluxio-2.9.1-client.jar` into the directory `${PRESTO_HOME}/plugin/hive-hadoop2/` (this directory may differ across versions) on all Presto servers. Restart the Presto workers and coordinator ```shell $ ${PRESTO_HOME}/bin/launcher restart ``` After completing the basic configuration, Presto should be able to access data in Alluxio. Create a Hive table by hive client specifying its `LOCATION` to Alluxio. ```sql hive> CREATE TABlE employeeparquetalluxio (name string, salary int) PARTITIONED BY (doj string) STORED AS PARQUET LOCATION 'alluxio://Master01:19998/alluxio/employeeparquetalluxio'; ``` Replace `Master01:19998` to your Alluxio Master address. Note that we set `STORED AS PARQUET` here since currently only parquet and orc format is supported in Presto Local Caching. Insert some data into the created Hive table for testing. ```sql INSERT INTO employeeparquetalluxio select 'jack', 15000, '2023-02-26'; INSERT INTO employeeparquetalluxio select 'make', 25000, '2023-02-25'; INSERT INTO employeeparquetalluxio select 'amy', 20000, '2023-02-26'; ``` Follow {:target=\"blank\"} to download the` presto-cli-<PRESTOVERSION>-executable.jar`, rename it to `presto-cli`, and make it executable with `chmod + x`. Run a single query with `presto-cli` to select the data from the" }, { "data": "```sql presto> SELECT * FROM employeeparquetalluxio; ``` You can see that data are cached in the directory specified in `/etc/catalog/hive.properties`. In our example, we should see the files are cached in `/tmp/alluxio/LOCAL`. 
In order to expose the metrics of local caching, follow the steps below: Step 1: Add `-Dalluxio.metrics.conf.file=<ALLUXIO_HOME>/conf/metrics.properties` to specify the metrics configuration for the SDK used by Presto. Step 2: Add `sink.jmx.class=alluxio.metrics.sink.JmxSink` to `<ALLUXIO_HOME>/conf/metrics.properties` to expose the metrics. Step 3: Add `cache.alluxio.metrics-enabled=true` in `<PRESTO_HOME>/etc/catalog.hive.properties` to enable metric collection. Step 4: Restart the Presto process by executing `<PRESTO_HOME>/bin/laucher restart`. Step 5: Metrics about local caching should be seen in JMX if we access Presto's JMX RESTful API `<PRESTONODEHOSTNAME>:<PRESTOPORT>/v1/jmx`. The following metrics would be useful for tracking local caching: <table class=\"table table-striped\"> <tr> <th>Metric Name</th> <th>Type</th> <th>Description</th> </tr> <tr> <td markdown=\"span\">`Client.CacheBytesReadCache`</td> <td>METER</td> <td>Bytes read from client.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutErrors`</td> <td>COUNTER</td> <td>Number of failures when putting cached data in the client cache.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutInsufficientSpaceErrors`</td> <td>COUNTER</td> <td>Number of failures when putting cached data in the client cache due to insufficient space made after eviction.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutNotReadyErrors`</td> <td>COUNTER</td> <td>Number of failures when cache is not ready to add pages.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutBenignRacingErrors`</td> <td>COUNTER</td> <td>Number of failures when adding pages due to racing eviction. This error is benign.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutStoreWriteErrors`</td> <td>COUNTER</td> <td>Number of failures when putting cached data in the client cache due to failed writes to page store.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutEvictionErrors`</td> <td>COUNTER</td> <td>Number of failures when putting cached data in the client cache due to failed eviction.</td> </tr> <tr> <td markdown=\"span\">`Client.CachePutStoreDeleteErrors`</td> <td>COUNTER</td> <td>Number of failures when putting cached data in the client cache due to failed deletes in page store.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheGetErrors`</td> <td>COUNTER</td> <td>Number of failures when getting cached data in the client cache.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheGetNotReadyErrors`</td> <td>COUNTER</td> <td>Number of failures when cache is not ready to get pages.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheGetStoreReadErrors`</td> <td>COUNTER</td> <td>Number of failures when getting cached data in the client cache due to failed read from page stores.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheDeleteNonExistingPageErrors`</td> <td>COUNTER</td> <td>Number of failures when deleting pages due to absence.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheDeleteNotReadyErrors`</td> <td>COUNTER</td> <td>Number of failures when cache is not ready to delete pages.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheDeleteFromStoreErrors`</td> <td>COUNTER</td> <td>Number of failures when deleting pages from page stores.</td> </tr> <tr> <td markdown=\"span\">`Client.CacheHitRate`</td> <td>GAUGE</td> <td>Cache hit rate: (# bytes read from cache) / (# bytes requested).</td> </tr> <tr> <td markdown=\"span\">`Client.CachePagesEvicted`</td> <td>METER</td> <td>Total number of pages evicted from the client cache.</td> </tr> <tr> <td 
markdown=\"span\">`Client.CacheBytesEvicted`</td> <td>METER</td> <td>Total number of bytes evicted from the client cache.</td> </tr> </table>" } ]
{ "category": "Runtime", "file_name": "presto-sdk.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage services & loadbalancers ``` -h, --help help for service ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Delete a service - Display service information - List services - Update a service" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_service.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "The method of installation for Sysbox depends on the environment where Sysbox will be installed: - If a package for your distro is not yet available, or if you want to get the latest changes from upstream. NOTE: See the for the list of Linux distros where Sysbox is supported and the installation methods supported for each. Also, see the for the list of supported machine architectures (e.g., amd64, arm64)." } ]
{ "category": "Runtime", "file_name": "install.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "title: Use JuiceFS on AWS sidebar_position: 4 slug: /clouds/aws Amazon Web Services (AWS) is a leading global cloud computing platform that offers a wide range of cloud computing services. With its extensive product line, AWS provides flexible options for creating and utilizing JuiceFS file systems. JuiceFS has a rich set of API interfaces. For AWS, JuiceFS can typically be used in the following products: Amazon EC2: Use by mounting the JuiceFS file system Amazon Elastic Kubernetes Service (EKS): Utilizing the JuiceFS CSI Driver Amazon EMR: Using the JuiceFS Hadoop Java SDK A JuiceFS file system consists of two parts: Object Storage: Used for data storage. Metadata Engine: A database used for storing metadata. Depending on specific requirements, you can choose to use fully managed databases and S3 object storage on AWS, or deploy them on EC2 and EKS by yourself. :::tip This article focuses on the method of creating a JuiceFS file system using AWS fully managed services. For self-hosted scenarios, please refer to the and guides, as well as the corresponding program documentation. ::: S3 is the object storage service provided by AWS. You can create a bucket in the corresponding region as needed, or authorize the JuiceFS client to automatically create a bucket through . Amazon S3 provides various , for example: S3 Standard: Standard storage, suitable for general-purpose storage with frequent data access, offering real-time access with no retrieval costs. S3 Standard-IA: Infrequent Access (IA) storage, suitable for data that is accessed less frequently but needs to be stored for the long term, offering real-time access with retrieval costs. S3 Glacier: Archive storage, suitable for data that is rarely accessed and requires retrieval (thawing) before access. You can set the storage class when creating or mounting the JuiceFS file system, please refer to for details. It is recommended to choose the standard storage class first. Although other storage classes may have lower unit storage prices, they often come with minimum storage duration requirements and retrieval costs. Furthermore, accessing object storage services requires authentication using Access Key (a.k.a. access key ID) and Secret Key (a.k.a. secret access key). You can refer to the document for creating the necessary policies. When accessing S3 from an EC2 cloud server, you can also assign an to the EC2 instance to enable the S3 API to be called without using access" }, { "data": "AWS offers various network-based fully managed databases that can be used to build the JuiceFS metadata engine, mainly including: Amazon MemoryDB for Redis (hereinafter referred to as MemoryDB): A durable Redis in-memory database service that provides extremely fast performance. Amazon RDS: Fully managed databases such as MariaDB, MySQL, PostgreSQL, and more. :::note Although Amazon ElastiCache for Redis (hereinafter referred to as ElastiCache) also provides services compatible with the Redis protocol, compared with MemoryDB, ElastiCache cannot provide \"strong consistency guarantee\", so MemoryDB is recommended. ::: Please refer to the documentation to install the latest JuiceFS Community Edition client based on the operating system used by your EC2 instance. 
For example, if you are using a Linux system, you can use the one-liner installation script to automatically install the client: ```shell curl -sSL https://d.juicefs.com/install | sh - ``` You can assign an IAM role with permission to your EC2 instance, allowing it to create and use S3 Buckets directly without using Access Key and Secret Key. Here we take MemoryDB as an example, please refer to and AWS documentation to create a database. In order to allow EC2 instances to access the Redis cluster, you need to create them in the same VPC or add rules to the security group of the Redis cluster to allow access from the EC2 instance. :::note If you are creating a Redis 7.0 version cluster, you will need to install JuiceFS version 1.1 or above on the client side. ::: ```shell juicefs format --storage s3 \\ --bucket https://s3.ap-east-1.amazonaws.com/myjfs \\ rediss://clustercfg.myredis.hc79sw.memorydb.ap-east-1.amazonaws.com:6379/1 \\ myjfs ``` ```shell sudo juicefs mount -d \\ rediss://clustercfg.myredis.hc79sw.memorydb.ap-east-1.amazonaws.com:6379/1 \\ /mnt/myjfs ``` To mount and use the file system created by authorizing S3 access through an IAM role from outside of AWS, you will need to use `juicefs config` to add the Access Key and Secret Key for the file system. ```shell juicefs config \\ --access-key=<your-access-key> \\ --secret-key=<your-secret-key> \\ rediss://clustercfg.myredis.hc79sw.memorydb.ap-east-1.amazonaws.com:6379/1 ``` Please refer to the document for details on how to automatically mount JuiceFS at boot. Amazon EKS supports : EKS managed node groups: Use Amazon EC2 as compute nodes Self-managed nodes: Use Amazon EC2 as compute nodes Fargate: A serverless compute engine JuiceFS CSI Driver is not currently supported on Fargate. Please create a cluster using \"EKS managed node groups\" or \"self-managed nodes\" to use JuiceFS CSI Driver. Amazon EKS is a standard Kubernetes cluster and can be managed using tools such as `eksctl`, `kubectl`, and `helm`. For installation and usage instructions, please refer to the . Please refer to the document for instructions." } ]
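As a rough sketch of installing the JuiceFS CSI Driver on an EKS cluster with Helm — the repository URL and chart name below are the upstream defaults at the time of writing, so verify them against the JuiceFS CSI Driver documentation:

```shell
helm repo add juicefs https://juicedata.github.io/charts/
helm repo update
helm install juicefs-csi-driver juicefs/juicefs-csi-driver \
  --namespace kube-system
```

Once the driver pods are running, JuiceFS volumes can be provisioned through a StorageClass that references your metadata engine and S3 bucket, as described in the CSI Driver documentation.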
{ "category": "Runtime", "file_name": "aws.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, a Rook admin can declare how they want their cluster deployed by specifying values in the . However, after a cluster has been initially declared and deployed, it is not currently possible to update the Cluster CRD and have those desired changes reflected in the actual cluster state. This document will describe a design for how cluster updating can be implemented, along with considerations, trade-offs, and a suggested scope of work. As previously mentioned, the interface for a user who wants to update their cluster will be the Cluster CRD. To specify changes to a Rook cluster, the user could run a command like the following: ```console kubectl -n rook-ceph edit cluster.ceph.rook.io rook-ceph ``` This will bring up a text editor with the current value of the cluster CRD. After their desired edits are made, for instance to add a new storage node, they will save and exit the editor. Of course, it is also possible to update a cluster CRD via the Kubernetes API instead of `kubectl`. This will trigger an update of the CRD object, which the operator is already subscribed to events for. The update event is provided both the new and old cluster objects, making it possible to perform a diff between desired and actual state. Once the difference is calculated, the operator will begin to bring actual state in alignment with desired state by performing similar operations to what it does to create a cluster in the first place. Controllers, pod templates, config maps, etc. will be updated and configured with the end result of the Rook cluster pods and state representing the users desired cluster state. The most common case for updating a Rook cluster will be to add and remove storage resources. This will essentially alter the number of OSDs in the cluster which will cause data rebalancing and migration. Therefore, updating storage resources should be performed by the operator with special consideration as to not degrade cluster performance and health beyond acceptable levels. The Cluster CRD has many fields, but not all of them will be updatable (i.e., the operator will not attempt to make any changes to the cluster for updates to some fields). The following fields will be supported for updates: `mon`: Ceph mon specific settings can be changed. `count`: The number of monitors can be updated and the operator will ensure that as monitors are scaled up or down the cluster remains in quorum. `allowMultiplePerNode`: The policy to allow multiple mons to be placed on one node can be toggled. `deviceFilter`: The regex filter for devices allowed to be used for storage can be updated and OSDs will be added or removed to match the new filter pattern. `devicePathFilter`: The regex filter for paths of devices allowed to be used for storage can be updated and OSDs will be added or removed to match the new filter pattern. `useAllDevices`: If this value is updated to `true`, then OSDs will be added to start using all devices on" }, { "data": "However, if this value is updated to `false`, the operator will only allow OSDs to be removed if there is a value set for `deviceFilter`. This is to prevent an unintentional action by the user that would effectively remove all data in the cluster. `useAllNodes`: This value will be treated similarly to `useAllDevices`. Updating it to `true` is a safe action as it will add more nodes and their storage to the cluster, but updating it to `false` is not always a safe action. 
If there are no individual nodes listed under the `nodes` field, then updating this field to `false` will not be allowed. `resources`: The CPU and memory limits can be dynamically updated. `placement`: The placement of daemons across the cluster can be updated, but it is dependent on the specific daemon. For example, monitors can dynamically update their placement as part of their ongoing health checks. OSDs can not update their placement at all since they have data gravity that is tied to specific nodes. Other daemons can decide when and how to update their placement, for example doing nothing for current pods and only honoring new placement settings for new pods. `nodes`: Specific storage nodes can be added and removed, as well as additional properties on the individual nodes that have not already been described above: `devices`: The list of devices to use for storage can have entries added and removed. `directories`: The list of directories to use for storage can also be updated. All other properties not listed above are not supported for runtime updates. Some particular unsupported fields to note: `dataDirHostPath`: Once the local host directory for storing cluster metadata and config has been set and populated, migrating it to a new location is not supported. `hostNetwork`: After the cluster has been initialized to either use host networking or pod networking, the value can not be changed. Changing this value dynamically would very likely cause a difficult to support transition period while pods are transferring between networks and would certainly impact cluster health. It is in the user's best interests to provide early feedback if they have made an update to their Cluster CRD that is invalid or not supported. Along with , we should use the Kubernetes CRD validation feature to verify any changes to the Cluster CRD and provide helpful error messages in the case that their update can not be fulfilled. It is important to remember that . Because of this, we need to be very careful when determining whether it is a safe operation to remove an OSD. We need to be absolutely sure that the user really intended to remove the OSD from a device, as opposed to the device name randomly changing and becoming out of the device filter or list. What is especially challenging here is that before the initial deployment of OSDs onto a node, which creates the UUIDs for each device, there is no known consistent and user friendly way to specify devices. A lot of environments do not have labels, IDs, UUIDs, etc. for their devices at first boot and the only way to address them is by device name, such as" }, { "data": "This is unfortunate because it is a volatile identifier. Some environments do have IDs at first boot and we should consider allowing users to specify devices by those IDs instead of names in the near future. That effort is being tracked by . The main approach that will be taken to solve this issue is to always compare the device UUID from a node's saved OSD config map against the device UUIDs of the current set of device names. If the two do not match, then it is not a safe operation to remove the OSD from the device. Let's walk through a couple simple scenarios to illustrate this approach: NOT SAFE: Device name has changed, but filter has not been updated by the user: User initially specifies `sda` via device filter or list. Rook configures `sda` and gets an OSD up and running. The node reboots which causes the OSD pod to restart. 
The filter still specifies `sda`, but the device has changed its name to `sdb`. The device is now out of the filter. We look at the node's saved OSD config and see that we originally set up `sda` with device UUID `wxyz-1234`. The user's filter still says to use `sda`, so going by the saved config and not what the current devices names are, we know that the old `sda` (device UUID `wxyz-1234`), which is now `sdb` should NOT be removed. SAFE: User has updated the filter and the device name has not changed: User initially specifies `sda` via device filter or list. Rook configures `sda` and gets an OSD up and running. User updates the Cluster CRD to change the device filter or list to now be `sdb`. The OSD pod restarts and when it comes back up it sees that the previously configured `sda` is no longer in the filter. The pod checks the device UUID of `sda` in its saved config and compares that to the device UUID of the current `sda`. The two match, so the pod knows it's a safe (user intended) operation to remove the OSD from `sda`. When the operator receives an event that the Cluster CRD has been updated, it will need to perform some orchestration in order to bring actual state of the cluster in agreement with the desired state. For example, when `mon.count` is updated, the operator will add or remove a single monitor at a time, ensuring that quorum is restored before moving onto the next monitor. Updates to the storage spec for the cluster require even more careful consideration and management by the operator, which will be discussed in this section. First and foremost, changes to the cluster state should not be carried out when the cluster is not in a healthy state. The operator should wait until cluster health is restored until any orchestration is carried out. It is important to remember that a single OSD pod can contain multiple OSD processes and that the operator itself does not have detailed knowledge of the storage resources of each node. More specifically, the devices that can be used for storage" }, { "data": "match `deviceFilter`) is not known until the OSD pod has been started on a given node. As mentioned previously, it is recommended to make storage changes to the cluster one OSD at a time. Therefore, the operator and the OSD pods will need to coordinate their efforts in order to adhere to this guidance. When a cluster update event is received by the operator, it will work on a node by node basis, ensuring all storage updates are completed by the OSD pod for that node before moving to the next. When an OSD pod starts up and has completed its device discovery, it will need to perform a diff of the desired storage against the actual storage that is currently included in the cluster. This diff will determine the set of OSD instances that need to be removed or added within the pod. Fortunately, the OSD pod start up is already idempotent and already handles new storage additions, so the remaining work will be the following: Safely removing existing OSDs from the cluster Waiting for data migration to complete and all placement groups to become clean Signaling to the operator that the pod has completed its storage updates We should consider an implementation that allows the OSD pod to refresh it's set of OSDs without restarting the entire pod, but since the replication controller's pod template spec needs to be updated by the operator in order to convey this information to the pod, we may need to live with restarting the pod either way. 
Remember that this will be done one node at a time to mitigate impact to cluster health. Also, other types of update operations to the cluster (e.g., software upgrade) should be blocked while a cluster update is ongoing. The Cluster CRD status will be kept up to date by the operator so the user has some insight into the process being carried out. While the operator is carrying out an update to the cluster, the Cluster CRD `status` will be set to `updating`. If there are any errors during the process, the `message` field will be updated with a specific reason for the failure. We should also update documentation for our users with easy commands to query the status and message fields so they can get more information easily. As mentioned previously, the OSD pods need to communicate to the operator when they are done orchestrating their local OSD instance changes. To make this effort more resilient and tolerant of operator restarts, this effort should be able to be resumed. For example, if the operator restarts while an OSD pod is draining OSDs, the operator should not start telling other OSD pods to do work. The OSDs and operator will jointly maintain a config map to track the status of storage update operations within the cluster. When the operator initially requests an OSD pod to compute its storage diff, it will update a config map with an entry for the OSD containing a status of `computingDiff` and a current" }, { "data": "When the OSD pod has finished computation and started orchestrating changes, it will update the entry with a status of `orchestrating` and a current timestamp. Finally, when the pod has finished, it will update the entry with `completed` and a current timestamp again, letting the operator know it is safe to move onto the next node. If the operator is restarted during this flow, it will look in the config map for any OSD pod that is not in the `completed` state. If it finds any, then it will wait until they are completed before moving onto another node. This approach will ensure that only 1 OSD pod is performing changes at a time. Note that this approach can also be used to ask an OSD pod to compute changes without having to restart the pod needlessly. If the OSD pods are watching the config map for changes, then they can compute a diff upon request of the operator. This section covers the general sequence for updating storage resources and outlines important considerations for cluster health. Before any changes begin, we will temporarily disable scrubbing of placement groups (the process of verifying data integrity of stored objects) to maximize cluster resources that can go to both client I/O and recovery I/O for data migration: ```console ceph osd set noscrub ceph osd set nodeep-scrub ``` Some also recommends limiting backfill and recovery work while storage is being added or removed. The intent is to maximize client I/O while sacrificing throughput of data migration. I do not believe this is strictly necessary and at this point I would prefer to not limit recovery work in the hopes of finishing data migrations as quickly as possible. I suspect that most cluster administrators would not be removing storage when the cluster is under heavy load in the first place. This trade-off can be revisited if we see unacceptable performance impact. As mentioned previously, we will add one OSD at a time in order to allow the cluster to rebalance itself in a controlled manner and to avoid getting into a situation where there is an unacceptable amount of churn and thrashing. 
Adding a new OSD is fairly simple since the OSD pod logic already supports it: If the entire node is being added, ensure the node is added to the CRUSH map: `ceph osd crush add-bucket {bucket-name} {type}` For each OSD: Register, format, add OSD to the crush map and start the OSD process like normal Wait for all placement groups to reach `active+clean` state, meaning data migration is complete. Removing storage is a more involved process and it will also be done one OSD at a time to ensure the cluster returns to a clean state. Of special note for removing storage is that a check should be performed to ensure that the cluster has enough remaining storage to recover (backfill) the entire set of objects from the OSD that is being removed. If the cluster does not have enough space for this (e.g., it would hit the `full` ratio), then the removal should not" }, { "data": "For each OSD to remove, the following steps should be performed: reweight the OSD to 0.0 with `ceph osd crush reweight osd.<id> 0.0`, which will trigger data migration from the OSD. wait for all data to finish migrating from the OSD, meaning all placement groups return to the `active+clean` state mark the OSD as `out` with `ceph osd out osd.<id>` stop the OSD process and remove it from monitoring remove the OSD from the CRUSH map: `ceph osd crush remove osd.<id>` delete the OSD's auth info: `ceph auth del osd.<id>` delete the OSD from the cluster: `ceph osd rm osd.<id>` delete the OSD directory from local storage (if using `dataDirHostPath`): `rm -fr /var/lib/rook/<osdID>` If the entire node is being removed, ensure that the host node is also removed from the CRUSH map: ```console $ ceph osd crush rm <host-bucket-name> ``` After all storage updates are completed, both additions and removals, then we can once again enable scrubbing: ```console ceph osd unset noscrub ceph osd unset nodeep-scrub ``` The number of placement groups in the cluster compared to the number of OSDs is a difficult trade-off without knowing the user's intent for future cluster growth. The general rule of thumb is that you want around 100 PGs per OSD. With less than that, you have potentially unbalanced distribution of data with certain OSDs storing more than others. With more PGs than that, you have increased overhead in the cluster because more OSDs need to coordinate with each other, impacting performance and reliability. It's important to note that shrinking placement group count (merging) is still not supported in Ceph. Therefore, you can only increase the number of placement groups (splitting) over time. If the cluster grows such that we have too few placement groups per OSD, then we can consider increasing the number of PGs in the cluster by incrementing the `pgnum` and `pgpnum` for each storage pool. Similar to adding new OSDs, this increase of PGs should be done incrementally and in a coordinated fashion to avoid degrading performance significantly in the cluster. Placement group management will be tracked in further detail in . The implementation of the design described in this document could be done in a phased approach in order to get critical features out sooner. One proposal for implementation phases would be: Simple add storage: Storage resources can be added to the Cluster CRD and extremely minimal orchestration would be performed to coordinate the storage changes. Cluster performance impact would not be ideal but may be tolerable for many scenarios, and Rook clusters would then have dynamic storage capabilities. 
Simple remove storage: Similar to the simple adding of storage, storage resources can be removed from the Cluster CRD with minimal orchestration. Dynamic storage orchestration: The more careful orchestration of storage changes would be implemented, with the operator and OSD pods coordinating across the cluster to slowly ramp up/down storage changes with minimal impact to cluster performance. Non-storage cluster field updates: All other properties in the cluster CRD supported for updates will be implemented (e.g., `mon`, `resources`, etc.). Placement Group updates: Placement group counts will be updated over time as the cluster grows in order to optimize cluster performance." } ]
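To make the device-UUID safety check described earlier more concrete, here is an illustrative Go sketch. This is not the actual Rook implementation — the function and parameter names are made up — it only captures the rule that an OSD is removed only when the device name dropped from the user's filter still refers to the same physical device that was originally configured:

```go
// safeToRemoveOSD reports whether removing the OSD on deviceName is an
// intentional user action. savedUUIDs maps device names to the device UUIDs
// recorded in the node's saved OSD config; currentUUIDs maps the device names
// discovered right now to their UUIDs.
func safeToRemoveOSD(savedUUIDs, currentUUIDs map[string]string, deviceName string) bool {
	saved, configured := savedUUIDs[deviceName]
	if !configured {
		// We never set up an OSD under this name, so there is nothing to remove.
		return false
	}
	current, present := currentUUIDs[deviceName]
	if !present {
		// The name no longer exists; it likely just moved (e.g. sda -> sdb),
		// so treat this as a rename rather than a removal request.
		return false
	}
	// Only when the name still points at the same physical device do we
	// interpret the updated filter/list as a deliberate removal.
	return saved == current
}
```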
{ "category": "Runtime", "file_name": "cluster-update.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Azure Data Lake Storage Gen2 This guide describes how to configure Alluxio with {:target=\"_blank\"} as the under storage system. Azure Data Lake Storage Gen2 is a set of capabilities dedicated to big data analytics, built on Azure Blob Storage. It converges the capabilities of Azure Data Lake Storage Gen1 with Azure Blob Storage. For more information about Azure Data Lake Storage Gen1, please read its {:target=\"_blank\"}. If you haven't already, please see before you get started. In preparation for using Azure Data Lake Storage Gen2 with Alluxio, {:target=\"_blank\"} or use an existing Data Lake storage. <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<AZURE_CONTAINER>`</td> <td markdown=\"span\">The container you want to use</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<AZURE_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the container, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<AZURE_ACCOUNT>`</td> <td markdown=\"span\">Your Azure storage account</td> </tr> </table> You also need a {:target=\"_blank\"} to authorize requests. To use Azure Data Lake Storage Gen2 as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify the underfs address by modifying `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=abfs://<AZURECONTAINER>@<AZUREACCOUNT>.dfs.core.windows.net/<AZURE_DIRECTORY>/ ``` Specify the Shared Key by adding the following property in `conf/alluxio-site.properties`: ```properties fs.azure.account.key.<AZUREACCOUNT>.dfs.core.windows.net=<AZURESHARED_KEY> ``` Specify the OAuth 2.0 Client Credentials by adding the following property in `conf/alluxio-site.properties`: (Please note that for URL Endpoint, use the V1 token endpoint) ```properties fs.azure.account.oauth2.client.endpoint=<OAUTH_ENDPOINT> fs.azure.account.oauth2.client.id=<CLIENT_ID> fs.azure.account.oauth2.client.secret=<CLIENT_SECRET> ``` Specify the Azure Managed Identities by adding the following property in `conf/alluxio-site.properties`: ```properties fs.azure.account.oauth2.msi.endpoint=<MSI_ENDPOINT> fs.azure.account.oauth2.client.id=<CLIENT_ID> fs.azure.account.oauth2.msi.tenant=<MSI_TENANT> ``` Once you have configured Alluxio to Azure Data Lake Storage Gen2, try to see that everything works." } ]
{ "category": "Runtime", "file_name": "Azure-Data-Lake-Gen2.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> BPF datapath traffic metrics ``` -h, --help help for metrics ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List BPF datapath traffic metrics" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_metrics.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "% runc-create \"8\" runc-create - create a container runc create [option ...] container-id The create command creates an instance of a container from a bundle. The bundle is a directory with a specification file named config.json, and a root filesystem. --bundle|-b path : Path to the root of the bundle directory. Default is current directory. --console-socket path : Path to an AF_UNIX socket which will receive a file descriptor referencing the master end of the console's pseudoterminal. See . --pid-file path : Specify the file to write the initial container process' PID to. --no-pivot : Do not use pivot root to jail process inside rootfs. This should not be used except in exceptional circumstances, and may be unsafe from the security standpoint. --no-new-keyring : Do not create a new session keyring for the container. This will cause the container to inherit the calling processes session key. --preserve-fds N : Pass N additional file descriptors to the container (stdio + $LISTEN_FDS + N in total). Default is 0. runc-spec(8), runc-start(8), runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-create.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "(network-ipam)= {abbr}`IPAM (IP Address Management)` is a method used to plan, track, and manage the information associated with a computer network's IP address space. In essence, it's a way of organizing, monitoring, and manipulating the IP space in a network. Checking the IPAM information for your Incus setup can help you debug networking issues. You can see which IP addresses are used for instances, network interfaces, forwards, and load balancers and use this information to track down where traffic is lost. To display IPAM information, enter the following command: ```bash incus network list-allocations ``` By default, this command shows the IPAM information for the `default` project. You can select a different project with the `--project` flag, or specify `--all-projects` to display the information for all projects. The resulting output will look something like this: ``` ++--+-++-+ | USED BY | ADDRESS | TYPE | NAT | HARDWARE ADDRESS | ++--+-++-+ | /1.0/networks/incusbr0 | 192.0.2.0/24 | network | true | | ++--+-++-+ | /1.0/networks/incusbr0 | 2001:db8::/32 | network | true | | ++--+-++-+ | /1.0/instances/u1 | 2001:db8::1/128 | instance | true | 00:16:3e:04:f0:95 | ++--+-++-+ | /1.0/instances/u1 | 192.0.2.2/32 | instance | true | 00:16:3e:04:f0:95 | ++--+-++-+ ... ``` Each listed entry lists the IP address (in CIDR notation) of one of the following Incus entities: `network`, `network-forward`, `network-load-balancer`, and `instance`. An entry contains an IP address using the CIDR notation. It also contains an Incus resource URI, the type of the entity, whether it is in NAT mode, and the hardware address (only for the `instance` entity)." } ]
{ "category": "Runtime", "file_name": "network_ipam.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "The goal of this guide is to manage the complexity, keep a consistent code style and prevent common mistakes. New code should follow the guides below and reviewers should check if new PRs follow the rules. Always use camelcase to name variables and functions. <table> <thead><tr><th>Bad</th><th>Good</th></tr></thead> <tbody> <tr><td> ```go var command_line string ``` </td><td> ```go var commandLine string ``` </td></tr> </tbody></table> All error that not expected should be handled with error log. No error should be skipped silently. <table> <thead><tr><th>Bad</th><th>Good</th></tr></thead> <tbody> <tr><td> ```go kubeClient, _ := kubernetes.NewForConfig(cfg) ``` </td><td> ```go kubeClient, err := kubernetes.NewForConfig(cfg) if err != nil { klog.Errorf(\"init kubernetes client failed %v\", err) return err } ``` </td></tr> </tbody></table> We prefer use `if err := somefunction(); err != nil {}` to check error in one line. <table> <thead><tr><th>Bad</th><th>Good</th></tr></thead> <tbody> <tr><td> ```go err := c.initNodeRoutes() if err != nil { klog.Fatalf(\"failed to initialize node routes: %v\", err) } ``` </td><td> ```go if err := c.initNodeRoutes(); err != nil { klog.Fatalf(\"failed to initialize node routes: %v\", err) } ``` </td></tr> </tbody></table> The length of one function should not exceed 100 lines. When err occurs in the function, it should be returned to the caller not skipped silently. <table> <thead><tr><th>Bad</th><th>Good</th></tr></thead> <tbody> <tr><td> ```go func startHandle() { if err = some(); err != nil { klog.Errorf(err) } return } ``` </td><td> ```go func startHandle() error { if err = some(); err != nil { klog.Errorf(err) return err } return nil } ``` </td></tr> </tbody></table> When adding a new CRD to Kube-OVN, you should consider things below to avoid common bugs. The new feature should be disabled for performance and stability reasons. The `install.sh`, `charts` and `yamls` should install the new CRD. The `cleanup.sh` should clean the CRD and all the related resources. The `gc.go` should check the inconsistent resource and do the cleanup. The add/update/delete event can be triggered many times during the lifecycle, the handler should be reentrant." } ]
{ "category": "Runtime", "file_name": "CODE_STYLE.md", "project_name": "Kube-OVN", "subcategory": "Cloud Native Network" }
[ { "data": "title: Use JuiceFS on Alibaba Cloud sidebar_position: 7 slug: /clouds/aliyun description: Learn how to use JuiceFS on Alibaba Cloud. As shown in the figure below, JuiceFS is driven by both the database and the object storage. The files stored in JuiceFS are split into fixed-size data blocks and stored in the object store according to certain rules, while the metadata corresponding to the data is stored in the database. The metadata is stored completely independently. Retrieval and processing of files do not directly manipulate the data in the object storage. Instead, operations are performed first on the metadata in the database. Interaction with the object storage only occurs when data changes. This design can effectively reduce the cost of the object storage in terms of the number of requests. It also allows users to significantly experience the performance improvement brought by JuiceFS. This document introduces how to use JuiceFS on Alibaba Cloud. From the previous architecture description, you can know that JuiceFS needs to be used together with database and object storage. Here we directly use the Alibaba Cloud ECS cloud server, combined with cloud database and OSS object storage. When you create cloud computing resources, try to choose in the same region, so that resources can access each other through intranet and avoid using public network to incur additional traffic costs. JuiceFS has no special requirements for server hardware. Generally speaking, entry-level cloud servers can also use JuiceFS stably. Typically, you just need to choose the one that can meet your own application requirements. In particular, you do not need to buy a new server or reinstall the system to use JuiceFS. JuiceFS is not application invasive and does not cause any interference with your existing systems and programs. You can install and use JuiceFS on your running server. By default, JuiceFS takes up 1 GB of hard disk space for caching, and you can adjust the size of the cache space as needed. This cache is a data buffer layer between the client and the object storage. You can get better performance by choosing a cloud drive with better performance. In terms of operating system, JuiceFS can be installed on all operating systems provided by Alibaba Cloud ECS. The ECS specification used in this document are as follows: | Instance specification | ecs.t5-lc1m1.small | | -- | -- | | CPU | 1 core | | MEMORY | 1 GB | | Storage | 40 GB | | OS | Ubuntu Server 20.04 64-bit | | Location | Shanghai | JuiceFS stores all the metadata corresponding to the data in a separate database, which currently supports Redis, MySQL, PostgreSQL, and SQLite. Depending on the database type, the performance and reliability of metadata are different. For example, Redis runs entirely in memory. While it provides the ultimate performance, it is difficult to operate and maintain and has low reliability. SQLite is a single-file relational database with low performance and is not suitable for large-scale data storage. However, it is configuration-free and suitable for a small amount of data storage on a single" }, { "data": "If you just want to evaluate the functionality of JuiceFS, you can build the database manually on ECS. If you want to use JuiceFS in a production environment, and you don't have a professional database operation and maintenance team, the cloud database service is usually a better choice. You can also use cloud database services provided on other platforms if you wish. 
But in this case, you have to expose the database port to the public network, which may have some security risks. If you must access the database through the public network, you can enhance the security of your data by strictly limiting the IP addresses that are allowed to access the database through the whitelist feature provided by the cloud database console. On the other hand, if you cannot successfully connect to the cloud database through the public network, you can check the whitelist of the database. | Database | Redis | MySQL/PostgreSQL | SQLite | | :-: | :--: | :-: | :-: | | Performance | High | Medium | Low | | Management | High | Medium | Low | | Reliability | Low | Medium | Low | | Scenario | Massive data, distributed high-frequency reads and writes | Massive data, distributed low- and medium-frequency reads and writes | Low-frequency reads and writes in single machine for small amounts of data | This document uses , and the following pseudo address is compiled for demonstration purposes only: | Redis version | 5.0 Community Edition | |-|-| | Instance specification | 256M Standard master-replica instances | | Connection address | `herald-sh-abc.redis.rds.aliyuncs.com` | | Available zone | Shanghai | JuiceFS stores all data in object storage, which supports almost all object storage services. However, to get the best performance, when using Alibaba Cloud ECS, OSS object storage is usually the optimal choice. However, you must choose ECS and OSS buckets in the same region so that they can be accessed through intranet. This has low latency and does not require additional traffic costs. You can also use object storage services provided by other cloud platforms if you wish, but this is not recommended. This is because accessing object storage from other cloud platforms through ECS needs the public network, and object storage will incur traffic costs. In addition, the access latency will be higher compared to this, which may affect the performance of JuiceFS. Alibaba Cloud OSS has different storage levels. Since JuiceFS needs to interact with object storage frequently, it is recommended to use standard tier. You can use it with OSS resource pack to reduce the cost of using object storage. Alibaba Cloud OSS needs to be accessed through an API. You need to prepare an access key pair, including an AccessKey ID and an AccessKey secret. to see how to obtain the access key pair. Security advisory: Explicit use of the API access secret key may lead to key compromise. It is recommended to assign a to the cloud" }, { "data": "Once an ECS is granted access to the OSS, the API access key is no longer required to access the OSS. We are currently using Ubuntu Server 20.04 64-bit, so you can download the latest version of the client by running the following command: ```shell curl -sSL https://d.juicefs.com/install | sh - ``` Alternatively, you can choose another version by visiting the page. Execute the command, and you will see the help message returned by JuiceFS. This means that the client installation was successful. ```shell $ juicefs NAME: juicefs - A POSIX file system built on Redis and object storage. USAGE: juicefs [global options] command [command options] [arguments...] 
VERSION: 0.15.2 (2021-07-07T05:51:36Z 4c16847) COMMANDS: format format a volume mount mount a volume umount unmount a volume gateway S3-compatible gateway sync sync between two storage rmr remove directories recursively info show internal information for paths or inodes bench run benchmark to read/write/stat big/small files gc collect any leaked objects fsck Check consistency of file system profile analyze access log status show status of JuiceFS warmup build cache for target directories/files dump dump metadata into a JSON file load load metadata from a previously dumped JSON file help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --verbose, --debug, -v enable debug log (default: false) --quiet, -q only warning and errors (default: false) --trace enable trace log (default: false) --no-agent disable pprof (:6060) agent (default: false) --help, -h show help (default: false) --version, -V print only the version (default: false) COPYRIGHT: Apache License 2.0 ``` JuiceFS has good cross-platform compatibility and supports Linux, Windows, and macOS. This document focuses on installing and using JuiceFS on Linux. For installation instructions on other systems, . Once the JuiceFS client is installed, you can create the JuiceFS storage using the Redis database and OSS object storage that you prepared earlier. Technically speaking, this step should be called \"Format a volume.\" However, given that many users may not understand or care about the standard file system terminology, we will refer to the process simply as \"Create JuiceFS storage.\" The following command creates a storage named `mystor`, which is a file system, using the `format` subcommand provided by the JuiceFS client: ```shell $ juicefs format \\ --storage oss \\ --bucket https://<your-bucket-name> \\ --access-key <your-access-key-id> \\ --secret-key <your-access-key-secret> \\ redis://:<your-redis-password>@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 \\ mystor ``` Option description: `--storage`: Specifies the type of object storage. to view the object storage services supported by JuiceFS. `--bucket`: Bucket domain name of the object storage. When using OSS, just fill in the bucket name. There is no need to fill in the full domain name. JuiceFS will automatically identify and fill in the complete address. `--access-key` and `--secret-key`: The secret key pair to access the object storage API. for instructions on obtaining these keys. Redis 6.0 authentication requires username and password parameters in the format of `redis://username:password@redis-server-url:6379/1`. Currently, Alibaba Cloud Redis only provides Reids 4.0 and 5.0 versions, which require only a password for authentication. When setting the Redis server address, leave the username empty, like this:" }, { "data": "When you are using the RAM role to bind to the ECS, you can create JuiceFS storage by specifying `--storage` and `--bucket` without providing the API access key. The command can be rewritten as follows: ```shell $ juicefs format \\ --storage oss \\ --bucket https://mytest.oss-cn-shanghai.aliyuncs.com \\ redis://:<your-redis-password>@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 \\ mystor ``` A successful creation of the file system will yield output similar to the following: ```shell 2021/07/13 16:37:14.264445 juicefs[22290] <INFO>: Meta address: redis://@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 2021/07/13 16:37:14.277632 juicefs[22290] <WARNING>: maxmemory_policy is \"volatile-lru\", please set it to 'noeviction'. 
2021/07/13 16:37:14.281432 juicefs[22290] <INFO>: Ping redis: 3.609453ms 2021/07/13 16:37:14.527879 juicefs[22290] <INFO>: Data uses oss://mytest/mystor/ 2021/07/13 16:37:14.593450 juicefs[22290] <INFO>: Volume is formatted as {Name:mystor UUID:4ad0bb86-6ef5-4861-9ce2-a16ac5dea81b Storage:oss Bucket:https://mytest340 AccessKey:LTAI4G4v6ioGzQXy56m3XDkG SecretKey:removed BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:} ``` When the file system is created, the information related to the object storage is stored in the database. Therefore, you do not need to enter information such as the bucket domain and secret key when mounting. Use the `mount` subcommand to mount the file system to the `/mnt/jfs` directory. ```shell sudo juicefs mount -d redis://:<your-redis-password>@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 /mnt/jfs ``` Note: When mounting the file system, only the Redis database address is required; the file system name is not necessary. The default cache path is `/var/jfsCache`. Make sure the current user has sufficient read/write permissions. Output similar to the following means that the file system was mounted successfully: ```shell 2021/07/13 16:40:37.088847 juicefs[22307] <INFO>: Meta address: redis://@herald-sh-abc.redis.rds.aliyuncs.com/1 2021/07/13 16:40:37.101279 juicefs[22307] <WARNING>: maxmemory_policy is \"volatile-lru\", please set it to 'noeviction'. 2021/07/13 16:40:37.104870 juicefs[22307] <INFO>: Ping redis: 3.408807ms 2021/07/13 16:40:37.384977 juicefs[22307] <INFO>: Data use oss://mytest/mystor/ 2021/07/13 16:40:37.387412 juicefs[22307] <INFO>: Disk cache (/var/jfsCache/4ad0bb86-6ef5-4861-9ce2-a16ac5dea81b/): capacity (1024 MB), free ratio (10%), max pending pages (15) .2021/07/13 16:40:38.410742 juicefs[22307] <INFO>: OK, mystor is ready at /mnt/jfs ``` You can use the `df` command to see how the file system is mounted: ```shell $ df -Th File system type capacity used usable used% mount point JuiceFS:mystor fuse.juicefs 1.0P 64K 1.0P 1% /mnt/jfs ``` After the file system is successfully mounted, you can store data in the `/mnt/jfs` directory as if you were using a local hard drive. Multi-host sharing: JuiceFS storage supports being mounted by multiple cloud servers at the same time. You can install the JuiceFS client on other could servers and then use the `redis://:<your-redis-password>@herald-sh-abc.redis.rds.aliyuncs. com:6379/1` database address to mount the file system on each host. Use the `status` subcommand of the JuiceFS client to view basic information and connection status of a file system. ```shell $ juicefs status redis://:<your-redis-password>@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 2021/07/13 16:56:17.143503 juicefs[22415] <INFO>: Meta address: redis://@herald-sh-abc.redis.rds.aliyuncs.com:6379/1 2021/07/13 16:56:17.157972 juicefs[22415] <WARNING>: maxmemory_policy is \"volatile-lru\", please set it to 'noeviction'. 
2021/07/13 16:56:17.161533 juicefs[22415] <INFO>: Ping redis: 3.392906ms { \"Setting\": { \"Name\": \"mystor\", \"UUID\": \"4ad0bb86-6ef5-4861-9ce2-a16ac5dea81b\", \"Storage\": \"oss\", \"Bucket\": \"https://mytest\", \"AccessKey\": \"<your-access-key-id>\", \"BlockSize\": 4096, \"Compression\": \"none\", \"Shards\": 0, \"Partitions\": 0, \"Capacity\": 0, \"Inodes\": 0 }, \"Sessions\": [ { \"Sid\": 3, \"Heartbeat\": \"2021-07-13T16:55:38+08:00\", \"Version\": \"0.15.2 (2021-07-07T05:51:36Z 4c16847)\", \"Hostname\": \"demo-test-sh\", \"MountPoint\": \"/mnt/jfs\", \"ProcessID\": 22330 } ] } ``` You can unmount the file system using the `umount` command provided by the JuiceFS client, for example: ```shell sudo juicefs umount /mnt/jfs ``` Note: Forcelly unmounting a file system in use may result in data corruption or loss. Therefore, proceed with caution. For details on auto-mounting JuiceFS at boot time, see ." } ]
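As a rough sketch of what boot-time mounting can look like (the mount-helper path, cache size and options here are illustrative assumptions; the linked document above is the authoritative reference), the client can be registered as a mount helper and referenced from `/etc/fstab`:

```shell
sudo ln -s /usr/local/bin/juicefs /sbin/mount.juicefs

```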
{ "category": "Runtime", "file_name": "aliyun.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "(exp-clustering)= To spread the total workload over several servers, Incus can be run in clustering mode. In this scenario, any number of Incus servers share the same distributed database that holds the configuration for the cluster members and their instances. The Incus cluster can be managed uniformly using the client or the REST API. (clustering-members)= An Incus cluster consists of one bootstrap server and at least two further cluster members. It stores its state in a , which is a database replicated using the Raft algorithm. While you could create a cluster with only two members, it is strongly recommended that the number of cluster members be at least three. With this setup, the cluster can survive the loss of at least one member and still be able to establish quorum for its distributed state. When you create the cluster, the Cowsql database runs on only the bootstrap server until a third member joins the cluster. Then both the second and the third server receive a replica of the database. See {ref}`cluster-form` for more information. (clustering-member-roles)= In a cluster with three members, all members replicate the distributed database that stores the state of the cluster. If the cluster has more members, only some of them replicate the database. The remaining members have access to the database, but don't replicate it. At each time, there is an elected cluster leader that monitors the health of the other members. Each member that replicates the database has either the role of a voter or of a stand-by. If the cluster leader goes offline, one of the voters is elected as the new leader. If a voter member goes offline, a stand-by member is automatically promoted to voter. The database (and hence the cluster) remains available as long as a majority of voters is online. The following roles can be assigned to Incus cluster members. Automatic roles are assigned by Incus itself and cannot be modified by the user. | Role | Automatic | Description | | : | :-- | :- | | `database` | yes | Voting member of the distributed database | | `database-leader` | yes | Current leader of the distributed database | | `database-standby` | yes | Stand-by (non-voting) member of the distributed database | | `event-hub` | no | Exchange point (hub) for the internal Incus events (requires at least two) | | `ovn-chassis` | no | Uplink gateway candidate for OVN networks | The default number of voter members ({config:option}`server-cluster:cluster.max_voters`) is three. The default number of stand-by members ({config:option}`server-cluster:cluster.max_standby`) is two. With this configuration, your cluster will remain operational as long as you switch off at most one voting member at a time. See {ref}`cluster-manage` for more" }, { "data": "(clustering-offline-members)= If a cluster member is down for more than the configured offline threshold, its status is marked as offline. In this case, no operations are possible on this member, and neither are operations that require a state change across all members. As soon as the offline member comes back online, operations are available again. If the member that goes offline is the leader itself, the other members will elect a new leader. If you can't or don't want to bring the server back online, you can . You can tweak the amount of seconds after which a non-responding member is considered offline by setting the {config:option}`server-cluster:cluster.offline_threshold` configuration. The default value is 20 seconds. The minimum value is 10 seconds. 
To automatically {ref}`evacuate <cluster-evacuate>` instances from an offline member, set the {config:option}`server-cluster:cluster.healing_threshold` configuration to a non-zero value. See {ref}`cluster-recover` for more information. You can use failure domains to indicate which cluster members should be given preference when assigning roles to a cluster member that has gone offline. For example, if a cluster member that currently has the database role gets shut down, Incus tries to assign its database role to another cluster member in the same failure domain, if one is available. To update the failure domain of a cluster member, use the command and change the `failure_domain` property from `default` to another string. (clustering-member-config)= Incus cluster members are generally assumed to be identical systems. This means that all Incus servers joining a cluster must have an identical configuration to the bootstrap server, in terms of storage pools and networks. To accommodate things like slightly different disk ordering or network interface naming, there is an exception for some configuration options related to storage and networks, which are member-specific. When such settings are present in a cluster, any server that is being added must provide a value for them. Most often, this is done through the interactive `incus admin init` command, which asks the user for the value for a number of configuration keys related to storage or networks. Those settings typically include: The source device and size for a storage pool The name for a ZFS zpool, LVM thin pool or LVM volume group External interfaces and BGP next-hop for a bridged network The name of the parent network device for managed `physical` or `macvlan` networks See {ref}`cluster-config-storage` and {ref}`cluster-config-networks` for more information. If you want to look up the questions ahead of time (which can be useful for scripting), query the `/1.0/cluster` API endpoint. This can be done through `incus query /1.0/cluster` or through other API clients. By default, Incus replicates images on as many cluster members as there are database members. This typically means up to three copies within the" }, { "data": "You can increase that number to improve fault tolerance and the likelihood of the image being locally available. To do so, set the {config:option}`server-cluster:cluster.imagesminimalreplica` configuration. The special value of `-1` can be used to have the image copied to all cluster members. (cluster-groups)= In an Incus cluster, you can add members to cluster groups. You can use these cluster groups to launch instances on a cluster member that belongs to a subset of all available members. For example, you could create a cluster group for all members that have a GPU and then launch all instances that require a GPU on this cluster group. By default, all cluster members belong to the `default` group. See {ref}`howto-cluster-groups` and {ref}`cluster-target-instance` for more information. (clustering-instance-placement)= In a cluster setup, each instance lives on one of the cluster members. When you launch an instance, you can target it to a specific cluster member, to a cluster group or have Incus automatically assign it to a cluster member. By default, the automatic assignment picks the cluster member that has the lowest number of instances. If several members have the same amount of instances, one of the members is chosen at random. 
However, you can control this behavior with the {config:option}`cluster-cluster:scheduler.instance` configuration option: If `scheduler.instance` is set to `all` for a cluster member, this cluster member is selected for an instance if: The instance is created without `--target` and the cluster member has the lowest number of instances. The instance is targeted to live on this cluster member. The instance is targeted to live on a member of a cluster group that the cluster member is a part of, and the cluster member has the lowest number of instances compared to the other members of the cluster group. If `scheduler.instance` is set to `manual` for a cluster member, this cluster member is selected for an instance if: The instance is targeted to live on this cluster member. If `scheduler.instance` is set to `group` for a cluster member, this cluster member is selected for an instance if: The instance is targeted to live on this cluster member. The instance is targeted to live on a member of a cluster group that the cluster member is a part of, and the cluster member has the lowest number of instances compared to the other members of the cluster group. (clustering-instance-placement-scriptlet)= Incus supports using custom logic to control automatic instance placement by using an embedded script (scriptlet). This method provides more flexibility than the built-in instance placement functionality. The instance placement scriptlet must be written in the (which is a subset of Python). The scriptlet is invoked each time Incus needs to know where to place an instance. The scriptlet receives information about the instance that is being placed and the candidate cluster members that could host the" }, { "data": "It is also possible for the scriptlet to request information about each candidate cluster member's state and the hardware resources available. An instance placement scriptlet must implement the `instance_placement` function with the following signature: `instanceplacement(request, candidatemembers)`: `request` is an object that contains an expanded representation of . This request includes `project` and `reason` fields. The `reason` can be `new`, `evacuation` or `relocation`. `candidate_members` is a `list` of cluster member objects representing entries. For example: ```python def instanceplacement(request, candidatemembers): log_info(\"instance placement started: \", request) if request.name == \"foo\": log_error(\"Invalid name supplied: \", request.name) fail(\"Invalid name\") # Exit with an error to reject instance placement. settarget(candidatemembers[0].server_name) return # Return empty to allow instance placement to proceed. ``` The scriptlet must be applied to Incus by storing it in the `instances.placement.scriptlet` global configuration setting. For example, if the scriptlet is saved inside a file called `instance_placement.star`, then it can be applied to Incus with the following command: cat instance_placement.star | incus config set instances.placement.scriptlet=- To see the current scriptlet applied to Incus, use the `incus config get instances.placement.scriptlet` command. The following functions are available to the scriptlet (in addition to those provided by Starlark): `log_info(*messages)`: Add a log entry to Incus' log at `info` level. `messages` is one or more message arguments. `log_warn(*messages)`: Add a log entry to Incus' log at `warn` level. `messages` is one or more message arguments. `log_error(*messages)`: Add a log entry to Incus' log at `error` level. 
`messages` is one or more message arguments. `settarget(membername)`: Set the cluster member where the instance should be created. `member_name` is the name of the cluster member the instance should be created on. If this function is not called, then Incus will use its built-in instance placement logic. `getclustermemberresources(membername)`: Get information about resources on the cluster member. Returns an object with the resource information in the form of . `member_name` is the name of the cluster member to get the resource information for. `getclustermemberstate(membername)`: Get the cluster member's state. Returns an object with the cluster member's state in the form of . `member_name` is the name of the cluster member to get the state for. `getinstanceresources()`: Get information about the resources the instance will require. Returns an object with the resource information in the form of . `get_instances(location, project)`: Get a list of instances based on project and/or location filters. Returns the list of instances in the form of . `getclustermembers(group)`: Get a list of cluster members based on the cluster group. Returns the list of cluster members in the form of . `get_project(name)`: Get a project object based on the project name. Returns a project object in the form of . ```{note} Field names in the object types are equivalent to the JSON field names in the associated Go types. ```" } ]
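As a usage sketch tying the helpers above together (it relies only on the documented `request.reason`, `server_name` and `set_target` behavior; everything else is illustrative), a trivial scriptlet can be written to a file and applied in one go:

```
cat > instance_placement.star << 'EOF'
def instance_placement(request, candidate_members):
    # Log why placement was requested (new, evacuation or relocation)
    log_info("placement reason: ", request.reason)
    # Pin the instance to the first candidate member and stop
    set_target(candidate_members[0].server_name)
    return
EOF

cat instance_placement.star | incus config set instances.placement.scriptlet=-
incus config get instances.placement.scriptlet
```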
{ "category": "Runtime", "file_name": "clustering.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Inspect the hive ``` cilium-agent hive [flags] ``` ``` --agent-liveness-update-interval duration Interval at which the agent updates liveness time for the datapath (default 1s) --api-rate-limit stringToString API rate limiting configuration (example: --api-rate-limit endpoint-create=rate-limit:10/m,rate-burst:2) (default []) --bpf-node-map-max uint32 Sets size of node bpf map which will be the max number of unique Node IPs in the cluster (default 16384) --certificates-directory string Root directory to find certificates specified in L7 TLS policy enforcement (default \"/var/run/cilium/certs\") --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-ip-identities-sync-timeout duration Timeout waiting for the initial synchronization of IPs and identities from remote clusters before local endpoints regeneration (default 1m0s) --cni-chaining-mode string Enable CNI chaining with the specified plugin (default \"none\") --cni-chaining-target string CNI network name into which to insert the Cilium chained configuration. Use '*' to select any network. --cni-exclusive Whether to remove other CNI configurations --cni-external-routing Whether the chained CNI plugin handles routing on the node --cni-log-file string Path where the CNI plugin should write logs (default \"/var/run/cilium/cilium-cni.log\") --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --devices strings List of devices facing cluster/external network (used for BPF NodePort, BPF masquerading and host firewall); supports '+' as wildcard in device name, e.g. 'eth+' --disable-envoy-version-check Do not perform Envoy version check --disable-iptables-feeder-rules strings Chains to ignore when installing feeder rules. --egress-gateway-policy-map-max int Maximum number of entries in egress gateway policy map (default 16384) --egress-gateway-reconciliation-trigger-interval duration Time between triggers of egress gateway state reconciliations (default 1s) --enable-bandwidth-manager Enable BPF bandwidth manager --enable-bbr Enable BBR for the bandwidth manager --enable-cilium-api-server-access strings List of cilium API APIs which are administratively enabled. Supports ''. (default []) --enable-cilium-health-api-server-access strings List of cilium health API APIs which are administratively enabled. 
Supports" }, { "data": "(default []) --enable-gateway-api Enables Envoy secret sync for Gateway API related TLS secrets --enable-ingress-controller Enables Envoy secret sync for Ingress controller related TLS secrets --enable-ipv4-big-tcp Enable IPv4 BIG TCP option which increases device's maximum GRO/GSO limits for IPv4 --enable-ipv6-big-tcp Enable IPv6 BIG TCP option which increases device's maximum GRO/GSO limits for IPv6 --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-l2-pod-announcements Enable announcing Pod IPs with Gratuitous ARP --enable-monitor Enable the monitor unix domain socket server (default true) --enable-service-topology Enable support for service topology aware hints --endpoint-bpf-prog-watchdog-interval duration Interval to trigger endpoint BPF programs load check watchdog (default 30s) --envoy-base-id uint Envoy base ID --envoy-config-retry-interval duration Interval in which an attempt is made to reconcile failed EnvoyConfigs. If the duration is zero, the retry is deactivated. (default 15s) --envoy-config-timeout duration Timeout that determines how long to wait for Envoy to N/ACK CiliumEnvoyConfig resources (default 2m0s) --envoy-keep-cap-netbindservice Keep capability NETBINDSERVICE for Envoy process --envoy-log string Path to a separate Envoy log file, if any --envoy-secrets-namespace string EnvoySecretsNamespace is the namespace having secrets used by CEC --gateway-api-secrets-namespace string GatewayAPISecretsNamespace is the namespace having tls secrets used by CEC, originating from Gateway API --gops-port uint16 Port for gops server to listen on (default 9890) -h, --help help for hive --http-idle-timeout uint Time after which a non-gRPC HTTP stream is considered failed unless traffic in the stream has been processed (in seconds); defaults to 0 (unlimited) --http-max-grpc-timeout uint Time after which a forwarded gRPC request is considered failed unless completed (in seconds). A \"grpc-timeout\" header may override this with a shorter value; defaults to 0 (unlimited) --http-normalize-path Use Envoy HTTP path normalization options, which currently includes RFC 3986 path normalization, Envoy merge slashes option, and unescaping and redirecting for paths that contain escaped slashes. These are necessary to keep path based access control functional, and should not interfere with normal operation. Set this to false only with caution. 
(default true) --http-request-timeout uint Time after which a forwarded HTTP request is considered failed unless completed (in seconds); Use 0 for unlimited (default 3600) --http-retry-count uint Number of retries performed after a forwarded request attempt fails (default 3) --http-retry-timeout uint Time after which a forwarded but uncompleted request is retried (connection failures are retried immediately); defaults to 0 (never) --ingress-secrets-namespace string IngressSecretsNamespace is the namespace having tls secrets used by CEC, originating from Ingress controller --iptables-lock-timeout duration Time to pass to each iptables invocation to wait for xtables lock acquisition (default 5s) --iptables-random-fully Set iptables flag random-fully on masquerading rules --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --l2-pod-announcements-interface string Interface used for sending gratuitous arp messages --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255," }, { "data": "(default 255) --mesh-auth-enabled Enable authentication processing & garbage collection (beta) (default true) --mesh-auth-gc-interval duration Interval in which auth entries are attempted to be garbage collected (default 5m0s) --mesh-auth-mutual-connect-timeout duration Timeout for connecting to the remote node TCP socket (default 5s) --mesh-auth-mutual-listener-port int Port on which the Cilium Agent will perform mutual authentication handshakes between other Agents --mesh-auth-queue-size int Queue size for the auth manager (default 1024) --mesh-auth-rotated-identities-queue-size int The size of the queue for signaling rotated identities. (default 1024) --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-admin-socket string The path for the SPIRE admin agent Unix socket. --metrics strings Metrics that should be enabled or disabled from the default metric list. (+metricfoo to enable metricfoo, -metricbar to disable metricbar) --monitor-queue-size int Size of the event queue when reading monitor events --multicast-enabled Enables multicast in Cilium --nodeport-addresses strings A whitelist of CIDRs to limit which IPs are used for NodePort. If not set, primary IPv4 and/or IPv6 address of each native device is used. --pprof Enable serving pprof debugging API --pprof-address string Address that pprof listens on (default \"localhost\") --pprof-port uint16 Port that pprof listens on (default 6060) --prepend-iptables-chains Prepend custom iptables chains instead of appending (default true) --procfs string Path to the host's proc filesystem mount (default \"/proc\") --prometheus-serve-addr string IP:Port on which to serve prometheus metrics (pass \":Port\" to bind on all interfaces, \"\" is off) --proxy-admin-port int Port to serve Envoy admin interface on. 
--proxy-connect-timeout uint Time after which a TCP connect attempt is considered failed unless completed (in seconds) (default 2) --proxy-gid uint Group ID for proxy control plane sockets. (default 1337) --proxy-idle-timeout-seconds int Set Envoy upstream HTTP idle connection timeout seconds. Does not apply to connections with pending requests. Default 60s (default 60) --proxy-max-connection-duration-seconds int Set Envoy HTTP option maxconnectionduration seconds. Default 0 (disable) --proxy-max-requests-per-connection int Set Envoy HTTP option maxrequestsper_connection. Default 0 (disable) --proxy-portrange-max uint16 End of port range that is used to allocate ports for L7 proxies. (default 20000) --proxy-portrange-min uint16 Start of port range that is used to allocate ports for L7 proxies. (default 10000) --proxy-prometheus-port int Port to serve Envoy metrics on. Default 0 (disabled). --proxy-xff-num-trusted-hops-egress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the egress L7 policy enforcement Envoy listeners. --proxy-xff-num-trusted-hops-ingress uint32 Number of trusted hops regarding the x-forwarded-for and related HTTP headers for the ingress L7 policy enforcement Envoy listeners. --read-cni-conf string CNI configuration file to use as a source for --write-cni-conf-when-ready. If not supplied, a suitable one will be generated. --tunnel-port uint16 Tunnel port (default 8472 for \"vxlan\" and 6081 for \"geneve\") --tunnel-protocol string Encapsulation protocol to use for the overlay (\"vxlan\" or \"geneve\") (default \"vxlan\") --use-full-tls-context If enabled, persist ca.crt keys into the Envoy config even in a terminatingTLS block on an L7 Cilium Policy. This is to enable compatibility with previously buggy behaviour. This flag is deprecated and will be removed in a future release. --write-cni-conf-when-ready string Write the CNI configuration to the specified path when agent is ready ``` - Run the cilium agent - Output the dependencies graph in graphviz dot format" } ]
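As a usage sketch (assuming a default Kubernetes installation with the agent DaemonSet named `cilium` in `kube-system`; the `dot-graph` subcommand name is an assumption based on the Graphviz cross-reference above):

```
kubectl -n kube-system exec ds/cilium -- cilium-agent hive

kubectl -n kube-system exec ds/cilium -- cilium-agent hive dot-graph > hive.dot
```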
{ "category": "Runtime", "file_name": "cilium-agent_hive.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This guide is useful if you intend to contribute on containerd. Thanks for your effort. Every contribution is very appreciated. This doc includes: To get started, create a codespace for this repository by clicking this ](https://github.com/codespaces/new?hidereposelect=true&ref=main&repo=46089560) A codespace will open in a web-based version of Visual Studio Code. The is fully configured with software needed for this project and the containerd built. If you use a codespace, then you can directly skip to the section of this document. Note: Dev containers is an open spec which is supported by and . To build the `containerd` daemon, and the `ctr` simple test client, the following build system dependencies are required: Go 1.22.x or above Protoc 3.x compiler and headers (download at the ) Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency. Note: On macOS, you need a third party runtime to run containers on containerd First you need to setup your Go development environment. You can follow this guideline and at the end you have `go` command in your `PATH`. You need `git` to checkout the source code: ```sh git clone https://github.com/containerd/containerd ``` For proper results, install the `protoc` release into `/usr/local` on your build system. When generating source code from `.proto` files, containerd may rely on some external protocol buffer files. These external dependencies should be added to the `/usr/local/include` directory. To install the appropriate version of `protoc` and download any necessary external protocol buffer files on a Linux host, run the install script located at `script/setup/install-protobuf`. To enable optional snapshotter, you should have the headers from the Linux kernel 4.12 or later. The dependency on the kernel headers only affects users building containerd from source. Users on older kernels may opt to not compile the btrfs support (see `BUILDTAGS=no_btrfs` below), or to provide headers from a newer kernel. Note The dependency on the Linux kernel headers 4.12 was introduced in containerd 1.7.0-beta.4. containerd 1.6 has different set of dependencies for enabling btrfs. containerd 1.6 users should refer to https://github.com/containerd/containerd/blob/release/1.6/BUILDING.md#build-the-development-environment At this point you are ready to build `containerd` yourself! Runc is the default container runtime used by `containerd` and is required to run containerd. While it is okay to download a `runc` binary and install that on the system, sometimes it is necessary to build runc directly when working with container runtime development. Make sure to follow the guidelines for versioning in for the best results. Note: Runc only supports Linux `containerd` uses `make` to create a repeatable build flow. It means that you can run: ```sh cd containerd make ``` This is going to build all the project binaries in the `./bin/` directory. You can move them in your global path, `/usr/local/bin` with: ```sh sudo make install ``` The install prefix can be changed by passing the `PREFIX` variable (defaults to `/usr/local`). Note: if you set one of these vars, set them to the same values on all make stages (build as well as install). 
If you want to prepend an additional prefix on actual installation" }, { "data": "packaging or chroot install), you can pass it via `DESTDIR` variable: ```sh sudo make install DESTDIR=/tmp/install-x973234/ ``` The above command installs the `containerd` binary to `/tmp/install-x973234/usr/local/bin/containerd` The current `DESTDIR` convention is supported since containerd v1.6. Older releases was using `DESTDIR` for a different purpose that is similar to `PREFIX`. When making any changes to the gRPC API, you can use the installed `protoc` compiler to regenerate the API generated code packages with: ```sh make generate ``` Note: Several build tags are currently available: `no_cri`: A build tag disables building Kubernetes support into containerd. See for build tags of CRI plugin. snapshotters (alphabetical order) `no_aufs`: A build tag disables building the aufs snapshot driver. (Ignored since containerd v2.0, as the aufs snapshot driver is no longer supported) `no_btrfs`: A build tag disables building the Btrfs snapshot driver. `no_devmapper`: A build tag disables building the device mapper snapshot driver. `no_zfs`: A build tag disables building the ZFS snapshot driver. platform `no_systemd`: disables any systemd specific code For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the binaries Makefile target will disable the btrfs driver within the containerd Go build. Vendoring of external imports uses the . You need to use `go mod` command to modify the dependencies. After modifition, you should run `go mod tidy` and `go mod vendor` to ensure the `go.mod`, `go.sum` files and `vendor` directory are up to date. Changes to these files should become a single commit for a PR which relies on vendored updates. Please refer to for the currently supported version of `runc` that is used by containerd. Note: On macOS, the containerd daemon can be built and run natively. However, as stated above, runc only supports linux. You can build static binaries by providing a few variables to `make`: ```sh make STATIC=1 ``` Note: static build is discouraged static containerd binary does not support loading shared object plugins (`*.so`) static build binaries are not position-independent The following instructions assume you are at the parent directory of containerd source directory. You can build `containerd` via a Linux-based Docker container. You can build an image from this `Dockerfile`: ```dockerfile FROM golang ``` Let's suppose that you built an image called `containerd/build`. From the containerd source root directory you can run the following command: ```sh docker run -it \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts `containerd` repository You are now ready to : ```sh make && make install ``` To have complete core container runtime, you will need both `containerd` and `runc`. It is possible to build both of these via Docker container. You can use `git` to checkout `runc`: ```sh git clone https://github.com/opencontainers/runc ``` We can build an image from this `Dockerfile`: ```sh FROM golang RUN apt-get update && \\ apt-get install -y libseccomp-dev ``` In our Docker container we will build `runc` build, which includes , , and support. Seccomp support in runc requires `libseccomp-dev` as a dependency (AppArmor and SELinux support do not require external libraries at build time). 
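Assuming the Dockerfile above is saved in an otherwise empty directory, the image referenced in the next step can be built with a plain `docker build`, using the `containerd/build` tag that the rest of this guide assumes:

```sh
docker build -t containerd/build .
```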
Refer to in the docs directory to for details about building runc, and to learn about supported versions of `runc` as used by containerd. Let's suppose you build an image called `containerd/build` from the above Dockerfile. You can run the following command: ```sh docker run -it --privileged \\ -v /var/lib/containerd \\ -v ${PWD}/runc:/go/src/github.com/opencontainers/runc \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts both `runc` and `containerd` repositories in our Docker" }, { "data": "From within our Docker container let's build `containerd`: ```sh cd /go/src/github.com/containerd/containerd make && make install ``` These binaries can be found in the `./bin` directory in your host. `make install` will move the binaries in your `$PATH`. Next, let's build `runc`: ```sh cd /go/src/github.com/opencontainers/runc make && make install ``` For further details about building runc, refer to in the docs directory. When working with `ctr`, the simple test client we just built, don't forget to start the daemon! ```sh containerd --config config.toml ``` During the automated CI the unit tests and integration tests are run as part of the PR validation. As a developer you can run these tests locally by using any of the following `Makefile` targets: `make test`: run all non-integration tests that do not require `root` privileges `make root-test`: run all non-integration tests which require `root` `make integration`: run all tests, including integration tests and those which require `root`. `TESTFLAGSPARALLEL` can be used to control parallelism. For example, `TESTFLAGSPARALLEL=1 make integration` will lead a non-parallel execution. The default value of `TESTFLAGS_PARALLEL` is 8. `make cri-integration`: run cri integration tests To execute a specific test or set of tests you can use the `go test` capabilities without using the `Makefile` targets. The following examples show how to specify a test name and also how to use the flag directly against `go test` to run root-requiring tests. ```sh go test -v -run \"<TEST_NAME>\" . go test -v -run . -test.root ``` Example output from directly running `go test` to execute the `TestContainerList` test: ```sh sudo go test -v -run \"TestContainerList\" . -test.root INFO[0000] running tests against containerd revision=f2ae8a020a985a8d9862c9eb5ab66902c2888361 version=v1.0.0-beta.2-49-gf2ae8a0 === RUN TestContainerList PASS: TestContainerList (0.00s) PASS ok github.com/containerd/containerd 4.778s ``` Note: in order to run `sudo go` you need to either keep user PATH environment variable. ex: `sudo \"PATH=$PATH\" env go test <args>` or use `go test -exec` ex: `go test -exec sudo -v -run \"TestTarWithXattr\" ./archive/ -test.root` In addition to `go test`-based testing executed via the `Makefile` targets, the `containerd-stress` tool is available and built with the `all` or `binaries` targets and installed during `make install`. With this tool you can stress a running containerd daemon for a specified period of time, selecting a concurrency level to generate stress against the daemon. The following command is an example of having five workers running for two hours against a default containerd gRPC socket address: ```sh containerd-stress -c 5 -d 120m ``` For more information on this tool's options please run `containerd-stress --help`. 
is an external tool which can be used to drive load against a container runtime, specifying a particular set of lifecycle operations to run with a specified amount of concurrency. Bucketbench is more focused on generating performance details than simply inducing load against containerd. Bucketbench differs from the `containerd-stress` tool in a few ways: Bucketbench has support for testing the Docker engine, the `runc` binary, and containerd 0.2.x (via `ctr`) and 1.0 (via the client library) branches. Bucketbench is driven via configuration file that allows specifying a list of lifecycle operations to execute. This can be used to generate detailed statistics per-command (e.g. start, stop, pause, delete). Bucketbench generates detailed reports and timing data at the end of the configured test run. More details on how to install and run `bucketbench` are available at the ." } ]
{ "category": "Runtime", "file_name": "BUILDING.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark get restores\" layout: docs Get restores Get restores ``` ark get restores [flags] ``` ``` -h, --help help for restores --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Get ark resources" } ]
{ "category": "Runtime", "file_name": "ark_get_restores.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Longhorn architecture includes engine and replica instance manager pods on each node. After the upgrade, Longhorn adds an additional engine and replica instance manager pods. When the cluster is set with a default request of 12% guaranteed CPU, all instance manager pods will occupy 12% * 4 CPUs per node. Nevertheless, this caused high base resource requirements and is likely unnecessary. ``` NAME STATE E-CPU(CORES) E-MEM(BYTES) R-CPU(CORES) R-MEM(BYTES) CREATED-WORKLOADS DURATION(MINUTES) AGE demo-0 (no-IO) Complete 8.88m 24Mi 1.55m 43Mi 5 10 22h demo-0-bs-512b-5g Complete 109.70m 66Mi 36.46m 54Mi 5 10 16h demo-0-bs-1m-10g Complete 113.16m 65Mi 36.63m 56Mi 5 10 14h demo-0-bs-5m-10g Complete 114.17m 64Mi 31.37m 54Mi 5 10 42m ``` Aiming to simplify the architecture and free up some resource requests, this document proposes to consolidate the engine and replica instance managers into a single pod. This consolidation will not affect any data plane operations or volume migration. As the engine process is the primary consumer of CPU resources, merging the instance managers will result in a 50% reduction in CPU requests for instance managers. This is because there will only be one instance manager pod for both process types. Phase 1: https://github.com/longhorn/longhorn/issues/5208 Phase 2: https://github.com/longhorn/longhorn/issues/5842 https://github.com/longhorn/longhorn/issues/5844 Having single instance manager pods to run replica and engine processes. After the Longhorn upgrade, the previous engine instance manager should continue to handle data plane operations for attached volumes until they are detached. And the replica instance managers should continue servicing data plane operations until the volume engine is upgraded or volume is detached. Automatically clean up any engine/replica instance managers when all instances (process) get removed. Online/offline upgrade volume engine should be functional. The replicas will automatically migrate to use the new `aio` (all-in-one) type instance managers, and the `engine` type instance manager will continue to serve until the first volume detachment. The Pod Disruption Budget (PDB) handling for cluster auto-scaler and node drain should work as expected. `None` To ensure uninterrupted upgrades, this enhancement will be implemented in two phases. The existing `engine`/`replica` instance manager may coexist with the consolidated instance manager during the transition. Phase 1: Introduce a new `aio` instance manager type. The `engine` and `replica` instance manager types will be deprecated and continue to serve for the upgraded volumes until the first volume detachment. Introduce new `Guaranteed Instance Manager CPU` setting, `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings will be deprecated and continues to serve for the upgraded volumes until the first volume detachment. Phase 2: Remove all instance manager types. Remove the `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings. For freshly installed Longhorn, the user will see `aio` type instance managers. For upgraded Longhorn with all volume detached, the user will see the `engine`, and `replica` instance managers removed and replaced by `aio` type instance managers. For upgraded Longhorn with volume attached, the user will see existing `engine`, and `replica` instance managers still servicing the old attached volumes and the new `aio` type instance manager servicing new volume attachments. 
User creates and attaches a" }, { "data": "``` > kubectl -n longhorn-system get volume NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE demo-0 attached unknown 21474836480 ip-10-0-1-113 12s > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 124m instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 124m instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 124m > kubectl -n longhorn-system get lhim/instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc -o yaml apiVersion: longhorn.io/v1beta2 kind: InstanceManager metadata: creationTimestamp: \"2023-03-16T10:48:59Z\" generation: 1 labels: longhorn.io/component: instance-manager longhorn.io/instance-manager-image: imi-8d41c3a4 longhorn.io/instance-manager-type: aio longhorn.io/managed-by: longhorn-manager longhorn.io/node: ip-10-0-1-113 name: instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc namespace: longhorn-system ownerReferences: apiVersion: longhorn.io/v1beta2 blockOwnerDeletion: true kind: Node name: ip-10-0-1-113 uid: 00c0734b-f061-4b28-8071-62596274cb18 resourceVersion: \"926067\" uid: a869def6-1077-4363-8b64-6863097c1e26 spec: engineImage: \"\" image: c3y1huang/research:175-lh-im nodeID: ip-10-0-1-113 type: aio status: apiMinVersion: 1 apiVersion: 3 currentState: running instanceEngines: demo-0-e-06d4c77d: spec: name: demo-0-e-06d4c77d status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10015 portStart: 10015 resourceVersion: 0 state: running type: engine instanceReplicas: demo-0-r-ca78cab4: spec: name: demo-0-r-ca78cab4 status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10014 portStart: 10000 resourceVersion: 0 state: running type: replica ip: 10.42.0.238 ownerID: ip-10-0-1-113 proxyApiMinVersion: 1 proxyApiVersion: 4 ``` The engine and replica instances(processes) created in the `aio` type instance manager. User has a Longhorn v1.4.0 cluster and a volume in the detached state. ``` > kubectl -n longhorn-system get volume NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE demo-1 detached unknown 21474836480 12s > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-r-1278a39fa6e6d8f49eba156b81ac1f59 running replica ip-10-0-1-113 3m44s instance-manager-e-1278a39fa6e6d8f49eba156b81ac1f59 running engine ip-10-0-1-113 3m44s instance-manager-e-45ad195db7f55ed0a2dd1ea5f19c5edf running engine ip-10-0-1-105 3m41s instance-manager-r-45ad195db7f55ed0a2dd1ea5f19c5edf running replica ip-10-0-1-105 3m41s instance-manager-e-225a2c7411a666c8eab99484ab632359 running engine ip-10-0-1-102 3m42s instance-manager-r-225a2c7411a666c8eab99484ab632359 running replica ip-10-0-1-102 3m42s ``` User upgraded Longhorn to v1.5.0. ``` > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 112s instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 48s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 47s ``` Unused `engine` type instance managers removed. Unused `replica` type instance managers removed. 3 `aio` type instance managers created. User upgraded volume engine. User attaches the volume. 
``` > kubectl -n longhorn-system get volume NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE demo-1 attached healthy 21474836480 ip-10-0-1-113 4m51s > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 3m58s instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 2m54s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 2m53s > kubectl -n longhorn-system get lhim/instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc -o yaml apiVersion: longhorn.io/v1beta2 kind: InstanceManager metadata: creationTimestamp: \"2023-03-16T13:03:15Z\" generation: 1 labels: longhorn.io/component: instance-manager longhorn.io/instance-manager-image: imi-8d41c3a4 longhorn.io/instance-manager-type: aio longhorn.io/managed-by: longhorn-manager longhorn.io/node: ip-10-0-1-113 name: instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc namespace: longhorn-system ownerReferences: apiVersion: longhorn.io/v1beta2 blockOwnerDeletion: true kind: Node name: ip-10-0-1-113 uid: 12eb73cd-e9de-4c45-875d-3eff7cfb1034 resourceVersion: \"3762\" uid: c996a89a-f841-4841-b69d-4218ed8d8c6e spec: engineImage: \"\" image: c3y1huang/research:175-lh-im nodeID: ip-10-0-1-113 type: aio status: apiMinVersion: 1 apiVersion: 3 currentState: running instanceEngines: demo-1-e-b7d28fb3: spec: name: demo-1-e-b7d28fb3 status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10015 portStart: 10015 resourceVersion: 0 state: running type: engine instanceReplicas: demo-1-r-189c1bbb: spec: name: demo-1-r-189c1bbb status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10014 portStart: 10000 resourceVersion: 0 state: running type: replica ip: 10.42.0.28 ownerID: ip-10-0-1-113 proxyApiMinVersion: 1 proxyApiVersion: 4 ``` The engine and replica instances(processes) created in the `aio` type instance manager. User has a Longhorn v1.4.0 cluster and a volume in the attached" }, { "data": "``` > kubectl -n longhorn-system get volume NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE demo-2 attached healthy 21474836480 ip-10-0-1-113 35s > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-r-1278a39fa6e6d8f49eba156b81ac1f59 running replica ip-10-0-1-113 2m41s instance-manager-r-45ad195db7f55ed0a2dd1ea5f19c5edf running replica ip-10-0-1-105 119s instance-manager-r-225a2c7411a666c8eab99484ab632359 running replica ip-10-0-1-102 119s instance-manager-e-1278a39fa6e6d8f49eba156b81ac1f59 running engine ip-10-0-1-113 2m41s instance-manager-e-225a2c7411a666c8eab99484ab632359 running engine ip-10-0-1-102 119s instance-manager-e-45ad195db7f55ed0a2dd1ea5f19c5edf running engine ip-10-0-1-105 119s ``` User upgraded Longhorn to v1.5.0. ``` > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-r-1278a39fa6e6d8f49eba156b81ac1f59 running replica ip-10-0-1-113 5m24s instance-manager-r-45ad195db7f55ed0a2dd1ea5f19c5edf running replica ip-10-0-1-105 4m42s instance-manager-r-225a2c7411a666c8eab99484ab632359 running replica ip-10-0-1-102 4m42s instance-manager-e-1278a39fa6e6d8f49eba156b81ac1f59 running engine ip-10-0-1-113 5m24s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 117s instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 33s instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 32s ``` 2 unused `engine` type instance managers removed. 3 `aio` type instance managers created. User upgraded online volume engine. 
``` > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 6m53s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 8m18s instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 6m54s instance-manager-e-1278a39fa6e6d8f49eba156b81ac1f59 running engine ip-10-0-1-113 11m ``` All `replica` type instance manager migrated to `aio` type instance managers. User detached the volume. ``` > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 8m38s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 10m instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 8m39s ``` The `engine` type instance managers removed. User attached the volume. ``` > kubectl -n longhorn-system get volume NAME STATE ROBUSTNESS SCHEDULED SIZE NODE AGE demo-2 attached healthy 21474836480 ip-10-0-1-113 12m > kubectl -n longhorn-system get lhim NAME STATE TYPE NODE AGE instance-manager-7e59c9f2ef7649630344050a8d5be68e running aio ip-10-0-1-102 9m40s instance-manager-8f81ca7c3bf95bbbf656be6ac2d1b7c4 running aio ip-10-0-1-105 9m39s instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc running aio ip-10-0-1-113 11m > kubectl -n longhorn-system get lhim/instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc -o yaml apiVersion: longhorn.io/v1beta2 kind: InstanceManager metadata: creationTimestamp: \"2023-03-16T13:12:41Z\" generation: 1 labels: longhorn.io/component: instance-manager longhorn.io/instance-manager-image: imi-8d41c3a4 longhorn.io/instance-manager-type: aio longhorn.io/managed-by: longhorn-manager longhorn.io/node: ip-10-0-1-113 name: instance-manager-b34d5db1fe1e2d52bcfb308be3166cfc namespace: longhorn-system ownerReferences: apiVersion: longhorn.io/v1beta2 blockOwnerDeletion: true kind: Node name: ip-10-0-1-113 uid: 6d109c40-abe3-42ed-8e40-f76cfc33e4c2 resourceVersion: \"4339\" uid: 01556f2c-fbb4-4a15-a778-c73df518b070 spec: engineImage: \"\" image: c3y1huang/research:175-lh-im nodeID: ip-10-0-1-113 type: aio status: apiMinVersion: 1 apiVersion: 3 currentState: running instanceEngines: demo-2-e-65845267: spec: name: demo-2-e-65845267 status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10015 portStart: 10015 resourceVersion: 0 state: running type: engine instanceReplicas: demo-2-r-a2bd415f: spec: name: demo-2-r-a2bd415f status: endpoint: \"\" errorMsg: \"\" listen: \"\" portEnd: 10014 portStart: 10000 resourceVersion: 0 state: running type: replica ip: 10.42.0.31 ownerID: ip-10-0-1-113 proxyApiMinVersion: 1 proxyApiVersion: 4 ``` The engine and replica instances(processes) created in the `aio` type instance manager. Introduce new `instanceManagerCPURequest` in `Node` resource. Introduce new `instanceEngines` in InstanceManager resource. Introduce new `instanceReplicas` in InstanceManager resource. Introducing a new instance manager type to have Longhorn continue to service existing attached volumes for Longhorn v1.5.x. Introduce a new `aio` (all-in-one) instance manager type to differentiate the handling of the old `engine`/`replica` instance managers and the new consolidated instance managers. When getting InstanceManagers by instance of the attached volume, retrieve the InstanceManager from the instance manager list using the new `aio` type. New InstanceManagers will use the `instanceEngines` and `instanceReplicas` fields, replacing the `instances` field. 
For the existing InstanceManagers for the attached Volumes, the `instances` field will remain in use. Rename the `engine-manager` script to `instance-manager`. Bump up version to `4`. Replace `engine` and `replica` pod creation with spec to use for `aio` instance manager" }, { "data": "``` > kubectl -n longhorn-system get pod/instance-manager-0d96990c6881c828251c534eb31bfa85 -o yaml apiVersion: v1 kind: Pod metadata: annotations: longhorn.io/last-applied-tolerations: '[]' creationTimestamp: \"2023-03-01T08:13:03Z\" labels: longhorn.io/component: instance-manager longhorn.io/instance-manager-image: imi-a1873aa3 longhorn.io/instance-manager-type: aio longhorn.io/managed-by: longhorn-manager longhorn.io/node: ip-10-0-1-113 name: instance-manager-0d96990c6881c828251c534eb31bfa85 namespace: longhorn-system ownerReferences: apiVersion: longhorn.io/v1beta2 blockOwnerDeletion: true controller: true kind: InstanceManager name: instance-manager-0d96990c6881c828251c534eb31bfa85 uid: 51c13e4f-d0a2-445d-b98b-80cca7080c78 resourceVersion: \"12133\" uid: 81397cca-d9e9-48f6-8813-e7f2e2cd4617 spec: containers: args: instance-manager --debug daemon --listen 0.0.0.0:8500 env: name: TLS_DIR value: /tls-files/ image: c3y1huang/research:174-lh-im imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 initialDelaySeconds: 3 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 8500 timeoutSeconds: 4 name: instance-manager resources: requests: cpu: 960m securityContext: privileged: true terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: mountPath: /host mountPropagation: HostToContainer name: host mountPath: /engine-binaries/ mountPropagation: HostToContainer name: engine-binaries mountPath: /host/var/lib/longhorn/unix-domain-socket/ name: unix-domain-socket mountPath: /tls-files/ name: longhorn-grpc-tls mountPath: /var/run/secrets/kubernetes.io/serviceaccount name: kube-api-access-hkbfc readOnly: true dnsPolicy: ClusterFirst enableServiceLinks: true nodeName: ip-10-0-1-113 preemptionPolicy: PreemptLowerPriority priority: 0 restartPolicy: Never schedulerName: default-scheduler securityContext: {} serviceAccount: longhorn-service-account serviceAccountName: longhorn-service-account terminationGracePeriodSeconds: 30 tolerations: effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 300 effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 300 volumes: hostPath: path: / type: \"\" name: host hostPath: path: /var/lib/longhorn/engine-binaries/ type: \"\" name: engine-binaries hostPath: path: /var/lib/longhorn/unix-domain-socket/ type: \"\" name: unix-domain-socket name: longhorn-grpc-tls secret: defaultMode: 420 optional: true secretName: longhorn-grpc-tls name: kube-api-access-hkbfc projected: defaultMode: 420 sources: serviceAccountToken: expirationSeconds: 3607 path: token configMap: items: key: ca.crt path: ca.crt name: kube-root-ca.crt downwardAPI: items: fieldRef: apiVersion: v1 fieldPath: metadata.namespace path: namespace status: conditions: lastProbeTime: null lastTransitionTime: \"2023-03-01T08:13:03Z\" status: \"True\" type: Initialized lastProbeTime: null lastTransitionTime: \"2023-03-01T08:13:04Z\" status: \"True\" type: Ready lastProbeTime: null lastTransitionTime: \"2023-03-01T08:13:04Z\" status: \"True\" type: ContainersReady lastProbeTime: null lastTransitionTime: \"2023-03-01T08:13:03Z\" status: \"True\" type: PodScheduled containerStatuses: containerID: 
containerd://cb249b97d128e47a7f13326b76496656d407fd16fc44b5f1a37384689d0fa900 image: docker.io/c3y1huang/research:174-lh-im imageID: docker.io/c3y1huang/research@sha256:1f4e86b92b3f437596f9792cd42a1bb59d1eace4196139dc030b549340af2e68 lastState: {} name: instance-manager ready: true restartCount: 0 started: true state: running: startedAt: \"2023-03-01T08:13:03Z\" hostIP: 10.0.1.113 phase: Running podIP: 10.42.0.27 podIPs: ip: 10.42.0.27 qosClass: Burstable startTime: \"2023-03-01T08:13:03Z\" ``` Map the status of the engine/replica process to the corresponding instanceEngines/instanceReplicas fields in the InstanceManager instead of the instances field. To ensure backward compatibility, the instances field will continue to be utilized by the pre-upgrade attached volume. Ensure support for the previous version's attached volumes with the old engine/replica instance manager types. Replace the old engine/replica InstanceManagers with the aio type instance manager during replenishment. Introduce a new `Guaranteed Instance Manager CPU` setting for the new `aio` instance manager pod. The `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` will co-exist with this setting in Longhorn v1.5.x. Based on the assumption when upgrading from v1.5.x to 1.6.x, volumes should have detached at least once and migrated to `aio` type instance managers. Then the cluster should not have volume depending on `engine` and `replica` type instance managers. Therefore in this phase, remove the related types and settings. Remove the `engine`, `replica`, and `aio` instance manager types. There is no need for differentiation. Remove the `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` settings. The settings have already been replaced by the `Guaranteed Instance Manager CPU` setting in phase 1. Remove support for engine/replica InstanceManager types. Support new `aio` instance manager type and run regression test cases. The `instances` field in the instance manager custom resource will still be utilized by old instance managers of the attached volume. `None`" } ]
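The upgrade plan above assumes that, by the time a cluster moves past v1.5.x, every attached volume has been detached and reattached at least once so that only `aio` instance managers remain. A quick way to check this is sketched below, using the `longhorn.io/instance-manager-type` label visible in the InstanceManager metadata shown earlier; the namespace and label values are taken from this document, and the exact selector should be treated as an assumption:

```shell
# Show each instance manager together with its type label;
# before upgrading, only "aio" entries should remain.
kubectl -n longhorn-system get lhim -L longhorn.io/instance-manager-type

# Or list only leftover old-style managers, which should return nothing.
kubectl -n longhorn-system get lhim \
  -l 'longhorn.io/instance-manager-type in (engine,replica)'
```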
{ "category": "Runtime", "file_name": "20230303-consolidate-instance-managers.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "A non-exhaustive list of containerd adopters is provided below. _Docker/Moby engine_ - Containerd began life prior to its CNCF adoption as a lower-layer runtime manager for `runc` processes below the Docker engine. Continuing today, containerd has extremely broad production usage as a component of the stack. Note that this includes any use of the open source ; including the Balena project listed below. __ - offers containerd as the CRI runtime for v1.11 and higher versions. __ - IBM's on-premises cloud offering has containerd as a \"tech preview\" CRI runtime for the Kubernetes offered within this product for the past two releases, and plans to fully migrate to containerd in a future release. __ - offers containerd as the CRI runtime in beta for recent versions of Kubernetes. __ - uses containerd + Firecracker (noted below) as the runtime and isolation technology for containers run in the Fargate platform. Fargate is a serverless, container-native compute offering from Amazon Web Services. _Cloud Foundry_ - The for CF has been using OCI runC directly with additional code from CF managing the container image and filesystem interactions, but have recently migrated to use containerd as a replacement for the extra code they had written around runC. _Alibaba's PouchContainer_ - The Alibaba project uses containerd as its runtime for a cloud native offering that has unique isolation and image distribution capabilities. _Rancher's k3s project_ - Rancher Labs is a lightweight Kubernetes distribution; in their words: \"Easy to install, half the memory, all in a binary less than 40mb.\" k8s uses containerd as the embedded runtime for this popular lightweight Kubernetes variant. _Rancher's Rio project_ - Rancher Labs project uses containerd as the runtime for a combined Kubernetes, Istio, and container \"Cloud Native Container Distribution\" platform. _Eliot_ - The container project for IoT device container management uses containerd as the" }, { "data": "_Balena_ - Resin's container engine, based on moby/moby but for edge, embedded, and IoT use cases, uses the containerd and runc stack in the same way that the Docker engine uses containerd. _LinuxKit_ - the Moby project's for building secure, minimal Linux OS images in a container-native model uses containerd as the core runtime for system and service containers. _BuildKit_ - The Moby project's can use either runC or containerd as build execution backends for building container images. BuildKit support has also been built into the Docker engine in recent releases, making BuildKit provide the backend to the `docker build` command. _Azure acs-engine_ - Microsoft Azure's open source project has customizable deployment of Kubernetes clusters, where containerd is a selectable container runtime. At some point in the future Azure's AKS service will default to use containerd as the CRI runtime for deployed Kubernetes clusters. _Amazon Firecracker_ - The AWS has extended containerd with a new snapshotter and v2 shim to allow containerd to drive virtualized container processes via their VMM implementation. More details on their containerd integration are available in . _Kata Containers_ - The lightweight-virtualized container runtime project integrates with containerd via a custom v2 shim implementation that drives the Kata container runtime. _D2iQ Konvoy_ - D2iQ Inc product uses containerd as the container runtime for its Kubernetes distribution. 
_Inclavare Containers_ - is an innovation of container runtime with the novel approach for launching protected containers in hardware-assisted Trusted Execution Environment (TEE) technology, aka Enclave, which can prevent the untrusted entity, such as Cloud Service Provider (CSP), from accessing the sensitive and confidential assets in use. _Other Projects_ - While the above list provides a cross-section of well known uses of containerd, the simplicity and clear API layer for containerd has inspired many smaller projects around providing simple container management platforms. Several examples of building higher layer functionality on top of the containerd base have come from various containerd community participants: Michael Crosby's project, Evan Hazlett's project, Paul Knopf's immutable Linux image builder project: ." } ]
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "title: Contributing Guide sidebar_position: 1 description: JuiceFS is open source software and the code is contributed and maintained by developers worldwide. Learn how to participate in this article. Before starting work on a feature or bug fix, search GitHub or reach out to us via GitHub or Slack, make sure no one else is already working on it and we'll ask you to open a GitHub issue if necessary. Before contributing, use the GitHub issue to discuss the feature and reach an agreement with the core developers. For major feature updates, write a design document to help the community understand your motivation and solution. Find issues with the label or . Read for important data structure references. We're following and . Use `go fmt` to format your code before committing. You can find information in editor support for Go tools in . Every new source file must begin with a license header. Install and use it to set up a pre-commit hook for static analysis. Just run `pre-commit install` in the root of the repo. Before you can contribute to JuiceFS, you will need to sign the . There're a CLA assistant to guide you when you first time submit a pull request. Presence of unit tests Adherence to the coding style Adequate in-line comments Explanatory commit message Create a topic branch from where to base the contribution. This is usually `main`. Make commits of logical units. Make sure commit messages are in the proper format. Push changes in a topic branch to a personal fork of the repository. Submit a pull request to . The PR should link to one issue which either created by you or others. The PR must receive approval from at least one maintainer before it be merged." } ]
{ "category": "Runtime", "file_name": "contributing_guide.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes the tailoring and building methods of the linux kernel for the kuasar security container in different scenarios. Developers can quickly and automatically build a linux kernel image which adapts to the kuasar security container by using the provided `build-kernel.sh` script. Linux kernel tailoring is a process of removing or retaining some kernel features or modules based on actual application scenario requirements to achieve the purposes of optimizing system performance, reducing memory usage, and improving security. Generally, there are two methods to tailor the linux kernel: Subtraction Tailoring: Disable the configuration options of unnecessary features based on the default kernel configuration. Addition Tailoring: Developers know which kernel capabilities are needed, and combine the various kernel feature configuration options from scratch. The two tailoring methods have their own advantages and disadvantages: The advantage of the \"subtraction\" tailoring method is that it is simple and convenient, and can quickly tailor the kernel to meet functional requirements. However, the disadvantage is that every time the kernel version is updated, the manual tailoring process needs to be repeated, and the tailoring process cannot be automated and inherited*. The advantage of the \"addition\" tailoring mode is that the kernel can quickly and automatically tailored for different versions of the kernel, and the memory footprint of the tailored kernel is very small. The disadvantage is that it is difficult to get started. Developers need to be familiar with the features of the kernel and know how to divide the kernel configuration options according to the kernel features. This may require a significant amount of time for the first tailoring.* After analyzing the advantages and disadvantages of the two tailoring methods mentioned above, and considering the requirements of supporting multiple versions of linux kernel, minimal kernel memory overhead requirement, and easy expansion of kernel tailoring configuration in the usage scenario of kuasar vmm sandbox, the \"addition\" tailoring method is the most suitable choice. Currently, security container is currently mainly used in serverless computing and the hybrid deployment of trusted and untrusted containers. The different characteristics of these two scenarios also have different requirements for the kernel capabilities of security container. Serverless computing scenario The characteristics of this scenario are that the functions provided by the application are very simple, which are mainly focused on computation, sensitive to delay, have short running time, and require high density of single-machine deployment. These characteristics require the kernel to meet basic computing and network communication capabilities, and the kernel's memory overhead must be small enough. Trusted and untrusted applications deployed together The characteristic of this scenario is that the applications are typically standard linux monolithic applications with complex functionality and high performance requirements, such as multi-tenant AI training/inference scenarios. To reduce performance loss, accelerator hardware devices need to be directly passthrough to secure containers. In addition, the device driver module can be loaded and complex networking modes can be supported. 
The requirements for the kernel in these scenarios are advanced capabilities, including support for hardware device pass-through, multiple network modes, and loadable kernel" }, { "data": "Based on the directory structure of kernel features output by the `make menuconfig` command and combined with the typical scenarios of security container, the kernel features can be divided into the following categories: Basic general configuration (without architecture differentiation) CPU architecture configuration Firmware configuration CPU Memory management ACPI driver PCI bus Hotplug/Unhotplug Kernel module Scheduling Storage management Block storage NVDIMM non-volatile memory SCSI protocol File system Ext4/Xfs basic file system 9p/Virtio-fs shared file system NFS file system Fuse file system Network management IPv4/IPv6 support VLAN network Netfilter packet filtering Cgroup resource control management NameSpace isolation Security features Device drivers Character device driver Virtio device driver Debug capability We abstract the kernel capabilities corresponding to the preceding two typical scenarios of security container. The kernel used by security container are classified into the following types: micro-kernel*: The lightweight kernel adopts the MMIO bus structure with minimal memory overhead, and is used in conjunction with lightweight virtual machine mode (such as StratoVirt hypervisor's microVM virtual machine mode, Cloud-Hypervisor/Firecracker light-weight virtualization engine), making it suitable for serverless computing scenarios. mini-kernel: The kernel is miniaturized and adopts a PCI bus structure, providing advanced kernel functions such as ACPI/SCSI/NFS/kernel module loading, with rich features. The mini-kernel has rich functions and is combined with the standard VM mode. (e.g. standard VM mode for StratoVirt and Qemu) It is applicable to complex scenarios, such as trusted applications and untrusted applications deployed in the same machine.* ``` $ ./build-kernel.sh --help Usage: ./build-kernel.sh [options] --help, -h print the usage --arch specify the hardware architecture: aarch64/x86_64 --kernel-type specify the target kernel type: micro/mini --kernel-dir specify the kernel source directory --kernel-conf-dir specify the kernel tailor conf directory ``` Note `--arch`: specifies the target platform architecture of the kernel. The valid value can be aarch64 or x86_64. `--kernel-type`: specifies the guest kernel type of the security container to be built. The valid value can be micro (for serverless computing) or mini (for trusted and untrusted applications). `--kernel-dir`: specifies the absolute path of the kernel source code directory. `--kernel-conf-dir`: specifies the absolute path of the directory where the kernel tailoring configuration files are stored. The following is an example for tailoring and building the mini type guest kernel in the aarch64 architecture: ``` $ ./build-kernel.sh --arch aarch64 --kernel-type mini --kernel-dir /home/test/kernel/linux-5.10/ --kernel-conf-dir /home/test/kuasar/vmm/scripts/kernel/build-kernel Kernel Type: mini Kernel Dir: /home/jpf/kernel/linux-5.10/ Kernel Conf Dir: /home/jpf/kuasar/vmm/scripts/kernel/build-kernel Merge kernel fragments with /home/jpf/kuasar/vmm/scripts/kernel/build-kernel/small-kernel-aarch64.list successfully. SYNC include/config/auto.conf.cmd CC scripts/mod/devicetable-offsets.s HOSTCC scripts/mod/modpost.o ...... 
AR init/built-in.a LD vmlinux.o MODPOST vmlinux.symvers MODINFO modules.builtin.modinfo GEN modules.builtin LD .tmp_vmlinux.kallsyms1 KSYMS .tmp_vmlinux.kallsyms1.S AS .tmp_vmlinux.kallsyms1.S LD .tmp_vmlinux.kallsyms2 KSYMS .tmp_vmlinux.kallsyms2.S AS .tmp_vmlinux.kallsyms2.S LD vmlinux SORTTAB vmlinux SYSMAP System.map MODPOST modules-only.symvers GEN Module.symvers OBJCOPY arch/arm64/boot/Image GZIP arch/arm64/boot/Image.gz Build kernel successfully. ``` Note: When the version of the kernel was changed by user or some customized patches applied for the kernel, the dependency relationships of some CONFIG configuration options in the kernel may change. When automatically generating the config configuration for the tailored kernel, an error message similar to \"CONFIG_XXX not in final .config\" may appear., it means that the kernel configuration item that needs to be enabled in the kernel fragment file is not present in the final kernel configuration file `.config`. The reason for this error message is that the dependency relationships of the kernel configuration options in the tailored kernel fragment file list have changed. This may occur when the kernel version undergoes significant changes and the configuration dependency of the original `CONFIG_XXX` changes, or it may be due to existing problems with the dependency relationships of the kernel configuration options in the original kernel fragment" }, { "data": "Solution: In the directory of the built kernel source code, use the configuration GUI of `make menuconfig` to locate the problematic `CONFIG_XXX` configuration option, and check its dependency relationship with other kernel configuration options. Based on the kernel configuration dependency relationship found in Step 1, adjust the kernel configuration options in the kernel fragment file (which may involve adding new kernel configuration options or deleting some existing ones). The core workflow of the `build-kernel.sh` script is as follows: Based on the input target architecture and kernel type information, the script locates the corresponding tailored configuration file in the kernel tailored configuration directory. The rule for matching the tailored configuration files is `<kernel-type>-kernel-<arch>.list`. The script merges all kernel configuration option fragments stored in the tailored configuration file using the `scripts/kconfig/merge_config.sh` script which stored in the kernel source directory, and generates the final `.config` file for kernel building. Executing the kernel building process, generating the kernel binary image. There are two ways for developers to customize some of the kernel tailoring configuration options: After the `build-kernel.sh` script automatically generates the merged kernel tailored configuration file `.config`, manually adjust it through the `make menuconfig` configuration GUI. Directly modify the kernel tailored configuration file `<kernel-type>-kernel-<arch>.list` which stored in the `kuasar/vmm/scripts/kernel/build-kernel/` directory and add or remove kernel fragments as needed. 
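To illustrate the fragment-merge step of that workflow, the merge can also be run by hand with the kernel's own `merge_config.sh`; the paths below reuse the example directories from this document, and the exact options used by `build-kernel.sh` may differ:

```shell
# Run from the kernel source tree: merge the listed fragments into .config
cd /home/test/kernel/linux-5.10
FRAGDIR=/home/test/kuasar/vmm/scripts/kernel/build-kernel/fragments
./scripts/kconfig/merge_config.sh \
  "$FRAGDIR/base.conf" "$FRAGDIR/net.conf" "$FRAGDIR/virtio.conf"

# Then build the image (native aarch64 build assumed)
make -j"$(nproc)" Image
```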
The format of the contents in the kernel tailoring configuration file `<kernel-type>-kernel-<arch>.list` is: ``` fragments/aarch64.conf fragments/base.conf fragments/block.conf fragments/cgroup.conf fragments/character.conf fragments/cpu.conf fragments/device.conf fragments/filesystem.conf fragments/mem.conf fragments/namespace.conf fragments/net.conf fragments/security.conf fragments/virtio.conf ``` Each line in the file represents a kernel fragment file that needs to be included, and all kernel configuration options in that fragment file will be added to the final generated kernel configuration file `.config`. For example, the contents of the kernel fragment file `fragments/cgroup.conf` are as follows: ``` CONFIG_CGROUPS=y CONFIGPAGECOUNTER=y CONFIG_MEMCG=y CONFIGMEMCGSWAP=y CONFIGMEMCGKMEM=y CONFIGBLKCGROUP=y CONFIGCGROUPWRITEBACK=y CONFIGCGROUPSCHED=y CONFIGFAIRGROUP_SCHED=y CONFIGCFSBANDWIDTH=y CONFIGCGROUPPIDS=y CONFIGCGROUPFREEZER=y CONFIG_CPUSETS=y CONFIGPROCPID_CPUSET=y CONFIGCGROUPDEVICE=y CONFIGCGROUPCPUACCT=y CONFIGSOCKCGROUP_DATA=y ``` Start a new StratoVirt microVM type lightweight virtual machine sandbox instance through `crictl runp` command and observe various indicator data. Measurement method for various test indicators: Kernel image file size*: Obtain directly by `ls -ahl <kernel-image-filename>` command. Kernel memory overhead: The total physical memory used by the Guest OS (which can be acquired by `pmem -p <vm pid>` command ) subtracts* the RSS memory overhead of the init process in the Guest OS. Kernel cold start time*: Check the time consumed by the guest kernel to start the user-mode init process by `dmesg` command executed in the Guest OS. | Test Type | Kernel image file size (MB) | Kernel memory overhead (MB) | Kernel Cold Start Time (ms) | | | | -- | - | | kuasar-aarch64-micro-kernel | 7.7 | 46.2 | 48.3 | | kuasar-aarch64-mimi-kernel | 11 | 43.2 | 67.4 | | kata-aarch64-kernel | 12 | 45.4 | 73.1 | | kuasar-x86_64-micro-kernel | 3.5 | 60.2 | 63.2 | | kuasar-x86_64-mini-kernel | 5 | 107.4 | 106.9 | | kata-x86_64-kernel | 5.8 | 85 | 120.4 | In the aarch64 architecture, Kuasar has a 34% decrease in kernel cold-start time compared to the guest kernel tailored by Kata-Containers, and the kernel memory overhead remains basically the same. In the x86_64 architecture, Kuasar has a 47.5% decrease in kernel cold-start time compared to the guest kernel tailored by Kata, and 29% decrease in memory baseline overhead." } ]
{ "category": "Runtime", "file_name": "how-to-tailor-linux-kernel-for-kuasar-security-container.md", "project_name": "Kuasar", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Get Maglev lookup table for given service by ID ``` cilium-dbg bpf lb maglev get <service id> [flags] ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Maglev lookup table" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_lb_maglev_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-operator completion fish | source To load completions for every new session, execute once: cilium-operator completion fish > ~/.config/fish/completions/cilium-operator.fish You will need to start a new shell for this setup to take effect. ``` cilium-operator completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator_completion_fish.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "OpenSDS End User Advisory Committee The End User Advisory Committee (EUAC) is to assist and support the OpenSDS community in its objectives by providing technical and strategic guidance real-world storage challenges Cosimo Rosetti (Vodafone) Kei Kusunoki (NTT Communications) Yusuke Sato (Yahoo Japan) Yuji Yazawa (Toyota) Wim Jacobs (KPN) Shinya Tsunematsu (GMO Pepabo) This meeting is hosted on zoom. Join the EUAC mailing list for info on meetings Zoom Meeting: https://zoom.us/j/477192859 Or iPhone one-tap : US: +19294362866,,477192859# or +16699006833,,477192859# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 929 436 2866 or +1 669 900 6833 Meeting ID: 477 192 859 International numbers available: https://zoom.us/zoomconference?m=h0x5xsxAwYrgrrKRsEx7PLkOfvL3bm" } ]
{ "category": "Runtime", "file_name": "euac.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Bug report about: Create a report to help us improve title: '[Bug] Title' labels: 'Quality: Bug' assignees: '' `[Author TODO: A clear and concise description of what the bug is.]` `[Author TODO: Steps to reproduce the behaviour:]` Start Firecracker via.... Configure Firecracker via... ... `[Author TODO: A clear and concise description of what you expected to happen.]` `[Author TODO: Please supply the following information):]` Firecracker version: Host and guest kernel versions: Rootfs used: Architecture: Any other relevant software versions: `[Author TODO: How has this bug affected you?]` `[Author TODO: What are you trying to achieve?]` `[Author TODO: Do you have any idea of what the solution might be?]` [ ] Have you searched the Firecracker Issues database for similar problems? [ ] Have you read the existing relevant Firecracker documentation? [ ] Are you certain the bug being reported is a Firecracker issue?" } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | | - | | -- | -- | | -- | | N00001 | Creating a VLAN interface should succeed | p1 | true | done | | | N00002 | VLAN interface already exists, skip creation | p2 | | done | | | N00003 | VLAN interface exists but state is down, set it up and exit | p2 | | done | | | N00004 | Different VLAN interfaces have the same VLAN id, an error is returned | p2 | | done | | | N00005 | The master interface is down, setting it up and creating VLAN interface | p2 | | done | | | N00006 | Restart the node vlan/bond will be lost, restart the pod they should be restored. | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "ifacer.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Flushes the current IPsec state Will cause a short connectivity disruption ``` cilium-dbg encrypt flush [flags] ``` ``` -f, --force Skip confirmation -h, --help help for flush --node-id string Only delete states and policies with this node ID. Decimal or hexadecimal (0x) format. If multiple filters are used, they all apply -o, --output string json| yaml| jsonpath='{}' --spi uint8 Only delete states and policies with this SPI. If multiple filters are used, they all apply --stale Delete stale states and policies based on the current node ID map content ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage transparent encryption" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_encrypt_flush.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": ":::tip For all versions, please see . ::: JuiceFS Community Edition uses to label its releases. Each version number consists of three numbers in the format `x.y.z`, representing the major version number (x), the minor version number (y), and the patch number (z). Major version number (x): When the major version number is greater than or equal to `1`, it indicates that the version is suitable for production environments. When the major version number changes, it indicates that this version may have added major features, architectural changes, or data format changes that are not backward compatible. For example, `v0.8.3` `v1.0.0` means production-ready, `v1.0.0` `v2.0.0` represents an architectural or functional change. Minor version number (y): The minor version number indicates that the version adds some new features, performance optimizations, bug fixes, etc. that can be backward compatible. For example, `v1.0.0` `v1.1.0`. Patch version number (z): The patch version number indicates a minor update or bug fix for the software, which is only some minor changes or fixes to existing features and will not affect the compatibility of the softwares. For example, `v1.0.3` `v1.0.4`. JuiceFS client has only one binary file, so usually you only need to replace the old binary with the new one when upgrading JuiceFS. :::tip If you are using JuiceFS version prior to v1.0, please first. ::: In v1.1 (specifically, v1.1.0-beta2) JuiceFS added and . These two features were not available in older versions of the client, and writing with the old client when they were turned on would result in large deviations in the statistics. When upgrading to v1.1, if you do not intend to enable these two new features, you can simply replace the client without additional action. If you do, it is recommended that you read the following content before upgrading. The default configurations for these two features are: For newly created filesystems they are automatically enabled. For existing filesystems, they are disabled. Directory statistics can be enabled independently by `juicefs config` command. When setting directory quotas the directory statistics will be enabled automatically. Upgrade all client binaries to v1.1 version. Deny re-connections from versions prior to v1.1: `juicefs config META-URL --min-client-version 1.1.0-A`. Restart the service at a proper time (remount, restart gateway, etc.) Make sure that all online clients are version v1.1 or higher: `juicefs status META-URL | grep -w Version` Enable the new features, see and . JuiceFS has two compatibility changes in version v1.0 (specifically, v1.0.0-beta3). If you are using an older version of the client, it is recommended that you read the following content before upgrading. JuiceFS v1.0 has changed the table schema to support encoding other than UTF-8. For existing file systems, you need to upgrade the table schema manually to support that. It's recommended to upgrade all clients first and then the table schema. :::note Table schema upgrades are optional, and they are required only if you need to use non-UTF-8 characters. In addition, database performance may degrade when upgrading SQL table schemas, affecting running services. 
::: ```sql alter table jfs_edge modify name varbinary(255) not null; alter table jfs_symlink modify target varbinary(4096) not null; ``` ```sql alter table jfs_edge alter column name type bytea using name::bytea; alter table jfs_symlink alter column target type bytea using target::bytea; ``` SQLite does not support modifying columns, but you can migrate columns by `dump` and `load` commands, refer to for details. JuiceFS v1.0 uses a new session management format. The previous versions of clients cannot see the sessions generated by v1.0 clients via `juicefs status` or `juicefs destroy`, whereas the new versions are able to see all the sessions." } ]
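For SQLite, a dump-and-load migration along the lines suggested above could look like this; the database file names are placeholders, all clients should be stopped first, and the target database must be empty:

```shell
# Export metadata from the old SQLite database to JSON
juicefs dump "sqlite3://old.db" meta-backup.json

# Import the metadata into a fresh, empty database
juicefs load "sqlite3://new.db" meta-backup.json
```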
{ "category": "Runtime", "file_name": "release_notes.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes the method to configure the image registry for `containerd` for use with the `cri` plugin. _NOTE:_ registry.mirrors and registry.configs as previously described in this document have been DEPRECATED. As described in you should now use the following configuration Before containerd 2.0 ```toml [plugins.\"io.containerd.grpc.v1.cri\".registry] config_path = \"/etc/containerd/certs.d\" ``` In containerd 2.0 ```toml [plugins.\"io.containerd.cri.v1.images\".registry] config_path = \"/etc/containerd/certs.d\" ``` _NOTE:_ registry.configs.*.auth is DEPRECATED and will NOT have an equivalent way to store unencrypted secrets in the host configuration files. However, it will not be removed until a suitable secret management alternative is available as a plugin. It remains supported in 1.x releases, including the 1.6 LTS release. To configure a credential for a specific registry, create/modify the `/etc/containerd/config.toml` as follows: Before containerd 2.0 ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".registry.configs.\"gcr.io\".auth] username = \"\" password = \"\" auth = \"\" identitytoken = \"\" ``` In containerd 2.0 ```toml version = 3 [plugins.\"io.containerd.cri.v1.images\".registry.configs.\"gcr.io\".auth] username = \"\" password = \"\" auth = \"\" identitytoken = \"\" ``` The meaning of each field is the same with the corresponding field in `.docker/config.json`. Please note that auth config passed by CRI takes precedence over this config. The registry credential in this config will only be used when auth config is not specified by Kubernetes via CRI. After modifying this config, you need to restart the `containerd` service. If you don't already have Google Container Registry (GCR) set up then you need to do the following steps: Create a Google Cloud Platform (GCP) account and project if not already created (see ) Enable GCR for your project (see ) For authentication to GCR: Create The JSON key file needs to be downloaded to your system from the GCP console For access to the GCR storage: Add service account to the GCR storage bucket with storage admin access rights (see ) Refer to for detailed information on the above" }, { "data": "Note: The JSON key file is a multi-line file and it can be cumbersome to use the contents as a key outside of the file. It is worthwhile generating a single line format output of the file. One way of doing this is using the `jq` tool as follows: `jq -c . key.json` It is beneficial to first confirm that from your terminal you can authenticate with your GCR and have access to the storage before hooking it into containerd. This can be verified by performing a login to your GCR and pushing an image to it as follows: ```console docker login -u jsonkey -p \"$(cat key.json)\" gcr.io docker pull busybox docker tag busybox gcr.io/your-gcp-project-id/busybox docker push gcr.io/your-gcp-project-id/busybox docker logout gcr.io ``` Now that you know you can access your GCR from your terminal, it is now time to try out containerd. 
Edit the containerd config (default location is at `/etc/containerd/config.toml`) to add your JSON key for `gcr.io` domain image pull requests: Before containerd 2.0 ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".registry] [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors] [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"docker.io\"] endpoint = [\"https://registry-1.docker.io\"] [plugins.\"io.containerd.grpc.v1.cri\".registry.mirrors.\"gcr.io\"] endpoint = [\"https://gcr.io\"] [plugins.\"io.containerd.grpc.v1.cri\".registry.configs] [plugins.\"io.containerd.grpc.v1.cri\".registry.configs.\"gcr.io\".auth] username = \"jsonkey\" password = 'paste output from jq' ``` In containerd 2.0 ```toml version = 3 [plugins.\"io.containerd.cri.v1.images\".registry] [plugins.\"io.containerd.cri.v1.images\".registry.mirrors] [plugins.\"io.containerd.cri.v1.images\".registry.mirrors.\"docker.io\"] endpoint = [\"https://registry-1.docker.io\"] [plugins.\"io.containerd.cri.v1.images\".registry.mirrors.\"gcr.io\"] endpoint = [\"https://gcr.io\"] [plugins.\"io.containerd.cri.v1.images\".registry.configs] [plugins.\"io.containerd.cri.v1.images\".registry.configs.\"gcr.io\".auth] username = \"jsonkey\" password = 'paste output from jq' ``` Note: `username` of `jsonkey` signifies that JSON key authentication will be used. Restart containerd: ```console service containerd restart ``` Pull an image from your GCR with `crictl`: ```console $ sudo crictl pull gcr.io/your-gcp-project-id/busybox DEBU[0000] get image connection DEBU[0000] connect using endpoint 'unix:///run/containerd/containerd.sock' with '3s' timeout DEBU[0000] connected successfully using endpoint: unix:///run/containerd/containerd.sock DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:gcr.io/your-gcr-instance-id/busybox,},Auth:nil,SandboxConfig:nil,} DEBU[0001] PullImageResponse: &PullImageResponse{ImageRef:sha256:78096d0a54788961ca68393e5f8038704b97d8af374249dc5c8faec1b8045e42,} Image is up to date for sha256:78096d0a54788961ca68393e5f8038704b97d8af374249dc5c8faec1b8045e42 ``` NOTE: The configuration syntax used in this doc is in version 2 which is the recommended since `containerd` 1.3. For the previous config format you can reference ." } ]
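When using the recommended `config_path = "/etc/containerd/certs.d"` mechanism mentioned at the top of this document instead of the deprecated `registry.mirrors`/`registry.configs` sections, per-registry settings move into a `hosts.toml` file. A minimal sketch for mirroring `docker.io` follows; the mirror URL is a placeholder:

```toml
# /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]
  capabilities = ["pull", "resolve"]
```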
{ "category": "Runtime", "file_name": "registry.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "Hardware virtualization is often seen as a requirement to provide an additional isolation layer for untrusted applications. However, hardware virtualization requires expensive bare-metal machines or cloud instances to run safely with good performance, increasing cost and complexity for Cloud users. gVisor, however, takes a more flexible approach. NOTE 2024-05: This post describes the ptrace and KVM platforms, which were the only two gVisor platforms at the time it was written. The was added since and provides better performance than ptrace. One of the pillars of gVisor's architecture is portability, allowing it to run anywhere that runs Linux. Modern Cloud-Native applications run in containers in many different places, from bare metal to virtual machines, and can't always rely on nested virtualization. It is important for gVisor to be able to support the environments where you run containers. gVisor achieves portability through an abstraction called a Platform. Platforms can have many implementations, and each implementation can cover different environments, making use of available software or hardware features. Before we can understand how gVisor achieves portability using platforms, we should take a step back and understand how applications interact with their host. Container sandboxes can provide an isolation layer between the host and application by virtualizing one of the layers below it, including the hardware or operating system. Many sandboxes virtualize the hardware layer by running applications in virtual machines. gVisor takes a different approach by virtualizing the OS layer. When an application is run in a normal situation the host operating system loads the application into user memory and schedules it for execution. The operating system scheduler eventually schedules the application to a CPU and begins executing it. It then handles the application's requests, such as for memory and the lifecycle of the application. gVisor virtualizes these interactions, such as system calls, and context switching that happen between an application and OS. allow applications to ask the OS to perform some task for it. System calls look like a normal function call in most programming languages though works a bit differently under the hood. When an application system call is encountered some special processing takes place to do a into kernel mode and begin executing code in the kernel before returning a result to the application. Context switching may happen in other situations as well. For example, to respond to an interrupt. gVisor provides a sandbox which implements the Linux OS interface, intercepting OS interactions such as system calls and implements them in the sandbox kernel. It does this to limit interactions with the host, and protect the host from an untrusted application running in the" }, { "data": "The Platform is the bottom layer of gVisor which provides the environment necessary for gVisor to control and manage applications. In general, the Platform must: Provide the ability to create and manage memory address spaces. Provide execution contexts for running applications in those memory address spaces. Provide the ability to change execution context and return control to gVisor at specific times (e.g. system call, page fault) This interface is conceptually simple, but very powerful. 
Since the Platform interface only requires these three capabilities, it gives gVisor enough control for it to act as the application's OS, while still allowing the use of very different isolation technologies under the hood. You can learn more about the Platform interface in the . While gVisor can make use of technologies like hardware virtualization, it doesn't necessarily rely on any one technology to provide a similar level of isolation. The flexibility of the Platform interface allows for implementations that use technologies other than hardware virtualization. This allows gVisor to run in VMs without nested virtualization, for example. By providing an abstraction for the underlying platform, each implementation can make various tradeoffs regarding performance or hardware requirements. Currently gVisor provides two gVisor Platform implementations; the Ptrace Platform, and the KVM Platform, each using very different methods to implement the Platform interface. The Ptrace Platform uses to trap syscalls, and uses the host for memory mapping and context switching. This platform can run anywhere that ptrace is available, which includes most Linux systems, VMs or otherwise. The KVM Platform uses virtualization, but in an unconventional way. gVisor runs in a virtual machine but as both guest OS and VMM, and presents no virtualized hardware layer. This provides a simpler interface that can avoid hardware initialization for fast start up, while taking advantage of hardware virtualization support to improve memory isolation and performance of context switching. The flexibility of the Platform interface allows for a lot of room to improve the existing KVM and ptrace platforms, as well as the ability to utilize new methods for improving gVisor's performance or portability in future Platform implementations. Through the Platform interface, gVisor is able to support bare metal, virtual machines, and Cloud environments while still providing a highly secure sandbox for running untrusted applications. This is especially important for Cloud and Kubernetes users because it allows gVisor to run anywhere that Kubernetes can run and provide similar experiences in multi-region, hybrid, multi-platform environments. Give gVisor's open source platforms a try. Using a Platform is as easy as providing the `--platform` flag to `runsc`. See the documentation on for how to use different platforms with Docker. We would love to hear about your experience so come chat with us in our , or send us an if you run into any problems." } ]
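As an illustration of selecting a platform, the `--platform` flag is usually passed where the `runsc` runtime is registered with the container engine; a sketch of a Docker `daemon.json` entry is shown below (the binary path and runtime name are assumptions):

```json
{
  "runtimes": {
    "runsc-kvm": {
      "path": "/usr/local/bin/runsc",
      "runtimeArgs": ["--platform=kvm"]
    }
  }
}
```

Containers started with `docker run --runtime=runsc-kvm ...` would then use the KVM platform instead of the default.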
{ "category": "Runtime", "file_name": "2020-10-22-platform-portability.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List connection tracking entries ``` cilium-dbg bpf ct list ( global | endpoint | cluster ) [identifier] [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' -d, --time-diff print time difference for entries --time-diff-clocksource-hz int manually set clock source Hz (default 250) --time-diff-clocksource-mode string manually set clock source mode (instead of contacting the server) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Connection tracking tables" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_ct_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: Operational Guide menu_order: 45 search_type: Documentation This operational guide is intended to give you an overview of how to operate and manage a Weave Network in production. It consists of three main parts: A with which you will need to be familiar Detailed instructions for safely bootstrapping, growing and shrinking Weave networks in a number of different deployment scenarios: An [interactive installation](/site/operational-guide/interactive.md), suitable for evaluation and development A [uniformly configured cluster](/site/operational-guide/uniform-fixed-cluster.md) with a fixed number of initial nodes, suitable for automated provisioning but requiring manual intervention for resizing A comprising fixed and autoscaling components, suitable for a base load with automated scale-out/scale-in A [uniformly configured cluster](/site/operational-guide/uniform-dynamic-cluster.md) with dynamic nodes, suitable for automated provisioning and resizing. A list of [common administrative tasks](/site/operational-guide/tasks.md), such as configuring Weave Net to start on boot, upgrading clusters, cleaning up peers and reclaiming IP address space" } ]
{ "category": "Runtime", "file_name": "operational-guide.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Update a service ``` cilium-dbg service update [flags] ``` ``` --backend-weights uints Backend weights (100 default, 0 means maintenance state, only for maglev mode) (default []) --backends strings Backend address or addresses (<IP:Port>) --frontend string Frontend address -h, --help help for update --id uint Identifier --k8s-cluster-internal Set service as cluster-internal for externalTrafficPolicy=Local xor internalTrafficPolicy=Local --k8s-ext-traffic-policy string Set service with k8s externalTrafficPolicy as {Local,Cluster} (default \"Cluster\") --k8s-external Set service as a k8s ExternalIPs --k8s-host-port Set service as a k8s HostPort --k8s-int-traffic-policy string Set service with k8s internalTrafficPolicy as {Local,Cluster} (default \"Cluster\") --k8s-load-balancer Set service as a k8s LoadBalancer --k8s-node-port Set service as a k8s NodePort --local-redirect Set service as Local Redirect --states strings Backend state(s) as {active(default),terminating,quarantined,maintenance} ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage services & loadbalancers" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_service_update.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Name | Type | Description | Notes | - | - | - Socket | string | | `func NewTpmConfig(socket string, ) *TpmConfig` NewTpmConfig instantiates a new TpmConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewTpmConfigWithDefaults() *TpmConfig` NewTpmConfigWithDefaults instantiates a new TpmConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *TpmConfig) GetSocket() string` GetSocket returns the Socket field if non-nil, zero value otherwise. `func (o TpmConfig) GetSocketOk() (string, bool)` GetSocketOk returns a tuple with the Socket field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *TpmConfig) SetSocket(v string)` SetSocket sets Socket field to given value." } ]
{ "category": "Runtime", "file_name": "TpmConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "- - - - - - https://github.com/heptio/ark/releases/tag/v0.9.11 Fix bug preventing PV snapshots from being restored (#1040, @ncdc) https://github.com/heptio/ark/releases/tag/v0.9.10 restore storageclasses before pvs and pvcs (#594, @shubheksha) AWS: Ensure that the order returned by ListObjects is consistent (#999, @bashofmann) Add CRDs to list of prioritized resources (#424, @domenicrosati) Verify PV doesn't exist before creating new volume (#609, @nrb) Update README.md - Grammar mistake corrected (#1018, @midhunbiju) https://github.com/heptio/ark/releases/tag/v0.9.9 Check if initContainers key exists before attempting to remove volume mounts. (#927, @skriss) https://github.com/heptio/ark/releases/tag/v0.9.8 Discard service account token volume mounts from init containers on restore (#910, @james-powis) Support --include-cluster-resources flag when creating schedule (#942, @captjt) Remove logic to get a GCP project (#926, @shubheksha) Only try to back up PVCs linked PV if the PVC's phase is Bound (#920, @skriss) Claim ownership of new AWS volumes on Kubernetes cluster being restored into (#801, @ljakimczuk) Remove timeout check when taking snapshots (#928, @carlisia) https://github.com/heptio/ark/releases/tag/v0.9.7 Preserve explicitly-specified node ports during restore (#712, @timoreimann) Enable restoring resources with ownerReference set (#837, @mwieczorek) Fix error when restoring ExternalName services (#869, @shubheksha) remove restore log helper for accurate line numbers (#891, @skriss) Display backup StartTimestamp in `ark backup get` output (#894, @marctc) Fix restic restores when using namespace mappings (#900, @skriss) https://github.com/heptio/ark/releases/tag/v0.9.6 Discard service account tokens from non-default service accounts on restore (#843, @james-powis) Update Docker images to use `alpine:3.8` (#852, @nrb) https://github.com/heptio/ark/releases/tag/v0.9.5 Fix issue causing restic restores not to work (#834, @skriss) https://github.com/heptio/ark/releases/tag/v0.9.4 Terminate plugin clients to resolve memory leaks (#797, @skriss) Fix nil map errors when merging annotations (#812, @nrb) https://github.com/heptio/ark/releases/tag/v0.9.3 Initialize Prometheus metrics when creating a new schedule (#689, @lemaral) https://github.com/heptio/ark/releases/tag/v0.9.2) - 2018-07-26 Fix issue where modifications made by backup item actions were not being saved to backup tarball (#704, @skriss) https://github.com/heptio/ark/releases/tag/v0.9.1 Require namespace for Ark's CRDs to already exist at server startup (#676, @skriss) Require all Ark CRDs to exist at server startup (#683, @skriss) Fix `latest` tagging in Makefile (#690, @skriss) Make Ark compatible with clusters that don't have the `rbac.authorization.k8s.io/v1` API group (#682, @nrb) Don't consider missing snapshots an error during backup deletion, limit backup deletion requests per backup to 1 (#687, @skriss) https://github.com/heptio/ark/releases/tag/v0.9.0 Ark now has support for backing up and restoring Kubernetes volumes using a free open-source backup tool called" }, { "data": "This provides users an out-of-the-box solution for backing up and restoring almost any type of Kubernetes volume, whether or not it has snapshot support integrated with Ark. For more information, see the . Support for Prometheus metrics has been added! View total number of backup attempts (including success or failure), total backup size in bytes, and backup durations. More metrics coming in future releases! 
Add restic support (#508 #532 #533 #534 #535 #537 #540 #541 #545 #546 #547 #548 #555 #557 #561 #563 #569 #570 #571 #606 #608 #610 #621 #631 #636, @skriss) Add prometheus metrics (#531 #551 #564, @ashish-amarnath @nrb) When backing up a service account, include cluster roles/cluster role bindings that reference it (#470, @skriss) When restoring service accounts, copy secrets/image pull secrets into the target cluster even if the service account already exists (#403, @nrb) Upgrade to Kubernetes 1.10 dependencies (#417, @skriss) Upgrade to go 1.10 and alpine 3.7 (#456, @skriss) Display no excluded resources/namespaces as `<none>` rather than `` (#453, @nrb) Skip completed jobs and pods when restoring (#463, @nrb) Set namespace correctly when syncing backups from object storage (#472, @skriss) When building on macOS, bind-mount volumes with delegated config (#478, @skriss) Add replica sets and daemonsets to cohabiting resources so they're not backed up twice (#482 #485, @skriss) Shut down the Ark server gracefully on SIGINT/SIGTERM (#483, @skriss) Only back up resources that support GET and DELETE in addition to LIST and CREATE (#486, @nrb) Show a better error message when trying to get an incomplete restore's logs (#496, @nrb) Stop processing when setting a backup deletion request's phase to `Deleting` fails (#500, @nrb) Add library code to install Ark's server components (#437 #506, @marpaia) Properly handle errors when backing up additional items (#512, @carlpett) Run post hooks even if backup actions fail (#514, @carlpett) GCP: fail backup if upload to object storage fails (#510, @nrb) AWS: don't require `region` as part of backup storage provider config (#455, @skriss) Ignore terminating resources while doing a backup (#526, @yastij) Log to stdout instead of stderr (#553, @ncdc) Move sample minio deployment's config to an emptyDir (#566, @runyontr) Add `omitempty` tag to optional API fields (@580, @nikhita) Don't restore PVs with a reclaim policy of `Delete` and no snapshot (#613, @ncdc) Don't restore mirror pods (#619, @ncdc) @gianrubio @castrojo @dhananjaysathe @c-knowles @mattkelly @ae-v @hamidzr" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.9.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Access metric status ``` -h, --help help for metrics ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - List all metrics" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_metrics.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- Thanks for sending a pull request! Here are some tips for you: If this is your first time, please read our developer guide: https://submariner.io/development/ Ensure you have added the appropriate tests for your PR: https://submariner.io/development/code-review/#test-new-functionality Read the code review guide to ease the review process: https://submariner.io/development/code-review/ If the PR is unfinished, mark it as a draft: https://submariner.io/development/code-review/#mark-work-in-progress-prs-as-drafts If you are using CI to debug, use your private fork: https://submariner.io/development/code-review/#use-private-forks-for-debugging-prs-by-running-ci Add labels to the PR as appropriate. This template is based on the K8s/K8s template: https://github.com/kubernetes/kubernetes/blob/master/.github/PULLREQUESTTEMPLATE.md -->" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Submariner", "subcategory": "Cloud Native Network" }
[ { "data": "title: Use JuiceFS on DigitalOcean sidebar_position: 6 slug: /clouds/digitalocean JuiceFS is designed for the cloud, using the cloud platform out-of-the-box storage and database services, and can be configured and put into use in as little as a few minutes. This article uses the DigitalOcean as an example to introduce how to quickly and easily install and use JuiceFS on the cloud computing platform. JuiceFS is powered by a combination of storage and database, so the things you need to prepare should include. The cloud server on DigitalOcean is called Droplet. If you already have a Droplet, you do not need to purchase a new one separately in order to use JuiceFS. Whichever cloud server needs to use JuiceFS storage on it, install the JuiceFS client for it. JuiceFS has no special hardware requirements, and any size Droplet can be used stably. However, it is recommended to choose a better performing SSD and reserve at least 1GB for JuiceFS to use as local cache. JuiceFS supports Linux, BSD, macOS and Windows. In this article, we will take Ubuntu Server 20.04 as an example. JuiceFS uses object storage to store all your data, and using Spaces on DigitalOcean is the easiest solution. Spaces is an S3-compatible object storage service that works right out of the box. It is recommended to choose the same region as Droplet to get the best access speed and also to avoid additional traffic costs. Of course, you can also use an object storage service from another platform or build it manually using Ceph or MinIO. In short, you are free to choose the object storage you want to use, just make sure that the JuiceFS client can access the object storage. Here, we created a Spaces storage bucket named `juicefs` with the region `sgp1` in Singapore, and it is accessible at: `https://juicefs.sgp1.digitaloceanspaces.com` In addition, you also need to create `Spaces access keys` in the API menu, which JuiceFS needs to access the Spaces API. Unlike normal file systems, JuiceFS stores all metadata corresponding to the data in a separate database, and the larger the size of the stored data, the better the performance. Currently, JuiceFS supports common databases such as Redis, TiKV, MySQL/MariaDB, PostgreSQL, SQLite, etc., while support for other databases is under continuous development. If the database you need is not supported at the moment, please submit feedback. Each database has its own advantages and disadvantages in terms of performance, size and reliability, and you should choose according to the actual needs of the scenario. Don't worry about the choice of database, the JuiceFS client provides a metadata migration feature that allows you to easily export and migrate metadata from one database to another. For this article, we use DigitalOcean's Redis 6 database hosting service, choose `Singapore`, and select the same VPC private network as the existing Droplet. It takes about 5 minutes to create the Redis, and we follow the setup wizard to initialize the database. By default, the Redis allows all inbound connections. For security reasons, you should select the Droplet that have access to the Redis in the security setting section of the setup wizard in the `Add trusted sources`, that is, only allow the selected host to access the Redis. In the setting of the eviction policy, it is recommended to select `noeviction`, that is, when the memory is exhausted, only errors are reported and no data is evictioned. 
Note: In order to ensure the safety and integrity of metadata, please do not select `allkeys-lru` and `allkey-random` for the eviction" }, { "data": "The access address of the Redis can be found in the `Connection Details` of the console. If all computing resources are in DigitalOcean, it is recommended to use the VPC private network for connection first, which can maximize security. We currently using Ubuntu Server 20.04, execute the following command to install the latest version of the client. ```shell curl -sSL https://d.juicefs.com/install | sh - ``` Execute the command and see the command help information returned to `juicefs`, which means that the client is installed successfully. ```shell $ juicefs NAME: juicefs - A POSIX file system built on Redis and object storage. USAGE: juicefs [global options] command [command options] [arguments...] VERSION: 0.16.2 (2021-08-25T04:01:15Z 29d6fee) COMMANDS: format format a volume mount mount a volume umount unmount a volume gateway S3-compatible gateway sync sync between two storage rmr remove directories recursively info show internal information for paths or inodes bench run benchmark to read/write/stat big/small files gc collect any leaked objects fsck Check consistency of file system profile analyze access log stats show runtime stats status show status of JuiceFS warmup build cache for target directories/files dump dump metadata into a JSON file load load metadata from a previously dumped JSON file help, h Shows a list of commands or help for one command GLOBAL OPTIONS: --verbose, --debug, -v enable debug log (default: false) --quiet, -q only warning and errors (default: false) --trace enable trace log (default: false) --no-agent disable pprof (:6060) agent (default: false) --help, -h show help (default: false) --version, -V print only the version (default: false) COPYRIGHT: Apache License 2.0 ``` In addition, you can also visit the page to select other versions for manual installation. To create a file system, use the `format` subcommand, the format is: ```shell juicefs format [command options] META-URL NAME ``` The following command creates a file system named `mystor`: ```shell $ juicefs format \\ --storage space \\ --bucket https://juicefs.sgp1.digitaloceanspaces.com \\ --access-key <your-access-key-id> \\ --secret-key <your-access-key-secret> \\ rediss://default:your-password@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 \\ mystor ``` Parameter Description: `--storage`: Specify the data storage engine, here is `space`, click here to view all . `--bucket`: Specify the bucket access address. `--access-key` and `--secret-key`: Specify the secret key for accessing the object storage API. The Redis managed by DigitalOcean needs to be accessed with TLS/SSL encryption, so it needs to use the `rediss://` protocol header. The `/1` added at the end of the link represents the use of Redis's No. 1 database. If you see output similar to the following, it means that the file system is created successfully. ```shell 2021/08/23 16:36:28.450686 juicefs[2869028] <INFO>: Meta address: rediss://default@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 2021/08/23 16:36:28.481251 juicefs[2869028] <WARNING>: AOF is not enabled, you may lose data if Redis is not shutdown properly. 
2021/08/23 16:36:28.481763 juicefs[2869028] <INFO>: Ping redis: 331.706s 2021/08/23 16:36:28.482266 juicefs[2869028] <INFO>: Data uses space://juicefs/mystor/ 2021/08/23 16:36:28.534677 juicefs[2869028] <INFO>: Volume is formatted as {Name:mystor UUID:6b0452fc-0502-404c-b163-c9ab577ec766 Storage:space Bucket:https://juicefs.sgp1.digitaloceanspaces.com AccessKey:7G7WQBY2QUCBQC5H2DGK SecretKey:removed BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:} ``` To mount a file system, use the `mount` subcommand, and use the `-d` parameter to mount it as a daemon. The following command mounts the newly created file system to the `mnt` directory under the current directory: ```shell sudo juicefs mount -d \\ rediss://default:your-password@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 mnt ``` The purpose of using `sudo` to perform the mount operation is to allow JuiceFS to have the authority to create a cache directory under `/var/`. Please note that when mounting the file system, you only need to specify the `database address` and the `mount point`, not the name of the file system. If you see output similar to the following, it means that the file system is mounted successfully. ```shell 2021/08/23 16:39:14.202151 juicefs[2869081] <INFO>: Meta address: rediss://default@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 2021/08/23" }, { "data": "juicefs[2869081] <WARNING>: AOF is not enabled, you may lose data if Redis is not shutdown properly. 2021/08/23 16:39:14.235536 juicefs[2869081] <INFO>: Ping redis: 446.247s 2021/08/23 16:39:14.236231 juicefs[2869081] <INFO>: Data use space://juicefs/mystor/ 2021/08/23 16:39:14.236540 juicefs[2869081] <INFO>: Disk cache (/var/jfsCache/6b0452fc-0502-404c-b163-c9ab577ec766/): capacity (1024 MB), free ratio (10%), max pending pages (15) 2021/08/23 16:39:14.738416 juicefs[2869081] <INFO>: OK, mystor is ready at mnt ``` Use the `df` command to see the mounting status of the file system: ```shell $ df -Th File system type capacity used usable used% mount point JuiceFS:mystor fuse.juicefs 1.0P 64K 1.0P 1% /home/herald/mnt ``` As you can see from the output information of the mount command, JuiceFS defaults to sets 1024 MB as the local cache. Setting a larger cache can make JuiceFS have better performance. You can set the cache (in MiB) through the `--cache-size` option when mounting a file system. For example, set a 20GB local cache: ```shell sudo juicefs mount -d --cache-size 20000 \\ rediss://default:your-password@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 mnt ``` After the file system is mounted, you can store data in the `~/mnt` directory just like using a local hard disk. Use the `status` subcommand to view the basic information and connection status of a file system. You only need to specify the database URL. ```shell $ juicefs status rediss://default:bn8l7ui2cun4iaji@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 2021/08/23 16:48:48.567046 juicefs[2869156] <INFO>: Meta address: rediss://default@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 2021/08/23 16:48:48.597513 juicefs[2869156] <WARNING>: AOF is not enabled, you may lose data if Redis is not shutdown properly. 
2021/08/23 16:48:48.598193 juicefs[2869156] <INFO>: Ping redis: 491.003s { \"Setting\": { \"Name\": \"mystor\", \"UUID\": \"6b0452fc-0502-404c-b163-c9ab577ec766\", \"Storage\": \"space\", \"Bucket\": \"https://juicefs.sgp1.digitaloceanspaces.com\", \"AccessKey\": \"7G7WQBY2QUCBQC5H2DGK\", \"SecretKey\": \"removed\", \"BlockSize\": 4096, \"Compression\": \"none\", \"Shards\": 0, \"Partitions\": 0, \"Capacity\": 0, \"Inodes\": 0 }, \"Sessions\": [ { \"Sid\": 1, \"Heartbeat\": \"2021-08-23T16:46:14+08:00\", \"Version\": \"0.16.2 (2021-08-25T04:01:15Z 29d6fee)\", \"Hostname\": \"ubuntu-s-1vcpu-1gb-sgp1-01\", \"MountPoint\": \"/home/herald/mnt\", \"ProcessID\": 2869091 }, { \"Sid\": 2, \"Heartbeat\": \"2021-08-23T16:47:59+08:00\", \"Version\": \"0.16.2 (2021-08-25T04:01:15Z 29d6fee)\", \"Hostname\": \"ubuntu-s-1vcpu-1gb-sgp1-01\", \"MountPoint\": \"/home/herald/mnt\", \"ProcessID\": 2869146 } ] } ``` Use the `umount` subcommand to unmount a file system, for example: ```shell sudo juicefs umount ~/mnt ``` Note: Force unmount the file system in use may cause data damage or loss, please be careful to operate. Please refer to for more details. The JuiceFS file system supports being mounted by multiple cloud servers at the same time, and there is no requirement for the geographic location of the cloud server. It can easily realize the real-time data sharing of servers between the same platform, between cross-cloud platforms, and between public and private clouds. Not only that, the shared mount of JuiceFS can also provide strong data consistency guarantee. When multiple servers mount the same file system, the writes confirmed on the file system will be visible in real time on all hosts. To use the shared mount, it is important to ensure that the database and object storage service that make up the file system can be accessed by each host to mount it. In the demonstration environment of this article, the Spaces object storage is open to the entire Internet, and it can be read and written through the API as long as the correct access key is used. But for the Redis database managed by DigitalOcean, you need to configure the access strategy reasonably to ensure that the hosts outside the platform have access permissions. When you mount the same file system on multiple hosts, first create a file system on any host, then install the JuiceFS client on every hosts, and use the same database address to mount it with the `mount` command. Pay special attention to the fact that the file system only needs to be created once, and there should be no need to repeat file system creation operations on" } ]
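To make the shared-mount workflow above concrete, here is a minimal sketch of mounting the same file system from a second Droplet. It assumes the second host can reach both the Spaces bucket and the managed Redis instance, and it reuses the example Redis URL from this article; adjust the address, password and mount point for your own environment.

```shell
# install the JuiceFS client on the second Droplet
curl -sSL https://d.juicefs.com/install | sh -

# mount the existing file system; only the metadata URL and the mount point are needed
sudo juicefs mount -d \
  rediss://default:your-password@private-db-redis-sgp1-03138-do-user-2500071-0.b.db.ondigitalocean.com:25061/1 \
  /mnt/jfs

# files written on either host are visible on the other
ls /mnt/jfs
```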
{ "category": "Runtime", "file_name": "digitalocean.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Quick start evaluation install with Minio\" layout: docs The following example sets up the Velero server and client, then backs up and restores a sample application. For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster. For additional functionality with this setup, see the section below on how to . NOTE The example lets you explore basic Velero functionality. Configuring Minio for production is out of scope. See for how to configure Velero for a production environment. If you encounter issues with installing or configuring, see . Access to a Kubernetes cluster, version 1.7 or later. Note:* restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. Restic support is not required for this example, but may be of interest later. See . A DNS server on the cluster `kubectl` installed Sufficient disk space to store backups in Minio. You will need sufficient disk space available to handle any backups plus at least 1GB additional. Minio will not operate if less than 1GB of free disk space is available. On macOS, you can use to install the `velero` client: ```bash brew install velero ``` Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` The directory you extracted is called the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. These instructions start the Velero server and a Minio instance that is accessible from within the cluster only. See for information about configuring your cluster for outside access to Minio. Outside access is required to access logs and run `velero describe` commands. Create a Velero-specific credentials file (`credentials-velero`) in your Velero directory: ``` [default] awsaccesskey_id = minio awssecretaccess_key = minio123 ``` Start the server and the local storage service. In the Velero directory, run: ``` kubectl apply -f examples/minio/00-minio-deployment.yaml ``` Note: The example Minio yaml provided uses \"empty dir\". Your node needs to have enough space available to store the data being backed up plus 1GB of free space. If the node does not have enough space, you can modify the example yaml to use a Persistent Volume instead of \"empty dir\" ``` velero install \\ --provider aws \\ --plugins velero/velero-plugin-for-aws:v1.2.1 \\ --bucket velero \\ --secret-file ./credentials-velero \\ --use-volume-snapshots=false \\ --backup-location-config region=minio,s3ForcePathStyle=\"true\",s3Url=http://minio.velero.svc:9000 ``` This example assumes that it is running within a local cluster without a volume provider capable of snapshots, so no `VolumeSnapshotLocation` is created" }, { "data": "You may need to update AWS plugin version to one that is with the version of Velero you are installing. Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready. This example also assumes you have named your Minio bucket \"velero\". 
Deploy the example nginx application: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Check to see that both the Velero and nginx deployments are successfully created: ``` kubectl get deployments -l component=velero --namespace=velero kubectl get deployments --namespace=nginx-example ``` Create a backup for any object that matches the `app=nginx` label selector: ``` velero backup create nginx-backup --selector app=nginx ``` Alternatively if you want to backup all objects except those matching the label `backup=ignore`: ``` velero backup create nginx-backup --selector 'backup notin (ignore)' ``` (Optional) Create regularly scheduled backups based on a cron expression using the `app=nginx` label selector: ``` velero schedule create nginx-daily --schedule=\"0 1 *\" --selector app=nginx ``` Alternatively, you can use some non-standard shorthand cron expressions: ``` velero schedule create nginx-daily --schedule=\"@daily\" --selector app=nginx ``` See the for more usage examples. Simulate a disaster: ``` kubectl delete namespace nginx-example ``` To check that the nginx deployment and service are gone, run: ``` kubectl get deployments --namespace=nginx-example kubectl get services --namespace=nginx-example kubectl get namespace/nginx-example ``` You should get no results. NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up. Run: ``` velero restore create --from-backup nginx-backup ``` Run: ``` velero restore get ``` After the restore finishes, the output looks like the following: ``` NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none> ``` NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`. After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them. If there are errors or warnings, you can look at them in detail: ``` velero restore describe <RESTORE_NAME> ``` For more information, see . If you want to delete any backups you created, including data in object storage and persistent volume snapshots, you can run: ``` velero backup delete BACKUP_NAME ``` This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label" }, { "data": "Once fully removed, the backup is no longer visible when you run: ``` velero backup get BACKUP_NAME ``` To completely uninstall Velero, minio, and the nginx example app from your Kubernetes cluster: ``` kubectl delete namespace/velero clusterrolebinding/velero kubectl delete crds -l component=velero kubectl delete -f examples/nginx-app/base.yaml ``` When you run commands to get logs or describe a backup, the Velero server generates a pre-signed URL to download the requested items. To access these URLs from outside the cluster -- that is, from your Velero client -- you need to make Minio available outside the cluster. You can: Change the Minio Service type from `ClusterIP` to `NodePort`. Set up Ingress for your cluster, keeping Minio Service type `ClusterIP`. You can also specify a `publicUrl` config field for the pre-signed URL in your backup storage location config. The Minio deployment by default specifies a Service of type `ClusterIP`. 
You can change this to `NodePort` to easily expose a cluster service externally if you can reach the node from your Velero client. You must also get the Minio URL, which you can then specify as the value of the `publicUrl` field in your backup storage location config. In `examples/minio/00-minio-deployment.yaml`, change the value of Service `spec.type` from `ClusterIP` to `NodePort`. Get the Minio URL: if you're running Minikube: ```shell minikube service minio --namespace=velero --url ``` in any other environment: Get the value of an external IP address or DNS name of any node in your cluster. You must be able to reach this address from the Velero client. Append the value of the NodePort to get a complete URL. You can get this value by running: ```shell kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}' ``` Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URLFROMPREVIOUS_STEP>` as a field under `spec.config`. You must include the `http://` or `https://` prefix. If you're using Minio with HTTPS, you may see unintelligible text in the output of `velero describe`, or `velero logs` commands. To fix this, you can add a public URL to the `BackupStorageLocation`. In a terminal, run the following: ```shell kubectl patch -n velero backupstoragelocation default --type merge -p '{\"spec\":{\"config\":{\"publicUrl\":\"https://<a public IP for your Minio instance>:9000\"}}}' ``` If your certificate is self-signed, see the . Kubernetes in Docker does not have support for NodePort services (see ). In this case, you can use a port forward to access the Minio bucket. In a terminal, run the following: ```shell MINIO_POD=$(kubectl get pods -n velero -l component=minio -o jsonpath='{.items[0].metadata.name}') kubectl port-forward $MINIO_POD -n velero 9000:9000 ``` Then, in another terminal: ```shell kubectl edit backupstoragelocation default -n velero ``` Add `publicUrl: http://localhost:9000` under the `spec.config` section. Configuring Ingress for your cluster is out of scope for the Velero documentation. If you have already set up Ingress, however, it makes sense to continue with it while you run the example Velero configuration with Minio. In this case: Keep the Service type as `ClusterIP`. Edit your `BackupStorageLocation` YAML, adding `publicUrl: <URLANDPORTOFINGRESS>` as a field under `spec.config`." } ]
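As an illustration of the NodePort approach described above, the following sketch builds the `publicUrl` from a node address and the Minio NodePort, then patches it into the default `BackupStorageLocation`. The node IP shown is a placeholder; substitute an address that your Velero client can actually reach.

```shell
NODE_IP=203.0.113.10   # placeholder: any node address reachable from the Velero client
NODE_PORT=$(kubectl -n velero get svc/minio -o jsonpath='{.spec.ports[0].nodePort}')

kubectl patch -n velero backupstoragelocation default --type merge \
  -p "{\"spec\":{\"config\":{\"publicUrl\":\"http://${NODE_IP}:${NODE_PORT}\"}}}"
```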
{ "category": "Runtime", "file_name": "minio.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Logging Guidelines ================== controller-runtime uses a kind of logging called structured logging. If you've used a library like Zap or logrus before, you'll be familiar with the concepts we use. If you've only used a logging library like the \"log\" package (in the Go standard library) or \"glog\" (in Kubernetes), you'll need to adjust how you think about logging a bit. With structured logging, we associate a constant log message with some variable key-value pairs. For instance, suppose we wanted to log that we were starting reconciliation on a pod. In the Go standard library logger, we might write: ```go log.Printf(\"starting reconciliation for pod %s/%s\", podNamespace, podName) ``` In controller-runtime, we'd instead write: ```go logger.Info(\"starting reconciliation\", \"pod\", req.NamespacedName) ``` or even write ```go func (r *Reconciler) Reconcile(req reconcile.Request) (reconcile.Response, error) { logger := logger.WithValues(\"pod\", req.NamespacedName) // do some stuff logger.Info(\"starting reconciliation\") } ``` Notice how we've broken out the information that we want to convey into a constant message (`\"starting reconciliation\"`) and some key-value pairs that convey variable information (`\"pod\", req.NamespacedName`). We've there-by added \"structure\" to our logs, which makes them easier to save and search later, as well as correlate with metrics and events. All of controller-runtime's logging is done via , a generic interface for structured logging. You can use whichever logging library you want to implement the actual mechanics of the logging. controller-runtime provides some helpers to make it easy to use as the implementation. You can configure the logging implementation using `\"sigs.k8s.io/controller-runtime/pkg/log\".SetLogger`. That package also contains the convenience functions for setting up Zap. You can get a handle to the \"root\" logger using `\"sigs.k8s.io/controller-runtime/pkg/log\".Log`, and can then call `WithName` to create individual named loggers. You can call `WithName` repeatedly to chain names together: ```go logger := log.Log.WithName(\"controller\").WithName(\"replicaset\") // in reconcile... logger = logger.WithValues(\"replicaset\", req.NamespacedName) // later on in reconcile... logger.Info(\"doing things with pods\", \"pod\", newPod) ``` As seen above, you can also call `WithValue` to create a new sub-logger that always attaches some key-value pairs to a logger. Finally, you can use `V(1)` to mark a particular log line as \"debug\" logs: ```go logger.V(1).Info(\"this is particularly verbose!\", \"state of the world\", allKubernetesObjectsEverywhere) ``` While it's possible to use higher log levels, it's recommended that you stick with `V(1)` or `V(0)` (which is equivalent to not specifying `V`), and then filter later based on key-value pairs or messages; different numbers tend to lose meaning easily over time, and you'll be left wondering why particular logs lines are at `V(5)` instead of `V(7)`. Errors should always be logged with" }, { "data": "which allows logr implementations to provide special handling of errors (for instance, providing stack traces in debug mode). It's acceptable to log call `log.Error` with a nil error object. This conveys that an error occurred in some capacity, but that no actual `error` object was involved. Errors returned by the `Reconcile` implementation of the `Reconciler` interface are commonly logged as a `Reconciler error`. 
It's a developer choice to create an additional error log in the `Reconcile` implementation so a more specific file name and line for the error are returned. Don't put variable content in your messages -- use key-value pairs for that. Never use `fmt.Sprintf` in your message. Try to match the terminology in your messages with your key-value pairs -- for instance, if you have a key-value pairs `api version`, use the term `APIVersion` instead of `GroupVersion` in your message. Kubernetes objects should be logged directly, like `log.Info(\"this is a Kubernetes object\", \"pod\", somePod)`. controller-runtime provides a special encoder for Zap that will transform Kubernetes objects into `name, namespace, apiVersion, kind` objects, when available and not in development mode. Other logr implementations should implement similar logic. Use lower-case, space separated keys. For example `object` for objects, `api version` for `APIVersion` Be consistent across your application, and with controller-runtime when possible. Try to be brief but descriptive. Match terminology in keys with terminology in the message. Be careful logging non-Kubernetes objects verbatim if they're very large. Kinds should not be logged alone (they're meaningless alone). Use a `GroupKind` object to log them instead, or a `GroupVersionKind` when version is relevant. If you need to log an API version string, use `api version` as the key (formatted as with a `GroupVersion`, or as received directly from API discovery). If code works with a generic Kubernetes `runtime.Object`, use the `object` key. For specific objects, prefer the resource name as the key (e.g. `pod` for `v1.Pod` objects). For non-Kubernetes objects, the `object` key may also be used, if you accept a generic interface. When logging a raw type, log it using the `type` key, with a value of `fmt.Sprintf(\"%T\", typ)` If there's specific context around a type, the key may be more specific, but should end with `type` -- for instance, `OwnerType` should be logged as `owner` in the context of `log.Error(err, \"Could not get ObjectKinds for OwnerType\", `owner type`, fmt.Sprintf(\"%T\"))`. When possible, favor communicating kind instead. When logging multiple things, simply pluralize the key. Reconcile requests should be logged as `request`, although normal code should favor logging the key. Reconcile keys should be logged as with the same key as if you were logging the object directly (e.g. `log.Info(\"reconciling pod\", \"pod\", req.NamespacedName)`). This ends up having a similar effect to logging the object directly." } ]
{ "category": "Runtime", "file_name": "TMP-LOGGING.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Managing Domains menu_order: 30 search_type: Documentation The following topics are discussed: If you don't supply a domain search path (with `--dns-search=`), Weave Net (via the or via `weave attach`) tells a container to look for \"bare\" hostnames, like `pingme`, in its own domain (or in `weave.local` if it has no domain). If you want to supply other entries for the domain search path, e.g. if you want containers in different sub-domains to resolve hostnames across all sub-domains plus some external domains, you need also to supply the `weave.local` domain to retain the above behaviour. ``` docker run -ti \\ --dns-search=zone1.weave.local --dns-search=zone2.weave.local \\ --dns-search=corp1.com --dns-search=corp2.com \\ --dns-search=weave.local weaveworks/ubuntu ``` By default, weaveDNS uses `weave.local.` as the domain for names on the Weave network. In general users do not need to change this domain, but you can force weaveDNS to use a different domain by launching it with the `--dns-domain` argument. For example, ``` $ weave launch --dns-domain=\"mycompany.local.\" ``` The local domain should end with `local.`, since these names are link-local as per , (though this is not strictly necessary). *" } ]
{ "category": "Runtime", "file_name": "managing-domains-weavedns.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Uninstalling Velero\" layout: docs If you would like to completely uninstall Velero from your cluster, the following commands will remove all resources created by `velero install`: ```bash velero uninstall ```" } ]
{ "category": "Runtime", "file_name": "uninstalling.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The code in the following examples are verified on wasmedge-sys v0.10.0 wasmedge-types v0.3.0 In this example, we'll demonstrate how to use the APIs of `Vm` to Create Wasi and WasmEdgeProcess module instances implicitly by using a `Config` while creating a `Vm`. ```rust // create a Config context let mut config = Config::create()?; config.bulkmemoryoperations(true); assert!(config.bulkmemoryoperations_enabled()); config.wasi(true); assert!(config.wasi_enabled()); config.wasmedge_process(true); assert!(config.wasmedgeprocessenabled()); // create a Vm context with the given Config and Store let mut vm = Vm::create(Some(config), None)?; ``` Retrieve the Wasi and WasmEdgeProcess module instances from the `Vm`. ```rust // get the default Wasi module let wasiinstance = vm.wasimodule_mut()?; asserteq!(wasiinstance.name(), \"wasisnapshotpreview1\"); // get the default WasmEdgeProcess module instance let wasmedgeprocessinstance = vm.wasmedgeprocessmodule_mut()?; asserteq!(wasmedgeprocessinstance.name(), \"wasmedgeprocess\"); ``` Register an import module as a named module into the `Vm`. ```rust // create ImportModule instance let modulename = \"externmodule\"; let mut import = ImportModule::create(module_name)?; // a function to import fn realadd(frame: &CallingFrame, inputs: Vec<WasmValue>) -> Result<Vec<WasmValue>, u8> { if inputs.len() != 2 { return Err(1); } let a = if inputs[0].ty() == ValType::I32 { inputs[0].to_i32() } else { return Err(2); }; let b = if inputs[1].ty() == ValType::I32 { inputs[1].to_i32() } else { return Err(3); }; let c = a + b; Ok(vec![WasmValue::from_i32(c)]) } // add host function let func_ty = FuncType::create(vec![ValType::I32; 2], vec![ValType::I32])?; let hostfunc = Function::create(&functy, Box::new(real_add), 0)?; import.addfunc(\"add\", hostfunc); // add table let table_ty = TableType::create(RefType::FuncRef, 0..=u32::MAX)?; let table = Table::create(&table_ty)?; import.add_table(\"table\", table); // add memory let mem_ty = MemType::create(0..=u32::MAX)?; let memory = Memory::create(&mem_ty)?; import.add_memory(\"mem\", memory); // add global let ty = GlobalType::create(ValType::F32, Mutability::Const)?; let global = Global::create(&ty, WasmValue::from_f32(3.5))?; import.add_global(\"global\", global); // register the import module as a named module vm.registerwasmfrom_import(ImportObject::Import(import))?; ``` Retrieve the internal `Store` instance from the `Vm`, and retrieve the named module instance from the `Store` instance. ```rust let mut store = vm.store_mut()?; let namedinstance = store.module(modulename)?; assert!(namedinstance.getfunc(\"add\").is_ok()); assert!(namedinstance.gettable(\"table\").is_ok()); assert!(namedinstance.getmemory(\"mem\").is_ok()); assert!(namedinstance.getglobal(\"global\").is_ok()); ``` Register an active module into the `Vm`. ```rust // read the wasm bytes let wasm_bytes = wat2wasm( br#\" (module (export \"fib\" (func $fib)) (func $fib (param $n i32) (result i32) (if (i32.lt_s (get_local $n) (i32.const 2) ) (return (i32.const 1) ) ) (return (i32.add (call $fib (i32.sub (get_local $n) (i32.const 2) ) ) (call $fib (i32.sub (get_local $n) (i32.const 1) ) ) ) ) )" }, { "data": "\"#, )?; // load a wasm module from a in-memory bytes, and the loaded wasm module works as an anonymous // module (aka. 
active module in WasmEdge terminology) vm.loadwasmfrombytes(&wasmbytes)?; // validate the loaded active module vm.validate()?; // instantiate the loaded active module vm.instantiate()?; // get the active module instance let activeinstance = vm.activemodule()?; assert!(activeinstance.getfunc(\"fib\").is_ok()); ``` Retrieve the active module from the `Vm`. ```rust // get the active module instance let activeinstance = vm.activemodule()?; assert!(activeinstance.getfunc(\"fib\").is_ok()); ``` The complete code in this demo can be found on . In this example, we'll demonstrate how to use the APIs of `Executor` to Create an `Executor` and a `Store`. ```rust // create an Executor context let mut executor = Executor::create(None, None)?; // create a Store context let mut store = Store::create()?; ``` Register an import module into the `Executor`. ```rust // read the wasm bytes let wasm_bytes = wat2wasm( br#\" (module (export \"fib\" (func $fib)) (func $fib (param $n i32) (result i32) (if (i32.lt_s (get_local $n) (i32.const 2) ) (return (i32.const 1) ) ) (return (i32.add (call $fib (i32.sub (get_local $n) (i32.const 2) ) ) (call $fib (i32.sub (get_local $n) (i32.const 1) ) ) ) ) ) ) \"#, )?; // load module from a wasm file let config = Config::create()?; let loader = Loader::create(Some(config))?; let module = loader.frombytes(&wasmbytes)?; // validate module let config = Config::create()?; let validator = Validator::create(Some(config))?; validator.validate(&module)?; // register a wasm module into the store context let module_name = \"extern\"; let namedinstance = executor.registernamedmodule(&mut store, &module, modulename)?; assert!(namedinstance.getfunc(\"fib\").is_ok()); ``` Register an active module into the `Executor`. ```rust // read the wasm bytes let wasm_bytes = wat2wasm( br#\" (module (export \"fib\" (func $fib)) (func $fib (param $n i32) (result i32) (if (i32.lt_s (get_local $n) (i32.const 2) ) (return (i32.const 1) ) ) (return (i32.add (call $fib (i32.sub (get_local $n) (i32.const 2) ) ) (call $fib (i32.sub (get_local $n) (i32.const 1) ) ) ) ) ) ) \"#, )?; // load module from a wasm file let config = Config::create()?; let loader = Loader::create(Some(config))?; let module = loader.frombytes(&wasmbytes)?; // validate module let config = Config::create()?; let validator = Validator::create(Some(config))?; validator.validate(&module)?; // register a wasm module as an active module let activeinstance = executor.registeractive_module(&mut store, &module)?; assert!(activeinstance.getfunc(\"fib\").is_ok()); ``` The complete code in this demo can be found in ." } ]
{ "category": "Runtime", "file_name": "how_to_use_module_instance.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "(network-forwards)= ```{note} Network forwards are available for the {ref}`network-ovn` and the {ref}`network-bridge`. ``` Network forwards allow an external IP address (or specific ports on it) to be forwarded to an internal IP address (or specific ports on it) in the network that the forward belongs to. This feature can be useful if you have limited external IP addresses and want to share a single external address between multiple instances. There are two different ways how you can use network forwards in this case: Forward all traffic from the external address to the internal address of one instance. This method makes it easy to move the traffic destined for the external address to another instance by simply reconfiguring the network forward. Forward traffic from different port numbers of the external address to different instances (and optionally different ports on those instances). This method allows to \"share\" your external IP address and expose more than one instance at a time. Use the following command to create a network forward: ```bash incus network forward create <networkname> <listenaddress> [configuration_options...] ``` Each forward is assigned to a network. It requires a single external listen address (see {ref}`network-forwards-listen-addresses` for more information about which addresses can be forwarded, depending on the network that you are using). You can specify an optional default target address by adding the `targetaddress=<IPaddress>` configuration option. If you do, any traffic that does not match a port specification is forwarded to this address. Note that this target address must be within the same subnet as the network that the forward is associated to. Network forwards have the following properties: Property | Type | Required | Description :-- | :-- | :-- | :-- `listen_address` | string | yes | IP address to listen on `description` | string | no | Description of the network forward `config` | string set | no | Configuration options as key/value pairs (only `target_address` and `user.*` custom keys supported) `ports` | port list | no | List of {ref}`port specifications <network-forwards-port-specifications>` (network-forwards-listen-addresses)= The requirements for valid listen addresses vary depending on which network type the forward is associated" }, { "data": "Bridge network : - Any non-conflicting listen address is allowed. The listen address must not overlap with a subnet that is in use with another network. OVN network : - Allowed listen addresses must be defined in the uplink network's `ipv{n}.routes` settings or the project's {config:option}`project-restricted:restricted.networks.subnets` setting (if set). The listen address must not overlap with a subnet that is in use with another network. (network-forwards-port-specifications)= You can add port specifications to the network forward to forward traffic from specific ports on the listen address to specific ports on the target address. This target address must be different from the default target address. It must be within the same subnet as the network that the forward is associated to. Use the following command to add a port specification: ```bash incus network forward port add <networkname> <listenaddress> <protocol> <listenports> <targetaddress> [<target_ports>] ``` You can specify a single listen port or a set of ports. If you want to forward the traffic to different ports, you have two options: Specify a single target port to forward traffic from all listen ports to this target port. 
Specify a set of target ports with the same number of ports as the listen ports to forward traffic from the first listen port to the first target port, the second listen port to the second target port, and so on. Network forward ports have the following properties: Property | Type | Required | Description :-- | :-- | :-- | :-- `protocol` | string | yes | Protocol for the port(s) (`tcp` or `udp`) `listen_port` | string | yes | Listen port(s) (e.g. `80,90-100`) `target_address` | string | yes | IP address to forward to `target_port` | string | no | Target port(s) (e.g. `70,80-90` or `90`), same as `listen_port` if empty `description` | string | no | Description of port(s) Use the following command to edit a network forward: ```bash incus network forward edit <network_name> <listen_address> ``` This command opens the network forward in YAML format for editing. You can edit both the general configuration and the port specifications. Use the following command to delete a network forward: ```bash incus network forward delete <network_name> <listen_address> ```" } ]
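As a quick illustration of the commands above, the following sketch shares one external address between two instances by forwarding different ports. The network name and the addresses are placeholders — use an address that is valid for your uplink or bridge network.

```bash
# forward all unmatched traffic on the external address to one instance
incus network forward create incusbr0 192.0.2.10 target_address=10.122.112.10

# send web traffic on the same external address to a second instance
incus network forward port add incusbr0 192.0.2.10 tcp 80,443 10.122.112.20

# review the resulting forward
incus network forward show incusbr0 192.0.2.10
```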
{ "category": "Runtime", "file_name": "network_forwards.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-operator-alibabacloud completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-operator-alibabacloud completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Share you story about the Kube-OVN journey with the community. | Type | Name | Website | Use-Case | |:-|:-|:-|:-| | Vendor | nix | https://aenix.io/ | nix provides consulting services for cloud providers and uses Kube-OVN as main CNI in free PaaS platform for running virtual machines and Kubernetes-as-a-Service. | | Vendor | Alauda Inc | https://www.alauda.cn/ | Alauda distributes Kube-OVN as part of the to provide enterprise level cloud native network. | | User | China Telecom CTYun | https://www.ctyun.cn/ | CTYun uses Kube-OVN as the cloud native SDN component in . | | User | China Yealink | https://www.yealink.com/ | Yealink uses Kube-OVN as the cloud native SDN component in Private Cloud. | | User | Inspur | https://www.inspur.com/ | Inspur uses Kube-OVN as the cloud native SDN component in Private Cloud. | | User | GPUHPC | https://gpuhpc.com/ | GPUHPC uses Kube-OVN as the cloud native SDN component in Edge Cloud. | | User | 99Cloud | https://www.99cloud.net/ | 99Cloud uses Kube-OVN to provider cloud native network for VM, Kata, VNF in Hybird Cloud. | | Vendor | Canonical | https://ubuntu.com/kubernetes | Canonical uses Kube-OVN inside of its Chramed Kube-OVN operator and is distributed on Charmed Kubernetes. |" } ]
{ "category": "Runtime", "file_name": "USERS.md", "project_name": "Kube-OVN", "subcategory": "Cloud Native Network" }
[ { "data": "(storage-ceph)= <!-- Include start Ceph intro --> is an open-source storage platform that stores its data in a storage cluster based on {abbr}`RADOS (Reliable Autonomic Distributed Object Store)`. It is highly scalable and, as a distributed system without a single point of failure, very reliable. Ceph provides different components for block storage and for file systems. <!-- Include end Ceph intro --> Ceph {abbr}`RBD (RADOS Block Device)` is Ceph's block storage component that distributes data and workload across the Ceph cluster. It uses thin provisioning, which means that it is possible to over-commit resources. <!-- Include start Ceph terminology --> Ceph uses the term object for the data that it stores. The daemon that is responsible for storing and managing data is the Ceph {abbr}`OSD (Object Storage Daemon)`. Ceph's storage is divided into pools, which are logical partitions for storing objects. They are also referred to as data pools, storage pools or OSD pools. <!-- Include end Ceph terminology --> Ceph block devices are also called RBD images, and you can create snapshots and clones of these RBD images. ```{note} To use the Ceph RBD driver, you must specify it as `ceph`. This is slightly misleading, because it uses only Ceph RBD (block storage) functionality, not full Ceph functionality. For storage volumes with content type `filesystem` (images, containers and custom file-system volumes), the `ceph` driver uses Ceph RBD images with a file system on top (see ). Alternatively, you can use the {ref}`CephFS <storage-cephfs>` driver to create storage volumes with content type `filesystem`. ``` <!-- Include start Ceph driver cluster --> Unlike other storage drivers, this driver does not set up the storage system but assumes that you already have a Ceph cluster installed. <!-- Include end Ceph driver cluster --> <!-- Include start Ceph driver remote --> This driver also behaves differently than other drivers in that it provides remote storage. As a result and depending on the internal network, storage access might be a bit slower than for local storage. On the other hand, using remote storage has big advantages in a cluster setup, because all cluster members have access to the same storage pools with the exact same contents, without the need to synchronize storage pools. <!-- Include end Ceph driver remote --> The `ceph` driver in Incus uses RBD images for images, and snapshots and clones to create instances and snapshots. <!-- Include start Ceph driver control --> Incus assumes that it has full control over the OSD storage pool. Therefore, you should never maintain any file system entities that are not owned by Incus in an Incus OSD storage pool, because Incus might delete them. <!-- Include end Ceph driver control --> Due to the way copy-on-write works in Ceph RBD, parent RBD images can't be removed until all children are gone. As a result, Incus automatically renames any objects that are removed but still referenced. Such objects are kept with a `zombie_` prefix until all references are gone and the object can safely be" }, { "data": "The `ceph` driver has the following limitations: Sharing custom volumes between instances : Custom storage volumes with {ref}`content type <storage-content-types>` `filesystem` can usually be shared between multiple instances different cluster members. 
However, because the Ceph RBD driver \"simulates\" volumes with content type `filesystem` by putting a file system on top of an RBD image, custom storage volumes can only be assigned to a single instance at a time. If you need to share a custom volume with content type `filesystem`, use the {ref}`CephFS <storage-cephfs>` driver instead. Sharing the OSD storage pool between installations : Sharing the same OSD storage pool between multiple Incus installations is not supported. Using an OSD pool of type \"erasure\" : To use a Ceph OSD pool of type \"erasure\", you must create the OSD pool beforehand. You must also create a separate OSD pool of type \"replicated\" that will be used for storing metadata. This is required because Ceph RBD does not support `omap`. To specify which pool is \"erasure coded\", set the configuration option to the erasure coded pool name and the configuration option to the replicated pool name. The following configuration options are available for storage pools that use the `ceph` driver and for storage volumes in these pools. (storage-ceph-pool-config)= Key | Type | Default | Description :-- | : | : | :- `ceph.cluster_name` | string | `ceph` | Name of the Ceph cluster in which to create new storage pools `ceph.osd.datapoolname` | string | - | Name of the OSD data pool `ceph.osd.pg_num` | string | `32` | Number of placement groups for the OSD storage pool `ceph.osd.pool_name` | string | name of the pool | Name of the OSD storage pool `ceph.rbd.clone_copy` | bool | `true` | Whether to use RBD lightweight clones rather than full dataset copies `ceph.rbd.du` | bool | `true` | Whether to use RBD `du` to obtain disk usage data for stopped instances `ceph.rbd.features` | string | `layering` | Comma-separated list of RBD features to enable on the volumes `ceph.user.name` | string | `admin` | The Ceph user to use when creating storage pools and volumes `source` | string | - | Existing OSD storage pool to use `volatile.pool.pristine` | string | `true` | Whether the pool was empty on creation time {{volume_configuration}} (storage-ceph-vol-config)= Key | Type | Condition | Default | Description :-- | : | :-- | : | :- `block.filesystem` | string | block-based volume with content type `filesystem` | same as `volume.block.filesystem` | {{block_filesystem}} `block.mountoptions` | string | block-based volume with content type `filesystem` | same as `volume.block.mountoptions` | Mount options for block-backed file system volumes `security.shared` | bool | custom block volume | same as `volume.security.shared` or `false` | Enable sharing the volume across multiple instances `security.shifted` | bool | custom volume | same as `volume.security.shifted` or `false` | {{enableIDshifting}} `security.unmapped` | bool | custom volume | same as `volume.security.unmapped` or `false` | Disable ID mapping for the volume `size` | string | | same as `volume.size` | Size/quota of the storage volume `snapshots.expiry` | string | custom volume | same as `volume.snapshots.expiry` | {{snapshotexpiryformat}} `snapshots.pattern` | string | custom volume | same as `volume.snapshots.pattern` or `snap%d` | {{snapshotpatternformat}} [^*] `snapshots.schedule` | string | custom volume | same as `volume.snapshots.schedule` | {{snapshotscheduleformat}}" } ]
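To tie the options above together, here is a hedged example of creating a storage pool that uses an existing Ceph cluster and launching an instance on it. The pool name, cluster name and OSD pool name are illustrative; Incus expects the Ceph cluster itself to already exist and to be reachable from all cluster members.

```bash
# create a storage pool backed by Ceph RBD
incus storage create remote ceph \
    ceph.cluster_name=ceph \
    ceph.osd.pool_name=incus \
    ceph.user.name=admin

# use it for an instance
incus launch images:debian/12 c1 --storage remote
```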
{ "category": "Runtime", "file_name": "storage_ceph.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Cleans up all resources (files, network objects) associated with a pod just like `rkt gc`. This command can be used to immediately free resources without waiting for garbage collection to run. ``` rkt rm c138310f ``` Instead of passing UUID on command line, rm command can read the UUID from a text file. This can be paired with `--uuid-file-save` to remove pods by name: ``` rkt run --uuid-file-save=/run/rkt-uuids/mypod ... rkt rm --uuid-file=/run/rkt-uuids/mypod ``` See the table with ." } ]
{ "category": "Runtime", "file_name": "rm.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: \"Velero Volume Snapshot Location\" layout: docs A volume snapshot location is the location in which to store the volume snapshots created for a backup. Velero can be configured to take snapshots of volumes from multiple providers. Velero also allows you to configure multiple possible `VolumeSnapshotLocation` per provider, although you can only select one location per provider at backup time. Each VolumeSnapshotLocation describes a provider + location. These are represented in the cluster via the `VolumeSnapshotLocation` CRD. Velero must have at least one `VolumeSnapshotLocation` per cloud provider. A sample YAML `VolumeSnapshotLocation` looks like the following: ```yaml apiVersion: velero.io/v1 kind: VolumeSnapshotLocation metadata: name: aws-default namespace: velero spec: provider: aws config: region: us-west-2 profile: \"default\" ``` The configurable parameters are as follows: {{< table caption=\"Main config parameters\" >}} | Key | Type | Default | Meaning | | | | | | | `provider` | String | Required Field | The name for whichever storage provider will be used to create/store the volume snapshots. See for the appropriate value to use. | | `config` | map string string | None (Optional) | Provider-specific configuration keys/values to be passed to the volume snapshotter plugin. See for details. | {{< /table >}}" } ]
{ "category": "Runtime", "file_name": "volumesnapshotlocation.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Release Instructions\" layout: docs toc: \"true\" This page covers the steps to perform when releasing a new version of Velero. Please read the documented variables in each script to understand what they are for and how to properly format their values. You will need to have an upstream remote configured to use to the repository. You can check this using `git remote -v`. The release script () will use `upstream` as the default remote name if it is not specified using the environment variable `REMOTE`. GA release: major and minor releases only. Example: 1.0 (major), 1.5 (minor). Pre-releases: Any release leading up to a GA. Example: 1.4.0-beta.1, 1.5.0-rc.1 RC releases: Release Candidate, contains everything that is supposed to ship with the GA release. This is still a pre-release. Velero is on a \"train leaves the station\" model for releases. We will generate a release candidate (RC) at the scheduled time. Multiple release candidates may be generated, depending on if bugs are found during testing. When testing has passed a release build will be generated. The release candidate commit must meet the following criteria: No major bugs outstanding Unit tests pass E2E tests against latest Kubernetes on AWS, vSphere and kind pass Once the release has moved to RC, a code freeze is in effect. Only changes needed to release are allowable. In order for a release candidate to be released, it must meet the following criteria: Unit tests pass E2E tests against latest K8S and earliest supported K8S on Azure, vSphere, Kind, AWS, GCP Manual tests pass (manual tests will be converted to automated testing) When bugs are identified by any of these methods, we will determine whether the bug is a release blocker or not and a fix generated if it is. When release blocker bugs identifies in an release candidate are fixed, another RC will be generated and the test cycle will restart. For each major or minor release, create and publish a blog post to let folks know what's new. Please follow these . If you encounter the error `You don't have enough free space in /var/cache/apt/archives/` when running `make serve-docs`: run `docker system prune`. If it doesn't already exist: in a branch, create the file `changelogs/CHANGELOG-<major>.<minor>.md` by copying the most recent one. Update the file `changelogs/CHANGELOG-<major>.<minor>.md` Run `make changelog` to generate a list of all unreleased changes. Copy/paste the output into `CHANGELOG-<major>.<minor>.md`, under the \"All Changes\" section for the release. You may choose to tweak formatting on the list of changes by adding code blocks, etc. Update links at the top of the file to point to the new release version Update the main `CHANGELOG.md` file to properly reference the release-specific changelog file Under \"Current release\": Should contain only the current GA" }, { "data": "Under \"Development release\": Should contain only the latest pre-release Move any prior pre-release into \"Older releases\" GA Only: Remove all changelog files from `changelogs/unreleased`. Generate new docs Run `make gen-docs`, passing the appropriate variables. Examples: a) `VELEROVERSION=v1.5.0-rc.1 NEWDOCS_VERSION=v1.5.0-rc.1 make gen-docs`. b) `VELEROVERSION=v1.5.0 NEWDOCS_VERSION=v1.5 make gen-docs`). Note: `PREVIOUSDOCSVERSION=<doc-version-to-copy-from>` is optional; when not set, it will default to the latest doc version. 
Clean up when there is an existing set of pre-release versioned docs for the version you are releasing Example: `site/content/docs/v1.5.0-beta.1` exists, and you're releasing `v1.5.0-rc.1` or `v1.5` Remove the directory containing the pre-release docs, i.e. `site/content/docs/<pre-release-version>`. Delete the pre-release docs table of contents file, i.e. `site/data/docs/<pre-release-version>-toc.yml`. Remove the pre-release docs table of contents mapping entry from `site/data/toc-mapping.yml`. Remove all references to the pre-release docs from `site/config.yml`. Create the \"Upgrade to $major.minor\" page if it does not already exist (). If it already exists, update any usage of the previous version string within this file to use the new version string instead (). This needs to be done in both the versioned and the `main` folders. Review and submit PR Follow the additional instructions at `site/README-HUGO.md` to complete the docs generation process. Do a review of the diffs, and/or run `make serve-docs` and review the site. Submit a PR containing the changelog and the version-tagged docs. The image of velero is built based on . For the reproducibility of the release, before the release candidate is tagged, we need to make sure the in the Dockerfile on the release branch, the base image is referenced by digest, such as https://github.com/vmware-tanzu/velero/blob/release-1.7/Dockerfile#L53-L54 Pre-requisite: PR with the changelog and docs is merged, so that it's included in the release tag. This process is the same for both pre-release and GA. Refer to the above for instructions. If the dry-run fails with random errors, try running it again. Manually create the release branch on Github, in the form like `release-$major.$minor` Create a tagged release in dry-run mode This won't push anything to GitHub. Run `VELEROVERSION=v1.9.0-rc.1 REMOTE=<upstream-remote> GITHUBTOKEN=REDACTED ONRELEASEBRANCH=TRUE ./hack/release-tools/tag-release.sh`. Fix any issue. Create a tagged release and push it to GitHub Run `VELEROVERSION=v1.9.0-rc.1 REMOTE=<upstream-remote> GITHUBTOKEN=REDACTED ONRELEASEBRANCH=TRUE ./hack/release-tools/tag-release.sh publish`. Publish the release Navigate to the draft GitHub release at https://github.com/vmware-tanzu/velero/releases and edit the release. If this is a patch release (e.g. `v1.9.1`), note that the full `CHANGELOG-1.9.md` contents will be included in the body of the GitHub release. You need to delete the previous releases' content (e.g. `v1.9.0`'s changelog) so that only the latest patch release's changelog shows. Do a quick review for formatting. Note: the `goreleaser` process should have detected if it's a pre-release version and, if so, checked the box at the bottom of the GitHub release page appropriately, but it's always worth double-checking. Verify that GitHub has built and pushed all the images (it takes a while): https://github.com/vmware-tanzu/velero/actions Verify that the images are on Docker Hub: https://hub.docker.com/r/velero/velero/tags Verify that the assets were published to the GitHub release Publish the release. Test the release By now, the Docker images should have been" }, { "data": "Perform a smoke-test - for example: Download the CLI from the GitHub release Use it to install Velero into a cluster (or manually update an existing deployment to use the new images) Verify that `velero version` shows the expected output Run a backup/restore and ensure it works These are the steps to update the Velero Homebrew version. 
If you don't already have one, create a Run `export HOMEBREWGITHUBAPITOKEN=yourtoken_here` on your command line to make sure that `brew` can work on GitHub on your behalf. Run `hack/release-tools/brew-update.sh`. This script will download the necessary files, do the checks, and invoke the brew helper to submit the PR, which will open in your browser. Update Windows Chocolatey version. From a Windows computer, follow the step-by-step instructions to To release plugins maintained by the Velero team, follow the . After the plugin images are built, be sure to update any that use these plugins. Update the CRDs under helm chart folder `crds` according to the current Velero GA version, and add the labels for the helm chart CRDs. For example: https://github.com/vmware-tanzu/helm-charts/pull/248. Bump the Chart version `version` on the `Chart.yaml`. Bump the Velero version `appVersion` on the `Chart.yaml` file and `tag` on the `values.yaml` file. Bump the plugin version on the `values.yaml` if needed. Update the upgrade instruction and related tag on the `README.md` file. What to include in a release blog: Thank all contributors for their involvement in the release. Where possible shoutout folks by name or consider spotlighting new maintainers. Highlight the themes, or areas of focus, for the release. Some examples of themes are security, bug fixes, feature improvements. See past Velero for more examples. Include summaries of new features or workflows introduced in a release. This can also include new project initiatives, like a code-of-conduct update. Consider creating additional blog posts that go through new features in more detail. Plan to publish additional blogs after the release blog (all blogs dont have to be publish all at once). Release blog post PR: Prepare a PR containing the release blog post. Read the for more information on creating a blog post. It's usually easiest to make a copy of the most recent existing post, then replace the content as appropriate. You also need to update `site/index.html` to have \"Latest Release Information\" contain a link to the new post. Plan to publish the blog post the same day as the release. Once you are finished doing the release, let the rest of the world know it's available by posting messages in the following places. GA Only: Merge the blog post PR. Velero's Twitter account. Maintainers are encouraged to help spread the word by posting or reposting on social media. Community Slack channel. Google group message. What to include: Thank all contributors A brief list of highlights in the release Link to the release blog post, release notes, and/or github release page" } ]
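As a concrete sketch of the smoke test described earlier in these instructions (the version string, platform and namespace are placeholders), the check might look like this:

```bash
# download and unpack the released CLI from the GitHub release page
tar -xvf velero-v1.9.0-linux-amd64.tar.gz
cd velero-v1.9.0-linux-amd64

# the client version should report the new tag
./velero version --client-only

# exercise a basic backup against a test cluster
./velero backup create smoke-test --include-namespaces default
./velero backup get smoke-test
```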
{ "category": "Runtime", "file_name": "release-instructions.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, Longhorn does not enforce the upgrade path, even though we claim Longhorn only supports upgrading from the previous stable release, for example, upgrading to 1.5.x is only supported from 1.4.x or 1.5.0. Without upgrade enforcement, we will allow users to upgrade from any previous version. This will cause extra testing efforts to cover all upgrade paths. Additionally, the goal of this enhancement is to support rollback after upgrade failure and prevent downgrades. https://github.com/longhorn/longhorn/issues/5131 Enforce an upgrade path to prevent users from upgrading from any unsupported version. After rejecting the user's upgrade attempt, the user's Longhorn setup should remain intact without any impacts. Upgrade Longhorn from the authorized versions to a major release version. Support rollback the failed upgrade to the previous version. Prevent unexpected downgrade. Automatic rollback if the upgrade failed. When upgrading with `kubectl`, it will check the upgrade path at entry point of the pods for `longhorn-manager`, `longhorn-admission-webhook`, `longhorn-conversion-webhook` and `longhorn-recovery-backend`. When upgrading with `Helm` or as a `Rancher App Marketplace`, it will check the upgrade path by a `pre-upgrade` job of `Helm hook` As the admin, I want to upgrade Longhorn from x.y. or x.(y+1).0 to x.(y+1). by `kubectl`, `Helm` or `Rancher App Marketplace`, so that the upgrade should succeed. As the admin, I want to upgrade Longhorn from the previous authorized versions to a new major/minor version by `kubectl`, `Helm`, or `Rancher App Marketplace`, so that the upgrade should succeed. As the admin, I want to upgrade Longhorn from x.(y-1). to x.(y+1). by 'kubectl', 'Helm' or 'Rancher App Marketplace', so that the upgrade should be prevented and the system with the current version continues running w/o any interruptions. As the admin, I want to roll back Longhorn from the failed upgrade to the previous install by `kubectl`, `Helm`, or `Rancher App Marketplace`, so that the rollback should succeed. As the admin, I want to downgrade Longhorn to any lower version by `kubectl`, `Helm`, or `Rancher App Marketplace`, so that the downgrade should be prevented and the system with the current version continues running w/o any interruptions. Install Longhorn on any Kubernetes cluster by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.y.*/deploy/longhorn.yaml ``` or ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.(y+1).0/deploy/longhorn.yaml ``` After Longhorn works normally, upgrade Longhorn by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.(y+1).*/deploy/longhorn.yaml ``` It will be allowed and Longhorn will be upgraded successfully. Install Longhorn x.y. or x.(y+1).0 with Helm as or install Longhorn x.y. or" }, { "data": "with a Rancher Apps as Upgrade to Longhorn x.(y+1). with Helm as or upgrade to Longhorn x.(y+1). with a Rancher Catalog App as It will be allowed and Longhorn will be upgraded successfully. Install Longhorn on any Kubernetes cluster by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.y.*/deploy/longhorn.yaml ``` After Longhorn works normally, upgrade Longhorn by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v(x+1).0.*/deploy/longhorn.yaml ``` It will be allowed and Longhorn will be upgraded successfully. 
Install Longhorn x.y. with Helm such as or install Longhorn x.y. with a Rancher Apps as Upgrade to Longhorn (x+1).0. with Helm as or upgrade to Longhorn (x+1).0. with a Rancher Catalog App as It will be allowed and Longhorn will be upgraded successfully. Install Longhorn on any Kubernetes cluster by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.(y-1).*/deploy/longhorn.yaml ``` After Longhorn works normally, upgrade Longhorn by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.(y+1).*/deploy/longhorn.yaml ``` It will be not allowed and Longhorn will block the upgrade for `longhorn-manager`, `longhorn-admission-webhook`, `longhorn-conversion-webhook` and `longhorn-recovery-backend`. Users need to roll back Longhorn manually to restart `longhorn-manager` pods. Install Longhorn x.(y-1). with Helm as or install Longhorn x.(y-1). with a Rancher Apps as Upgrade to Longhorn x.(y+1). with Helm as or upgrade to Longhorn x.(y+1). with a Rancher Catalog App as It will not be allowed and a `pre-upgrade`job of `Helm hook` failed makes the whole helm upgrading process failed. Longhorn is intact and continues serving. Users need to recover Longhorn by using this command again: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/[previous installed version]/deploy/longhorn.yaml ``` Longhorn will be rolled back successfully. And users might need to delete new components introduced by new version Longhorn manually. Users need to recover Longhorn with `Helm` by using commands: ```shell helm history longhorn # to get previous installed Longhorn REVISION helm rollback longhorn [REVISION] ``` or ```shell helm upgrade longhorn longhorn/longhorn --namespace longhorn-system --version [previous installed version] ``` Users need to recover Longhorn with `Rancher Catalog Apps` by upgrading the previous installed Longhorn version at `Rancher App Marketplace` again. Longhorn will be rolled back successfully. When users try to upgrade Longhorn from v1.3.x to v1.5.x, a new deployment `longhorn-recovery-backend` will be introduced and the upgrade will fail. Users need to delete the deployment `longhorn-recovery-backend` manually after rolling back Longhorn Install Longhorn on any Kubernetes cluster by using this command: ```shell kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/vx.y.*/deploy/longhorn.yaml ``` After Longhorn works normally, upgrade Longhorn by using this command: ```shell kubectl apply -f" }, { "data": "``` It will be not allowed and Longhorn will block the downgrade for `longhorn-manager`. [or `longhorn-admission-webhook`, `longhorn-conversion-webhook` and `longhorn-recovery-backend` if downgrading version had these components] Users need to roll back Longhorn manually to restart `longhorn-manager` pods. Install Longhorn x.y. with Helm as or install Longhorn x.y. with a Rancher Apps as Downgrade to Longhorn (x-z).y. or x.(y-z). with Helm as or downgrade to Longhorn (x-z).y. or x.(y-z). with a Rancher Catalog App as It will not be allowed and a `pre-upgrade`job of `Helm hook` failed makes the whole helm downgrading process failed. Longhorn is intact and continues serving. 
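As a concrete illustration of the rollback-plus-cleanup flow described above (a sketch, not an official procedure): the namespace and the `longhorn-recovery-backend` deployment come from the text, while the version strings are placeholders for whatever was previously installed.

```shell
# kubectl-based install: re-apply the previously installed manifest.
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.3.3/deploy/longhorn.yaml

# Remove components that only exist in the newer release, e.g. the recovery
# backend deployment introduced in v1.5.x (see the note above).
kubectl -n longhorn-system delete deployment longhorn-recovery-backend --ignore-not-found

# Helm-based install: roll back to the last good revision instead.
helm history longhorn -n longhorn-system
helm rollback longhorn <REVISION> -n longhorn-system
```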
`None` Check whether the upgrade path is supported at the entry point of `longhorn-manager`, `longhorn-admission-webhook`, `longhorn-conversion-webhook` and `longhorn-recovery-backend`:

Get the currently installed Longhorn version `currentVersion` via the function `GetCurrentLonghornVersion`.
Get the version being upgraded to, `upgradeVersion`, from `meta.Version`.
Compare `currentVersion` and `upgradeVersion` and only allow authorized version upgrades (e.g., 1.3.x to 1.5.x is not allowed), as shown in the following table (rows marked `X` are rejected):

| currentVersion | upgradeVersion | Allow |
| :-: | :-: | :-: |
| x.y.* | x.(y+1).* | |
| x.y.0 | x.y.* | |
| x.y.* | (x+1).y.* | |
| x.(y-1).* | x.(y+1).* | X |
| x.(y-2).* | x.(y+1).* | X |
| x.y.* | x.(y-1).* | X |
| x.y.* | x.y.*(-1) | X |

Downgrade is not allowed. When the upgrade path is not supported, newly created pods of `longhorn-manager`, `longhorn-admission-webhook`, `longhorn-conversion-webhook` and `longhorn-recovery-backend` will log and broadcast events stating that the upgrade path is not supported, and return errors. The previously installed Longhorn will keep working normally.

Add a new job for the Helm `pre-upgrade` hook:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    \"helm.sh/hook\": pre-upgrade
    \"helm.sh/hook-delete-policy\": hook-succeeded,before-hook-creation,hook-failed
  name: longhorn-pre-upgrade
  ...
spec:
  ...
  template:
    metadata:
      name: longhorn-pre-upgrade
      ...
    spec:
      containers:
      - name: longhorn-post-upgrade
        ...
        command:
        - longhorn-manager
        - pre-upgrade
        env:
        ...
```

When the upgrade starts, the `pre-upgrade` job runs first; if the upgrade path is not supported, the job fails and the whole Helm upgrade fails.

Supported-upgrade test: Install Longhorn v1.4.x. Wait for all pods to be ready. Create a volume and write some data. Upgrade to Longhorn v1.5.0. Wait for all pods to be upgraded successfully. Check that the data is not corrupted.

Unsupported-upgrade test: Install Longhorn v1.3.x. Wait for all pods to be ready. Create a volume and write some data. Upgrade to Longhorn v1.5.0. The upgrade process will be stuck or fail. Check that the data is not corrupted. Roll back to Longhorn v1.3.x with the same settings. Longhorn v1.3.x will work normally.

`None` `None`" } ]
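To make the comparison in the table concrete, here is a rough pre-flight check an operator could run before applying a new manifest. It assumes the `longhorn-manager` DaemonSet's image tag reflects the installed release (true for standard installs, but worth verifying), and it only covers the minor-version rows of the table; the target version is a placeholder.

```shell
IMAGE=$(kubectl -n longhorn-system get daemonset longhorn-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}')
CURRENT="${IMAGE##*:}"   # tag portion, e.g. v1.4.2
TARGET="v1.5.1"          # placeholder: the version you are about to apply

cur_major=$(echo "${CURRENT#v}" | cut -d. -f1)
cur_minor=$(echo "${CURRENT#v}" | cut -d. -f2)
tgt_major=$(echo "${TARGET#v}" | cut -d. -f1)
tgt_minor=$(echo "${TARGET#v}" | cut -d. -f2)

# Within the same major version, only the same minor or the next minor is a
# supported jump; skipped minors and downgrades will be rejected at pod start.
if [ "$tgt_major" -eq "$cur_major" ] && \
   { [ "$tgt_minor" -eq "$cur_minor" ] || [ "$tgt_minor" -eq $((cur_minor + 1)) ]; }; then
  echo "upgrade path ${CURRENT} -> ${TARGET} looks supported"
else
  echo "upgrade path ${CURRENT} -> ${TARGET} may be rejected by longhorn-manager"
fi
```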
{ "category": "Runtime", "file_name": "20230315-upgrade-path-enforcement.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-bugtool completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-bugtool completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-bugtool_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "carina-scheduler can scheduling pods based on usage and capacity of nodes' disks. ```yaml config.json: |- { \"diskScanInterval\": \"300\", # disk scan intervals in seconds. Zero will disable scanning. \"diskGroupPolicy\": \"type\", # disk group policy \"schedulerStrategy\": \"spreadout\" # scheduler policy, supports binpack and spreadout. } ``` For schedulerStrategy, In case of `storageclass volumeBindingMode:Immediate`, the scheduler will only consider nodes' disk usage. For example, with `spreadout` policy, carina scheduler will pick the node with the largest disk capacity to create volume. In case of `schedulerStrategy``storageclass volumeBindingMode:WaitForFirstConsumer`, carina scheduler only affects the pod scheduleing by providing its rank. Kube-scheduler will pick a node finally. User can learn detailed messages in carina-scheduler's log. When multiples nodes have valid capacity ten times larger than requested, those node will share the same rank. Notethere is an carina webhook that will change the pod scheduler to carina-scheduler if it uses carina PVC." } ]
{ "category": "Runtime", "file_name": "capacity-scheduler.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "Self-Heal Daemon ================ The self-heal daemon (shd) is a glusterfs process that is responsible for healing files in a replicate/ disperse gluster volume. Every server (brick) node of the volume runs one instance of the shd. So even if one node contains replicate/ disperse bricks of multiple volumes, it would be healed by the same shd. This document only describes how the shd works for replicate (AFR) volumes. The shd is launched by glusterd when the volume starts (only if the volume includes a replicate configuration). The graph of the shd process in every node contains the following: The io-stats which is the top most xlator, its children being the replicate xlators (subvolumes) of only the bricks present in that particular node, and finally all the client xlators that are the children of the replicate xlators. The shd does two types of self-heal crawls: Index heal and Full heal. For both these types of crawls, the basic idea is the same: For each file encountered while crawling, perform metadata, data and entry heals under appropriate locks. An overview of how each of these heals is performed is detailed in the 'Self-healing' section of doc/features/afr-v1.md* The different file locks which the shd takes for each of these heals is detailed in doc/developer -guide/afr-locks-evolution.md* Metadata heal refers to healing extended attributes, mode and permissions of a file or directory. Data heal refers to healing the file contents. Entry self-heal refers to healing entries inside a directory. Index heal ========== The index heal is done: a) Every 600 seconds (can be changed via the `cluster.heal-timeout` volume option) b) When it is explicitly triggered via the `gluster vol heal <VOLNAME>` command c) Whenever a replica brick that was down comes back up. Only one heal can be in progress at one time, irrespective of reason why it was triggered. If another heal is triggered before the first one completes, it will be queued. Only one heal can be queued while the first one is running. If an Index heal is queued, it can be overridden by queuing a Full heal and not vice-versa. Also, before processing each entry in index heal, a check is made if a full heal is queued. If it is, then the index heal is aborted so that the full heal can proceed. In index heal, each shd reads the entries present inside .glusterfs/indices/xattrop/ folder and triggers heal on each entry with appropriate locks. The .glusterfs/indices/xattrop/ directory contains a base entry of the name \"xattrop-<virtual-gfid-string>\". All other entries are hardlinks to the base entry. The names of the hardlinks are the gfid strings of the files that may need" }, { "data": "When a client (mount) performs an operation on the file, the index xlator present in each brick process adds the hardlinks in the pre-op phase of the FOP's transaction and removes it in post-op phase if the operation is successful. Thus if an entry is present inside the .glusterfs/indices/xattrop/ directory when there is no I/O happening on the file, it means the file needs healing (or atleast an examination if the brick crashed after the post-op completed but just before the removal of the hardlink). <pre><code> In shd process of each node { opendir +readdir (.glusterfs/indices/xattrop/) for each entry inside it { selfhealentry() //Explained below. 
} } </code></pre> <pre><code> selfhealentry() { Call syncop_lookup(replicae subvolume) which eventually does { take appropriate locks determine source and sinks from AFR changelog xattrs perform whatever heal is needed (any of metadata, data and entry heal in that order) clear changelog xattrs and hardlink inside .glusterfs/indices/xattrop/ } } </code></pre> Note: If the gfid hardlink is present in the .glusterfs/indices/xattrop/ of both replica bricks, then each shd will try to heal the file but only one of them will be able to proceed due to the self-heal domain lock. While processing entries inside .glusterfs/indices/xattrop/, if shd encounters an entry whose parent is yet to be healed, it will skip it and it will be picked up in the next crawl. If a file is in data/ metadata split-brain, it will not be healed. If a directory is in entry split-brain, a conservative merge will be performed, wherein after the merge, the entries of the directory will be a union of the entries in the replica pairs. Full heal ========= A full heal is triggered by running `gluster vol heal <VOLNAME> full`. This command is usually run in disk replacement scenarios where the entire data is to be copied from one of the healthy bricks of the replica to the brick that was just replaced. Unlike the index heal which runs on the shd of every node in a replicate subvolume, the full heal is run only on the shd of one node per replicate subvolume: the node having the highest UUID. i.e In a 2x2 volume made of 4 nodes N1, N2, N3 and N4, If UUID of N1>N2 and UUID N4 >N3, then the full crawl is carried out by the shds of N1 and N4.(Node UUID can be found in `/var/lib/glusterd/glusterd.info`) The full heal steps are almost identical to the index heal, except the heal is performed on each replica starting from the root of the volume: <pre><code> In shd process of highest UUID node per replica { opendir +readdir (\"/\") for each entry inside it { selfhealentry() if (entry == directory) { / Recurse/ again opendir+readdir (directory) followed by selfhealentry() of each entry. } } } </code></pre>" } ]
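The crawls described above can be observed and triggered from the CLI; a quick sketch follows (run on a server node — the volume name and brick path are placeholders).

```shell
VOL=myvol
BRICK=/data/brick1/myvol   # placeholder path of this node's brick

# Entries currently pending heal, per brick.
gluster volume heal "$VOL" info

# Trigger an index heal now instead of waiting for the next scheduled crawl.
gluster volume heal "$VOL"

# Full crawl from the root of the volume (e.g. after replacing a disk/brick).
gluster volume heal "$VOL" full

# The index the shd reads during index heal: one base xattrop entry plus
# hardlinks named after the gfids of files that may need healing.
ls -l "$BRICK/.glusterfs/indices/xattrop/" | head

# Adjust how often the shd wakes up for the index crawl (seconds).
gluster volume set "$VOL" cluster.heal-timeout 600
```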
{ "category": "Runtime", "file_name": "afr-self-heal-daemon.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Are you in the right place? For issues or feature requests, please create an issue in this repository. For general technical and non-technical questions, we are happy to help you on our . Did you already search the existing open issues for anything similar? --> Is this a bug report or feature request? <!-- Remove only one --> Bug Report Feature Request Bug Report Expected behavior: Deviation from expected behavior: How to reproduce it (minimal and precise): <!-- Please let us know any circumstances for reproduction of your bug. --> File(s) to submit: Cluster CR (custom resource), typically called `cluster.yaml`, if necessary Operator's logs, if necessary Crashing pod(s) logs, if necessary To get logs, use `kubectl -n <namespace> logs <pod name>` When pasting logs, always surround them with backticks or use the `insert code` button from the GitHub UI. Read . Feature Request Are there any similar features already existing: What should the feature do: What would be solved through this feature: Does this have an impact on existing features: Environment: OS (e.g. from /etc/os-release): Kernel (e.g. `uname -a`): Cloud provider or hardware configuration: Rook version (use `rook version` inside of a Rook Pod): Storage backend version (e.g. for ceph do `ceph -v`): Kubernetes version (use `kubectl version`): Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Storage backend status (e.g. for Ceph use `ceph health` in the ):" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Announcing Velero 1.1: Improved restic Support and More Visibility\" slug: announcing-velero-1.1 excerpt: For this release, weve focused on improving Veleros restic integration - making repository locks shorter lived, giving more visibility into restic repositories when migrating clusters, and expanding support to more volume types. author_name: Nolan Brubaker categories: ['velero','release'] tags: ['Velero Team', 'Nolan Brubaker'] Weve made big strides in improving Velero. Since our release of version 1.0 in May 2019, we have been hard at work improving our restic support and planning for the future of Velero. In addition, weve seen some helpful contributions from the community that will make life easier for all of our users. Also, the Velero community has reached 100 contributors! For this release, weve focused on improving Veleros restic integration: making repository locks shorter lived, giving more visibility into restic repositories when migrating clusters, and expanding support to more volume types. Additionally, we have made several quality-of-life improvements to the Velero deployment and client. Lets take a look at some of the highlights of this release. A big focus of our work this cycle was continuing to improve support for restic. To that end, weve fixed the following bugs: Prior to version 1.1, restic backups could be delayed or failed due to long-lived locks on the repository. Now, Velero removes stale locks from restic repositories every 5 minutes, ensuring they do not interrupt normal operations. Previously, the PodVolumeBackup custom resources that represented a restic backup within a cluster were not synchronized between clusters, making it unclear what restic volumes were available to restore into a new cluster. In version 1.1, these resources are synced into clusters, so they are more visible to you when you are trying to restore volumes. Originally, Velero would not validate the host path in which volumes were mounted on a given node. If a node did not expose the filesystem correctly, you wouldnt know about it until a backup failed. Now, Veleros restic server will validate that the directory structure is correct on startup, providing earlier feedback when its not. Veleros restic support is intended to work on a broad range of volume types. With the general release of the , Velero can now use restic to back up CSI volumes. Along with our bug fixes, weve provided an easier way to move restic backups between storage providers. Different providers often have different StorageClasses, requiring user intervention to make restores successfully complete. To make cross-provider moves simpler, weve introduced a StorageClass remapping" }, { "data": "It allows you to automatically translate one StorageClass on PersistentVolumeClaims and PersistentVolumes to another. You can read more about it in our . Weve also made several other enhancements to Velero that should benefit all users. Users sometimes ask about recommendations for Veleros resource allocation within their cluster. To help with this concern, weve added default resource requirements to the Velero Deployment and restic init containers, along with configurable requests and limits for the restic DaemonSet. All these values can be adjusted if your environment requires it. Weve also taken some time to improve Velero for the future by updating the Deployment and DaemonSet to use the apps/v1 API group, which will be the . 
This change means that `velero install` and the `velero plugin` commands will require Kubernetes 1.9 or later to work. Existing Velero installs will continue to work without needing changes, however. In order to help you better understand what resources have been backed up, weve added a list of resources in the `velero backup describe --details` command. This change makes it easier to inspect a backup without having to download and extract it. In the same vein, weve added the ability to put custom tags on cloud-provider snapshots. This approach should provide a better way to keep track of the resources being created in your cloud account. To add a label to a snapshot at backup time, use the `--labels` argument in the `velero backup create` command. Our final change for increasing visibility into your Velero installation is the `velero plugin get` command. This command will report all the plugins within the Velero deployment.. Velero has previously used a restore-only flag on the server to control whether a cluster could write backups to object storage. With Velero 1.1, weve now moved the restore-only behavior into read-only BackupStorageLocations. This move means that the Velero server can use a BackupStorageLocation as a source to restore from, but not for backups, while still retaining the ability to back up to other configured locations. In the future, the `--restore-only` flag will be removed in favor of configuring read-only BackupStorageLocations. We appreciate all community contributions, whether they be pull requests, bug reports, feature requests, or just questions. With this release, we wanted to draw attention to a few contributions in particular: For users of node-based IAM authentication systems such as kube2iam, `velero install` now supports the `--pod-annotations` argument for applying necessary annotations at install" }, { "data": "This support should make `velero install` more flexible for scenarios that do not use Secrets for access to their cloud buckets and volumes. You can read more about how to use this new argument in our . Huge thanks to for this contribution. Structured logging is important for any application, and Velero is no different. Starting with version 1.1, the Velero server can now output its logs in a JSON format, allowing easier parsing and ingestion. Thank you to for this feature. AWS supports multiple profiles for accessing object storage, but in the past Velero only used the default. With v.1.1, you can set the `profile` key on yourBackupStorageLocation to specify an alternate profile. If no profile is set, the default one is used, making this change backward compatible. Thanks for this change. Finally, thanks to testing by and , an issue with running Velero in non-default namespaces was found in our beta version for this release. If youre running Velero in a namespace other than `velero`, please follow the . For Velero 1.2, the current plan is to begin implementing CSI snapshot support at a beta level. If accepted, this approach would align Velero with the larger community, and in the future, it would allow Velero to snapshot far more volume providers. We have posted a for community review, so please be sure to take a look if this interests you. We are also working on volume cloning, so that a persistent volume could be snapshotted and then duplicated for use within another namespace in the cluster. The team has also been discussing different approaches to concurrent backup jobs. This is a longer term goal, that will not be included in 1.2. 
Comments on the would be really helpful. Finally, were running for our users. Let us know how you use Velero and what youd like the community to address in the future. Well be using this feedback to guide our roadmap planning. Anonymized results will be shared back with the community shortly after the survey closes. Velero is better because of our contributors and maintainers. It is because of them that we can bring great software to the community. Please join us during our and catch up with past meetings on YouTube on the . You can always find the latest project information at . Look for issues on GitHub marked or if you want to roll up your sleeves and write some code with us. You can find us on , and follow us on Twitter at ." } ]
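For reference, here is a hedged sketch that pulls together the CLI flags called out in this post; the role ARN, bucket, namespace and label values are placeholders, and flag spellings should be double-checked against `velero --help` for the release you run.

```shell
# Node-based IAM (e.g. kube2iam): apply the needed annotation at install time.
velero install \
  --provider aws \
  --bucket my-velero-bucket \
  --no-secret \
  --pod-annotations iam.amazonaws.com/role=arn:aws:iam::123456789012:role/velero

# Label a backup so the resulting cloud snapshots are tagged.
velero backup create app-backup --include-namespaces app --labels team=web,release=v1.1

# List the plugins known to the Velero deployment.
velero plugin get
```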
{ "category": "Runtime", "file_name": "2019-08-22-announcing-velero-1.1.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Run Velero locally in development\" layout: docs Running the Velero server locally can speed up iterative development. This eliminates the need to rebuild the Velero server image and redeploy it to the cluster with each change. Velero runs against the Kubernetes API server as the endpoint (as per the `kubeconfig` configuration), so both the Velero server and client use the same `client-go` to communicate with Kubernetes. This means the Velero server can be run locally just as functionally as if it was running in the remote cluster. When running Velero, you will need to ensure that you set up all of the following: Appropriate RBAC permissions in the cluster Read access for all data from the source cluster and namespaces Write access to the target cluster and namespaces Cloud provider credentials Read/write access to volumes Read/write access to object storage for backup data A object definition for the Velero server (Optional) A object definition for the Velero server, to take PV snapshots See documentation on how to install Velero in some specific providers: After you use the `velero install` command to install Velero into your cluster, you scale the Velero deployment down to 0 so it is not simultaneously being run on the remote cluster and potentially causing things to get out of sync: `kubectl scale --replicas=0 deployment velero -n velero` To run the server locally, use the full path according to the binary you need. Example, if you are on a Mac, and using `AWS` as a provider, this is how to run the binary you built from source using the full path: `AWSSHAREDCREDENTIALSFILE=<path-to-credentials-file> ./output/bin/darwin/amd64/velero`. Alternatively, you may add the `velero` binary to your `PATH`. Start the server: `velero server [CLI flags]`. The following CLI flags may be useful to customize, but see `velero server --help` for full details: `--log-level`: set the Velero server's log level (default `info`, use `debug` for the most logging) `--kubeconfig`: set the path to the kubeconfig file the Velero server uses to talk to the Kubernetes apiserver (default `$KUBECONFIG`) `--namespace`: the set namespace where the Velero server should look for backups, schedules, restores (default `velero`) `--plugin-dir`: set the directory where the Velero server looks for plugins (default `/plugins`) The `--plugin-dir` flag requires the plugin binary to be present locally, and should be set to the directory containing this built binary. `--metrics-address`: set the bind address and port where Prometheus metrics are exposed (default `:8085`)" } ]
{ "category": "Runtime", "file_name": "run-locally.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Feature request about: Suggest an idea for kube-router title: '' labels: feature assignees: '' Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is and what the feature provides. Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context Add any other context or screenshots about the feature request here." } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | | - | | -- | -- | | -- | | S00001 | An IP that is set in ReservedIP CRD should not be assigned to a pod | p2 | | done | | | S00002 | An IP that is set in the `excludeIPs` field of ippool, should not be assigned to a pod | p2 | | done | | | S00003 | Failed to set same IP in excludeIPs when an IP is assigned to a pod | p2 | | done | | | S00004 | Check excludeIPs for manually created ippools | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "reservedip.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This manual is mainly about how to use isula search for iSulad community developers and users. Modify isulad `daemon.json` and config registry-mirrors: ```sh $ vim /etc/isulad/daemon.json ... \"registry-mirrors\": [ \"docker.io\" ], ... ``` tips If you do not specify registry when using `isula search`, the registry configured in `daemon.json` is used by default. If you want to use the http protocol to access the registry, you also need to add the registry to the insecure-registries in `daemon.json`: ```sh \"insecure-registries\": [ ], ``` Start isulad with root privileges ```sh $ isulad ``` Search registry for images information. ``` isula search [OPTIONS] TERM ``` | Name,shorthand | Discription | | -- | | | --limit | Max number of search results | | --no-trunc | Dont't truncate output | | --filter,-f | Filter output based on conditions provided | | --format | Format the output using the given go template | ```sh $ isula search busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED busybox Busybox base image. 2791 [OK] radial/busyboxplus Full-chain, Internet enabled, busybox made f... 49 [OK] yauritux/busybox-curl Busybox with CURL 18 arm32v7/busybox Busybox base image. 10 odise/busybox-curl - 4 [OK] arm64v8/busybox Busybox base image. 4 i386/busybox Busybox base image. 3 joeshaw/busybox-nonroot Busybox container with non-root user nobody 2 p7ppc64/busybox Busybox base image for ppc64. 2 busybox42/zimbra-docker-centos A Zimbra Docker image, based in ZCS 8.8.9 an... 2 [OK] s390x/busybox Busybox base image. 2 prom/busybox Prometheus Busybox Docker base images 2 [OK] vukomir/busybox busybox and curl 1 amd64/busybox Busybox base image. 1 ppc64le/busybox Busybox base image. 1 spotify/busybox Spotify fork of https://hub.docker.com/_/bus... 1 busybox42/nginx_php-docker-centos This is a nginx/php-fpm server running on Ce... 1 [OK] rancher/busybox - 0 ibmcom/busybox - 0 openebs/busybox-client - 0 antrea/busybox - 0 ibmcom/busybox-amd64 - 0 ibmcom/busybox-ppc64le - 0 busybox42/alpine-pod - 0 arm32v5/busybox Busybox base image. 0 ``` ```sh $ isula search --filter=stars=3 --no-trunc busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED busybox Busybox base image. 2791 [OK] radial/busyboxplus Full-chain, Internet enabled, busybox made from scratch. Comes in git and cURL flavors. 49 [OK] yauritux/busybox-curl Busybox with CURL 18 arm32v7/busybox Busybox base" }, { "data": "10 odise/busybox-curl - 4 [OK] arm64v8/busybox Busybox base image. 4 i386/busybox Busybox base image. 3 ``` ```sh $isula search --limit=1 busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED busybox Busybox base image. 2789 [OK] ``` The filtering flag (-f or --filter) format is a key=value pair. If there is more than one filter, then pass multiple flags (e.g. --filter is-automated=true --filter stars=3) The currently supported filters are: stars(int)Limit number of stars for the image. is-automated (boolean - true or false) :is the image automated or not. is-official (boolean - true or false) is the image official or not. ``` $ isula search --filter stars=3 busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED busybox Busybox base image. 2791 [OK] radial/busyboxplus Full-chain, Internet enabled, busybox made f... 49 [OK] yauritux/busybox-curl Busybox with CURL 18 arm32v7/busybox Busybox base image. 10 odise/busybox-curl - 4 [OK] arm64v8/busybox Busybox base image. 4 i386/busybox Busybox base image. 
3 ``` ``` $ isula search --filter is-automated=true busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED radial/busyboxplus Full-chain, Internet enabled, busybox made f... 49 [OK] odise/busybox-curl - 4 [OK] busybox42/zimbra-docker-centos A Zimbra Docker image, based in ZCS 8.8.9 an... 2 [OK] prom/busybox Prometheus Busybox Docker base images 2 [OK] busybox42/nginx_php-docker-centos This is a nginx/php-fpm server running on Ce... 1 [OK] ``` ``` $ isula search --filter is-official=true --filter stars=3 busybox NAME DESCRIPTION STARS OFFICIAL AUTOMATED busybox Busybox base image. 2791 [OK] ``` The formatting option (--format) pretty-prints search output using a Go template. Valid placeholders for the Go template are: | Placeholder | Description | | :- | : | | `.Name` | Image Name | | `.Description` | Image description | | `.StarCount` | Number of stars for the image | | `.IsOfficial` | OK if image is official | | `.IsAutomated` | OK if image build was automated | For example ``` $ isula search --format \"table {{.Name}}\\t{{.IsAutomated}}\\t{{.IsOfficial}}\" nginx NAME AUTOMATED OFFICIAL nginx [OK] linuxserver/nginx bitnami/nginx [OK] ubuntu/nginx bitnami/nginx-ingress-controller [OK] rancher/nginx-ingress-controller webdevops/nginx [OK] ibmcom/nginx-ingress-controller bitnami/nginx-exporter bitnami/nginx-ldap-auth-daemon kasmweb/nginx rancher/nginx-ingress-controller-defaultbackend rancher/nginx rapidfort/nginx vmware/nginx vmware/nginx-photon wallarm/nginx-ingress-controller bitnami/nginx-intel ibmcom/nginx-ingress-controller-ppc64le ibmcom/nginx-ppc64le rapidfort/nginx-ib rancher/nginx-conf rancher/nginx-ssl continuumio/nginx-ingress-ws rancher/nginx-ingress-controller-amd64 ```" } ]
{ "category": "Runtime", "file_name": "isula_search.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "name: Bug Report about: Create a report to help us improve this project <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks! --> What happened: What you expected to happen: How to reproduce it: Anything else we need to know?: Environment: CSI Spec version: Others:" } ]
{ "category": "Runtime", "file_name": "bug-report.md", "project_name": "Container Storage Interface (CSI)", "subcategory": "Cloud Native Storage" }