[ { "data": "We want to natively support RWX volumes, the creation and usage of these rwx volumes should preferably be transparent to the user. This would make it so that there is no manual user interaction necessary, and a rwx volume just looks the same as regular Longhorn volume. This would also allow any Longhorn volume to be used as rwx, without special requirements. https://github.com/longhorn/longhorn/issues/1470 https://github.com/Longhorn/Longhorn/issues/1183 support creation of RWX volumes via Longhorn support creation of RWX volumes via RWX pvc's support mounting NFS shares via the CSI driver creation of a share-manager that manages and exports volumes via NFS clustered NFS (highly available) distributed filesystem RWX volumes should support all operations that a regular RWO volumes supports (backup & restore, DR volume support, etc) Before this enhancement we create an RWX provisioner example that was using a regular Longhorn volume to export multiple NFS shares. The provisioner then created native kubernetes NFS persistent volumes, there where many limitations with this approach (multiple workload pvs on one Longhorn volume, restore & backup is iffy, etc) After this enhancement anytime a user uses an RWX pvc we will provision a Longhorn volume and expose the volume via a share-manager. The CSI driver will then mount this volume that is exported via a NFS server from the share-manager pod. Users can automatically provision and use an RWX volume by having their workload use an RWX pvc. Users can see the status of their RWX volumes in the Longhorn UI, same as for RWO volumes. Users can use the created RWX volume on different nodes at the same time. add `AccessMode` field to the `Volume.Spec` add `ShareEndpoint, ShareState` fields to the `Volume.Status` add a new ShareManager crd, details below ```go type ShareManagerState string const ( ShareManagerStateUnknown = ShareManagerState(\"unknown\") ShareManagerStateStarting = ShareManagerState(\"starting\") ShareManagerStateRunning = ShareManagerState(\"running\") ShareManagerStateStopped = ShareManagerState(\"stopped\") ShareManagerStateError = ShareManagerState(\"error\") ) type ShareManagerSpec struct { Image string `json:\"image\"` } type ShareManagerStatus struct { OwnerID string `json:\"ownerID\"` State ShareManagerState `json:\"state\"` Endpoint string `json:\"endpoint\"` } ``` Volume controller is responsible for creation of share manager crd and synchronising share status and endpoint of the volume with the share manager resource. Share manager controller will be responsible for managing share manager pods and ensuring volume attachment to the share manager pod. Share manager pod is responsible for health checking and managing the NFS server volume export. CSI driver is responsible for mounting the NFS export to the workload pod When a new RWX volume with name `test-volume` is created, the volume controller will create a matching share-manager resource with name `test-volume`. The share-manager-controller will pickup this new share-manager resource and create share-manager pod with name `share-manager-test-volume` in the longhorn-system namespace, as well as a service named `test-volume` that always points to the share-manager-pod `share-manager-test-volume`. The controller will set the `State=Starting` while the share-manager-pod is Pending and not `Ready` yet. The share-manager-pod is running our share-manager image which allows for exporting a block device via ganesha (NFS server). 
After starting the share-manager, the application waits for the attachment of the volume `test-volume`, this is done by an availability check of the block device in the bind-mounted `/host/dev/longhorn/test-volume` folder. The actual volume attachment is handled by the share-manager-controller setting volume.spec.nodeID to the `node` of the share-manager-pod. Once the volume is attached the share-manager will mount the volume, create export config and start ganesha (NFS" }, { "data": "Afterwards the share-manager will do periodic health check against the attached volume and on failure of a health check the pod will terminate. The share-manager pod will become `ready` as soon as ganesha is up and running this is accomplished via a check against `/var/run/ganesha.pid`. The share-manager-controller can now update the share-manager `State=Running` and `Endpoint=nfs://service-cluster-ip/test-volume\"`. The volume-controller will update the volumes `ShareState` and `ShareEndpoint` based on the values of the share-manager `State` and `Endpoint`. Once the volumes `ShareState` is `Running` the csi-driver can now successfully attach & mount the volume into the workload pods. On node failure Kubernetes will mark the share-manager-pod `share-manager-test-volume` as terminating. The share-manager-controller will mark the share-manager `State=Error` if the share-manager pod is in any state other than `Running` or `Pending`, unless the share-manager is no longer required for this volume (no workloads consuming this volume). When the share-manager `State` is `Error` the volume-controller will continuously set the volumes `RemountRequestedAt=Now` so that we will cleanup the workload pods till the share-manager is back in order. This cleanup is to force the workload pods to initiate a new connection against the NFS server. In the future we hope to reuse the NFS connection which will make this step no longer necessary. The share-manager-controller will start a new share-manager pod on a different node and set `State=Starting`. Once the pod is `Ready` the controller will set `State=Running` and the workload pods are now able to reconnect/remount again. See above for the complete flow from `Starting -> Ready -> Running -> volume share available` User needs to add a new share-manager-image to airgapped environments Created a to allow users to migrate data from previous NFS provisioner or other RWO volumes The dbus interface can be used to add & remove exports. As well as make the server go into the grace period. Rados requires ceph, some more difference between kv/ng https://bugzilla.redhat.com/show_bug.cgi?id=1557465 https://lists.nfs-ganesha.org/archives/list/[email protected]/thread/DPULRQKCGB2QQUCUMOVDOBHCPJL22QMX/ rados_kv: This is the original rados recovery backend that uses a key- value store. It has support for \"takeover\" operations: merging one recovery database into another such that another cluster node could take over addresses that were hosted on another node. Note that this recovery backend may not survive crashes that occur during a grace period. If it crashes and then crashes again during the grace period, the server is likely to fail to allow any clients to recover (the db will be trashed). rados_ng: a more resilient rados_kv backend. This one does not support takeover operations, but it should properly survive crashes that occur during the grace period. rados_cluster: the new (experimental) clustered recovery backend. 
This one also does not support address takeover, but should be resilient enough to handle crashes that occur during the grace period. FWIW, the semantics for fs vs. fs_ng are similar. fs doesn't survive crashes that occur during the grace period either. Unless you're trying to use the dbus \"grace\" command to initiate address takeover in an active/active cluster, you probably want rados_ng for now. ``` CreateVolume ++ DeleteVolume +->| CREATED +--+ | ++-^+ | | Controller | | Controller v +++ Publish | | Unpublish +++ |X| Volume | | Volume | | +-+ +v-++ +-+ | NODE_READY | ++-^+ Node | | Node Stage | | Unstage Volume | | Volume +v-++ | VOL_READY | ++-^+ Node | | Node Publish | | Unpublish Volume | | Volume +v-++ | PUBLISHED | ++ ``` Figure 6: The lifecycle of a dynamically provisioned volume, from creation to destruction, when the Node Plugin advertises the STAGEUNSTAGEVOLUME capability. https://github.com/longhorn/longhorn-manager/commit/2636a5dc6d79aa12116e7e5685ccd831747639df https://github.com/longhorn/longhorn-tests/commit/3a55d6bfe633165fb6eb9553235b7d0a2e651cec" } ]
{ "category": "Runtime", "file_name": "20201220-rwx-volume-support.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Access metric status of the operator ``` -h, --help help for metrics ``` - Run cilium-operator-alibabacloud - List all metrics for the operator" } ]
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud_metrics.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document provides a high level perspective on the implications of restoring multiple VM clones from a single snapshot. We start with an overview of the Linux random number generation (RNG) facilities, then go through the potential issues weve identified related to cloning state, and finally conclude with a series of recommendations. Its worth stressing that we aim to prevent stale state being a problem only for the kernel interfaces. Some userspace applications or libraries keep their own equivalent of entropy pools and suffer from the same potential issues after being cloned. There is no generic solution under the current programming model, and all we can do is recommend against their use in pre-snapshot logic. The Linux kernel exposes three main `RNG` interfaces to userspace: the `/dev/random` and `/dev/urandom` special devices, and the `getrandom` syscall, which are described in the . Moreover, Firecracker supports the device which can provide additional entropy to guest VMs. It draws its random bytes from the crate which wraps the . Traditionally, `/dev/random` has been considered a source of true randomness, with the downside that reads block when the pool of entropy gets depleted. On the other hand, `/dev/urandom` doesnt block, which lead people believe that it provides lower quality results. It turns out the distinction in output quality is actually very hard to make. According to , for kernel versions prior to 4.8, both devices draw their output from the same pool, with the exception that `/dev/random` will block when the system estimates the entropy count has decreased below a certain threshold. The `/dev/urandom` output is considered secure for virtually all purposes, with the caveat that using it before the system gathers sufficient entropy for initialization may indeed produce low quality random numbers. The `getrandom` syscall helps with this situation; it uses the `/dev/urandom` source by default, but will block until it gets properly initialized (the behavior can be altered via configuration flags). Newer kernels (4.8+) have switched to an implementation where `/dev/random` output comes from a pool called the blocking pool, the output of `/dev/urandom` is given by a CSPRNG (cryptographically secure pseudorandom number generator), and theres also an input pool which gathers entropy from various sources available on the system, and is used to feed into or seed the other two components. A very detailed description is available . The details of this newer implementation are used to make the recommendations present in the document. There are in-kernel interfaces used to obtain random numbers as well, but they are similar to using `/dev/urandom` (or `getrandom` with the default source) from userspace. Whenever a VM clone is created based on a snapshot, execution resumes precisely from the previously saved" }, { "data": "Getting random bytes from either `/dev/random` or `/dev/urandom` does not lead to identical results for different clones created from the same snapshot because multiple parameters (such as timer data, or output from `CPU HWRNG` instructions which are present on Ivy Bridge or newer Intel processors and enabled in a Firecracker guest) are mixed with each result. Extra bits are mixed in both when reading random values, and in conjunction with entropy related events such as interrupts. Moreover, the guest kernel will eventually receive fresh entropy from `virtio-rng`, if attached. 
There are two questions here: Is the `CPU HWRNG` output always mixed in when the feature is present (as opposed to only when the `CPU HWRNG` is trusted)? Is the added noise strong enough to consider the final RNG output sufficiently divergent from all other clones? Both these questions are particularly relevant immediately after resuming a VM from a snapshot. After the VM gets to run for a \"sufficient\" amount of time it should be able to gather some more entropy by itself and its state should be sufficiently divergent that of any other clones. It seems the `CPU HWRNG` is always added to mix when present. More specifically, mentions using the `CPU HWRNG` when present for the entropy pool output function. Page 34 states *in case a CPU random number generator is known to the Linux-RNG, data from that hardware RNG is mixed into the entropy pool in a second step*. With respect to the initialization of the random pools and DRNG behind /dev/urandom. The discussion regarding DRNG state on page 35 mentions *the key part, the counter, and the nonce are XORed with the output of the CPU random number generator if one is present. If it is not present, one high-resolution time stamp obtained with the kernel function randomgetentropy word is XORed with the key part*. The `CPU HWRNG` is also used for the DRNG state transition function (as stated on page 36 point 1), and during the reseed operation (page 37 point 2). The document explicitly mentions when the `CPU HWRNG` has to be trusted (for example, the bullet points at the end of Section 3.3.2.3). Its not yet clear whether the noise that gets added for each clone post restore is sufficient to consider their RNG states distinct for security purposes. The conservative approach is to presume the stale state has a significant influence on RNG output, so we should reinitialize both sources based on fresh data after each restore. It would seem that simply writing data to `/dev/urandom` is enough to muddle the entropy pools, but the bits only get mixed with the input pool. Its not certain at this point whether such writes have any immediate impact on the blocking pool, and its unlikely they cause the `CSPRNG` to be automatically reseeded. The standard methods of interacting with the kernel RNG sources are documented in the" }, { "data": "It states that any writes to either `/dev/random` or `/dev/urandom` are mixed with the input entropy pool, but do not increase the current entropy estimation. There is also an `ioctl` interface which, given the appropriate privileges, can be used to add data to the input entropy pool while also increasing the count, or completely empty all pools. Since version 5.18, Linux has support for the . The purpose of VMGenID is to notify the guest about time shift events, such as resuming from a snapshot. The device exposes a 16-byte cryptographically random identifier in guest memory. Firecracker implements VMGenID. When resuming a microVM from a snapshot Firecracker writes a new identifier and injects a notification to the guest. Linux, . Quoting the random.c implementation of the kernel: ``` /* Handle a new unique VM ID, which is unique, not secret, so we don't credit it, but we do immediately force a reseed after so that it's used by the crng posthaste. */ ``` As a result, values returned by `getrandom()` and `/dev/(u)random` are distinct in all VMs started from the same snapshot, after the kernel handles the VMGenID notification. This leaves a race window between resuming vCPUs and Linux CSPRNG getting successfully re-seeded. 
In Linux 6.8, we to emit a uevent to user space when it handles the notification. User space can poll this uevent to know when it is safe to use `getrandom()`, et al. avoiding the race condition. Please note that, Firecracker will always enable VMGenID. In kernels earlier than 5.18, where there is no VMGenID driver, the device will not have any effect in the guest. Init systems (such as `systemd` used by AL2 and other distros) might save a random seed file after boot. For `systemd`, the path is `/var/lib/systemd/random-seed`. Just to be on the safe side, any such file should be deleted before taking a snapshot, to prevent its reuse for any purposes by the guest. Theres also the `/proc/sys/kernel/random/boot_id` special file, which gets initialized with a random string at boot time, and is read-only afterwards. All clones restored from the same snapshot will implicitly read the same value from this file. If thats not desirable, its possible to alter the read result via bind mounting another file on top of `/proc/sys/kernel/random/boot_id`. Delete `/var/lib/systemd/random-seed`, or any equivalent files. If changing the value present in `/proc/sys/kernel/random/boot_id` is important, bind mount another file on top of it. If microVMs run on machines with IvyBridge or newer Intel processors (which provide RDRAND; in addition, RDSEED is offered starting with Broadwell). Hardware supported reseeding is done on a cadence defined by the Linux Kernel and should be sufficient for most cases. Use `virtio-rng`. When present, the guest kernel uses the device as an additional source of entropy. On kernels before" }, { "data": "to be as safe as possible, the direct approach is to do the following (before customer code is resumed in the clone): Open one of the special devices files (either `/dev/random` or `/dev/urandom`). Take note that `RNDCLEARPOOL` no longer on the entropy pool. Issue an `RNDADDENTROPY` ioctl call (requires `CAPSYSADMIN`) to mix the provided bytes into the input entropy pool and increase the entropy count. This should also cause the `/dev/urandom` `CSPRNG` to be reseeded. The bytes can be generated locally in the guest, or obtained from the host. Issue a `RNDRESEEDCRNG` ioctl call (, , (requires `CAPSYSADMIN`)) that specifically causes the `CSPRNG` to be reseeded from the input pool. On kernels starting from 5.18 onwards, the CSPRNG will be automatically reseeded when the guest kernel handles the VMGenID notification. To completely avoid the race condition, users should follow the same steps as with kernels \\< 5.18. On kernels starting from 6.8, users can poll for the VMGenID uevent that the driver sends when the CSPRNG is reseeded after handling the VMGenID notification. Annex 1 contains the source code of a C program which implements the previous three steps. As soon as the guest kernel version switches to 4.19 (or higher), we can rely on the `CONFIGRANDOMTRUST_CPU` kernel option (or the random.trust_cpu=on cmdline parameter) to have the entropy pool automatically refilled using the `CPU HWRNG`, so step 3 would no longer be necessary. Another way around step 3 is to attach a `virtio-rng` device. However, we cannot control when the guest kernel will request for random bytes from the device. 
```cpp void exit_usage() { printf(\"Usage: ./rerand [<hexadecimal_string>]\\n\" \"The length of the string must be a multiple of 8.\\n\"); exit(EXIT_FAILURE); } void exit_perror(const char *msg) { perror(msg); exit(EXIT_FAILURE); } int main(int argc, char argv) { if (argc > 2) { exit_usage(); } size_t len = 0; struct randpoolinfo *info = NULL; if (argc == 2) { len = strlen(argv[1]); // We want len to be a multiple of 8 such that we have an easier time // parsing argv[1] into an array of u32s. if (len % 8) { exit_usage(); } info = malloc(sizeof(struct randpoolinfo) + len / 8); if (info == NULL) { exitperror(\"Could not alloc randpool_info struct\"); } // This is measured in bits IIRC. info->entropy_count = len * 4; info->buf_size = len / 8; } int fd = open(\"/dev/urandom\", O_RDWR); if (fd < 0) { exit_perror(\"Unable to open /dev/urandom\"); } if (ioctl(fd, RNDCLEARPOOL) < 0) { exit_perror(\"Error issuing RNDCLEARPOOL operation\"); } if (argc == 1) { exit(EXIT_SUCCESS); } // Add the entropy bytes supplied by the user. char num_buf[9] = {}; size_t pos = 0; while (pos < len) { memcpy(num_buf, &argv[1] + pos, 8); info->buf[pos / 8] = strtoul(num_buf, NULL, 16); pos += 8; } if (ioctl(fd, RNDADDENTROPY, info) < 0) { exit_perror(\"Error issuing RNDADDENTROPY operation\"); } } ```" } ]
{ "category": "Runtime", "file_name": "random-for-clones.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "<p align=\"center\"><img alt=\"sysbox\" src=\"../figures/dind.png\" width=\"800x\" /></p> Sysbox has support for running Docker inside containers (aka Docker-in-Docker). Unlike all other alternatives, Sysbox enables users to do this easily and securely, without resorting to complex Docker run commands and container images, and without using privileged containers or bind-mounting the host's Docker socket into the container. The inner Docker is totally isolated from the Docker on the host. This is useful for Docker sandboxing, testing, and CI/CD use cases. The easiest way is to use a system container image that has Docker preinstalled in it. You can find a few such images in the . The Dockerfiles for the images are . Alternatively, you can always deploy a baseline system container image (e.g., ubuntu or alpine) and install Docker in it just as you would on a physical host or VM. In fact, the system container images that come with Docker preinstalled are created with a Dockerfile that does just that. You do this just as you would on a physical host or VM (e.g., by executing `docker run -it alpine` inside the container). The has several examples showing how to run Docker inside a system container. Sysbox enables you to easily create system container images that come preloaded with inner Docker container images. This way, when you deploy the system container, the inner Docker images are already present, and void the need to pull them from the network. There are two ways to do this: 1) Using `docker build` 2) Using `docker commit` See the for a full explanation on how to do this. Inner container images that are inside the system container image are persistent: they are present every time a new system container is created. However, inner container images that are pulled into the system container at runtime are not by default persistent: they get destroyed when the system container is removed (e.g., via `docker rm`). But it's easy to make these runtime inner container images (and even inner containers) persistent too. You do this by simply mounting a host directory into the system container's `/var/lib/docker` directory (i.e., where the inner Docker stores its container images). The Sysbox Quick Start Guide has examples and . A couple of caveats though: 1) A given host directory mounted into a system container's `/var/lib/docker` must only be mounted on a single system container at any given" }, { "data": "This is a restriction imposed by the inner Docker daemon, which does not allow its image cache to be shared concurrently among multiple daemon instances. Sysbox will check for violations of this rule and report an appropriate error during system container creation. 2) Do not mount the host's `/var/lib/docker` to a system container's `/var/lib/docker`. Doing so breaks container isolation since the system container can now inspect all sibling containers on the host. Furthermore, as mentioned in bullet (1) above, you cannot share `/var/lib/docker` across multiple container instances, so you can't share the host's Docker cache with a Docker instance running inside a sysbox container. Inside a system container you can deploy privileged Docker containers (e.g., by issuing the following command to the inner Docker: `docker run --privileged ...`). NOTE: due to a bug in Docker, this requires the inner Docker to be version 19.03 or newer. 
The ability to run privileged containers inside a system container is useful when deploying inner containers that require full privileges (typically containers for system services such as Kubernetes control-plane pods). Note however that a privileged container inside a system container is only privileged within the context of the system container, but has no privileges on the underlying host. For example, when running a privileged container inside a system container, the procfs (i.e., `/proc`) mounted inside the privileged container only allows access to resources associated with the system container. It does not allow access to all host resources. This is a unique and key security feature of Sysbox: it allows you to run privileged containers inside a system container without risking host security. Most Docker functionality works perfectly inside the system container. However, there are some limitations at this time. This section describes these limitations. The inner Docker must store it's images at the usual `/var/lib/docker`. This directory is known as the Docker \"data-root\". While it's possible to configure the inner Docker to store it's images at some other location within the system container (via the Docker daemon's `--data-root` option), Sysbox does not currently support this (i.e., the inner Docker won't work). The inner Docker must not be configured with . Enabling userns-remap on the inner Docker would cause the Linux user-namespace to be used on the inner containers, further isolating them from the rest of the software in the system container. This is useful and we plan to support it in the future. Note however that even without the inner Docker userns remap, inner containers are already well isolated from the host by the system container itself, since the system container uses the Linux user namespace." } ]
{ "category": "Runtime", "file_name": "dind.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "(devices-gpu)= GPU devices make the specified GPU device or devices appear in the instance. ```{note} For containers, a `gpu` device may match multiple GPUs at once. For VMs, each device can match only a single GPU. ``` The following types of GPUs can be added using the `gputype` device option: (container and VM): Passes an entire GPU through into the instance. This value is the default if `gputype` is unspecified. (VM only): Creates and passes a virtual GPU through into the instance. (container only): Creates and passes a MIG (Multi-Instance GPU) through into the instance. (VM only): Passes a virtual function of an SR-IOV-enabled GPU into the instance. The available device options depend on the GPU type and are listed in the tables in the following sections. (gpu-physical)= ```{note} The `physical` GPU type is supported for both containers and VMs. It supports hotplugging only for containers, not for VMs. ``` A `physical` GPU device passes an entire GPU through into the instance. GPU devices of type `physical` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `gid` | int | `0` | GID of the device owner in the instance (container only) `id` | string | - | The DRM card ID of the GPU device `mode` | int | `0660` | Mode of the device in the instance (container only) `pci` | string | - | The PCI address of the GPU device `productid` | string | - | The product ID of the GPU device `uid` | int | `0` | UID of the device owner in the instance (container only) `vendorid` | string | - | The vendor ID of the GPU device (gpu-mdev)= ```{note} The `mdev` GPU type is supported only for VMs. It does not support hotplugging. ``` An `mdev` GPU device creates and passes a virtual GPU through into the instance. You can check the list of available `mdev` profiles by running" }, { "data": "GPU devices of type `mdev` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `id` | string | - | The DRM card ID of the GPU device `mdev` | string | - | The `mdev` profile to use (required - for example, `i915-GVTgV54`) `pci` | string | - | The PCI address of the GPU device `productid` | string | - | The product ID of the GPU device `vendorid` | string | - | The vendor ID of the GPU device (gpu-mig)= ```{note} The `mig` GPU type is supported only for containers. It does not support hotplugging. ``` A `mig` GPU device creates and passes a MIG compute instance through into the instance. Currently, this requires NVIDIA MIG instances to be pre-created. GPU devices of type `mig` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `id` | string | - | The DRM card ID of the GPU device `mig.ci` | int | - | Existing MIG compute instance ID `mig.gi` | int | - | Existing MIG GPU instance ID `mig.uuid` | string | - | Existing MIG device UUID (`MIG-` prefix can be omitted) `pci` | string | - | The PCI address of the GPU device `productid` | string | - | The product ID of the GPU device `vendorid` | string | - | The vendor ID of the GPU device You must set either `mig.uuid` (NVIDIA drivers 470+) or both `mig.ci` and `mig.gi` (old NVIDIA drivers). (gpu-sriov)= ```{note} The `sriov` GPU type is supported only for VMs. It does not support hotplugging. ``` An `sriov` GPU device passes a virtual function of an SR-IOV-enabled GPU into the instance. 
GPU devices of type `sriov` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `id` | string | - | The DRM card ID of the parent GPU device `pci` | string | - | The PCI address of the parent GPU device `productid` | string | - | The product ID of the parent GPU device `vendorid` | string | - | The vendor ID of the parent GPU device" } ]
{ "category": "Runtime", "file_name": "devices_gpu.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show contents of table \"devices\" ``` cilium-dbg statedb devices [flags] ``` ``` -h, --help help for devices -w, --watch duration Watch for new changes with the given interval (e.g. --watch=100ms) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_statedb_devices.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "All API communication between Antrea control plane components is encrypted with TLS. The TLS certificates that Antrea requires can be automatically generated. You can also provide your own certificates. This page explains the certificates that Antrea requires and how to configure and rotate them for Antrea. <!-- toc --> - <!-- /toc --> Currently Antrea only requires a single server certificate for the antrea-controller API server endpoint, which is for the following communication: The antrea-agents talks to the antrea-controller for fetching the computed NetworkPolicies The kube-aggregator (i.e. kube-apiserver) talks to the antrea-controller for proxying antctl's requests (when run in \"controller\" mode) Antrea doesn't require client certificates for its own components as it delegates authentication and authorization to the Kubernetes API, using Kubernetes for client authentication. By default, antrea-controller generates a self-signed certificate. You can override the behavior by . Either way, the antrea-controller will distribute the CA certificate as a ConfigMap named `antrea-ca` in the Antrea deployment Namespace and inject it into the APIServices resources created by Antrea in order to allow its clients (i.e. antrea-agent, kube-apiserver) to perform authentication. Typically, clients that wish to access the antrea-controller API can authenticate the server by validating against the CA certificate published in the `antrea-ca` ConfigMap. Since Antrea v0.7.0, you can provide your own certificates to Antrea. To do so, you must set the `selfSignedCert` field of `antrea-controller.conf` to `false`, so that the antrea-controller will read the certificate key pair from the `antrea-controller-tls` Secret. The example manifests and descriptions below assume Antrea is deployed in the `kube-system` Namespace. If you deploy Antrea in a different Namepace, please update the Namespace name in the manifests accordingly. ```yaml apiVersion: v1 kind: ConfigMap metadata: labels: app: antrea name: antrea-config namespace: kube-system data: antrea-controller.conf: | selfSignedCert: false ``` You can generate the required certificate manually, or through" }, { "data": "Either way, the certificate must be issued with the following key usages and DNS names: X509 key usages: digital signature key encipherment server auth DNS names: antrea.kube-system.svc antrea.kube-system.svc.cluster.local Note: It assumes you are using `cluster.local` as the cluster domain, you should replace it with the actual one of your Kubernetes cluster. You can then create the `antrea-controller-tls` Secret with the certificate key pair and the CA certificate in the following form: ```yaml apiVersion: v1 kind: Secret type: kubernetes.io/tls metadata: name: antrea-controller-tls namespace: kube-system data: ca.crt: <BASE64 ENCODED CA CERTIFICATE> tls.crt: <BASE64 ENCODED TLS CERTIFICATE> tls.key: <BASE64 ENCODED TLS KEY> ``` You can use `kubectl apply -f <PATH TO SECRET YAML>` to create the above secret, or use `kubectl create secret`: ```bash kubectl create secret generic antrea-controller-tls -n kube-system \\ --from-file=ca.crt=<PATH TO CA CERTIFICATE> --from-file=tls.crt=<PATH TO TLS CERTIFICATE> --from-file=tls.key=<PATH TO TLS KEY> ``` If you set up to manage your certificates, it can be used to issue and renew the certificate required by Antrea. 
To get started, follow the [cert-manager installation documentation]( https://cert-manager.io/docs/installation/kubernetes/) to deploy cert-manager and configure `Issuer` or `ClusterIssuer` resources. The `Certificate` should be created in the `kube-system` namespace. For example, A `Certificate` may look like: ```yaml apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: antrea-controller-tls namespace: kube-system spec: secretName: antrea-controller-tls commonName: antrea dnsNames: antrea.kube-system.svc antrea.kube-system.svc.cluster.local usages: digital signature key encipherment server auth issuerRef: name: ca-issuer kind: Issuer ``` Once the `Certificate` is created, you should see the `antrea-controller-tls` Secret created in the `kube-system` Namespace. Note it may take up to 1 minute for Kubernetes to propagate the Secret update to the antrea-controller Pod if the Pod starts before the Secret is created. Antrea v0.7.0 and higher supports certificate rotation. It can be achieved by simply updating the `antrea-controller-tls` Secret. The antrea-controller will react to the change, updating its serving certificate and re-distributing the latest CA certificate (if applicable). If you are using cert-manager to issue the certificate, it will renew the certificate before expiry and update the Secret automatically. If you are using certificates signed by Antrea, Antrea will rotate the certificate automatically before expiration." } ]
{ "category": "Runtime", "file_name": "securing-control-plane.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Apps launched by rkt have access to some basic devices and file systems as defined by the App Container spec in the section. In addition to the basic devices and file systems mandated by the App Container spec, rkt gives access to the following files. Support for /etc/hosts is optional in the App Container spec. rkt creates it. `/etc/resolv.conf` is automatically . Since rkt v1.2.0, rkt gives access to systemd-journald's sockets in the /run/systemd/journal directory: /run/systemd/journal/dev-log /run/systemd/journal/socket /run/systemd/journal/stdout Since rkt v1.2.0, if /dev/log does not exist in the image, it will be created as a symlink to /run/systemd/journal/dev-log." } ]
{ "category": "Runtime", "file_name": "app-environment.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | ||-|-|-|--|-| | F00001 | Succeed to keep static IP for kubevirt VM/VMI after restarting the VM/VMI pod | P1 | | done | | | F00002 | Succeed to keep static IP for the kubevirt VM live migration | P1 | | done | | | F00003 | Succeed to allocate multiple NICs | P1 | | done | |" } ]
{ "category": "Runtime", "file_name": "kubevirt.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Performance Benchmark sidebar_position: 1 slug: . description: This article describes benchmarking the file system using FIO, mdtest, and the bench command that comes with JuiceFS. Redis is used as Metadata Engine in this benchmark. Under this test condition, JuiceFS performs 10x better than and . JuiceFS provides a subcommand `bench` to run a few basic benchmarks to evaluate how it works in your environment: Performed sequential read/write benchmarks on JuiceFS, and by . Here is the result: ](../images/sequential-read-write-benchmark.svg) It shows JuiceFS can provide 10X more throughput than the other two. Read . Performed a simple benchmark on JuiceFS, and by . Here is the result: ](../images/metadata-benchmark.svg) It shows JuiceFS can provide significantly more metadata IOPS than the other two. Read . See if you encounter performance issues." } ]
{ "category": "Runtime", "file_name": "benchmark.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "| Author | | | | - | | Date | 2020-10-21 | | Email | | The volume command during image creation can specify the container runtime to create an anonymous volume for storing the data that needs to be persisted during the container operation. corresponding to the following configuration items: ```bash $ isula inspect -f \"{{.image.Spec.config.Volumes}}\" vol { \"/vol\":{} } ```` The volume module needs to support the configuration of this item to support anonymous volumes. Since anonymous volumes will remain after the container is destroyed, the volume module also needs to provide a means to delete volumes to prevent unused volumes from remaining. The volume module also needs to provide commands to list local volumes and delete volumes. In addition, the volume module also supports volume management for the -v and --mount parameters. ````mermaid sequenceDiagram participant isula participant isulad participant volume_store participant volume_driver(local) isulad -->> volumestore:volumeinit volumestore -->> volumedriver(local):registerlocalvolume volumedriver(local) -->> volumedriver(local):localvolumeinit volumedriver(local) -->> volumedriver(local):register_driver isula ->> isulad:create request isulad -->> volumedriver(local):volumecreate isula ->> isulad:volume ls request isulad -->> volumedriver(local):volumelist isulad -->> isula:return all volume info isula ->> isulad:volume remove request isulad -->> volumedriver(local):volumeremove isulad -->> isula:return removed volume id isula ->> isulad:volume prune request isulad -->> volumedriver(local):volumeprune isulad -->> isula:return all removed volume id ```` ````c typedef struct { struct volume (create)(char *name); struct volume (get)(char *name); int (mount)(char name); int (umount)(char name); struct volumes (list)(void); int (remove)(char name); } volume_driver; struct volume { char *driver; char *name; char *path; // volume mount point, valid only when mounted char *mount_point; }; struct volumes { struct volume vols; sizet volslen; }; struct volume_names { char names; sizet nameslen; }; struct volume_options { char *ref; }; // Initialization of volume int volumeinit(char *rootdir); // Register volumedriver in volumestore int registerdriver(char *name, volumedriver *driver); // Create a new volume named name in the volumedriver of the specified drivername, // The container id refering the volume is stored in volume_options struct volume volume_create(char drivername, char *name, struct volumeoptions *opts); int volume_mount(char *name); int volume_umount(char *name); // list all volumes struct volumes *volume_list(void); // Add the container id refering the volume int volumeaddref(char name, char ref); // delete container id refering volume int volumedelref(char name, char ref); // Add the volume with the specified name int volume_remove(char *name); // clear all unused volumes int volumeprune(struct volumenames pruned); ```` The container creation/running process creates/reuses anonymous volumes: Use anonymous volumes in the mirror. No change in interface. During the container creation/running process, whether to create an anonymous volume is determined according to whether this parameter exists in the image configuration. Specify the use of anonymous volumes on the command line. ````sh ```` Use the -v or --volume command to add an anonymous volume. Note that the anonymous volume only has a path inside the container, without \":\". If the source source is not filled in, it is an anonymous volume. 
--mount also needs to specify the type as the volume mode. In addition, dst can also be written as target (new keyword). Specify the use of a named volume on the command" }, { "data": "````sh ```` The volume name filled in the source parameter of -v or --mount is a named volume (other parameters are the same as the above description). Reuse volumes or bind mounts of other containers. ```bash ```` Use the --volumes-from parameter to specify which container's volumes and bind mounts to reuse. The --volumes-from parameter can be used multiple times, that is, it can be specified to reuse anonymous volumes from multiple containers. Use the following command to query the currently existing anonymous volume (the inspect command is not provided): ````sh DRIVER VOLUME NAME local f6391b735a917ffbaff138970dc45290508574e6ab92e06a1e9dd290f31592ca ```` When the container is stopped/destroyed, the anonymous volume will not be destroyed. You need to manually execute the command to destroy: Delete a single anonymous volume: ```bash ```` The name of the anonymous volume queried by isula volume ls is followed by rm. Delete all unused anonymous volumes: ```bash ```` Where -f means that no manual confirmation is required, and the deletion is performed directly. Add a new folder volumes under the directory /var/lib/isulad to save anonymous volumes. Each anonymous volume creates a folder named after the anonymous volume name. The anonymous volume name is a 64-bit random character (character range a-f, 0-9), save the data and configuration in the folder. The configuration is temporarily unavailable, and the space in the folder is reserved for saving the configuration. The data is stored in a folder named _data, which is the source mount directory of the anonymous volume: ```bash $ tree 71c0fba4a5fd549133d92a5826f821128714e43a0eef46ee4569b627488d0f79 71c0fba4a5fd549133d92a5826f821128714e43a0eef46ee4569b627488d0f79 _data 1 directory, 0 files ```` When iSulad is initialized, it traverses the /var/lib/isulad/volumes directory and loads the following directory information into memory. The volume is specified in the command line parameter, or the anonymous volume is specified in the mirror. When isulad creates a container, it adds the configuration of anonymous volume to the configuration of the container, and creates an anonymous volume in the /var/lib/isulad/volumes directory (see the figure above for the structure). At the same time, the information is stored in the volume management structure of the memory. If selinux is enabled and mountlabel is configured, relabel the newly created directory. If there are files in the volume mount path of the image, copy these files to /var/lib/isulad/volume/$volumename. Note that this copy is only made once when the container is created, and if the volume already has content, it will not be copied again. Copied file types include: Hard links, soft links, normal files, normal folders, character device files, block device files. Other files are not supported, for example, if there is a fifo file in the source path, an error will be reported. During the copying process, the corresponding time, permissions, and extended attributes are also copied. Mount the volume of the container into the container, which is completed by the original volume mount function of the container (the configuration item has been added earlier). 
It is similar to the \"volume creation\" process, except that steps 2 and 3 are not actually created, but the volume path of the original container is directly reused as the source path of the current volume. Returns all in-memory volume information. Iterates over all containers, checks volume usage, and prohibits deletion if any container is still in use." } ]
{ "category": "Runtime", "file_name": "local_volume_design.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "Longhorn supports SMB/CIFS share as a backup storage. https://github.com/longhorn/longhorn/issues/3599 Support SMB/CIFS share as a backup storage. Introduce SMB/CIFS client for supporting SMB/CIFS as a backup storage. Longhorn already supports NFSv4 and S3 servers as backup storage. However, certain users may encounter compatibility issues with their backup servers, particularly those running on Windows, as the protocols for NFSv4 and S3 are not always supported. To address this issue, the enhancement will enhance support for backup storage options with a focus on the commonly used SMB/CIFS protocol, which is compatible with both Linux and Windows-based servers. Check each Longhorn node's kernel supports the CIFS filesystem by ``` cat /boot/config-`uname -r` | grep CONFIG_CIFS ``` Install the CIFS filesystem user-space tools `cifs-utils` on each Longhorn node Users can configure a SMB/CIFS share as a backup storage Set Backup Target. The path to a SMB/CIFS share is like ```bash cifs://${IP address}/${share name} ``` Set Backup Target Credential Secret Create a secret and deploy it ```yaml apiVersion: v1 kind: Secret metadata: name: cifs-secret namespace: longhorn-system type: Opaque data: CIFSUSERNAME: ${CIFSUSERNAME} CIFSPASSWORD: ${CIFSPASSWORD} ``` Set the setting Backup Target Credential Secret to `cifs-secret` longhorn-manager Introduce the fields `CIFSUSERNAME` and `CIFSPASSWORD` in credentials. The two fields are passed to engine and replica processes for volume backup and restore operations. backupstore Implement SMB/CIFS register/unregister and mount/unmount functions Set a SMB/CIFS share as backup storage. Back up volumes to the backup storage and the operation should succeed. Restore backups and the operation should succeed." } ]
{ "category": "Runtime", "file_name": "20230116-smb-cifs-backup-store-support.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "What are the main conponents of Carina and their resposibility? Carina has three main components: carina-schedulercarina-controllercarina-node. User can get detailed runtime information by checking each components' logs. carina-schedulerall pods using PVC backed by Carina will be scheduled by carina-scheduler. carina-controllerWatching the events of PVC and creates LogicVolume internally. carina-nodeManaging local disks and watching events of LogicVolume and create local LVM or raw volumes. Known issue, PV creation may fail if the local disks' performance is really poor. Carina will try to create LVM volume every ten seconds. The creation will be failed if retries 10 times. User can learn more details by using `kubectl get lv`. Once the PV has been created successfully, can the Pod migrate to other nodes. For typical local volume solutions, if node failes, the pod using local disks can't migrate to other nodes. But Carina can detect the node status and let pod migrate. The newly-borned Pod will have an empty carina volume however. How to run a pod using an specified PV on one of the nodes? using `spec.nodeName` to bypass the scheduler. For StorageClass with `WaitForFirstConsumer`, user can add one annotation `volume.kubernetes.io/selected-node: ${nodeName}` to PVC and then the pod will be scheduled to specified node. This is not recommanded unless knowning the machanisums very clearly. How to deal with the PVs if it's node been deleted from K8S cluster? Just delete the PVC and rebuild. How to create local disks for testing usage? user can create loop device if there are not enough physical disks. ```shell for i in $(seq 1 5); do truncate --size=200G /tmp/disk$i.device && \\ losetup -f /tmp/disk$i.device done ``` How to simulate local SSD disks? ```shell $ echo 0 > /sys/block/loop0/queue/rotational $ lsblk -d -o name,rota NAME ROTA loop1 1 loop0 0 ``` About bcache of each node. bcache is an kernel module. Some the Linux distributions may not enable bcache, you can disable carina's bcache suppport by below methods. ```shell $ modprobe bcache $ lsmod | grep bcache bcache 233472 0 crc64 16384 1 bcache name: host-bcache mountPath: /sys/fs/bcache name: host-bcache hostPath: path: /sys/fs/bcache ``` Enjoy Carina!" } ]
{ "category": "Runtime", "file_name": "FAQ.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark completion\" layout: docs Output shell completion code for the specified shell (bash or zsh) Generate shell completion code. Auto completion supports both bash and zsh. Output is to STDOUT. Load the ark completion code for bash into the current shell - source <(ark completion bash) Load the ark completion code for zsh into the current shell - source <(ark completion zsh) ``` ark completion SHELL [flags] ``` ``` -h, --help help for completion ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources." } ]
{ "category": "Runtime", "file_name": "ark_completion.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Velero Install CLI\" layout: docs This document serves as a guide to using the `velero install` CLI command to install `velero` server components into your kubernetes cluster. NOTE: `velero install` will, by default, use the CLI's version information to determine the version of the server components to deploy. This behavior may be overridden by using the `--image` flag. Refer to . This section explains some of the basic flags supported by the `velero install` CLI command. For a complete explanation of the flags, please run `velero install --help` ```bash velero install \\ --plugins <PLUGINCONTAINERIMAGE [PLUGINCONTAINERIMAGE]> --provider <YOUR_PROVIDER> \\ --bucket <YOUR_BUCKET> \\ --secret-file <PATHTOFILE> \\ --velero-pod-cpu-request <CPU_REQUEST> \\ --velero-pod-mem-request <MEMORY_REQUEST> \\ --velero-pod-cpu-limit <CPU_LIMIT> \\ --velero-pod-mem-limit <MEMORY_LIMIT> \\ [--use-restic] \\ [--default-volumes-to-restic] \\ [--restic-pod-cpu-request <CPU_REQUEST>] \\ [--restic-pod-mem-request <MEMORY_REQUEST>] \\ [--restic-pod-cpu-limit <CPU_LIMIT>] \\ [--restic-pod-mem-limit <MEMORY_LIMIT>] ``` The values for the resource requests and limits flags follow the same format as For plugin container images, please refer to our page. This section provides examples that serve as a starting point for more customized installations. ```bash velero install --provider gcp --plugins velero/velero-plugin-for-gcp:v1.0.0 --bucket mybucket --secret-file ./gcp-service-account.json velero install --provider aws --plugins velero/velero-plugin-for-aws:v1.0.0 --bucket backups --secret-file ./aws-iam-creds --backup-location-config region=us-east-2 --snapshot-location-config region=us-east-2 --use-restic velero install --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.0.0 --bucket $BLOBCONTAINER --secret-file ./credentials-velero --backup-location-config resourceGroup=$AZUREBACKUPRESOURCEGROUP,storageAccount=$AZURESTORAGEACCOUNTID[,subscriptionId=$AZUREBACKUPSUBSCRIPTIONID] --snapshot-location-config apiTimeout=<YOURTIMEOUT>[,resourceGroup=$AZUREBACKUPRESOURCEGROUP,subscriptionId=$AZUREBACKUPSUBSCRIPTION_ID] ```" } ]
{ "category": "Runtime", "file_name": "velero-install.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide gives an overview of the debugging techniques that can be used to pinpoint problems with rkt itself or with particular images. is the most straight-forward technique. It allows you to enter the namespaces of an app within a pod so you can explore its filesystem and see what's running. By default it tries to run `/bin/bash` if it's present in the app's filesystem but you can specify a command to run if the app doesn't provide a bash binary. ```bash $ rkt list UUID APP IMAGE NAME STATE CREATED STARTED NETWORKS 54ca5d22 busybox kinvolk.io/aci/busybox:1.24 running 1 hour ago 1 hour ago default:ip4=172.16.28.114 $ sudo rkt enter 54ca5d22 sh / # ls bin dev etc home proc root run sys tmp usr var / # ps PID USER TIME COMMAND 1 root 0:00 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 3 root 0:00 /usr/lib/systemd/systemd-journald 6 root 0:00 /bin/sh 40 root 0:00 sh 44 root 0:00 ps ``` This works well for images that have a shell baked in but a common practice is creating minimal images with only a statically compiled binary or a binary plus its needed libraries, which limits the usefulness of `rkt enter`. We'll later see how to overcome this limitation. As explained in , execution with rkt is divided into several distinct stages. While with `rkt enter` you enter stage2, it is sometimes useful to see what happens in stage1. For example, you might want to check what options are passed to the unit file of an app, or interact with stage1 systemd. rkt doesn't include a subcommand you can use to enter stage1, but you can use for that purpose. Note that this only applies for container stage1 images: kvm pods run in an isolated VM and `nsenter` can't enter its stage1, and fly only allows one app per pod and it doesn't have long-running processes on stage1 so there's nothing to enter. Let's enter a pod's stage1. We'll need to find stage1 systemd's PID, which is the child of the PID shown in `rkt status`. Then we can just call nsenter with it: ```bash $ rkt status 86e6df38 state=running created=2017-12-19 14:49:00.376 +0100 CET started=2017-12-19 14:49:00.466 +0100 CET networks=default:ip4=172.16.28.27 pid=8469 exited=false $ ps auxf | grep -A1 [8]469 root 8469 0.1 0.0 54204 5076 pts/1 S+ 14:48 0:00 \\ stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=try-guest --quiet --uuid=86e6df38-0762-4261-af21-d2b265555179 --machine=rkt-86e6df38-0762-4261-af21-d2b265555179 --directory=stage1/rootfs --capability=CAPAUDITWRITE,CAPCHOWN,CAPDACOVERRIDE,CAPFSETID,CAPFOWNER,CAPKILL,CAPMKNOD,CAPNETRAW,CAPNETBINDSERVICE,CAPSETUID,CAPSETGID,CAPSETPCAP,CAPSETFCAP,CAPSYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0 root 8505 0.0 0.0 62276 7300 ? 
Ss 14:48 0:00 \\_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 $ sudo nsenter -m -u -i -p -t 8505 -bash-4.3# ``` However, the stage1 environment doesn't include basic binaries necessary for debugging: ```bash -bash-4.3# ls -bash: ls: command not found -bash-4.3# cat -bash: cat: command not found ``` If you only need to inspect the stage1 filesystem, you can check `/proc/$PID/root/` from the host: ```bash appadd appstart attach dev enter etc gc iottymux lib64 prepare-app reaper.sh root stop systemd-version usr apprm appstop bin diagnostic enterexec flavor init lib opt proc rkt run sys tmp var ``` Luckily, rkt includes a helper script that injects a static busybox binary and makes entering stage1 easier. It can be found in `scripts/debug/stage1installbusybox.sh`. It takes a pod UUID as a parameter and it outputs the right nsenter command: ```bash $ rkt list --full UUID APP IMAGE NAME IMAGE ID STATE CREATED STARTED NETWORKS 86e6df38-0762-4261-af21-d2b265555179 busybox kinvolk.io/aci/busybox:1.24 sha512-140375b2a2bd running 2017-12-19 14:49:00.376 +0100 CET 2017-12-19 14:49:00.466 +0100 CET default:ip4=172.16.28.27 $ ./scripts/debug/stage1installbusybox.sh 86e6df38-0762-4261-af21-d2b265555179 Busybox" }, { "data": "Use the following command to enter pod's stage1: sudo nsenter -m -u -i -p -t 8505 -bash-4.3# cat rkt/env/busybox SHELL=/bin/sh USER=root LOGNAME=root HOME=/root PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin ACAPPNAME=busybox ``` By default, the script creates links for common busybox programs, but if you need to run more commands you can call busybox accordingly: ```bash -bash-4.3# busybox head -n2 /lib/systemd/system/busybox.service [Unit] After=systemd-journald.service ``` Run `busybox` without arguments to see a list of available commands. If you need custom tools you can add them by copying them to `/proc/$PID/root/bin`. It's easier if they're static binaries to avoid complications with dynamic libraries. As mentioned, `rkt enter` has limitations when the image to debug doesn't ship with tools to debug it or even a shell. One possible way to debug those images is to use the ACI dependencies mechanism to add a custom tools image to your container. Let's use to patch the manifest. ```bash $ rkt fetch debug-image.aci sha512-847f2ca60e473121311fac62d014a4f1 $ actool cat-manifest --pretty-print image-to-patch.aci > manifest.json ``` We'll now add the dependency to the manifest, right before the \"app\" section: ```json ], \"dependencies\": [ { \"imageID\": \"sha512-847f2ca60e473121311fac62d014a4f1\" } ], \"app\": { ``` Then we'll patch the image and run it: ```bash $ actool patch-manifest --manifest manifest.json --replace image-to-patch.aci $ sudo rkt --insecure-options=image run --interactive image-to-patch.aci (...) ``` You'll have your favorite debugging tools available. For examples of how to build images, check out . Creating images with statically built binaries is preferred to avoid complications with shared libraries. As an alternative to adding a dependency in the ACI, you can also add a volume after a container has been started using : ```bash $ sudo mkdir /proc/$PID/root/opt/stage2/$APP_NAME/rootfs/tools $ machinectl MACHINE CLASS SERVICE OS VERSION ADDRESSES rkt-b7458bf5-8e19-40c1-a7dc-7c5aa25e4970 container rkt - - 172.16.28.27... 1 machines listed. $ sudo machinectl bind $PWD/tools /opt/stage2/$APP_NAME/rootfs/tools ``` Where `$PID` is still the PID of systemd in the pod. 
is a very powerful tool to inspect what applications are doing. For container stage1 images or fly, you can just attach strace to processes running inside pods like any other process on the host: Let's trace the shell process running in our example busybox container. We'll also follow forks to trace new processes created by the shell: ```bash $ ps auxf | grep -A3 [8]469 root 8469 0.1 0.0 54204 5076 pts/1 S+ 14:48 0:00 \\ stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --notify-ready=yes --register=true --link-journal=try-guest --quiet --uuid=86e6df38-0762-4261-af21-d2b265555179 --machine=rkt-86e6df38-0762-4261-af21-d2b265555179 --directory=stage1/rootfs --capability=CAPAUDITWRITE,CAPCHOWN,CAPDACOVERRIDE,CAPFSETID,CAPFOWNER,CAPKILL,CAPMKNOD,CAPNETRAW,CAPNETBINDSERVICE,CAPSETUID,CAPSETGID,CAPSETPCAP,CAPSETFCAP,CAPSYS_CHROOT -- --default-standard-output=tty --log-target=null --show-status=0 root 8505 0.0 0.0 62276 7300 ? Ss 14:48 0:00 \\_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --show-status=0 root 8507 0.0 0.0 66408 8740 ? Ss 14:48 0:00 \\_ /usr/lib/systemd/systemd-journald root 8510 0.0 0.0 1212 672 pts/3 Ss+ 14:48 0:00 \\_ /bin/sh $ sudo strace -f -p 8510 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WSTOPPED, NULL) = 2646 SIGCHLD {sisigno=SIGCHLD, sicode=CLDEXITED, sipid=2646, siuid=0, sistatus=0, siutime=0, sistime=0} ioctl(10, TIOCSPGRP, [6]) = 0 wait4(-1, 0x7ffc0226b918, WNOHANG|WSTOPPED, NULL) = -1 ECHILD (No child processes) wait4(-1, 0x7ffc0226b7b8, WNOHANG|WSTOPPED, NULL) = -1 ECHILD (No child processes) write(1, \"This is the current date: \", 26) = 26 clone(strace: Process 12807 attached (...) ``` To find out more about strace, check out Julia Evans' . Sometimes strace is not enough to figure out problems with your applications. From version 7.10, it is possible to use to debug processes running in containers from the host. Same as with strace, you need to find the PID of the process you want to debug and then just run: ``` $ sudo gdb -p $PID ```" } ]
{ "category": "Runtime", "file_name": "debugging.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "<!-- toc --> - - - - - - - <!-- /toc --> In Kubernetes, implementing Services of type LoadBalancer usually requires an external load balancer. On cloud platforms (including public clouds and platforms like NSX-T) that support load balancers, Services of type LoadBalancer can be implemented by the Kubernetes Cloud Provider, which configures the cloud load balancers for the Services. However, the load balancer support is not available on all platforms, or in some cases, it is complex or has extra cost to deploy external load balancers. This document describes two options for supporting Services of type LoadBalancer with Antrea, without an external load balancer: Using Antrea's built-in external IP management for Services of type LoadBalancer Leveraging Antrea supports external IP management for Services of type LoadBalancer since version 1.5, which can work together with `AntreaProxy` or `kube-proxy` to implement Services of type LoadBalancer, without requiring an external load balancer. With the external IP management feature, Antrea can allocate an external IP for a Service of type LoadBalancer from an , and select a Node based on the ExternalIPPool's NodeSelector to host the external IP. Antrea configures the Service's external IP on the selected Node, and thus Service requests to the external IP will get to the Node, and they are then handled by `AntreaProxy` or `kube-proxy` on the Node and distributed to the Service's Endpoints. Antrea also implements a Node failover mechanism for Service external IPs. When Antrea detects a Node hosting an external IP is down, it will move the external IP to another available Node of the ExternalIPPool. If you are using `kube-proxy` in IPVS mode, you need to make sure `strictARP` is enabled in the `kube-proxy` configuration. For more information about how to configure `kube-proxy`, please refer to the [Interoperability with kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section. If you are using `kube-proxy` iptables mode or , no extra configuration change is needed. At this moment, external IP management for Services is an alpha feature of Antrea. The `ServiceExternalIP` feature gate of `antrea-agent` and `antrea-controller` must be enabled for the feature to work. You can enable the `ServiceExternalIP` feature gate in the `antrea-config` ConfigMap in the Antrea deployment YAML: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: ServiceExternalIP: true antrea-controller.conf: | featureGates: ServiceExternalIP: true ``` The feature works with both `AntreaProxy` and `kube-proxy`, including the following configurations: `AntreaProxy` without `proxyAll` enabled - this is `antrea-agent`'s default configuration, in which `kube-proxy` serves the request traffic for Services of type LoadBalancer (while `AntreaProxy` handles Service requests from Pods). `AntreaProxy` with `proxyAll` enabled - in this case, `AntreaProxy` handles all Service traffic, including Services of type LoadBalancer. `AntreaProxy` disabled - `kube-proxy` handles all Service traffic, including Services of type LoadBalancer. Service external IPs are allocated from an ExternalIPPool, which defines a pool of external IPs and the set of Nodes to which the external IPs can be assigned. To learn more information about ExternalIPPool, please refer to [the Egress documentation](egress.md#the-externalippool-resource). 
The example below defines an ExternalIPPool with IP range \"10.10.0.2 -" }, { "data": "and it selects the Nodes with label \"network-role: ingress-node\" to host the external IPs: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ExternalIPPool metadata: name: service-external-ip-pool spec: ipRanges: start: 10.10.0.2 end: 10.10.0.10 nodeSelector: matchLabels: network-role: ingress-node ``` For Antrea to manage the externalIP for a Service of type LoadBalancer, the Service should be annotated with `service.antrea.io/external-ip-pool`. For example: ```yaml apiVersion: v1 kind: Service metadata: name: my-service annotations: service.antrea.io/external-ip-pool: \"service-external-ip-pool\" spec: selector: app: MyApp ports: protocol: TCP port: 80 targetPort: 9376 type: LoadBalancer ``` You can also request a particular IP from an ExternalIPPool by setting the loadBalancerIP field in the Service spec to that specific IP available in the ExternalIPPool, Antrea will allocate the IP from the ExternalIPPool for the Service. For example: ```yaml apiVersion: v1 kind: Service metadata: name: my-service annotations: service.antrea.io/external-ip-pool: \"service-external-ip-pool\" spec: selector: app: MyApp loadBalancerIP: \"10.10.0.2\" ports: protocol: TCP port: 80 targetPort: 9376 type: LoadBalancer ``` Once Antrea allocates an external IP for a Service of type LoadBalancer, it will set the IP to the `loadBalancer.ingress` field in the Service resource `status`. For example: ```yaml apiVersion: v1 kind: Service metadata: name: my-service annotations: service.antrea.io/external-ip-pool: \"service-external-ip-pool\" spec: selector: app: MyApp ports: protocol: TCP port: 80 targetPort: 9376 clusterIP: 10.96.0.11 type: LoadBalancer status: loadBalancer: ingress: ip: 10.10.0.2 hostname: node-1 ``` You can validate that the Service can be accessed from the client using the `<external IP>:<port>` (`10.10.0.2:80/TCP` in the above example). As described above, the Service externalIP management by Antrea configures a Service's external IP to a Node, so that the Node can receive Service requests. However, this requires that the externalIP on the Node be reachable through the Node network. The simplest way to achieve this is to reserve a range of IPs from the Node network subnet, and define Service ExternalIPPools with the reserved IPs, when the Nodes are connected to a layer 2 subnet. Or, another possible way might be to manually configure Node network routing (e.g. by adding a static route entry to the underlay router) to route the Service traffic to the Node that hosts the Service's externalIP. As of now, Antrea supports Service externalIP management only on Linux Nodes. Windows Nodes are not supported yet. MetalLB also implements external IP management for Services of type LoadBalancer, and it can be deployed to a Kubernetes cluster with Antrea. MetalLB supports two modes - layer 2 mode and BGP mode - to advertise an Service external IP to the Node network. The layer 2 mode is similar to what Antrea external IP management implements and has the same limitation that the external IPs must be allocated from the Node network subnet. The BGP mode leverages BGP to advertise external IPs to the Node network router. It does not have the layer 2 subnet limitation, but requires the Node network to support BGP. MetalLB will automatically allocate external IPs for every Service of type LoadBalancer, and it sets the allocated IP to the `loadBalancer.ingress` field in the Service resource `status`. 
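In either case, a quick way to confirm that an address was actually assigned is to read it back from the Service status and probe it from a client on the Node network (the Service name and address below reuse the illustrative values from the examples above):

```bash
# Print the external IP recorded in .status.loadBalancer.ingress
kubectl get svc my-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# From a machine with a route to the Node network, probe the Service port
curl http://10.10.0.2:80
```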
MetalLB also supports user specified `loadBalancerIP` in the Service spec. For more information, please refer to the . To learn more about MetalLB concepts and functionalities, you can read the" }, { "data": "You can run the following commands to install MetalLB using the YAML manifests: ```bash kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.11/config/manifests/metallb-native.yaml ``` The commands will deploy MetalLB version 0.13.11 into Namespace `metallb-system`. You can also refer to this [MetalLB installation guide](https://metallb.universe.tf/installation) for other ways of installing MetalLB. As MetalLB will allocate external IPs for all Services of type LoadBalancer, once it is running, the Service external IP management feature of Antrea should not be enabled to avoid conflicts with MetalLB. You can deploy Antrea with the default configuration (in which the `ServiceExternalIP` feature gate of `antrea-agent` is set to `false`). MetalLB can work with both `AntreaProxy` and `kube-proxy` configurations of `antrea-agent`. Similar to the case of Antrea Service external IP management, MetalLB layer 2 mode also requires `kube-proxy`'s `strictARP` configuration to be enabled, when you are using `kube-proxy` IPVS. Please refer to the [Interoperability with kube-proxy IPVS mode](#interoperability-with-kube-proxy-ipvs-mode) section for more information. MetalLB is configured through Custom Resources (since v0.13). To configure MetalLB to work in the layer 2 mode, you need to create an `L2Advertisement` resource, as well as an `IPAddressPool` resource, which provides the IP ranges to allocate external IPs from. The IP ranges should be from the Node network subnet. For example: ```yaml apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: 10.10.0.2-10.10.0.10 apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: example namespace: metallb-system ``` The BGP mode of MetalLB requires more configuration parameters to establish BGP peering to the router. The example resources below configure MetalLB using AS number 64500 to connect to peer router 10.0.0.1 with AS number 64501: ```yaml apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: sample namespace: metallb-system spec: myASN: 64500 peerASN: 64501 peerAddress: 10.0.0.1 apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: 10.10.0.2-10.10.0.10 apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example namespace: metallb-system ``` In addition to the basic layer 2 and BGP mode configurations described in this document, MetalLB supports a few more advanced BGP configurations and supports configuring multiple IP pools which can use different modes. For more information, please refer to the . Both Antrea Service external IP management and MetalLB layer 2 mode require `kube-proxy`'s `strictARP` configuration to be enabled, to work with `kube-proxy` in IPVS mode. 
You can check the `strictARP` configuration in the `kube-proxy` ConfigMap: ```bash $ kubectl describe configmap -n kube-system kube-proxy | grep strictARP strictARP: false ``` You can set `strictARP` to `true` by editing the `kube-proxy` ConfigMap: ```bash kubectl edit configmap -n kube-system kube-proxy ``` Or, simply run the following command to set it: ```bash $ kubectl get configmap kube-proxy -n kube-system -o yaml | \\ sed -e \"s/strictARP: false/strictARP: true/\" | \\ kubectl apply -f - -n kube-system ``` Last, to check the change is made: ```bash $ kubectl describe configmap -n kube-system kube-proxy | grep strictARP strictARP: true ``` If you are using Antrea v1.7.0 or later, please ignore the issue. The previous implementation of Antrea Egress before v1.7.0 does not work with the `strictARP` configuration of `kube-proxy`. It means Antrea Egress cannot work together with Service external IP management or MetalLB layer 2 mode, when `kube-proxy` IPVS is used. This issue was fixed in Antrea v1.7.0." } ]
{ "category": "Runtime", "file_name": "service-loadbalancer.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Run Velero on GCP\" layout: docs You can run Kubernetes on Google Cloud Platform in either: Kubernetes on Google Compute Engine virtual machines Google Kubernetes Engine If you do not have the `gcloud` and `gsutil` CLIs locally installed, follow the to set them up. Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` We'll refer to the directory you extracted to as the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. Velero requires an object storage bucket in which to store backups, preferably unique to a single Kubernetes cluster (see the for more details). Create a GCS bucket, replacing the <YOUR_BUCKET> placeholder with the name of your bucket: ```bash BUCKET=<YOUR_BUCKET> gsutil mb gs://$BUCKET/ ``` To integrate Velero with GCP, create a Velero-specific : View your current config settings: ```bash gcloud config list ``` Store the `project` value from the results in the environment variable `$PROJECT_ID`. ```bash PROJECT_ID=$(gcloud config get-value project) ``` Create a service account: ```bash gcloud iam service-accounts create velero \\ --display-name \"Velero service account\" ``` > If you'll be using Velero to backup multiple clusters with multiple GCS buckets, it may be desirable to create a unique username per cluster rather than the default `velero`. Then list all accounts and find the `velero` account you just created: ```bash gcloud iam service-accounts list ``` Set the `$SERVICEACCOUNTEMAIL` variable to match its `email` value. ```bash SERVICEACCOUNTEMAIL=$(gcloud iam service-accounts list \\ --filter=\"displayName:Velero service account\" \\ --format 'value(email)') ``` Attach policies to give `velero` the necessary permissions to function: ```bash ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get ) gcloud iam roles create velero.server \\ --project $PROJECT_ID \\ --title \"Velero Server\" \\ --permissions \"$(IFS=\",\"; echo \"${ROLE_PERMISSIONS[*]}\")\" gcloud projects add-iam-policy-binding $PROJECT_ID \\ --member serviceAccount:$SERVICEACCOUNTEMAIL \\ --role projects/$PROJECT_ID/roles/velero.server gsutil iam ch serviceAccount:$SERVICEACCOUNTEMAIL:objectAdmin gs://${BUCKET} ``` Create a service account key, specifying an output file (`credentials-velero`) in your local directory: ```bash gcloud iam service-accounts keys create credentials-velero \\ --iam-account $SERVICEACCOUNTEMAIL ``` If you run Google Kubernetes Engine (GKE), make sure that your current IAM user is a cluster-admin. This role is required to create RBAC objects. See for more information. Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it. ```bash velero install \\ --provider gcp \\ --bucket $BUCKET \\ --secret-file ./credentials-velero ``` Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready. 
(Optional) Specify `--snapshot-location-config snapshotLocation=<YOUR_LOCATION>` to keep snapshots in a specific availability zone. See the volume snapshot location documentation for details. (Optional) Specify additional configurable parameters for the `--backup-location-config` flag. (Optional) Specify additional configurable parameters for the `--snapshot-location-config` flag. (Optional) Specify CPU and memory resource requests and limits for the Velero/restic pods. For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation." } ]
{ "category": "Runtime", "file_name": "gcp-config.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "oep-number: draft-20190605 title: OpenEBS Enhancement Proposal Process authors: \"@amitkumardas\" owners: \"@kmova\" \"@vishnuitta\" editor: \"@amitkumardas\" creation-date: 2019-06-05 last-updated: 2019-06-05 status: provisional * * * * * * * * * A standardized development process for OpenEBS is proposed in order to provide a common structure for proposing changes to OpenEBS ensure that the motivation for a change is clear allow for the enumeration stability milestones and stability graduation criteria persist project information in a Version Control System (VCS) for future OpenEBS contributors & users support the creation of high value user facing information such as: an overall project development roadmap motivation for impactful user facing changes reserve GitHub issues for tracking work in flight rather than creating \"umbrella\" issues ensure community participants are successfully able to drive changes to completion across one or more releases while stakeholders are adequately represented throughout the process This process is supported by a unit of work called a OpenEBS Enhancement Proposal or OEP. A OEP attempts to combine aspects of a feature, and effort tracking document a product requirements document design document into one file which is created incrementally in collaboration with one or more owners. A single GitHub Issue or Pull request seems to be required in order to understand and communicate upcoming changes to OpenEBS. In a blog post describing the , Russ Cox explains that it is difficult but essential to describe the significance of a problem in a way that someone working in a different environment can understand as a project it is vital to be able to track the chain of custody for a proposed enhancement from conception through implementation. Without a standardized mechanism for describing important enhancements our talented technical writers and product managers struggle to weave a coherent narrative explaining why a particular release is important. Additionally for critical infrastructure such as OpenEBS adopters need a forward looking road map in order to plan their adoption strategy. The purpose of the OEP process is to reduce the amount of \"tribal knowledge\" in our community. By moving decisions from a smattering of mailing lists, video calls and hallway conversations into a well tracked artifact this process aims to enhance communication and discoverability. A OEP is broken into sections which can be merged into source control incrementally in order to support an iterative development process. An important goal of the OEP process is ensuring that the process for submitting the content contained in is both clear and efficient. The OEP process is intended to create high quality uniform design and implementation documents for OWNERs to deliberate. The definition of what constitutes an \"enhancement\" is a foundational concern for the OpenEBS project. Roughly any OpenEBS user or operator facing enhancement should follow the OEP process: if an enhancement would be described in either written or verbal communication to anyone besides the OEP author or developer then consider creating a OEP. Similarly, any technical effort (refactoring, major architectural change) that will impact a large section of the development community should also be communicated widely. 
The OEP process is suited for this even if it will have zero impact on the typical user or" }, { "data": "As the local bodies of governance, OWNERSs should have broad latitude in describing what constitutes an enhancement which should be tracked through the OEP process. OWNERs may find that helpful to enumerate what does not require a OEP rather than what does. OWNERs also have the freedom to customize the OEP template according to their OWNER specific concerns. For example the OEP template used to track FEATURE changes will likely have different subsections than the template for proposing governance changes. However, as changes start impacting other aspects or the larger developer community, the OEP process should be used to coordinate and communicate. Enhancements that have major impacts on multiple OWNERs should use the OEP process. A single OWNER will own the OEP but it is expected that the set of approvers will span the impacted OWNERs. The OEP process is the way that OWNERs can negotiate and communicate changes that cross boundaries. OEPs will also be used to drive large changes that will cut across all parts of the project. These OEPs will be owned among the OWNERs and should be seen as a way to communicate the most fundamental aspects of what OpenEBS is. The template for a OEP is precisely defined There is a place in each OEP for a YAML document that has standard metadata. This will be used to support tooling around filtering and display. It is also critical to clearly communicate the status of a OEP. Metadata items: oep-number* Required Each proposal has a number. This is to make all references to proposals as clear as possible. This is especially important as we create a network cross references between proposals. Before having the `Approved` status, the number for the OEP will be in the form of `draft-YYYYMMDD`. The `YYYYMMDD` is replaced with the current date when first creating the OEP. The goal is to enable fast parallel merges of pre-acceptance OEPs. On acceptance a sequential dense number will be assigned. This will be done by the editor and will be done in such a way as to minimize the chances of conflicts. The final number for a OEP will have no prefix. title* Required The title of the OEP in plain language. The title will also be used in the OEP filename. See the template for instructions and details. status* Required The current state of the OEP. Must be one of `provisional`, `implementable`, `implemented`, `deferred`, `rejected`, `withdrawn`, or `replaced`. authors* Required A list of authors for the OEP. This is simply the Github ID. In the future we may enhance this to include other types of identification. owners* Required An OWNER is the person or entity that works on the proposal. OWNERs consist of `approvers` and `reviewers` joined from the file OWNERs are listed as `@owner` where the name matches up with the Github ID. The OWNER that is most closely associated with this OEP. If there is code or other artifacts that will result from this OEP, then it is expected that this OWNER will take responsibility for the bulk of those artifacts. editor* Required Someone to keep things moving forward. If not yet chosen replace with `TBD` Same name/contact scheme as `authors` creation-date* Required The date that the OEP was first submitted in a" }, { "data": "In the form `yyyy-mm-dd` While this info will also be in source control, it is helpful to have the set of OEP files stand on their own. last-updated* Optional The date that the OEP was last changed significantly. 
In the form `yyyy-mm-dd` see-also* Optional A list of other OEPs that are relevant to this OEP. In the form `OEP 123` replaces* Optional A list of OEPs that this OEP replaces. Those OEPs should list this OEP in their `superseded-by`. In the form `OEP 123` superseded-by* A list of OEPs that supersede this OEP. Use of this should be paired with this OEP moving into the `Replaced` status. In the form `OEP 123` A OEP has the following states `provisional`: The OEP has been proposed and is actively being defined. This is the starting state while the OEP is being fleshed out and actively defined and discussed. The OWNER has accepted that this is work that needs to be done. `implementable`: The approvers have approved this OEP for implementation and OWNERs create, if appropriate, a to track implementation work. `implemented`: The OEP has been implemented and is no longer actively changed. OWNERs reflect the status change and close its matching milestone, if appropriate. `deferred`: The OEP is proposed but not actively being worked on. `rejected`: The approvers and authors have decided that this OEP is not moving forward. The OEP is kept around as a historical document. `withdrawn`: The OEP has been withdrawn by the authors. `replaced`: The OEP has been replaced by a new OEP. The `superseded-by` metadata value should point to the new OEP. OEPs are checked into under the `/contribute/design/feature` directory. New OEPs can be checked in with a file name in the form of `draft-YYYYMMDD-my-title.md`. As significant work is done on the OEP the authors can assign a OEP number. No other changes should be put in that PR so that it can be approved quickly and minimize merge conflicts. The OEP number can also be done as part of the initial submission if the PR is likely to be uncontested and merged quickly. Taking a cue from the , we define the role of a OEP editor. The job of an OEP editor is likely very similar to the and will hopefully provide another opportunity for people who do not write code daily to contribute to OpenEBS. In keeping with the OEP editors which Read the OEP to check if it is ready: sound and complete. The ideas must make technical sense, even if they don't seem likely to be accepted. The title should accurately describe the content. Edit the OEP for language (spelling, grammar, sentence structure, etc.), markup (for yaml, schema naming conventions), code style (examples should match idiomatic maya standards). OEP editors should generally not pass judgement on a OEP beyond editorial corrections. OEP editors can also help inform authors about the process and otherwise help things move" }, { "data": "It is proposed that the primary metrics which would signal the success or failure of the OEP process are how many \"enhancements\" are tracked with a OEP distribution of time a OEP spends in each state OEP rejection rate PRs referencing a OEP merged per week number of issues open which reference a OEP number of contributors who authored a OEP number of contributors who authored a OEP for the first time number of orphaned OEPs number of retired OEPs number of superseded OEPs The OEP process as proposed was essentially stolen from the KUDO project which has references to Kubernetes process that also is the which itself seems to be very similar to the Any additional process has the potential to engender resentment within the community. There is also a risk that the OEP process as designed will not sufficiently address the scaling challenges we face today. 
PR review bandwidth is already at a premium and we may find that the OEP process introduces an unreasonable bottleneck on our development velocity. It certainly can be argued that the lack of a dedicated issue/defect tracker beyond GitHub issues contributes to our challenges in managing a project like OpenEBS, however, given that other large organizations, including GitHub itself, make effective use of GitHub issues perhaps the argument is overblown. The centrality of Git and GitHub within the OEP process also may place too high a barrier to potential contributors, however, given that both Git and GitHub are required to contribute code changes to OpenEBS today perhaps it would be reasonable to invest in providing support to those unfamiliar with this tooling. Expanding the proposal template beyond the single sentence description currently required in the may be a heavy burden for non native English speakers and here the role of the OEP editor combined with kindness and empathy will be crucial to making the process successful. The use of GitHub issues when proposing changes does not provide OWNERs good facilities for signaling approval or rejection of a proposed change to OpenEBS since anyone can open a GitHub issue at any time. Additionally managing a proposed change across multiple releases is somewhat cumbersome as labels and milestones need to be updated for every release that a change spans. These long lived GitHub issues lead to an ever increasing number of issues open against `kubernetes/features` which itself has become a management problem in the Kubernetes community. In addition to the challenge of managing issues over time, searching for text within an issue can be challenging. The flat hierarchy of issues can also make navigation and categorization tricky. While not all community members might not be comfortable using Git directly, it is imperative that as a community we work to educate people on a standard set of tools so they can take their experience to other projects they may decide to work on in the future. While git is a fantastic version control system (VCS), it is not a project management tool nor a cogent way of managing an architectural catalog or backlog; this proposal is limited to motivating the creation of a standardized definition of work in order to facilitate project management. This primitive for describing a unit of work may also allow contributors to create their own personalized view of the state of the project while relying on Git and GitHub for consistency and durable storage. How reviewers and approvers are assigned to a OEP Example schedule, deadline, and time frame for each stage of a OEP Communication/notification mechanisms Review meetings and escalation procedure" } ]
{ "category": "Runtime", "file_name": "oep-process.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Java API Applications primarily interact with Alluxio through its File System API. Java users can either use the , or the , which wraps the Alluxio Java Client to implement the Hadoop API. Another common option is through Alluxio . Users can interact with Alluxio using the same S3 clients used for AWS S3 operations. This makes it easy to change existing S3 workloads to use Alluxio. Alluxio also provides a after mounting Alluxio as a local FUSE volume. Right now Alluxio POSIX API mainly targets the ML/AI workloads (especially read heavy workloads). By setting up an Alluxio Proxy, users can also interact with Alluxio through for operations doesn't support. This is mainly for admin actions not supported by S3 API. For example, mount and unmount operations. The REST API is currently used for the Go and Python language bindings. Alluxio provides access to data through a file system interface. Files in Alluxio offer write-once semantics: they become immutable after they have been written in their entirety and cannot be read before being completed. Alluxio provides users with two different File System APIs to access the same file system: and The Alluxio file system API provides full functionality, while the Hadoop compatible API gives users the flexibility of leveraging Alluxio without having to modify existing code written using Hadoop's API with limitations. To build your Java application to access Alluxio File System using , include the artifact `alluxio-shaded-client` in your `pom.xml` like the following: ```xml <dependency> <groupId>org.alluxio</groupId> <artifactId>alluxio-shaded-client</artifactId> <version>{{site.ALLUXIOVERSIONSTRING}}</version> </dependency> ``` Available since `2.0.1`, this artifact is self-contained by including all its transitive dependencies in a shaded form to prevent potential dependency conflicts. This artifact is recommended generally for a project to use Alluxio client. Alternatively, an application can also depend on the `alluxio-core-client-fs` artifact for the or the `alluxio-core-client-hdfs` artifact for the of Alluxio. These two artifacts do not include transitive dependencies and therefore much smaller in size. They also both include in `alluxio-shaded-client` artifact. This section introduces the basic operations to use the Alluxio `FileSystem` interface. Read the for the complete list of API methods. All resources with the Alluxio Java API are specified through an which represents the path to the resource. To obtain an Alluxio File System client in Java code, use : ```java FileSystem fs = FileSystem.Factory.get(); ``` All metadata operations as well as opening a file for reading or creating a file for writing are executed through the `FileSystem` object. Since Alluxio files are immutable once written, the idiomatic way to create files is to use , which returns a stream object that can be used to write the file. For example: Note: there are some file path name limitation when creating files through Alluxio. Please check ```java FileSystem fs = FileSystem.Factory.get(); AlluxioURI path = new AlluxioURI(\"/myFile\"); // Create a file and get its output stream FileOutStream out = fs.createFile(path); // Write data out.write(...); // Close and complete file out.close(); ``` All operations on existing files or directories require the user to specify the" }, { "data": "An `AlluxioURI` can be used to perform various operations, such as modifying the file metadata (i.e. 
TTL or pin state) or getting an input stream to read the file. Use to obtain a stream object that can be used to read a file. For example: ```java FileSystem fs = FileSystem.Factory.get(); AlluxioURI path = new AlluxioURI(\"/myFile\"); // Open the file for reading FileInStream in = fs.openFile(path); // Read data in.read(...); // Close file relinquishing the lock in.close(); ``` For all `FileSystem` operations, an additional `options` field may be specified, which allows users to specify non-default settings for the operation. For example: ```java FileSystem fs = FileSystem.Factory.get(); AlluxioURI path = new AlluxioURI(\"/myFile\"); // Generate options to set a custom blocksize of 64 MB CreateFilePOptions options = CreateFilePOptions .newBuilder() .setBlockSizeBytes(64 * Constants.MB) .build(); FileOutStream out = fs.createFile(path, options); ``` Alluxio configuration can be set through `alluxio-site.properties`, but these properties apply to all instances of Alluxio that read from the file. If fine-grained configuration management is required, pass in a customized configuration object when creating the `FileSystem` object. The generated `FileSystem` object will have modified configuration properties, independent of any other `FileSystem` clients. ```java FileSystem normalFs = FileSystem.Factory.get(); AlluxioURI normalPath = new AlluxioURI(\"/normalFile\"); // Create a file with default properties FileOutStream normalOut = normalFs.createFile(normalPath); ... normalOut.close(); // Create a file system with custom configuration InstancedConfiguration conf = InstancedConfiguration.defaults(); conf.set(PropertyKey.SECURITYLOGINUSERNAME, \"alice\"); FileSystem customizedFs = FileSystem.Factory.create(conf); AlluxioURI customizedPath = new AlluxioURI(\"/customizedFile\"); // The newly created file will be created under the username \"alice\" FileOutStream customizedOut = customizedFs.createFile(customizedPath); ... customizedOut.close(); // normalFs can still be used as a FileSystem client with the default username. // Likewise, using customizedFs will use the username \"alice\". ``` Alluxio uses two different storage types: Alluxio managed storage and under storage. Alluxio managed storage is the memory, SSD, and/or HDD allocated to Alluxio workers. Under storage is the storage resource managed by the underlying storage system, such as S3 or HDFS. Users can specify the interaction with Alluxio managed storage and under storage through `ReadType` and `WriteType`. `ReadType` specifies the data read behavior when reading a file. `WriteType` specifies the data write behavior when writing a new file, i.e. whether the data should be written in Alluxio Storage. Below is a table of the expected behaviors of `ReadType`. Reads will always prefer Alluxio storage over the under storage. <table class=\"table table-striped\"> <tr><th>Read Type</th><th>Behavior</th> </tr> {% for readtype in site.data.table.ReadType %} <tr> <td>{{readtype.readtype}}</td> <td>{{site.data.table.en.ReadType[readtype.readtype]}}</td> </tr> {% endfor %} </table> Below is a table of the expected behaviors of `WriteType`. <table class=\"table table-striped\"> <tr><th>Write Type</th><th>Behavior</th> </tr> {% for writetype in site.data.table.WriteType %} <tr> <td>{{writetype.writetype}}</td> <td>{{site.data.table.en.WriteType[writetype.writetype]}}</td> </tr> {% endfor %} </table> Alluxio provides location policy to choose which workers to store the blocks of a file. 
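One way to select a non-default policy is through the client configuration object introduced earlier; the following is only a sketch, and the fully qualified class name is an assumption based on the policy list later in this section. The next paragraphs describe the two configuration properties involved and the built-in policies that can be plugged into them.

```java
InstancedConfiguration conf = InstancedConfiguration.defaults();
// Assumed fully qualified name for the DeterministicHashPolicy described below
conf.set(PropertyKey.fromString("alluxio.user.ufs.block.read.location.policy"),
    "alluxio.client.block.policy.DeterministicHashPolicy");
// Limit each block to 3 candidate workers (property name quoted later in this section)
conf.set(PropertyKey.fromString(
    "alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards"), "3");
FileSystem fs = FileSystem.Factory.create(conf);
```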
Using Alluxio's Java API, users can set the policy in `CreateFilePOptions` for writing files and `OpenFilePOptions` for reading files into Alluxio. Users can override the default policy class in the . Two configuration properties are available: `alluxio.user.ufs.block.read.location.policy` This controls which worker is selected to cache a block that is not currently cached in Alluxio and will be read from UFS." }, { "data": "This controls which worker is selected to cache a block generated from the client, and possibly persist it to the UFS. The built-in policies include: This is the default policy. > A policy that returns the local worker first, and if the local worker doesn't > exist or doesn't have enough capacity, will select the nearest worker from the active > workers list with sufficient capacity. If no worker meets capacity criteria, will randomly select a worker from the list of all workers. This is the same as `LocalFirstPolicy` with the following addition: > The property `alluxio.user.block.avoid.eviction.policy.reserved.size.bytes` > is used as buffer space on each worker when calculating available space > to store each block. If no worker meets availability criteria, will randomly select a worker from the list of all workers. > A policy that returns the worker with the most available bytes. If no worker meets availability criteria, will randomly select a worker from the list of all workers. > A policy that chooses the worker for the next block in a round-robin manner > and skips workers that do not have enough space. If no worker meets availability criteria, will randomly select a worker from the list of all workers. > Always returns a worker with the hostname specified by property `alluxio.worker.hostname`. If no value is set, will randomly select a worker from the list of all workers. > This policy maps the blockId to several deterministic Alluxio workers. The number of workers a block > can be mapped to can be specified by `alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards`. > The default is 1. It skips the workers that do not have enough capacity to hold the block. > > This policy is useful for limiting the amount of replication that occurs when reading blocks from > the UFS with high concurrency. With 30 workers and 100 remote clients reading the same block > concurrently, the replication level for the block would get close to 30 as each worker reads > and caches the block for one or more clients. If the clients use DeterministicHashPolicy with > 3 shards, the 100 clients will split their reads between just 3 workers, so that the replication > level for the block will be only 3 when the data is first loaded. > > Note that the hash function relies on the number of workers in the cluster, so if the number of > workers changes, the workers chosen by the policy for a given block will likely change. > This policy chooses a worker with a probability equal to the worker's normalized capacity, > i.e. the ratio of its capacity over the total capacity of all workers. It randomly distributes > workload based on the worker capacities so that larger workers get more requests. > > This policy is useful for clusters where workers have heterogeneous storage capabilities, but > the distribution of workload does not match that of" }, { "data": "For example, in a cluster of 5 > workers, one of the workers has only half the capacity of the others, however, it is co-located > with a client that generates twice the amount of read requests than others. 
In this scenario, > the default LocalFirstPolicy will quickly cause the smaller worker to go out of space, while > the larger workers has plenty of storage left unused. Although the client will retry with a > different worker when the local worker is out of space, this will increase I/O latency. > > Note that the randomness is > based on capacity instead of availability, because in the long run, all workers will be > filled up and have availability close to 0, which would cause this policy to degrade to a > uniformly distributed random policy. > This policy is a combination of DeterministicHashPolicy and CapacityBaseRandomPolicy. > It ensures each block is always assigned to the same set of workers. Additionally, provided > that block requests follow a uniform distribution, they are assigned to each worker with a probability > equal to the worker's normalized capacity. The number of workers that a block can be assigned > to can be specified by `alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards`. > > This policy is useful when CapacityBaseRandomPolicy causes too many replicas across multiple > workers, and one wish to limit the number of replication, in a way similar to > DeterministicHashPolicy. > > Note that this is not a random policy in itself. The outcome distribution of this policy is > dependent on the distribution of the block requests. When the distribution of block > requests is highly skewed, the workers chosen will not follow a distribution based on workers' > normalized capacities. Alluxio supports custom policies, so you can also develop your own policy appropriate for your workload by implementing the interface `alluxio.client.block.policy.BlockLocationPolicy`. Note that a default policy must have a constructor which takes `alluxio.conf.AlluxioConfiguration`. To use the `ASYNC_THROUGH` write type, all the blocks of a file must be written to the same worker. Alluxio allows a client to select a tier preference when writing blocks to a local worker. Currently this policy preference exists only for local workers, not remote workers; remote workers will write blocks to the highest tier. By default, data is written to the top tier. Users can modify the default setting through the `alluxio.user.file.write.tier.default` or override it through an option to the API call. For additional API information, please refer to the . On top of the , Alluxio also has a convenience class `alluxio.hadoop.FileSystem` that provides applications with a . This client translates Hadoop file operations to Alluxio file system operations, allowing users to reuse existing code written for Hadoop without modification. Read its for more details. Here is a piece of example code to read ORC files from the Alluxio file system using the Hadoop interface. ```java // create a new hadoop configuration org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration(); // enforce hadoop client to bind alluxio.hadoop.FileSystem for URIs like alluxio:// conf.set(\"fs.alluxio.impl\", \"alluxio.hadoop.FileSystem\"); conf.set(\"fs.AbstractFileSystem.alluxio.impl\", \"alluxio.hadoop.AlluxioFileSystem\"); // Now alluxio address can be used like any other hadoop-compatible file system URIs org.apache.orc.OrcFile.ReaderOptions options = new org.apache.orc.OrcFile.ReaderOptions(conf) org.apache.orc.Reader orc = org.apache.orc.OrcFile.createReader( new Path(\"alluxio://localhost:19998/path/file.orc\"), options); ``` There are several example Java programs. They are:" } ]
{ "category": "Runtime", "file_name": "Java-API.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Website Guidelines\" layout: docs When making changes to the website, please run the site locally before submitting a PR and manually verify your changes. At the root of the project, run: ```bash make serve-docs ``` This runs all the Hugo dependencies in a container. Alternatively, for quickly loading the website, under the `velero/site/` directory run: ```bash hugo serve ``` For more information on how to run the website locally, please see our . To add a blog post, create a new markdown (.MD) file in the `/site/content/posts/` folder. A blog post requires the following front matter. ```yaml title: \"Title of the blog\" excerpt: Brief summary of thee blog post that appears as a preview on velero.io/blogs author_name: Jane Smith slug: URL-For-Blog categories: ['velero','release'] image: /img/posts/example-image.jpg tags: ['Velero Team', 'Nolan Brubaker'] ``` Include the `author_name` value in tags field so the page that lists the author's posts will work properly, for example https://velero.io/tags/carlisia-thompson/. Ideally each blog will have a unique image to use on the blog home page, but if you do not include an image, the default Velero logo will be used instead. Use an image that is less than 70KB and add it to the `/site/static/img/posts` folder." } ]
{ "category": "Runtime", "file_name": "website-guidelines.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "We can build a RISC-V virtual environment by using the QEMU virtual machine on the host. Specifically, you should use any Linux distribution as the host to install the QEMU virtual machine, then start the RISC-V openEuler image in the virtual machine, and finally install iSulad in the virtual machine image. The first is to install QEMU on the host, open a terminal, and run the following commands in turn: ```shell wget https://download.qemu.org/qemu-5.1.0.tar.xz tar xvJf qemu-5.1.0.tar.xz cd qemu-5.1.0 ./configure --target-list=riscv64-softmmu make make install ``` After installing the QEMU that supports RISC-V, you can use it to start the image of the virtual machine. For the download and installation of the image, please refer to . The following files are required to start the QEMU virtual machine: runoe1rv64.sh(optional) You can create `runoe1rv64.sh` as follows: ```shell qemu-system-riscv64 \\ -machine virt \\ -nographic \\ -smp 8 \\ -m 124G \\ -drive file=oe-rv-base-expand.qcow2,format=qcow2,id=hd0 \\ -object rng-random,filename=/dev/urandom,id=rng0 \\ -device virtio-rng-device,rng=rng0 \\ -device virtio-blk-device,drive=hd0 \\ -netdev user,id=usernet,hostfwd=tcp::12055-:22 \\ -device virtio-net-device,netdev=usernet \\ -append 'root=/dev/vda1 systemd.defaulttimeoutstart_sec=600 selinux=0 rw highres=off console=ttyS0 mem=4096M earlycon' \\ -kernel fw_payload.elf \\ ``` There are some parameter settings, you can view the parameter description of QEMU and adjust it according to the local computer configuration. There are two ways to start a virtual machine: Enter the contents of the shell file directly in the terminal. If the shell file is created, just type `sh runoe1rv64.sh` in the terminal. The default login username/password is: root/openEuler12#$ First use yum to install the required dependent packages, and then refer to 's `Build and install iSulad from source by yourself `. The errors that may occur during the process and their solutions are given below. Use the yum to install rpm packages. If you have just used `oe-rv-rv64g-30G.qcow2` without the yum provided, you can use the following command to install yum: ```shell wget https://isrc.iscas.ac.cn/mirror/openeuler-sig-riscv/oe-RISCV-repo/noarch/yum-4.2.15-8.noarch.rpm --no-check-certificate rpm -ivh yum-4.2.15-8.noarch.rpm ``` After that, use the yum to install the required packages: ```shell sudo yum --enablerepo='*' install -y automake autoconf libtool cmake make libcap libcap-devel libselinux libselinux-devel libseccomp libseccomp-devel yajl-devel git libcgroup tar python3 python3-pip device-mapper-devel libarchive libarchive-devel libcurl-devel zlib-devel glibc-headers openssl-devel gcc gcc-c++ systemd-devel systemd-libs libtar libtar-devel vim ``` If you want to modify the yum repository, you can change the `oe-rv.repo` file under `/etc/yum.repos.d/`. Usually set the yum repository as . Adjust the virtual machine time to local time. The format of the time adjustment command is as follows: date -s 2020-09-28. Different from build_guide, you need to choose to install in either of the following two ways, so that the subsequent grpc can be build smoothly: First run the following command: ```javascript $ pkg-config --cflags protobuf $ pkg-config --libs protobuf $ pkg-config --cflags --libs protobuf $ git clone https://gitee.com/src-openeuler/protobuf.git $ cd protobuf $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf protobuf-all-3.9.0.tar.gz $ cd protobuf-3.9.0 ``` This process refers to . 
if you follow the buildguide, ` 'std::systemerror'` appears when building grpc. Before installing, make some modifications in the `src/google/protobuf/stubs/common.cc` : ```sh vi" }, { "data": "``` In this file, comment out all the code related to _WIN32, as follows: ``` // updated by Aimer on linux platform //#ifdef _WIN32 //#define WIN32LEANAND_MEAN // We only need minimal includes //#include <windows.h> //#define snprintf _snprintf // see comment in strutil.cc //#elif defined(HAVE_PTHREAD) //#else //#error \"No suitable threading library available.\" //#endif ``` Refer to ``` shell $ sudo -E ./autogen.sh $ sudo -E ./configure CXXFLAGS=\"$(pkg-config --cflags protobuf)\" LIBS=\"$(pkg-config --libs protobuf)\" $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` Finally, verify that the installation was successful. ``` protoc --version ``` Output: libprotoc 3.9.0 (or other version) Due to the dependencies between protobuf and grpc installation, you can regard them as a combination, install grpc first, and then install protobuf in the protobuf directory under the third_party folder. The related compilation method can be searched for `protobuf+grpc compilation`. But the success rate of combined installation is very low. ```shell $ git clone https://gitee.com/src-openeuler/grpc.git $ cd grpc $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf grpc-1.22.0.tar.gz $ cd grpc-1.22.0 ``` Modify the source: Add the followings in `include/grpcpp/impl/codegen/callopset.h` line 90 ```shell /// Default assignment operator WriteOptions& operator=(const WriteOptions& other) = default; ``` Change `gettid` in `src/core/lib/gpr/loglinux.cc`, `src/core/lib/gpr/logposix.cc`, `src/core/lib/iomgr/evepollexlinux.cc` () to `sys_gettid()` Refer to ```shell $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` After that, you will encounter the problem of `cannot find -latomic`, and you can handle it according to the : grpc test case: ```bash cd examples/cpp/helloworld/ make //build ./greeter_server //server ./greeter_client //clientreopen a server connection ``` There are two problems encountered in the process of building lxc: About the `NR_signalfd` solution: The error of `cannot find -latomic` is reported again This error is caused by the lack of a static link library. Use the find command to search for libatomic.a and copy it to /usr/lib. The startup of iSulad also requires an `overlay` kernel module. Since the virtual machine image does not provide `overlay` by default, you need to enable this module and build the package. Download the kernel source code of the version consistent with the current mirror system (the kernel version can be viewed using the `uname -a` command) ```shell git clone https://gitee.com/openeuler/kernel.git git checkout ``` In the directory of the kernel source code, execute `make menuconfig`, find `File systems` > Configure it as [M] or [*] before Overlay filesystem support (click the space bar to switch), then save and exit; Use `make Image` to generate an Image file under ./arch/riscv/boot/; Download the kernel packaging tool `opensbi`: ```shell git clone https://gitee.com/src-openeuler/opensbi.git cd opensbi unzip v0.6.zip cd opensbi-0.6 make O=build-oe/qemu-virt PLATFORM=qemu/virt FWPAYLOAD=y FWPAYLOAD_PATH=/Generated Image path/Image ``` This step will generate the elf file, and the location of the elf file will be prompted at the end of the build. First use `scp` to copy the elf file to the host. 
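For example, run the following on the host (the guest-side path to `fw_payload.elf` is an assumption based on the `make O=build-oe/qemu-virt` output directory; the port matches the `hostfwd=tcp::12055-:22` forwarding in `run_oe1-rv64.sh`):

```shell
scp -P 12055 root@localhost:/root/opensbi-0.6/build-oe/qemu-virt/platform/qemu/virt/firmware/fw_payload.elf .
```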
Then put the .qcow2 file, .elf file, and .sh file in the same path. Finally, modify the elf file name in the kernel parameter of `run_oe1-rv64.sh` to the name of the generated elf file. Execute `sh run_oe1-rv64.sh`. https://arkingc.github.io/2018/09/05/2018-09-05-linux-kernel/ https://gitee.com/src-openeuler/risc-v-kernel/blob/master/kernel.spec https://gitee.com/src-openeuler/opensbi/blob/master/opensbi.spec" } ]
{ "category": "Runtime", "file_name": "build_guide_riscv.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "This feature allows users to control the count and size of snapshots of a volume. https://github.com/longhorn/longhorn/issues/6563 A replica honors snapshot count and size limitations when it creates a new snapshot. Users can set snapshot count and size limitation on a Volume. Users can set a global default snapshot maximum count. Snapshot space management on a node or disk level. Freeze disk before taking snapshot. Auto deleting snapshots if there is no space for creating a new system snapshot. With snapshot space management, snapshot space usage is controllable. Users can set snapshot limitation on a Volume and evaluate maximum space usage. Before snapshot space management: The default maximum snapshot count for each volume is `250`. This is a constants value in the and users don't have a way to control it. If a volume size is 1G, the maximum snapshot space usage will be 250G. After snapshot space management: There is a configurable default maximum snapshot count setting. Users can update it to overwrite the fixed value in the system. Users can set different maximum snapshot count and size on each volume. A more important volume can have more snapshot space usage. Add a new setting definition: ```go const ( SettingNameSnapshotMaxCount = SettingName(\"snapshot-max-count\") ) var ( SettingDefinitionSnapshotMaxCount = SettingDefinition{ DisplayName: \"Snapshot Maximum Count\", Description: \"Maximum snapshot count for a volume. The value should be between 2 to 250\", Category: SettingCategorySnapshot, Type: SettingTypeInt, Required: true, ReadOnly: false, Default: \"250\", } ) ``` Add two fields to `VolumeSpec`: ```go type VolumeSpec struct { // ... // +optional SnapshotMaxCount int `json:\"snapshotMaxCount\"` // +kubebuilder:validation:Type=string // +optional SnapshotMaxSize int64 `json:\"snapshotMaxSize,string\"` } ``` If `SnapshotMaxSize` is `0`, it means there is no snapshot size limit for a Volume. If `SnapshotMaxCount` is `0`, using the `snapshot-max-count` setting value to update it. If a volume is expanded, checking whether `snapshot-max-size` is smaller than `size 2`. If it is, using `size 2` to update it. The `SnapshotMaxCount` should be between `2` to `250`. This limitation includes user and system snapshot. In LH, all snapshots can be merged to one snapshot, so at least one snapshot can't be delete. To create another snapshot, we need to have enough count for it. In conclusion, the minimum value for `SnapshotMaxCount` is `2`. If `SnapshotMaxSize` is't" }, { "data": "The minimum value for `SnapshotMaxSize` is same as `Size * 2` in a Volume, because a volume can have at least two snapshots. Add two fields to `Replica`: ```go type Replica struct { // ... snapshotMaxCount int snapshotMaxSize int64 } ``` Add a function `GetSnapshotCountUsage` to retrieve snapshot count usage. We should skip the volume head, backing disk, and removed disks. Add a function `GetSnapshotSizeUsage` to retrieve total snapshot size usage. We should skip the volume head, backing disk, and removed disks. Add the `remainsnapshotsize` field to `Replica` proto message: ```protobuf message Replica { // ... int64 remainsnapshotsize = 17; } ``` Update the `getReplica` function to return `SnapshotCountUsage` and `SnapshotSizeUsage` fields. Add a `RemainSnapshotSize` field to `ReplicaInfo`: ```go type ReplicaInfo struct { // ... RemainSnapshotSize int `json:\"remainsnapshotSize\"` } ``` Add a new function `GetSnapshotCountAndSizeUsage` to return current snapshot count and size usage. 
For `GetSnapshotCountAndSizeUsage`, we should get the biggest value from all replicas, because replica data may be unsynced when the system is unsteady. Add a new function `canDoSnapshot` to check whether snapshot count and size usage is under the limitation. Add a new action `updateSnapshotMaxCount` to `Volume` (`\"/v1/volumes/{name}\"`) Add a new action `updateSnapshotMaxSize` to `Volume` (`\"/v1/volumes/{name}\"`) Integration test plan. `snapshot-max-count` setting Validate that the value is between `2` and `250`. Create a Volume with an empty `SnapshotMaxCount` and the mutator should replace the value with the `snapshot-max-count` setting value. Create a Volume with a nonempty `SnapshotMaxCount` and the mutator shouldn't update the value. A volume with `1G` size, `2` snapshot max count, and `0` snapshot max size. Create the first snapshot. It should be successful. Create the second snapshot. It should be successful. Create the third snapshot. It should fail. Delete a snapshot and create a new snapshot. It should be successful. A volume with `1G` size, `250` snapshot max count, and `2.5G` snapshot max size. Write `0.5G` data and create the first snapshot. It should be successful. Write `1G` data and create the second snapshot. It should be successful. Write `1G` data and create the third snapshot. It should be successful. Write `1G` data and create the fourth snapshot. It should fail. Delete the second or third snapshot and create a new snapshot. It should be successful. `None`" } ]
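A hedged sketch of the usage aggregation and `canDoSnapshot`-style check described in this proposal. The `replicaUsage` type and the exact comparisons are illustrative assumptions; the real Longhorn code reads the per-replica usage fields introduced above.

```go
package main

import "fmt"

// replicaUsage is a hypothetical, simplified stand-in for the per-replica
// values returned by GetSnapshotCountUsage / GetSnapshotSizeUsage.
type replicaUsage struct {
	snapshotCount int
	snapshotSize  int64
}

// aggregateUsage mirrors GetSnapshotCountAndSizeUsage: take the biggest
// count and size reported by any replica, since replica data may be unsynced.
func aggregateUsage(replicas []replicaUsage) (int, int64) {
	var maxCount int
	var maxSize int64
	for _, r := range replicas {
		if r.snapshotCount > maxCount {
			maxCount = r.snapshotCount
		}
		if r.snapshotSize > maxSize {
			maxSize = r.snapshotSize
		}
	}
	return maxCount, maxSize
}

// canDoSnapshot mirrors the check above: one more snapshot is allowed only
// while the volume stays under SnapshotMaxCount and, when SnapshotMaxSize is
// non-zero, while the total snapshot size is still below SnapshotMaxSize.
func canDoSnapshot(replicas []replicaUsage, maxCount int, maxSize int64) bool {
	count, size := aggregateUsage(replicas)
	if count+1 > maxCount {
		return false
	}
	if maxSize > 0 && size >= maxSize {
		return false
	}
	return true
}

func main() {
	replicas := []replicaUsage{{snapshotCount: 2, snapshotSize: 1 << 30}}
	fmt.Println(canDoSnapshot(replicas, 2, 0))       // false: the count limit is reached
	fmt.Println(canDoSnapshot(replicas, 250, 3<<30)) // true: still under both limits
}
```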
{ "category": "Runtime", "file_name": "20230905-snapshot-space-management.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "BR: Backup & Restore Backup Storage: The storage that meets BR requirements, for example, scalable, durable, cost-effective, etc., therefore, Backup Storage is usually implemented as Object storage or File System storage, it may be on-premises or in cloud. Backup Storage is not BR specific necessarily, so it usually doesnt provide most of the BR related features. On the other hand, storage vendors may provide BR specific storages that include some BR features like deduplication, compression, encryption, etc. For a standalone BR solution (i.e. Velero), the Backup Storage is not part of the solution, it is provided by users, so the BR solution should not assume the BR related features are always available from the Backup Storage. Backup Repository: Backup repository is layered between BR data movers and Backup Storage to provide BR related features. Backup Repository is a part of BR solution, so generally, BR solution by default leverages the Backup Repository to provide the features because Backup Repository is always available; when Backup Storage provides duplicated features, and the latter is more beneficial (i.e., performance is better), BR solution should have the ability to opt to use the Backup Storages implementation. Data Mover: The BR module to read/write data from/to workloads, the aim is to eliminate the differences of workloads. TCO: Total Cost of Ownership. This is a general criteria for products/solutions, but also means a lot for BR solutions. For example, this means what kind of backup storage (and its cost) it requires, the retention policy of backup copies, the ways to remove backup data redundancy, etc. RTO: Recovery Time Objective. This is the duration of time that users business can recover after a disaster. As a Kubernetes BR solution, Velero is pursuing the capability to back up data from the volatile and limited production environment into the durable, heterogeneous and scalable backup storage. This relies on two parts: Move data from various production workloads. The data mover has this role. Depending on the type of workload, Velero needs different data movers. For example, file system data mover, block data mover, and data movers for specific applications. At present, Velero supports moving file system data from PVs through Restic, which plays the role of the File System Data Mover. Persist data in backup storage. For a BR solution, this is the responsibility of the backup repository. Specifically, the backup repository is required to: Efficiently save data so as to reduce TCO. For example, deduplicate and compress the data before saving it Securely save data so as to meet security criteria. For example, encrypt the data on rest, make the data immutable after backup, and detect/protect from ransomware Efficiently retrieve data during restore so as to meet RTO. For example, restore a small unit of data or data associated with a small span of time Effectively manage data from all kinds of data movers in all kinds of backup storage. This means 2 things: first, apparently, backup storages are different from each other; second, some data movers may save quite different data from others, for example, some data movers save a portion of the logical object for each backup and need to visit and manage the portions as an entire logic object, aka. 
incremental" }, { "data": "The backup repository needs to provide unified functionalities to eliminate the differences from the both ends Provide scalabilities so that users could assign resources (CPU, memory, network, etc.) in a flexible way to the backup repository since backup repository contains resource consuming modules At present, Velero provides some of these capabilities by leveraging Restic (e.g., deduplication and encryption on rest). This means that in addition to being a data mover for file system level data, Restic also plays the role of a backup repository, albeit one that is incomplete and limited: Restic is an inseparable unit made up of a file system data mover and a repository. This means that the repository capabilities are only available for Restic file system backup. We cannot provide the same capabilities to other data movers using Restic. The backup storage Velero supports through our Restic backup path depends on the storage Restic supports. As a result, if there is a requirement to introduce backup storage that Restic doesnt support, we have no way to make it. There is no way to enhance or extend the repository capabilities, because of the same reason Restic is an inseparable unit, we cannot insert one or more customized layers to make the enhancements and extensions. Moreover, as reflected by user-reported issues, Restic seems to have many performance issues on both the file system data mover side and the repository side. On the other hand, based on a previous analysis and testing, we found that Kopia has better performance, with more features and more suitable to fulfill Veleros repository targets (Kopias architecture divides modules more clearly according to their responsibilities, every module plays a complete role with clear interfaces. This makes it easier to take individual modules to Velero without losing critical functionalities). Define a Unified Repository Interface that various data movers could interact with. This is for below purposes: All kinds of data movers acquire the same set of backup repository capabilities very easily Provide the possibility to plugin in different backup repositories/backup storages without affecting the upper layers Provide the possibility to plugin in modules between data mover and backup repository, so as to extend the repository capabilities Provide the possibility to scale the backup repository without affecting the upper layers Use Kopia repository to implement the Unified Repository Use Kopia uploader as the file system data mover for Pod Volume Backup Have Kopia uploader calling the Unified Repository Interface and save/retrieve data to/from the Unified Repository Make Kopia uploader generic enough to move any file system data so that other data movement cases could use it Use the existing logic or add new logic to manage the unified repository and Kopia uploader Preserve the legacy Restic path, this is for the consideration of backward compatibility The Unified Repository supports all kinds of data movers to save logic objects into it. How these logic objects are organized for a specific data mover (for example, how a volumes block data is organized and represented by a unified repository object) should be included in the related data mover design. At present, Velero saves Kubernetes resources, backup metedata, debug logs separately. Eventually, we want to save them in the Unified Repository. How to organize these data into the Unified Repository should be included in a separate design. 
For PodVolume BR, this design focuses on the data path only, other parts beyond the data read/write and data persistency are irrelevant and kept" }, { "data": "Kopia uploader is made generic enough to move any file system data. How it is integrated in other cases, is irrelevant to this design. Take CSI snapshot backup for example, how the snapshot is taken and exposed to Kopia uploader should be included in the related data mover design. The adanced modes of the Unified Repository, for example, backup repository/storage plugin, backup repository extension, etc. are not included in this design. We will have separate designs to cover them whenever necessary. Below shows the primary modules and their responsibilities: Kopia uploader, as been well isolated, could move all file system data either from the production PV (as Veleros PodVolume BR does), or from any kind of snapshot (i.e., CSI snapshot). Unified Repository Interface, data movers call the Unified Repository Interface to write/read data to/from the Unified Repository. Kopia repository layers, CAOS and CABS, work as the backup repository and expose the Kopia Repository interface. A Kopia Repository Library works as an adapter between Unified Repository Interface and Kopia Repository interface. Specifically, it implements Unified Repository Interface and calls Kopia Repository interface. At present, there is only one kind of backup repository -- Kopia Repository. If a new backup repository/storage is required, we need to create a new Library as an adapter to the Unified Repository Interface At present, the Kopia Repository works as a single piece in the same process of the caller, in future, we may run its CABS into a dedicated process or node. At present, we dont have a requirement to extend the backup repository, if needed, an extra module could be added as an upper layer into the Unified Repository without changing the data movers. Neither Kopia uploader nor Kopia Repository is invoked through CLI, instead, they are invoked through code interfaces, because we need to do lots of customizations. The Unified Repository takes two kinds of data: Unified Repository Object: This is the user's logical data, for example, files/directories, blocks of a volume, data of a database, etc. Unified Repository Manifest: This could include all other data to maintain the object data, for example, snapshot information, etc. For Unified Repository Object/Manifest, a brief guidance to data movers are as below: Data movers treat the simple unit of data they recognize as an Object. For example, file system data movers treat a file or a directory as an Object; block data movers treat a volume as an Object. However, it is unnecessary that every data mover has a unique data format in the Unified Repository, to the opposite, it is recommended that data movers could share the data formats unless there is any reason not to, in this way, the data generated by one data mover could be used by other data movers. Data movers don't need to care about the differences between full and incremental backups regarding the data organization. 
Data movers always have full views of their objects, if an object is partially written, they use the object writer's Seek function to skip the unchanged parts Unified Repository may divide the data movers' logical Object into sub-objects or slices, or append internal metadata, but they are transparent to data movers Every Object has an unified identifier, in order to retrieve the Object later, data movers need to save the identifiers into the snapshot information. The snapshot information is saved as a" }, { "data": "Manifests could hold any kind of small piece data in a K-V manner. Inside the backup repository, these kinds of data may be processed differently from Object data, but it is transparent to data movers. A Manifest also has an unified identifier, the Unified Repository provides the capabilities to list all the Manifests or a specified Manifest by its identifier, or a specified Manifest by its name, or a set of Manifests by their labels. Velero by default uses the Unified Repository for all kinds of data movement, it is also able to integrate with other data movement paths from any party, for any purpose. Details are concluded as below: Built-in Data Path: this is the default data movement path, which uses Velero built-in data movers to backup/restore workloads, the data is written to/read from the Unified Repository. Data Mover Replacement: Any party could write its own data movers and plug them into Velero. Meanwhile, these plugin data movers could also write/read data to/from Veleros Unified Repository so that these data movers could expose the same capabilities that provided by the Unified Repository. In order to do this, the data mover providers need to call the Unified Repository Interface from inside their plugin data movers. Data Path Replacement: Some vendors may already have their own data movers and backup repository and they want to replace Veleros entire data path (including data movers and backup repository). In this case, the providers only need to implement their plugin data movers, all the things downwards are a black box to Velero and managed by providers themselves (including API call, data transport, installation, life cycle management, etc.). Therefore, this case is out of the scope of Unified Repository. Below are the definitions of the Unified Repository Interface. All the functions are synchronization functions. ``` // BackupRepoService is used to initialize, open or maintain a backup repository type BackupRepoService interface { // Init creates a backup repository or connect to an existing backup repository. // repoOption: option to the backup repository and the underlying backup storage. // createNew: indicates whether to create a new or connect to an existing backup repository. Init(ctx context.Context, repoOption RepoOptions, createNew bool) error // Open opens an backup repository that has been created/connected. // repoOption: options to open the backup repository and the underlying storage. Open(ctx context.Context, repoOption RepoOptions) (BackupRepo, error) // Maintain is periodically called to maintain the backup repository to eliminate redundant data. // repoOption: options to maintain the backup repository. 
Maintain(ctx context.Context, repoOption RepoOptions) error // DefaultMaintenanceFrequency returns the defgault frequency of maintenance, callers refer this // frequency to maintain the backup repository to get the best maintenance performance DefaultMaintenanceFrequency() time.Duration } // BackupRepo provides the access to the backup repository type BackupRepo interface { // OpenObject opens an existing object for read. // id: the object's unified identifier. OpenObject(ctx context.Context, id ID) (ObjectReader, error) // GetManifest gets a manifest data from the backup repository. GetManifest(ctx context.Context, id ID, mani *RepoManifest) error // FindManifests gets one or more manifest data that match the given labels FindManifests(ctx context.Context, filter ManifestFilter) ([]*ManifestEntryMetadata, error) // NewObjectWriter creates a new object and return the object's writer interface. // return: A unified identifier of the object on success. NewObjectWriter(ctx context.Context, opt ObjectWriteOptions) ObjectWriter // PutManifest saves a manifest object into the backup repository. PutManifest(ctx" }, { "data": "mani RepoManifest) (ID, error) // DeleteManifest deletes a manifest object from the backup repository. DeleteManifest(ctx context.Context, id ID) error // Flush flushes all the backup repository data Flush(ctx context.Context) error // Time returns the local time of the backup repository. It may be different from the time of the caller Time() time.Time // Close closes the backup repository Close(ctx context.Context) error type ObjectReader interface { io.ReadCloser io.Seeker // Length returns the logical size of the object Length() int64 } type ObjectWriter interface { io.WriteCloser // Seeker is used in the cases that the object is not written sequentially io.Seeker // Checkpoint is periodically called to preserve the state of data written to the repo so far. // Checkpoint returns a unified identifier that represent the current state. // An empty ID could be returned on success if the backup repository doesn't support this. Checkpoint() (ID, error) // Result waits for the completion of the object write. // Result returns the object's unified identifier after the write completes. Result() (ID, error) } ``` Some data structure & constants used by the interfaces: ``` type RepoOptions struct { // StorageType is a repository specific string to identify a backup storage, i.e., \"s3\", \"filesystem\" StorageType string // RepoPassword is the backup repository's password, if any RepoPassword string // ConfigFilePath is a custom path to save the repository's configuration, if any ConfigFilePath string // GeneralOptions takes other repository specific options GeneralOptions map[string]string // StorageOptions takes storage specific options StorageOptions map[string]string // Description is a description of the backup repository/backup repository operation. // It is for logging/debugging purpose only and doesn't control any behavior of the backup repository. Description string } // ObjectWriteOptions defines the options when creating an object for write type ObjectWriteOptions struct { FullPath string // Full logical path of the object DataType int // OBJECTDATATYPE_* Description string // A description of the object, could be empty Prefix ID // A prefix of the name used to save the object AccessMode int // OBJECTDATAACCESS_* BackupMode int // OBJECTDATABACKUP_* } const ( // Below consts descrbe the data type of one object. // Metadata: This type describes how the data is organized. 
// For a file system backup, the Metadata describes a Dir or File. // For a block backup, the Metadata describes a Disk and its incremental link. ObjectDataTypeUnknown int = 0 ObjectDataTypeMetadata int = 1 ObjectDataTypeData int = 2 // Below consts defines the access mode when creating an object for write ObjectDataAccessModeUnknown int = 0 ObjectDataAccessModeFile int = 1 ObjectDataAccessModeBlock int = 2 ObjectDataBackupModeUnknown int = 0 ObjectDataBackupModeFull int = 1 ObjectDataBackupModeInc int = 2 ) // ManifestEntryMetadata is the metadata describing one manifest data type ManifestEntryMetadata struct { ID ID // The ID of the manifest data Length int32 // The data size of the manifest data Labels map[string]string // Labels saved together with the manifest data ModTime" }, { "data": "// Modified time of the manifest data } type RepoManifest struct { Payload interface{} // The user data of manifest Metadata *ManifestEntryMetadata // The metadata data of manifest } type ManifestFilter struct { Labels map[string]string } ``` We preserve the bone of the existing BR workflow, that is: Still use the Velero Server pod and VeleroNodeAgent daemonSet (originally called Restic daemonset) pods to hold the corresponding controllers and modules Still use the Backup/Restore CR and BackupRepository CR (originally called ResticRepository CR) to drive the BR workflow The modules in gray color in below diagram are the existing modules and with no significant changes. In the new design, we will have separate and independent modules/logics for backup repository and uploader (data mover), specifically: Repository Provider provides functionalities to manage the backup repository. For example, initialize a repository, connect to a repository, manage the snapshots in the repository, maintain a repository, etc. Uploader Provider provides functionalities to run a backup or restore. The Repository Provider and Uploader Provider use options to choose the path legacy path vs. new path (Kopia uploader + Unified Repository). Specifically, for legacy path, Repository Provider will manage Restic Repository only, otherwise, it manages Unified Repository only; for legacy path, Uploader Provider calls Restic to do the BR, otherwise, it calls Kopia uploader to do the BR. In order to manage Restic Repository, the Repository Provider calls Restic Repository Provider, the latter invokes the existing Restic CLIs. In order to manage Unified Repository, the Repository Provider calls Unified Repository Provider, the latter calls the Unified Repository module through the udmrepo.BackupRepoService interface. It doesnt know how the Unified Repository is implemented necessarily. In order to use Restic to do BR, the Uploader Provider calls Restic Uploader Provider, the latter invokes the existing Restic CLIs. In order to use Kopia to do BR, the Uploader Provider calls Kopia Uploader Provider, the latter do the following things: Call Unified Repository through the udmrepo.BackupRepoService interface to open the unified repository for read/write. Again, it doesnt know how the Unified Repository is implemented necessarily. It gets a BackupRepos read/write handle after the call succeeds Wrap the BackupRepo handle into a Kopia Shim which implements Kopia Repository interface Call the Kopia Uploader. 
Kopia Uploader is a Kopia module without any change, so it only understands Kopia Repository interface Kopia Uploader starts to backup/restore the corresponding PVs file system data and write/read data to/from the provided Kopia Repository implementation, that is, Kopia Shim here When read/write calls go into Kopia Shim, it in turn calls the BackupRepo handle for read/write Finally, the read/write calls flow to Unified Repository module The Unified Repository provides all-in-one functionalities of a Backup Repository and exposes the Unified Repository Interface. Inside, Kopia Library is an adapter for Kopia Repository to translate the Unified Repository Interface calls to Kopia Repository interface calls. Both Kopia Shim and Kopia Library rely on Kopia Repository interface, so we need to have some Kopia version control. We may need to change Kopia Shim and Kopia Library when upgrading Kopia to a new version and the Kopia Repository interface has some changes in the new version. The modules in blue color in below diagram represent the newly added modules/logics or reorganized logics. The modules in yellow color in below diagram represent the called Kopia modules without changes. The Delete Snapshot workflow follows the similar manner with BR workflow, that is, we preserve the upper-level workflows until the calls reach to BackupDeletionController, then: Leverage Repository Provider to switch between Restic implementation and Unified Repository implementation in the same mechanism as BR For Restic implementation, the Restic Repository Provider invokes the existing Forget Restic CLI For Unified Repository implementation, the Unified Repository Provider calls" }, { "data": "DeleteManifest to delete a snapshot Backup Repository/Backup Storage may need to periodically reorganize its data so that it could guarantee its QOS during the long-time service. Some Backup Repository/Backup Storage does this in background automatically, so the user doesnt need to interfere; some others need the caller to explicitly call their maintenance interface periodically. Restic and Kopia both go with the second way, that is, Velero needs to periodically call their maintenance interface. Velero already has an existing workflow to call Restic maintenance (it is called Prune in Restic, so Velero uses the same word). The existing workflow is as follows: The Prune is triggered at the time of the backup When a BackupRepository CR (originally called ResticRepository CR) is created by PodVolumeBackup/Restore Controller, the BackupRepository controller checks if it reaches to the Prune Due Time, if so, it calls PruneRepo In the new design, the Repository Provider implements PruneRepo call, it uses the same way to switch between Restic Repository Provider and Unified Repository Provider, then: For Restic Repository, Restic Repository Provider invokes the existing Prune CLI of Restic For Unified Repository, Unified Repository Provider calls udmrepo.BackupRepoServices Maintain function Kopia has two maintenance modes the full maintenance and quick maintenance. There are many differences between full and quick mode, but briefly speaking, quick mode only processes the hottest data (primarily, it is the metadata and index data), so quick maintenance is much faster than full maintenance. On the other hand, quick maintenance also scatters the burden of full maintenance so that the full maintenance could finish fastly and make less impact. We will also take this quick maintenance into Velero. 
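As a rough illustration of the provider switch just described for `PruneRepo`, the sketch below dispatches between a Restic-CLI-based maintainer and a Unified-Repository-based one according to the CR's `repositoryType`. The type names, the simplified `restic prune` invocation, and the placeholder Unified Repository call are assumptions for illustration, not Velero's actual implementation.

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
)

// repositoryMaintainer abstracts "maintain one backup repository"; the two
// implementations below are hypothetical simplifications of the Restic
// Repository Provider and the Unified Repository Provider described above.
type repositoryMaintainer interface {
	PruneRepo(ctx context.Context) error
}

// resticMaintainer keeps the legacy behavior: shell out to the Restic CLI
// (the real provider passes credentials and more flags than shown here).
type resticMaintainer struct{ repoIdentifier string }

func (r resticMaintainer) PruneRepo(ctx context.Context) error {
	return exec.CommandContext(ctx, "restic", "prune", "--repo", r.repoIdentifier).Run()
}

// unifiedMaintainer delegates to the Unified Repository; maintain stands in
// for a call into the Unified Repository's Maintain on an opened repository.
type unifiedMaintainer struct{ maintain func(context.Context) error }

func (u unifiedMaintainer) PruneRepo(ctx context.Context) error { return u.maintain(ctx) }

// maintainerFor performs the switch made on the BackupRepository CR's
// repositoryType value; an empty value keeps the legacy Restic behavior.
func maintainerFor(repositoryType, repoIdentifier string) repositoryMaintainer {
	if repositoryType == "" || repositoryType == "restic" {
		return resticMaintainer{repoIdentifier: repoIdentifier}
	}
	return unifiedMaintainer{maintain: func(ctx context.Context) error {
		// Placeholder: the real provider would open the repository through the
		// Unified Repository Interface and run full or quick maintenance.
		fmt.Println("maintaining unified repository for", repoIdentifier)
		return nil
	}}
}

func main() {
	m := maintainerFor("kopia", "example-backup-location")
	_ = m.PruneRepo(context.Background())
}
```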
We will add a new Due Time to Velero, finally, we have two Prune Due Time: Normal Due Time: For Restic, this will invoke Restic Prune; for Unified Repository, this will invoke udmrepo.BackupRepoServices Maintain(full) call and finally call Kopias full maintenance Quick Due Time: For Restic, this does nothing; for Unified Repository, this will invoke udmrepo.BackupRepoServices Maintain(quick) call and finally call Kopias quick maintenance We assign different values to Normal Due Time and Quick Due Time, as a result of which, the quick maintenance happens more frequently than full maintenance. Because Kopia Uploader is an unchanged Kopia module, we need to find a way to get its progress during the BR. Kopia Uploader accepts a Progress interface to update rich information during the BR, so the Kopia Uploader Provider will implement a Kopias Progress interface and then pass it to Kopia Uploader during its initialization. In this way, Velero will be able to get the progress as shown in the diagram below. In the current design, Velero is using two unchanged Kopia modules the Kopia Uploader and the Kopia Repository. Both will generate debug logs during their run. Velero will collect these logs in order to aid the debug. Kopias Uploader and Repository both get the Logger information from the current GO Context, therefore, the Kopia Uploader Provider/Kopia Library could set the Logger interface into the current context and pass the context to Kopia Uploader/Kopia Repository. Velero will set Logger interfaces separately for Kopia Uploader and Kopia Repository. In this way, the Unified Repository could serve other data movers without losing the debug log capability; and the Kopia Uploader could write to any repository without losing the debug log" }, { "data": "Kopias debug logs will be written to the same log file as Velero server or VeleroNodeAgent daemonset, so Velero doesnt need to upload/download these debug logs separately. As mentioned above, There will be two paths. The related controllers need to identify the path during runtime and adjust its working mode. According to the requirements, path changing is fulfilled at the backup/restore level. In order to let the controllers know the path, we need to add some option values. Specifically, there will be option/mode values for path selection in two places: Add the uploader-type option as a parameter of the Velero server. The parameters will be set by the installation. Currently the option has two values, either \"restic\" or \"kopia\" (in future, we may add other file system uploaders, then we will have more values). Add a \"uploaderType\" value in the PodVolume Backup/Restore CR and a \"repositoryType\" value in the BackupRepository CR. \"uploaderType\" currently has two values , either \"restic\" or \"kopia\"; \"repositoryType\" currently has two values, either \"restic\" or \"kopia\" (in future, the Unified Repository could opt among multiple backup repository/backup storage, so there may be more values. This is a good reason that repositoryType is a multivariate flag, however, in which way to opt among the backup repository/backup storage is not covered in this PR). If the values are missing in the CRs, it by default means \"uploaderType=restic\" and \"repositoryType=restic\", so the legacy CRs are handled correctly by Restic. The corresponding controllers handle the CRs by checking the CRs' path value. 
Some examples are as below: The PodVolume BR controller checks the \"uploaderType\" value from PodVolume CRs and decide its working path The BackupRepository controller checks the \"repositoryType\" value from BackupRepository CRs and decide its working path The Backup controller that runs in Velero server checks its uploader-type parameter to decide the path for the Backup it is going to create and then create the PodVolume Backup CR and BackupRepository CR The Restore controller checks the Backup, from which it is going to restore, for the path and then create the PodVolume Restore CR and BackupRepository CR As described above, the uploader-type parameter of the Velero server is only used to decide the path when creating a new Backup, for other cases, the path selection is driven by the related CRs. Therefore, we only need to add this parameter to the Velero server. We will change below CRs' name to make them more generic: \"ResticRepository\" CR to \"BackupRepository\" CR This means, we add a new CR type and deprecate the old one. As a result, if users upgrade from the old release, the old CRs will be orphaned, Velero will neither refer to it nor manage it, users need to delete these CRs manually. As a side effect, when upgrading from an old release, even though the path is not changed, the BackupRepository gets created all the time, because Velero will not refer to the old CR's status. This seems to cause the repository to initialize more than once, however, it won't happen. In the BackupRepository controller, before initializing a repository, it always tries to connect to the repository first, if it is connectable, it won't do the initialization. When backing up with the new release, Velero always creates BackupRepository CRs instead of ResticRepository CRs. When restoring from an old backup, Velero always creates BackupRepository CRs instead of ResticRepository" }, { "data": "When there are already backups or restores running during the upgrade, since after upgrade, the Velero server pods and VeleroNodeAgent daemonset pods are restarted, the existing backups/restores will fail immediately. The backup repository needs some parameters to connect to various backup storage. For example, for a S3 compatible storage, the parameters may include bucket name, region, endpoint, etc. Different backup storage have totally different parameters. BackupRepository CRs, PodVolume Backup CRs and PodVolume Restore CRs save these parameters in their spec, as a string called repoIdentififer. The format of the string is for S3 storage only, it meets Restic CLI's requirements but is not enough for other backup repository. On the other hand, the parameters that are used to generate the repoIdentififer all come from the BackupStorageLocation. The latter has a map structure that could take parameters from any storage kind. Therefore, for the new path, Velero uses the information in the BackupStorageLocation directly. That is, whenever Velero needs to initialize/connect to the Unified Repository, it acquires the storage configuration from the corresponding BackupStorageLocation. Then no more elements will be added in BackupRepository CRs, PodVolume Backup CRs or PodVolume Restore CRs. The legacy path will be kept as is. That is, Velero still sets/gets the repoIdentififer in BackupRepository CRs, PodVolume Backup CRs and PodVolume Restore CRs and then passes to Restic CLI. We will add a new flag \"--uploader-type\" during installation. 
The flag has 2 meanings: It indicates the file system uploader to be used by PodVolume BR It implies the backup repository type manner, Restic if uploader-type=restic, Unified Repository in all other cases The flag has below two values: \"Restic\": it means Velero will use Restic to do the pod volume backup. Therefore, the Velero server deployment will be created as below: ``` spec: containers: args: server --features= --uploader-type=restic command: /velero ``` The BackupRepository CRs and PodVolume Backup/Restore CRs created in this case are as below: ``` spec: backupStorageLocation: default maintenanceFrequency: 168h0m0s repositoryType: restic volumeNamespace: nginx-example ``` ``` spec: backupStorageLocation: default node: aks-agentpool-27359964-vmss000000 pod: kind: Pod name: nginx-stateful-0 namespace: nginx-example uid: 86aaec56-2b21-4736-9964-621047717133 tags: ... uploaderType: restic volume: nginx-log ``` ``` spec: backupStorageLocation: default pod: kind: Pod name: nginx-stateful-0 namespace: nginx-example uid: e56d5872-3d94-4125-bfe8-8a222bf0fcf1 snapshotID: 1741e5f1 uploaderType: restic volume: nginx-log ``` \"Kopia\": it means Velero will use Kopia uploader to do the pod volume backup (so it will use Unified Repository as the backup target). Therefore, the Velero server deployment will be created as below: ``` spec: containers: args: server --features= --uploader-type=kopia command: /velero ``` The BackupRepository CRs created in this case are hard set with \"kopia\" at present, sice Kopia is the only option as a backup repository. The PodVolume Backup/Restore CRs are created with \"kopia\" as well: ``` spec: backupStorageLocation: default maintenanceFrequency: 168h0m0s repositoryType: kopia volumeNamespace: nginx-example ``` ``` spec: backupStorageLocation: default node: aks-agentpool-27359964-vmss000000 pod: kind: Pod name: nginx-stateful-0 namespace: nginx-example uid: 86aaec56-2b21-4736-9964-621047717133 tags: ... uploaderType: kopia volume: nginx-log ``` ``` spec: backupStorageLocation: default pod: kind: Pod name: nginx-stateful-0 namespace: nginx-example uid: e56d5872-3d94-4125-bfe8-8a222bf0fcf1 snapshotID: 1741e5f1 uploaderType: kopia volume: nginx-log ``` We will add the flag for both CLI installation and Helm Chart Installation. Specifically: Helm Chart Installation: add the \"--uploaderType\" and \"--default-volumes-to-fs-backup\" flag into its value.yaml and then generate the deployments according to the value. Value.yaml is the user-provided configuration file, therefore, users could set this value at the time of installation. The changes in Value.yaml are as below: ``` command: /velero args: server {{- with" }, { "data": "}} --uploader-type={{ default \"restic\" .uploaderType }} {{- if .defaultVolumesToFsBackup }} --default-volumes-to-fs-backup {{- end }} ``` CLI Installation: add the \"--uploaderType\" and \"--default-volumes-to-fs-backup\" flag into the installation command line, and then create the two deployments accordingly. Users could change the option at the time of installation. The CLI is as below: ```velero install --uploader-type=restic --default-volumes-to-fs-backup --use-node-agent``` ```velero install --uploader-type=kopia --default-volumes-to-fs-backup --use-node-agent``` For upgrade, we allow users to change the path by specifying \"--uploader-type\" flag in the same way as the fresh installation. Therefore, the flag change should be applied to the Velero server after upgrade. 
Additionally, we need to add a label to the Velero server to indicate the current path, so as to provide an easy way to query it. Moreover, if users upgrade from the old release, we need to change the existing Restic Daemonset name to the VeleroNodeAgent daemonSet. The name change should be applied after upgrade. The recommended way to upgrade is to modify the related Velero resources directly through kubectl; the above changes will be applied in the same way. We need to modify the Velero doc for all these changes. The Velero CLIs below, or their output, need some changes: ```Velero backup describe```: the output should indicate the path ```Velero restore describe```: the output should indicate the path ```Velero restic repo get```: the name of this CLI should be changed to a generic one, for example, \"Velero repo get\"; the output of this CLI should print all the backup repositories if a Restic repository and a Unified Repository exist at the same time At present, we don't have a requirement for selecting the path during backup, so we don't change the ```Velero backup create``` CLI for now. If there is a requirement in the future, we could simply add a flag similar to \"--uploader-type\" to select the path. The sample files below demonstrate complete CRs with all the changes mentioned above: BackupRepository CR: https://gist.github.com/Lyndon-Li/f38ad69dd8c4785c046cd7ed0ef2b6ed#file-backup-repository-sample-yaml PodVolumeBackup CR: https://gist.github.com/Lyndon-Li/f38ad69dd8c4785c046cd7ed0ef2b6ed#file-pvb-sample-yaml PodVolumeRestore CR: https://gist.github.com/Lyndon-Li/f38ad69dd8c4785c046cd7ed0ef2b6ed#file-pvr-sample-yaml This design aims to provide a flexible backup repository layer and a generic file system uploader, which are fundamental for PodVolume and other data movements. Although this will make Velero more capable, at present we don't pursue exposing differentiated features end to end. Specifically: For a fresh installation, if \"--uploader-type\" is not specified, there is a default value for PodVolume BR. We will keep it as \"restic\" for at least one release, then switch the value to \"kopia\" Even when changing to the new path, Velero still allows users to restore from data backed up by Restic The capability of PodVolume BR under the new path is kept the same as under the Restic path and the same as the existing PodVolume BR The operational experience is kept the same as much as possible; the known changes are listed below The following user experiences are changed by this design: Installation CLI change: a new option is added to the installation CLI, see the Installation section for details CR change: One or more existing CRs have been renamed, see the Velero CR Changes section for details Velero CLI name and output change, see the CLI section for details Velero daemonset name change Wording Alignment: in the existing situation, many places use the word \"Restic\", for example, the \"default-volume-to-restic\" option; most of them are not accurate anymore, so we will change these words and give a detailed list of the changes" } ]
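To make the Unified Repository Interface defined earlier concrete, here is a hedged sketch of the write path a data mover could follow: stream an object through an `ObjectWriter`, record the returned ID in a manifest, and flush. The trimmed-down local declarations below only re-state the subset of the interface this sketch needs, and the Close-before-Result ordering is an assumption; they are not the exact `udmrepo` definitions.

```go
package udmrepoexample

import (
	"context"
	"io"
	"time"
)

// The types below are illustrative, narrowed copies of the definitions in
// the design above, not the real udmrepo package.
type ID string

type ObjectWriter interface {
	io.WriteCloser
	Result() (ID, error)
}

type ManifestEntryMetadata struct {
	ID      ID
	Labels  map[string]string
	ModTime time.Time
}

type RepoManifest struct {
	Payload  interface{}
	Metadata *ManifestEntryMetadata
}

type ObjectWriteOptions struct {
	FullPath    string
	Description string
}

type BackupRepo interface {
	NewObjectWriter(ctx context.Context, opt ObjectWriteOptions) ObjectWriter
	PutManifest(ctx context.Context, mani RepoManifest) (ID, error)
	Flush(ctx context.Context) error
}

// WriteObjectWithManifest streams one logical object into the repository,
// saves its ID in a manifest (the "snapshot information" the data mover
// keeps), and flushes, returning the manifest ID.
func WriteObjectWithManifest(ctx context.Context, repo BackupRepo, path string, payload io.Reader, labels map[string]string) (ID, error) {
	w := repo.NewObjectWriter(ctx, ObjectWriteOptions{FullPath: path, Description: "object data"})
	if _, err := io.Copy(w, payload); err != nil {
		w.Close()
		return "", err
	}
	if err := w.Close(); err != nil {
		return "", err
	}
	objID, err := w.Result()
	if err != nil {
		return "", err
	}
	maniID, err := repo.PutManifest(ctx, RepoManifest{
		Payload:  map[string]ID{"root": objID},
		Metadata: &ManifestEntryMetadata{Labels: labels},
	})
	if err != nil {
		return "", err
	}
	return maniID, repo.Flush(ctx)
}
```

Any backend implementing the real interface could sit behind this flow; the point is only that a data mover never needs to know whether Kopia or some other repository implementation is underneath.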
{ "category": "Runtime", "file_name": "unified-repo-and-kopia-integration.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Upgrading to Velero 1.6\" layout: docs Velero installed. If you're not yet running at least Velero v1.5, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Install the Velero v1.6 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.6.2 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: If you are upgrading Velero in Kubernetes 1.14.x or earlier, you will need to use `kubectl apply`'s `--validate=false` option when applying the CRD configuration above. See and for more context. Update the container image used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.6.2 \\ --namespace velero kubectl set image daemonset/restic \\ restic=velero/velero:v1.6.2 \\ --namespace velero ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.6.2 Git commit: <git SHA> Server: Version: v1.6.2 ``` We have deprecated the way to indicate the default backup storage location. Previously, that was indicated according to the backup storage location name set on the velero server-side via the flag `velero server --default-backup-storage-location`. Now we configure the default backup storage location on the velero client-side. Please refer to the on how to indicate which backup storage location is the default one. After upgrading, if there is a previously created backup storage location with the name that matches what was defined on the server side as the default, it will be automatically set as the `default`." } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.6.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "% runc-delete \"8\" runc-delete - delete any resources held by the container runc delete [--force|-f] container-id --force|-f : Forcibly delete the running container, using SIGKILL signal(7) to stop it first. If the container id is ubuntu01 and runc list currently shows its status as stopped, the following will delete resources held for ubuntu01, removing it from the runc list: runc-kill(8), runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-delete.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "What version of Go are you running? (Paste the output of `go version`) What version of gorilla/mux are you at? (Paste the output of `git rev-parse HEAD` inside `$GOPATH/src/github.com/gorilla/mux`) Describe your problem (and what you have tried so far) Paste a minimal, runnable, reproduction of your issue below (use backticks to format it)" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "There was a time when every fop in glusterfs used to incur cost of allocations/de-allocations for every stack wind/unwind between xlators because stack/frame/*localtt in every wind/unwind was allocated and de-allocated. Because of all these system calls in the fop path there was lot of latency and the worst part is that most of the times the number of frames/stacks active at any time wouldn't cross a threshold. So it was decided that this threshold number of frames/stacks would be allocated in the beginning of the process only once. Get one of them from the pool of stacks/frames whenever `STACKWIND` is performed and put it back into the pool in `STACKUNWIND`/`STACK_DESTROY` without incurring any extra system calls. The data structures are allocated only when threshold number of such items are in active use i.e. pool is in complete use.% increase in the performance once this was added to all the common data structures (inode/fd/dict etc) in xlators throughout the stack was tremendous. ``` struct mem_pool { struct list_head list; /*Each member in the mempool is element padded with a doubly-linked-list + ptr of mempool + is-in -use info. This list is used to add the element to the list of free members in the mem-pool*/ int hot_count;/number of mempool elements that are in active use/ int cold_count;/*number of mempool elements that are not in use. If a new allocation is required it will be served from here until all the elements in the pool are in use i.e. cold-count becomes 0.*/ gflockt lock;/synchronization mechanism/ unsigned long paddedsizeoftype;/*Each mempool element is padded with a doubly-linked-list + ptr of mempool + is-in -use info to operate the pool of elements, this size is the element-size after padding*/ void pool;/Starting address of pool*/ void pool_end;/Ending address of pool*/ / If an element address is in the range between pool, pool_end addresses then it is alloced from the pool otherwise it is 'calloced' this is very useful for functions like 'mem_put'/ int realsizeoftype;/ size of just the element without any padding/ uint64t alloccount; /Number of times this type of data is allocated through out the life of this process. This may include calloced elements as well/ uint64t poolmisses; /Number of times the element had to be allocated from heap because all elements from the pool are in active use./ int max_alloc; /Maximum number of elements from the pool in active use at any point in the life of the process. This does not include calloced elements/ int curr_stdalloc;/Number of elements that are allocated from heap at the moment because the pool is in completed" }, { "data": "It should be '0' when pool is not in complete use/ int max_stdalloc;/Maximum number of allocations from heap after the pool is completely used that are in active use at any point in the life of the process./ char name; /Contains xlator-name:data-type as a string struct listhead globallist;/*This is used to insert it into the global_list of mempools maintained in 'glusterfs-ctx' }; ``` ``` mempoolnew (data_type, unsigned long count) This is a macro which expands to mempoolnewfn (sizeof (datatype), count, string-rep-of-data_type) struct mem_pool * mempoolnewfn (unsigned long sizeoftype, unsigned long count, char *name) Padded-element: |list-ptr|mem-pool-address|in-use|Element| ``` This function allocates the `mem-pool` structure and sets up the pool for use. `name` parameter above is the `string` containing type of the datatype. 
This `name` is appended to `xlator-name + ':'` so that it can be easily identified in things like statedump. `count` is the number of elements that need to be allocated. `sizeoftype` is the size of each element. Ideally `('sizeoftype''count')` should be the size of the total pool. But to manage the pool using `mem_get`/`mem_put` (will be explained after this section) each element needs to be padded in the front with a `('list', 'mem-pool-address', 'in_use')`. So the actual size of the pool it allocates will be `('padded_sizeof_type''count')`. Why these extra elements are needed will be evident after understanding how `memget` and `memput` are implemented. In this function it just initializes all the `list` structures in front of each element and adds them to the `mempool->list` which represent the list of `cold` elements which can be allocated whenever `memget` is called on this mempool. It remembers mempool's start and end addresses in `mempool->pool`, `mempool->poolend` respectively. Initializes `mempool->coldcount` to `count` and `mempool->hotcount` to `0`. This mem-pool will be added to the list of `globallist` maintained in `glusterfs-ctx` ``` void mem_get (struct mem_pool mem_pool) Initial-list before mem-get | Pool | | -- | - - | | pool-list | |<> |list-ptr|mem-pool-address|in-use|Element|<>|list-ptr|mem-pool-address|in-use|Element| | -- | - - list after mem-get from the pool | Pool | | -- | - | | pool-list | |<>|list-ptr|mem-pool-address|in-use|Element| | -- | - List when the pool is full: | Pool | extra element that is allocated | -- | - | | pool-list | | |list-ptr|mem-pool-address|in-use|Element| | -- | - ``` This function is similar to `malloc()` but it gives memory of type `element` of this pool. When this function is called it increments `mempool->alloccount`, checks if there are any free elements in the pool that can be returned by inspecting `mempool->coldcount`. If `mempool->coldcount` is non-zero then it means there are elements in the pool which are not in active" }, { "data": "It deletes one element from the list of free elements and decrements `mempool->coldcount` and increments `mempool->hotcount` to indicate there is one more element in active use. Updates `mempool->maxalloc` accordingly. Sets `element->inuse` in the padded memory to `1`. Sets `element->mempool` address to this mempool also in the padded memory(It is useful for memput). Returns the address of the memory after the padded boundary to the caller of this function. In the cases where all the elements in the pool are in active use it `callocs` the element with padded size and sets mempool address in the padded memory. To indicate the pool-miss and give useful accounting information of the pool-usage it increments `mempool->poolmisses`, `mempool->currstdalloc`. Updates `mempool->max_stdalloc` accordingly. ``` void mem_get0 (struct mem_pool mem_pool) ``` Just like `calloc` is to `malloc`, `memget0` is to `memget`. It memsets the memory to all '0' before returning the element. ``` void mem_put (void *ptr) list before mem-put from the pool | Pool | | -- | - | | pool-list | |<>|list-ptr|mem-pool-address|in-use|Element| | -- | - list after mem-put to the pool | Pool | | -- | - - | | pool-list | |<> |list-ptr|mem-pool-address|in-use|Element|<>|list-ptr|mem-pool-address|in-use|Element| | -- | - - If mem_put is putting an element not from pool then it is just freed so no change to the pool | Pool | | -- | | | pool-list | | | -- | ``` This function is similar to `free()`. 
Remember that ptr passed to this function is the address of the element, so this function gets the ptr to its head of the padding in front of it. If this memory falls in bettween `mempool->pool`, `mempool->poolend` then the memory is part of the 'pool' memory that is allocated so it does some sanity checks to see if the memory is indeed head of the element by checking if `inuse` is set to `1`. It resets `inuse` to `0`. It gets the mempool address stored in the padded region and adds this element to the list of free elements. Decreases `mempool->hotcount` increases `mempool->coldcount`. In the case where padded-element address does not fall in the range of `mempool->pool`, `mempool->poolend` it just frees the element and decreases `mempool->curr_stdalloc`. ``` void mempooldestroy (struct mem_pool *pool) ``` Deletes this pool from the `globallist` maintained by `glusterfs-ctx` and frees all the memory allocated in `mempool_new`. This varies from work-load to work-load. Create the mem-pool with some random size and run the work-load. Take the statedump after the work-load is complete. In the statedump if `maxalloc` is always less than `coldcount` may be reduce the size of the pool closer to `maxalloc`. On the otherhand if there are lots of `pool-misses` then increase the `poolsize` by `max_stdalloc` to achieve better 'hit-rate' of the pool." } ]
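The following Go sketch is a conceptual illustration of the hot/cold accounting and heap fallback described for `mem_get`/`mem_put`; it is not GlusterFS code (which is C), and the field and function names are simplified stand-ins.

```go
package main

import (
	"fmt"
	"sync"
)

// poolElem models the padding described above: each pooled element carries a
// back-pointer to its pool and an in-use flag (a simplified model, not the C layout).
type poolElem struct {
	pool  *memPool
	inUse bool
	data  [64]byte // payload; stands in for the caller's element type
}

type memPool struct {
	mu         sync.Mutex
	free       []*poolElem // cold elements, pre-allocated up front
	hotCount   int
	coldCount  int
	poolMisses int
}

func newMemPool(count int) *memPool {
	p := &memPool{coldCount: count}
	for i := 0; i < count; i++ {
		p.free = append(p.free, &poolElem{pool: p})
	}
	return p
}

// get behaves like mem_get: serve from the pool while cold elements remain,
// otherwise fall back to a heap allocation and count a pool miss.
func (p *memPool) get() *poolElem {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.coldCount > 0 {
		e := p.free[len(p.free)-1]
		p.free = p.free[:len(p.free)-1]
		p.coldCount--
		p.hotCount++
		e.inUse = true
		return e
	}
	p.poolMisses++
	return &poolElem{inUse: true} // a nil pool pointer marks a heap element
}

// put behaves like mem_put: pooled elements go back on the free list,
// heap-allocated ones are simply dropped for the garbage collector.
func (p *memPool) put(e *poolElem) {
	p.mu.Lock()
	defer p.mu.Unlock()
	e.inUse = false
	if e.pool == p {
		p.free = append(p.free, e)
		p.coldCount++
		p.hotCount--
	}
}

func main() {
	p := newMemPool(2)
	a, b, c := p.get(), p.get(), p.get() // the third get misses the pool
	p.put(a)
	p.put(b)
	p.put(c)
	fmt.Println("pool misses:", p.poolMisses)
}
```

Running it reports one pool miss for the third allocation, mirroring how the miss and heap-allocation counters only grow once the pre-allocated elements are exhausted.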
{ "category": "Runtime", "file_name": "datastructure-mem-pool.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "English This article will show how to implement the bandwidth management capabilities of IPVlan CNI with the help of the project . Kubernetes supports setting the ingress/egress bandwidth of a Pod by injecting Annotations into the Pod, refer to When we use IPVlan as CNI, it does not have the ability to manage the ingress/egress traffic bandwidth of the Pod itself. The open source project Cilium supports cni-chaining to work with IPVlan, based on eBPF technology, it can help IPVlan realize accelerated access to services, bandwidth capacity management and other functions. However, Cilium has removed support for IPVlan Dataplane in the latest release. project is built on cilium v1.12.7 and supports IPVlan Dataplane, we can use it to support IPVlan Dataplane by cilium v1.12.7. Dataplane, we can use it to help IPVlan realize the network bandwidth management capability of Pod. Helm and Kubectl binary tools. Requires node kernel to be greater than 4.19. Installation of Spiderpool can be found in the documentation: Install the cilium-chaining project with the following command. ```shell kubectl apply -f https://raw.githubusercontent.com/spidernet-io/cilium-chaining/main/manifests/cilium-chaining.yaml ``` Check the status of the installation: ```shell ~# kubectl get po -n kube-system | grep cilium-chain cilium-chaining-gl76b 1/1 Running 0 137m cilium-chaining-nzvrg 1/1 Running 0 137m ``` Refer to the following command to create a CNI configuration file: ```shell IPVLANMASTERINTERFACE=ens192 IPPOOL_NAME=ens192-v4 cat << EOF | kubectl apply -f - - apiVersion: k8s-v4 apiVersion: k8s.cni.cncf.io/v1 Type: NetworkAttachmentDefinition Metadata: Name: ipvlan Namespace: kube-system spec: config: ' { \"cniVersion\": \"0.4.0\", \"name\": \"terway-chainer\", \"plugins\": [ { \"type\": \"ipvlan\", \"mode\": \"l2\", \"master\": \"${IPVLANMASTERINTERFACE}\", \"ipam\": { \"type\": \"spiderpool\", \"defaultipv4ippool\": [\"${ippool_name}\"]}} , { \"type\": \"cilium-cni\" }, { \"type\": \"coordinator\" } ] }' EOF ``` Configure cilium-cni in cni-chain mode with ipvlan cni Create a CNI configuration by referring to the following command. ```shell cat << EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata. name: ${IPPOOL_NAME} spec: ${IPPOOL_NAME} default: false disable: false gateway: 172.51.0.1 ipVersion: 4 ips: 172.51.0.100-172.51.0.108 172.51.0.100-172.51.0.108 subnet: 172.51.0.230/16 ``` Note that ens192 needs to exist on the host, and the network segment on which the IP pool is configured needs to be the same as the physical network on which ens192 resides. Create a test application using the CNI configuration and IP pool created above to verify that the Pod's bandwidth is" }, { "data": "```shell cat << EOF | kubectl apply -f - apiVersion: apps/v1 kind: Deployment metadata: name: test spec: replicas: 2 selector: matchLabels: app: test template: metadata: annotations: v1.multus-cni.io/default-network: kube-system/ipvlan kubernetes.io/ingress-bandwidth: 100M kubernetes.io/egress-bandwidth: 100M labels: app: test spec: containers: env: name: NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace image: nginx imagePullPolicy: IfNotPresent name: nginx ports: containerPort: 80 name: http protocol: TCP resources: {} ``` A few annotations to introduce. 
`v1.multus-cni.io/default-network: ipvlan`: Specifies that the default CNI for the Pod is the previously created ipvlan. `kubernetes.io/ingress-bandwidth: 100m`: Sets the ingress bandwidth of the Pod to 100M. `kubernetes.io/ingress-bandwidth: 100m`: Sets the Pod's egress bandwidth to 100M. ```shell ~# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-58d785fb4c-b9cld 1/1 Running 0 175m 172.51.0.102 10-20-1-230 <none> <none> test-58d785fb4c-kwh4h 1/1 Running 0 175m 172.51.0.100 10-20-1-220 <none> <none> ``` When the Pod is created, go to the Pod's network namespace and test its network bandwidth using the `iperf3` utility. ```shell root@10-20-1-230:~# crictl ps | grep test 0e3e211f83723 8f2213208a7f5 39 seconds ago Running nginx 0 3f668220e8349 test-58d785fb4c-b9cld root@10-20-1-230:~# crictl inspect 0e3e211f83723 | grep pid \"pid\": 976027, \"pid\": 1 \"type\": \"pid\" root@10-20-1-230:~# nsenter -t 976027 -n root@10-20-1-230:~# root@10-20-1-230:~# iperf3 -c 172.51.0.100 Connecting to host 172.51.0.100, port 5201 [ 5] local 172.51.0.102 port 50504 connected to 172.51.0.100 port 5201 [ ID] Interval Transfer Bitrate Retr Cwnd [ 5] 0.00-1.00 sec 37.1 MBytes 311 Mbits/sec 0 35.4 KBytes [ 5] 1.00-2.00 sec 11.2 MBytes 94.4 Mbits/sec 0 103 KBytes [ 5] 2.00-3.00 sec 11.2 MBytes 94.4 Mbits/sec 0 7.07 KBytes [ 5] 3.00-4.00 sec 11.2 MBytes 94.4 Mbits/sec 0 29.7 KBytes [ 5] 4.00-5.00 sec 11.2 MBytes 94.4 Mbits/sec 0 33.9 KBytes [ 5] 5.00-6.00 sec 12.5 MBytes 105 Mbits/sec 0 29.7 KBytes [ 5] 6.00-7.00 sec 10.0 MBytes 83.9 Mbits/sec 0 62.2 KBytes [ 5] 7.00-8.00 sec 12.5 MBytes 105 Mbits/sec 0 22.6 KBytes [ 5] 8.00-9.00 sec 10.0 MBytes 83.9 Mbits/sec 0 69.3 KBytes [ 5] 9.00-10.00 sec 10.0 MBytes 83.9 Mbits/sec 0 52.3 KBytes - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bitrate Retr [ 5] 0.00-10.00 sec 137 MBytes 115 Mbits/sec 0 sender [ 5] 0.00-10.00 sec 134 MBytes 113 Mbits/sec receiver iperf Done. ``` You can see that the result is 115 Mbits/sec, indicating that the Pod's bandwidth has been limited to the size we defined in the annotations." } ]
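The bandwidth annotations used above are Kubernetes resource quantities that bandwidth-shaping implementations conventionally interpret as bits per second, which is why a `100M` limit lines up with the roughly 94-115 Mbits/sec reported by iperf3. A small sketch of that parsing, assuming the standard `k8s.io/apimachinery` library is available:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

// parseBandwidthAnnotation parses a kubernetes.io/ingress-bandwidth or
// kubernetes.io/egress-bandwidth value as a resource quantity, returning the
// limit in bits per second (the conventional interpretation).
func parseBandwidthAnnotation(value string) (int64, error) {
	q, err := resource.ParseQuantity(value)
	if err != nil {
		return 0, err
	}
	return q.Value(), nil
}

func main() {
	bps, err := parseBandwidthAnnotation("100M")
	if err != nil {
		panic(err)
	}
	// "100M" parses to 100,000,000 bits/s, i.e. the ~100 Mbit/s ceiling
	// observed in the iperf3 run above.
	fmt.Printf("%d bits/s (%.0f Mbit/s)\n", bps, float64(bps)/1e6)
}
```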
{ "category": "Runtime", "file_name": "ipvlan_bandwidth.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-agent for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Run the cilium agent - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh" } ]
{ "category": "Runtime", "file_name": "cilium-agent_completion.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "oep-number: CStor Pool Migration title: Migrating CStor Pools from one node to another by migrating the underlying disks authors: \"@mittachaitu\" owners: \"@kmova\" \"@sonasingh46\" editor: \"@mittachaitu\" creation-date: 2020-08-26 last-updated: 2020-08-26 status: provisional - - - - - - - - This proposal brings out the design details to implement pool migration from one node to another node. NOTE: Before pool migration, all the disks participating in the cStor pools should be attached to the new node. CStor pool migration should be supported when the disks are detached and attached to a different node. Following are the use cases: Scaling down the nodes in the cluster to 0 and scaling the nodes back up should work by updating the node selectors on the CSPC (use case in cloud environments). Detaching and attaching underlying disks to different nodes. Migrate the CStorPools from one node to another node when the underlying disks are moved to a different node. Moving pools automatically to a different node wherever the disks are attached, without any trigger from the user. The user has to take care of moving all the disks participating in a pool to the different node. A high-level operator to manage all these operations automatically. As an OpenEBS user, I should be able to migrate pools from a terminated node to a new node. As an OpenEBS user, I should be able to migrate pools to a replaced node. As an OpenEBS user, I should be able to scale down the cluster to 0 and scale it back up, and the pool should be imported by changing the node selectors on the CSPC. Currently, to provision cStorPoolInstances (cStor pools) the user creates a CSPC API object. Once the CSPC is created, the watcher in the CSPC-Operator gets an event and processes the CSPC to provision cStorPoolInstances. The watcher is notified not only of create events but also of any updates made to the CSPC, and it processes the changes accordingly. The CSPC-Operator currently supports only adding blockdevices, replacing blockdevices and changing pool configurations like resource limits, tolerations, and priority" }, { "data": "class. To support this use case, the CSPC-Operator should also handle node selector changes. NOTE: To know more information about the CSPC click . Update the node selectors on the CSPC spec with the new node details wherever the blockdevices were attached. Consider the following example for mirror pool migration. This CSPC corresponds to a mirror pool on nodes `node1` and `node2`.
```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: node1 dataRaidGroups: blockDevices: blockDeviceName: \"blockdevice-disk1\" blockDeviceName: \"blockdevice-disk2\" poolConfig: dataRaidGroupType: \"mirror\" nodeSelector: kubernetes.io/hostname: node2 dataRaidGroups: blockDevices: blockDeviceName: \"blockdevice-disk3\" blockDeviceName: \"blockdevice-disk4\" poolConfig: dataRaidGroupType: \"mirror\" ``` Update the nodeSelector values to point to the new node -- the spec will then look like the following: ```yaml apiVersion: openebs.io/v1alpha1 kind: CStorPoolCluster metadata: name: cstor-pool-mirror spec: pools: nodeSelector: kubernetes.io/hostname: node3 dataRaidGroups: blockDevices: blockDeviceName: \"blockdevice-disk1\" blockDeviceName: \"blockdevice-disk2\" poolConfig: dataRaidGroupType: \"mirror\" nodeSelector: kubernetes.io/hostname: node2 dataRaidGroups: blockDevices: blockDeviceName: \"blockdevice-disk3\" blockDeviceName: \"blockdevice-disk4\" poolConfig: dataRaidGroupType: \"mirror\" ``` In the above CSPC spec, the `node1` nodeSelector has been updated to `node3`. When a user updates the nodeSelector value, the watcher in the CSPC-Operator gets an event and processes it in the following manner. Usually the CSPC-Operator identifies the provisioned CStorPoolInstances of the corresponding CSPC pool specs via the nodeSelector, but with this feature the nodeSelector itself may also be modified. To mitigate this, the CSPC-Operator identifies the CStorPoolInstances in the following manner: First, by comparing the CSPC poolSpec nodeSelector with the CSPI spec nodeSelector. If the nodeSelectors mismatch, then it identifies them using the data raid group blockdevices. Scenario 1: What happens when pool migration and horizontal pool scale operations (scale down and scale up as well, by adding pool specs) are triggered at the same time? The CSPC-Operator identifies the changes and provisions the cStorPoolInstance on the new node; later it identifies that the nodeSelector has been updated for an existing CStorPoolInstance and then updates the pool-manager and CStorPoolInstance nodeSelector according to the new nodeSelector. Scenario 2: What happens when disk replacement/pool expansion is performed on the migrated spec? The CSPC-Operator identifies the nodeSelector changes and then performs blockdevice replacement/pool expansion accordingly. No schema changes are required." } ]
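For illustration only (not part of the OEP), the nodeSelector update described above could be applied with a JSON patch roughly like the following; it assumes the CSPC is named `cstor-pool-mirror`, lives in the `openebs` namespace, and that the first entry in `spec.pools` is the pool being migrated.

```shell
# Sketch: repoint the first pool spec of the CSPC from node1 to node3.
# "~1" escapes the "/" in the kubernetes.io/hostname key (JSON Pointer syntax).
kubectl -n openebs patch cspc cstor-pool-mirror --type=json -p='[
  {"op": "replace",
   "path": "/spec/pools/0/nodeSelector/kubernetes.io~1hostname",
   "value": "node3"}
]'

# Watch the CStorPoolInstances and pool-manager pods follow the new node selector.
kubectl -n openebs get cspi -o wide
```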
{ "category": "Runtime", "file_name": "pool-migration.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: feature-name target-version: release-X.X List the specific goals of the proposal. How will we know that this has succeeded? What is out of scope for this proposal? Listing non-goals helps to focus discussion and make progress. This is where we get down to the nitty gritty of what the proposal actually is. API changes if any, Golang snippets. What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger OKD ecosystem. The idea is to find the best form of an argument why this feature should not be implemented. Similar to the `Drawbacks` section, the `Alternatives` section is used to highlight and record other possible approaches to delivering the value proposed by a feature. This is where to call out areas of the design that require closure before deciding to implement the design." } ]
{ "category": "Runtime", "file_name": "design_template.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Introduction to JuiceFS sidebar_position: 1 slug: . pagination_next: introduction/architecture is an open-source, high-performance distributed file system designed for the cloud, released under the Apache License 2.0. By providing full compatibility, it allows almost all kinds of object storage to be used as massive local disks and to be mounted and accessed on different hosts across platforms and regions. JuiceFS separates \"data\" and \"metadata\" storage. Files are split into chunks and stored in like Amazon S3. The corresponding metadata can be stored in various such as Redis, MySQL, TiKV, and SQLite, based on the scenarios and requirements. JuiceFS provides rich APIs for various forms of data management, analysis, archiving, and backup. It seamlessly interfaces with big data, machine learning, artificial intelligence and other application platforms without modifying code, and delivers massive, elastic, and high-performance storage at low cost. With JuiceFS, you do not need to worry about availability, disaster recovery, monitoring, and scalability. This greatly reduces maintenance work and makes it an excellent choice for DevOps. POSIX Compatible: JuiceFS can be used like a local file system, making it easy to integrate with existing applications. HDFS Compatible: JuiceFS is fully compatible with the , which can enhance metadata performance. S3 Compatible: JuiceFS provides an to implement an S3-compatible access interface. Cloud-Native: It is easy to use JuiceFS in Kubernetes via the . Distributed: Each file system can be mounted on thousands of servers at the same time with high-performance concurrent reads and writes and shared data. Strong Consistency: Any changes committed to files are immediately visible on all servers. Outstanding Performance: JuiceFS achieves millisecond-level latency and nearly unlimited throughput depending on the object storage scale (see ). Data Security: JuiceFS supports encryption in transit and encryption at rest (view ). File Lock: JuiceFS supports BSD lock (flock) and POSIX lock (fcntl). Data Compression: JuiceFS supports the and compression algorithms to save storage space. JuiceFS is designed for massive data storage and can be used as an alternative to many distributed file systems and network file systems, especially in the following scenarios: Big Data: JuiceFS is compatible with HDFS and can be seamlessly integrated with mainstream computing engines such as Spark, Presto, and Hive, bringing much better performance than directly using object storage. Machine Learning: JuiceFS is compatible with POSIX and supports all machine learning and deep learning frameworks. As a shareable file storage, JuiceFS can improve the efficiency of team management and data usage. Kubernetes: JuiceFS supports Kubernetes CSI, providing decoupled persistent storage for pods so that your application can be stateless, also great for data sharing among containers. Shared Workspace: JuiceFS file system can be mounted on any host, allowing concurrent read/write operations without limitations. Its POSIX compatibility ensures smooth data flow and supports scripting operations. Data Backup: JuiceFS provides scalable storage space for backing up all kinds of data. With its shared mount feature, data from multiple hosts can be aggregated into one place and then backed up together. JuiceFS is an open-source software available on . 
When using JuiceFS to store data, the data is split into chunks according to specific rules and stored in custom object storage or other storage media, and the corresponding metadata is stored in a custom database. Use case: For more use cases of similar scenarios, please visit . Join the community: Welcome to join to discuss with JuiceFS users." } ]
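As a minimal usage sketch (not taken from the page above), creating and mounting a JuiceFS volume looks roughly like this; the SQLite metadata URL, bucket, credentials, and mount point are placeholder assumptions.

```shell
# Sketch: create a JuiceFS file system backed by S3-compatible object storage,
# with metadata kept in a local SQLite database (placeholder values throughout).
juicefs format \
    --storage s3 \
    --bucket https://mybucket.s3.amazonaws.com \
    --access-key "$ACCESS_KEY" \
    --secret-key "$SECRET_KEY" \
    sqlite3://myjfs.db myjfs

# Mount it like a local POSIX file system (daemon mode).
sudo juicefs mount -d sqlite3://myjfs.db /mnt/jfs
```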
{ "category": "Runtime", "file_name": "readme.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Inspect the hive ``` cilium-operator-generic hive [flags] ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. 
Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host" }, { "data": "--gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for hive --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Run cilium-operator-generic - Output the dependencies graph in graphviz dot format" } ]
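A hedged usage sketch: the command can be pointed at a cluster with a couple of the flags listed above; the kubeconfig path and cluster name are placeholders.

```shell
# Sketch: inspect the operator's hive out of cluster using flags documented above.
cilium-operator-generic hive \
    --k8s-kubeconfig-path "$HOME/.kube/config" \
    --cluster-name default
```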
{ "category": "Runtime", "file_name": "cilium-operator-generic_hive.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "name: Bug report about: Create a report to help us improve title: '' labels: kind/bug assignees: '' Describe the bug <!-- A clear and concise description of what the bug is. If you believe this bug is a security issue, please don't use this template and follow our --> To Reproduce <!-- Steps to reproduce the behavior. --> Expected <!-- A clear and concise description of what you expected to happen. --> Actual behavior <!-- A clear and concise description of what's the actual behavior. If applicable, add screenshots, log messages, etc. to help explain the problem. --> Versions: <!-- Please provide the following information: Antrea version (Docker image tag). Kubernetes version (use `kubectl version`). If your Kubernetes components have , please provide the version for all of them. Container runtime: which runtime are you using (e.g. containerd, cri-o, docker) and which version are you using? Linux kernel version on the Kubernetes Nodes (`uname -r`). If you chose to compile the Open vSwitch kernel module manually instead of using the kernel module built into the Linux kernel, which version of the OVS kernel module are you using? Include the output of `modinfo openvswitch` for the Kubernetes Nodes. --> Additional context <!-- Add any other context about the problem here, such as Antrea logs, kubelet logs, etc. --> <!-- (Please consider pasting long output into a or any other pastebin.) -->" } ]
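When filling in the Versions section, a short script like the following collects most of what the template asks for; it is a sketch, and the `app=antrea` label and runtime commands are assumptions that depend on how Antrea was deployed.

```shell
# Sketch: gather the version details requested by the template.
kubectl version                                    # Kubernetes client/server versions
kubectl -n kube-system get pods -l app=antrea \
    -o jsonpath='{.items[*].spec.containers[*].image}'; echo   # Antrea image tags
uname -r                                           # Node kernel version (run on the Node)
containerd --version || docker --version           # container runtime in use
modinfo openvswitch | head -n 5                    # OVS kernel module, if compiled manually
```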
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "(devices-nic)= ```{note} The `nic` device type is supported for both containers and VMs. NICs support hotplugging for both containers and VMs (with the exception of the `ipvlan` NIC type). ``` Network devices, also referred to as Network Interface Controllers or NICs, supply a connection to a network. Incus supports several different types of network devices (NIC types). When adding a network device to an instance, there are two methods to specify the type of device that you want to add: through the `nictype` device option or the `network` device option. These two device options are mutually exclusive, and you can specify only one of them when you create a device. However, note that when you specify the `network` option, the `nictype` option is derived automatically from the network type. `nictype` : When using the `nictype` device option, you can specify a network interface that is not controlled by Incus. Therefore, you must specify all information that Incus needs to use the network interface. When using this method, the `nictype` option must be specified when creating the device, and it cannot be changed later. `network` : When using the `network` device option, the NIC is linked to an existing {ref}`managed network <managed-networks>`. In this case, Incus has all required information about the network, and you need to specify only the network name when adding the device. When using this method, Incus derives the `nictype` option automatically. The value is read-only and cannot be changed. Other device options that are inherited from the network are marked with a \"yes\" in the \"Managed\" column of the NIC-specific tables of device options. You cannot customize these options directly for the NIC if you're using the `network` method. See {ref}`networks` for more information. The following NICs can be added using the `nictype` or `network` options: : Uses an existing bridge on the host and creates a virtual device pair to connect the host bridge to the instance. : Sets up a new network device based on an existing one, but using a different MAC address. : Passes a virtual function of an SR-IOV-enabled physical network device into the instance. : Passes a physical device from the host through to the instance. The targeted device will vanish from the host and appear in the instance. The following NICs can be added using only the `network` option: : Uses an existing OVN network and creates a virtual device pair to connect the instance to it. The following NICs can be added using only the `nictype` option: : Sets up a new network device based on an existing one, using the same MAC address but a different IP. : Creates a virtual device pair, putting one side in the instance and leaving the other side on the host. : Creates a virtual device pair to connect the host to the instance and sets up static routes and proxy ARP/NDP entries to allow the instance to join the network of a designated parent interface. The available device options depend on the NIC type and are listed in the tables in the following sections. (nic-bridged)= ```{note} You can select this NIC type through the `nictype` option or the `network` option (see {ref}`network-bridge` for information about the managed `bridge` network). 
``` A `bridged` NIC uses an existing bridge on the host and creates a virtual device pair to connect the host bridge to the" }, { "data": "NIC devices of type `bridged` have the following device options: Key | Type | Default | Managed | Description :-- | :-- | :-- | :-- | :-- `boot.priority` | integer | - | no | Boot priority for VMs (higher value boots first) `host_name` | string | randomly assigned | no | The name of the interface inside the host `hwaddr` | string | randomly assigned | no | The MAC address of the new interface `ipv4.address` | string | - | no | An IPv4 address to assign to the instance through DHCP (can be `none` to restrict all IPv4 traffic when `security.ipv4_filtering` is set) `ipv4.routes` | string | - | no | Comma-delimited list of IPv4 static routes to add on host to NIC `ipv4.routes.external` | string | - | no | Comma-delimited list of IPv4 static routes to route to the NIC and publish on uplink network (BGP) `ipv6.address` | string | - | no | An IPv6 address to assign to the instance through DHCP (can be `none` to restrict all IPv6 traffic when `security.ipv6_filtering` is set) `ipv6.routes` | string | - | no | Comma-delimited list of IPv6 static routes to add on host to NIC `ipv6.routes.external` | string | - | no | Comma-delimited list of IPv6 static routes to route to the NIC and publish on uplink network (BGP) `limits.egress` | string | - | no | I/O limit in bit/s for outgoing traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.ingress` | string | - | no | I/O limit in bit/s for incoming traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.max` | string | - | no | I/O limit in bit/s for both incoming and outgoing traffic (same as setting both `limits.ingress` and `limits.egress`) `limits.priority` | integer | - | no | The `skb->priority` value (32-bit unsigned integer) for outgoing traffic, to be used by the kernel queuing discipline (qdisc) to prioritize network packets (The effect of this value depends on the particular qdisc implementation, for example, `SKBPRIO` or `QFQ`. Consult the kernel qdisc documentation before setting this value.) 
`mtu` | integer | parent MTU | yes | The MTU of the new interface `name` | string | kernel assigned | no | The name of the interface inside the instance `network` | string | - | no | The managed network to link the device to (instead of specifying the `nictype` directly) `parent` | string | - | yes | The name of the host device (required if specifying the `nictype` directly) `queue.tx.length` | integer | - | no | The transmit queue length for the NIC `security.ipv4filtering`| bool | `false` | no | Prevent the instance from spoofing another instance's IPv4 address (enables `security.macfiltering`) `security.ipv6filtering`| bool | `false` | no | Prevent the instance from spoofing another instance's IPv6 address (enables `security.macfiltering`) `security.mac_filtering` | bool | `false` | no | Prevent the instance from spoofing another instance's MAC address `security.port_isolation`| bool | `false` | no | Prevent the NIC from communicating with other NICs in the network that have port isolation enabled `vlan` | integer | - | no | The VLAN ID to use for non-tagged traffic (can be `none` to remove port from default VLAN)" }, { "data": "| integer | - | no | Comma-delimited list of VLAN IDs or VLAN ranges to join for tagged traffic (nic-macvlan)= ```{note} You can select this NIC type through the `nictype` option or the `network` option (see {ref}`network-macvlan` for information about the managed `macvlan` network). ``` A `macvlan` NIC sets up a new network device based on an existing one, but using a different MAC address. If you are using a `macvlan` NIC, communication between the Incus host and the instances is not possible. Both the host and the instances can talk to the gateway, but they cannot communicate directly. NIC devices of type `macvlan` have the following device options: Key | Type | Default | Managed | Description :-- | :-- | :-- | :-- | :-- `boot.priority` | integer | - | no | Boot priority for VMs (higher value boots first) `gvrp` | bool | `false` | no | Register VLAN using GARP VLAN Registration Protocol `hwaddr` | string | randomly assigned | no | The MAC address of the new interface `mtu` | integer | parent MTU | yes | The MTU of the new interface `name` | string | kernel assigned | no | The name of the interface inside the instance `network` | string | - | no | The managed network to link the device to (instead of specifying the `nictype` directly) `parent` | string | - | yes | The name of the host device (required if specifying the `nictype` directly) `vlan` | integer | - | no | The VLAN ID to attach to (nic-sriov)= ```{note} You can select this NIC type through the `nictype` option or the `network` option (see {ref}`network-sriov` for information about the managed `sriov` network). ``` An `sriov` NIC passes a virtual function of an SR-IOV-enabled physical network device into the instance. An SR-IOV-enabled network device associates a set of virtual functions (VFs) with the single physical function (PF) of the network device. PFs are standard PCIe functions. VFs, on the other hand, are very lightweight PCIe functions that are optimized for data movement. They come with a limited set of configuration capabilities to prevent changing properties of the PF. Given that VFs appear as regular PCIe devices to the system, they can be passed to instances just like a regular physical device. VF allocation : The `sriov` interface type expects to be passed the name of an SR-IOV enabled network device on the system via the `parent` property. 
Incus then checks for any available VFs on the system. By default, Incus allocates the first free VF it finds. If it detects that either none are enabled or all currently enabled VFs are in use, it bumps the number of supported VFs to the maximum value and uses the first free VF. If all possible VFs are in use or the kernel or card doesn't support incrementing the number of VFs, Incus returns an error. ```{note} If you need Incus to use a specific VF, use a `physical` NIC instead of a `sriov` NIC and set its `parent` option to the VF name. ``` NIC devices of type `sriov` have the following device options: Key | Type | Default | Managed | Description :-- | :-- | :-- | :-- | :--" }, { "data": "| integer | - | no | Boot priority for VMs (higher value boots first) `hwaddr` | string | randomly assigned | no | The MAC address of the new interface `mtu` | integer | kernel assigned | yes | The MTU of the new interface `name` | string | kernel assigned | no | The name of the interface inside the instance `network` | string | - | no | The managed network to link the device to (instead of specifying the `nictype` directly) `parent` | string | - | yes | The name of the host device (required if specifying the `nictype` directly) `security.mac_filtering`| bool | `false` | no | Prevent the instance from spoofing another instance's MAC address `vlan` | integer | - | no | The VLAN ID to attach to (nic-ovn)= ```{note} You can select this NIC type only through the `network` option (see {ref}`network-ovn` for information about the managed `ovn` network). ``` An `ovn` NIC uses an existing OVN network and creates a virtual device pair to connect the instance to it. (devices-nic-hw-acceleration)= SR-IOV hardware acceleration : To use `acceleration=sriov`, you must have a compatible SR-IOV physical NIC that supports the Ethernet switch device driver model (`switchdev`) in your Incus host. Incus assumes that the physical NIC (PF) is configured in `switchdev` mode and connected to the OVN integration OVS bridge, and that it has one or more virtual functions (VFs) active. To achieve this, follow these basic prerequisite setup steps: Set up PF and VF: Activate some VFs on PF (called `enp9s0f0np0` in the following example, with a PCI address of `0000:09:00.0`) and unbind them. Enable `switchdev` mode and `hw-tc-offload` on the PF. Rebind the VFs. ``` echo 4 > /sys/bus/pci/devices/0000:09:00.0/sriov_numvfs for i in $(lspci -nnn | grep \"Virtual Function\" | cut -d' ' -f1); do echo 0000:$i > /sys/bus/pci/drivers/mlx5_core/unbind; done devlink dev eswitch set pci/0000:09:00.0 mode switchdev ethtool -K enp9s0f0np0 hw-tc-offload on for i in $(lspci -nnn | grep \"Virtual Function\" | cut -d' ' -f1); do echo 0000:$i > /sys/bus/pci/drivers/mlx5_core/bind; done ``` Set up OVS by enabling hardware offload and adding the PF NIC to the integration bridge (normally called `br-int`): ``` ovs-vsctl set openvswitch . otherconfig:hw-offload=true systemctl restart openvswitch-switch ovs-vsctl add-port br-int enp9s0f0np0 ip link set enp9s0f0np0 up ``` VDPA hardware acceleration : To use `acceleration=vdpa`, you must have a compatible VDPA physical NIC. 
The setup is the same as for SR-IOV hardware acceleration, except that you must also enable the `vhost_vdpa` module and check that you have some available VDPA management devices : ``` modprobe vhost_vdpa && vdpa mgmtdev show ``` NIC devices of type `ovn` have the following device options: Key | Type | Default | Managed | Description :-- | :-- | :-- | :-- | :-- `acceleration` | string | `none` | no | Enable hardware offloading (either `none`, `sriov` or `vdpa`, see {ref}`devices-nic-hw-acceleration`) `boot.priority` | integer | - | no | Boot priority for VMs (higher value boots first) `host_name` | string | randomly assigned | no | The name of the interface inside the host `hwaddr` | string | randomly assigned | no | The MAC address of the new interface `ipv4.address` | string | - | no | An IPv4 address to assign to the instance through DHCP `ipv4.routes` | string | - | no | Comma-delimited list of IPv4 static routes to route to the NIC `ipv4.routes.external` | string | - | no | Comma-delimited list of IPv4 static routes to route to the NIC and publish on uplink network `ipv6.address` | string | - | no | An IPv6 address to assign to the instance through DHCP" }, { "data": "| string | - | no | Comma-delimited list of IPv6 static routes to route to the NIC `ipv6.routes.external` | string | - | no | Comma-delimited list of IPv6 static routes to route to the NIC and publish on uplink network `name` | string | kernel assigned | no | The name of the interface inside the instance `nested` | string | - | no | The parent NIC name to nest this NIC under (see also `vlan`) `network` | string | - | yes | The managed network to link the device to (required) `security.acls` | string | - | no | Comma-separated list of network ACLs to apply `security.acls.default.egress.action` | string | `reject` | no | Action to use for egress traffic that doesn't match any ACL rule `security.acls.default.egress.logged` | bool | `false` | no | Whether to log egress traffic that doesn't match any ACL rule `security.acls.default.ingress.action`| string | `reject` | no | Action to use for ingress traffic that doesn't match any ACL rule `security.acls.default.ingress.logged`| bool | `false` | no | Whether to log ingress traffic that doesn't match any ACL rule `vlan` | integer | - | no | The VLAN ID to use when nesting (see also `nested`) (nic-physical)= ```{note} You can select this NIC type through the `nictype` option or the `network` option (see {ref}`network-physical` for information about the managed `physical` network). You can have only one `physical` NIC for each parent device. ``` A `physical` NIC provides straight physical device pass-through from the host. The targeted device will vanish from the host and appear in the instance (which means that you can have only one `physical` NIC for each targeted device). 
NIC devices of type `physical` have the following device options: Key | Type | Default | Managed | Description :-- | :-- | :-- | :-- | :-- `boot.priority` | integer | - | no | Boot priority for VMs (higher value boots first) `gvrp` | bool | `false` | no | Register VLAN using GARP VLAN Registration Protocol `hwaddr` | string | randomly assigned | no | The MAC address of the new interface `mtu` | integer | parent MTU | no | The MTU of the new interface `name` | string | kernel assigned | no | The name of the interface inside the instance `network` | string | - | no | The managed network to link the device to (instead of specifying the `nictype` directly) `parent` | string | - | yes | The name of the host device (required if specifying the `nictype` directly) `vlan` | integer | - | no | The VLAN ID to attach to (nic-ipvlan)= ```{note} This NIC type is available only for containers, not for virtual machines. You can select this NIC type only through the `nictype` option. This NIC type does not support hotplugging. ``` An `ipvlan` NIC sets up a new network device based on an existing one, using the same MAC address but a different IP. If you are using an `ipvlan` NIC, communication between the Incus host and the instances is not possible. Both the host and the instances can talk to the gateway, but they cannot communicate directly. Incus currently supports IPVLAN in L2 and L3S mode. In this mode, the gateway is automatically set by Incus, but the IP addresses must be manually specified using the `ipv4.address` and/or `ipv6.address` options before the container is" }, { "data": "DNS : The name servers must be configured inside the container, because they are not set automatically. To do this, set the following `sysctls`: When using IPv4 addresses: ``` net.ipv4.conf.<parent>.forwarding=1 ``` When using IPv6 addresses: ``` net.ipv6.conf.<parent>.forwarding=1 net.ipv6.conf.<parent>.proxy_ndp=1 ``` NIC devices of type `ipvlan` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `gvrp` | bool | `false` | Register VLAN using GARP VLAN Registration Protocol `hwaddr` | string | randomly assigned | The MAC address of the new interface `ipv4.address` | string | - | Comma-delimited list of IPv4 static addresses to add to the instance (in `l2` mode, these can be specified as CIDR values or singular addresses using a subnet of `/24`) `ipv4.gateway` | string | `auto` (`l3s`), - (`l2`) | In `l3s` mode, whether to add an automatic default IPv4 gateway (can be `auto` or `none`); in `l2` mode, the IPv4 address of the gateway `ipv4.host_table` | integer | - | The custom policy routing table ID to add IPv4 static routes to (in addition to the main routing table) `ipv6.address` | string | - | Comma-delimited list of IPv6 static addresses to add to the instance (in `l2` mode, these can be specified as CIDR values or singular addresses using a subnet of `/64`) `ipv6.gateway` | string | `auto` (`l3s`), - (`l2`) | In `l3s` mode, whether to add an automatic default IPv6 gateway (can be `auto` or `none`); in `l2` mode, the IPv6 address of the gateway `ipv6.host_table` | integer | - | The custom policy routing table ID to add IPv6 static routes to (in addition to the main routing table) `mode` | string | `l3s` | The IPVLAN mode (either `l2` or `l3s`) `mtu` | integer | parent MTU | The MTU of the new interface `name` | string | kernel assigned | The name of the interface inside the instance `parent` | string | - | The name of the host device (required) `vlan` | integer | - | The VLAN 
ID to attach to (nic-p2p)= ```{note} You can select this NIC type only through the `nictype` option. ``` A `p2p` NIC creates a virtual device pair, putting one side in the instance and leaving the other side on the host. NIC devices of type `p2p` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `boot.priority` | integer | - | Boot priority for VMs (higher value boots first) `host_name` | string | randomly assigned | The name of the interface inside the host `hwaddr` | string | randomly assigned | The MAC address of the new interface `ipv4.routes` | string | - | Comma-delimited list of IPv4 static routes to add on host to NIC `ipv6.routes` | string | - | Comma-delimited list of IPv6 static routes to add on host to NIC `limits.egress` | string | - | I/O limit in bit/s for outgoing traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.ingress` | string | - | I/O limit in bit/s for incoming traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.max` | string | - | I/O limit in bit/s for both incoming and outgoing traffic (same as setting both `limits.ingress` and `limits.egress`)" }, { "data": "| integer | - | The `skb->priority` value (32-bit unsigned integer) for outgoing traffic, to be used by the kernel queuing discipline (qdisc) to prioritize network packets (The effect of this value depends on the particular qdisc implementation, for example, `SKBPRIO` or `QFQ`. Consult the kernel qdisc documentation before setting this value.) `mtu` | integer | kernel assigned | The MTU of the new interface `name` | string | kernel assigned | The name of the interface inside the instance `queue.tx.length` | integer | - | The transmit queue length for the NIC (nic-routed)= ```{note} You can select this NIC type only through the `nictype` option. ``` A `routed` NIC creates a virtual device pair to connect the host to the instance and sets up static routes and proxy ARP/NDP entries to allow the instance to join the network of a designated parent interface. For containers it uses a virtual Ethernet device pair, and for VMs it uses a TAP device. This NIC type is similar in operation to `ipvlan`, in that it allows an instance to join an external network without needing to configure a bridge and shares the host's MAC address. However, it differs from `ipvlan` because it does not need IPVLAN support in the kernel, and the host and the instance can communicate with each other. This NIC type respects `netfilter` rules on the host and uses the host's routing table to route packets, which can be useful if the host is connected to multiple networks. IP addresses, gateways and routes : You must manually specify the IP addresses (using `ipv4.address` and/or `ipv6.address`) before the instance is started. For containers, the NIC configures the following link-local gateway IPs on the host end and sets them as the default gateways in the container's NIC interface: 169.254.0.1 fe80::1 For VMs, the gateways must be configured manually or via a mechanism like `cloud-init` (see the {ref}`how to guide <instances-routed-nic-vm>`). ```{note} If your container image is configured to perform DHCP on the interface, it will likely remove the automatically added configuration. In this case, you must configure the IP addresses and gateways manually or via a mechanism like `cloud-init`. ``` The NIC type configures static routes on the host pointing to the instance's `veth` interface for all of the instance's IPs. 
Multiple IP addresses : Each NIC device can have multiple IP addresses added to it. However, it might be preferable to use multiple `routed` NIC interfaces instead. In this case, set the `ipv4.gateway` and `ipv6.gateway` values to `none` on any subsequent interfaces to avoid default gateway conflicts. Also consider specifying a different host-side address for these subsequent interfaces using `ipv4.hostaddress` and/or `ipv6.hostaddress`. Parent interface : This NIC can operate with and without a `parent` network interface set. : With the `parent` network interface set, proxy ARP/NDP entries of the instance's IPs are added to the parent interface, which allows the instance to join the parent interface's network at layer 2. : To enable this, the following network configuration must be applied on the host via `sysctl`: When using IPv4 addresses: ``` net.ipv4.conf.<parent>.forwarding=1 ``` When using IPv6 addresses: ``` net.ipv6.conf.all.forwarding=1 net.ipv6.conf.<parent>.forwarding=1 net.ipv6.conf.all.proxy_ndp=1" }, { "data": "``` NIC devices of type `routed` have the following device options: Key | Type | Default | Description :-- | :-- | :-- | :-- `gvrp` | bool | `false` | Register VLAN using GARP VLAN Registration Protocol `host_name` | string | randomly assigned | The name of the interface inside the host `hwaddr` | string | randomly assigned | The MAC address of the new interface `ipv4.address` | string | - | Comma-delimited list of IPv4 static addresses to add to the instance `ipv4.gateway` | string | `auto` | Whether to add an automatic default IPv4 gateway (can be `auto` or `none`) `ipv4.host_address` | string | `169.254.0.1` | The IPv4 address to add to the host-side `veth` interface `ipv4.host_table` | integer | - | The custom policy routing table ID to add IPv4 static routes to (in addition to the main routing table) `ipv4.neighbor_probe` | bool | `true` | Whether to probe the parent network for IP address availability `ipv4.routes` | string | - | Comma-delimited list of IPv4 static routes to add on host to NIC (without L2 ARP/NDP proxy) `ipv6.address` | string | - | Comma-delimited list of IPv6 static addresses to add to the instance `ipv6.gateway` | string | `auto` | Whether to add an automatic default IPv6 gateway (can be `auto` or `none`) `ipv6.host_address` | string | `fe80::1` | The IPv6 address to add to the host-side `veth` interface `ipv6.host_table` | integer | - | The custom policy routing table ID to add IPv6 static routes to (in addition to the main routing table) `ipv6.neighbor_probe` | bool | `true` | Whether to probe the parent network for IP address availability `ipv6.routes` | string | - | Comma-delimited list of IPv6 static routes to add on host to NIC (without L2 ARP/NDP proxy) `limits.egress` | string | - | I/O limit in bit/s for outgoing traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.ingress` | string | - | I/O limit in bit/s for incoming traffic (various suffixes supported, see {ref}`instances-limit-units`) `limits.max` | string | - | I/O limit in bit/s for both incoming and outgoing traffic (same as setting both `limits.ingress` and `limits.egress`) `limits.priority` | integer | - | The `skb->priority` value (32-bit unsigned integer) for outgoing traffic, to be used by the kernel queuing discipline (qdisc) to prioritize network packets (The effect of this value depends on the particular qdisc implementation, for example, `SKBPRIO` or `QFQ`. Consult the kernel qdisc documentation before setting this value.) 
`mtu` | integer | parent MTU | The MTU of the new interface `name` | string | kernel assigned | The name of the interface inside the instance `parent` | string | - | The name of the host device to join the instance to `queue.tx.length` | integer | - | The transmit queue length for the NIC `vlan` | integer | - | The VLAN ID to attach to The `bridged`, `macvlan` and `ipvlan` interface types can be used to connect to an existing physical network. `macvlan` effectively lets you fork your physical NIC, getting a second interface that is then used by the instance. This method saves you from creating a bridge device and virtual Ethernet device pairs and usually offers better performance than a bridge. The downside to this method is that `macvlan` devices, while able to communicate between themselves and to the outside, cannot talk to their parent device. This means that you can't use `macvlan` if you ever need your instances to talk to the host itself. In such case, a `bridge` device is preferable. A bridge also lets you use MAC filtering and I/O limits," } ]
{ "category": "Runtime", "file_name": "devices_nic.md", "project_name": "lxd", "subcategory": "Container Runtime" }
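As a hedged example of the bridged options above (the instance name `c1` and host bridge `br0` are placeholders), a NIC with I/O limits and MAC filtering can be added like this:

```shell
# Sketch: attach a bridged NIC with bandwidth limits to an existing instance.
incus config device add c1 eth0 nic \
    nictype=bridged \
    parent=br0 \
    limits.ingress=100Mbit \
    limits.egress=100Mbit \
    security.mac_filtering=true

# Review the resulting device configuration.
incus config device show c1
```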
[ { "data": "Please keep this document in mind when using these guest kernel configuration files. Firecracker as a virtual machine monitor is designed and built for use with specific goals, so these kernel configurations are tuned to be secure and to use the host's resources as optimally as possible, specifically allowing for as many guests to be running concurrently as possible (high density). For example, one of the mechanisms to improve density is to reduce virtual memory areas of the guest. This decreases the page table size and improves available memory on the host for other guests to occupy. As Firecracker is intended for ephemeral compute (short-lived environments, not intended to run indefinitely), a Firecracker guest is not expected to require large memory sizes. One interesting use-case where this can be seen to cause odd side affects is one where golang's race detector for aarch64 expected a 48-bit space, but the guest's kernel config enforced 39-bit. See ." } ]
{ "category": "Runtime", "file_name": "DISCLAIMER.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Longhorn is a lightweight, reliable and easy to use distributed block storage system for Kubernetes. Once deployed, users can leverage persistent volumes provided by Longhorn. Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Longhorn supports snapshots, backups and even allows you to schedule recurring snapshots and backups! Important: Please install Longhorn chart in `longhorn-system` namespace only. Warning: Longhorn doesn't support downgrading from a higher version to a lower version." } ]
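For context, a typical chart installation into the required namespace looks roughly like the following; the Helm repository URL is an assumption to verify against the Longhorn documentation.

```shell
# Sketch: install the Longhorn chart into the longhorn-system namespace only.
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn \
    --namespace longhorn-system \
    --create-namespace

# Verify the components are running before creating volumes.
kubectl -n longhorn-system get pods
```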
{ "category": "Runtime", "file_name": "app-readme.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List node IDs and associated information ``` -h, --help help for nodeid ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - List node IDs and the associated IP addresses" } ]
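A brief usage sketch (the socket path is an assumption for a default agent install):

```shell
# Sketch: list node IDs known to the local Cilium agent.
cilium-dbg nodeid list

# Point at a specific agent API socket if needed:
cilium-dbg nodeid list -H unix:///var/run/cilium/cilium.sock
```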
{ "category": "Runtime", "file_name": "cilium-dbg_nodeid.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Over time, we update the base development environment that everyone is using. The current target is the x86_64 2021.4.x image series as noted in . The purpose of this guide is to describe how an existing development zone (x86_64 2018.4) should be upgraded from one version of pkgsrc to the next (x86_64 2021.4). In the past, the upgrade for this target was more disruptive than previous ones, as we went from a `multiarch` release to an `x86_64` release, which needed an entirely new /opt/local installation as pkgsrc cannot upgraded across this sort of boundary. The simplest and safest route still is to simply create a [fresh zone](../README.md#setting-up-a-build-environment) However, if you have customisations in your zone, reinstalling may be time-consuming. This guide documents a procedure to allow you to upgrade without provisioning a new devzone. If you have advice on what else we could add to these instructions, please do get in touch. NOTE: All pkgin and pkg_add instructions below should be done as root, or with pfexec, or with sudo. First, it's helpful to snapshot the package list that you have installed. You should do this by running: ``` pkgin export | sort > /package.list ``` Because all the base pkgsrc libraries that are used are going to change, each build environment will need to be cleaned out. You should identify the root of each smartos-live repository clone and for each one, run the `gmake clean` target. This will cause all the builds to be cleaned out. The next thing you should do is take a snapshot of your instance or potentially back up important information. To snapshot your instance, you can use the tool. If you have not already, follow the instructions such that you can point it at the system with your instance. First, identify the name of the instance that you are working on. This can usually be done by running `triton inst list`. For example: ``` $ triton inst list SHORTID NAME IMG STATE FLAGS AGE 122fff4d march-dev [email protected] running - 3y b80d08de python [email protected] running - 2y 2de86964 iasl ubuntu-16.04@20161004 running - 1y ``` In this case, we're interested in upgrading the instance called `march-dev`. Next we create a snapshot and verify it exists: ``` $ triton inst snapshot create --name=2021.4-upgrade march-dev Creating snapshot 2021.4-upgrade of instance march-dev $ triton inst snapshot list march-dev NAME STATE CREATED 2021.4-upgrade created 2018-09-28T18:40:14.000Z ``` When you're not running in a Triton environment, you can use vmadm to create the snapshot. First, find the VM you want to use in vmadm list. Then you use the `create-snapshot` option. ``` [root@00-0c-29-37-80-28 ~]# vmadm list UUID TYPE RAM STATE ALIAS 79809c3b-6c21-4eee-ba85-b524bcecfdb8 OS 4096 running multiarch [root@00-0c-29-37-80-28 ~]# vmadm create-snapshot 79809c3b-6c21-4eee-ba85-b524bcecfdb8 2021.4-upgrade Created snapshot 2021.4-upgrade for VM 79809c3b-6c21-4eee-ba85-b524bcecfdb8 ``` If your VM has delegated snapshots, you won't be able to use `vmadm` to take snapshots. In this case (and assuming you have CLI access to the global zone) you should take a manual recursive snapshot of the VM instead: ``` [root@00-0c-29-37-80-28 ~]# zfs snapshot -r zones/79809c3b-6c21-4eee-ba85-b524bcecfdb8@pre-upgrade ``` You will not be able to use `vmadm` to roll-back to snapshots created with `zfs`, so you would need to use manual zfs commands to do" }, { "data": "You should halt the VM before attempting such a rollback. 
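If you ever need to fall back to a manually created snapshot, the rollback would look roughly like the sketch below. It assumes the default `zones/<uuid>` dataset layout, the `pre-upgrade` snapshot name used above, global-zone access, and that every descendant dataset has the recursive snapshot; adjust to your setup.

```shell
# Sketch: roll a VM (and its delegated datasets) back to the recursive snapshot.
# Run from the global zone with the VM halted.
UUID=79809c3b-6c21-4eee-ba85-b524bcecfdb8   # replace with your VM's UUID

vmadm stop "$UUID"

# zfs rollback does not recurse into child datasets, so roll each one back.
for ds in $(zfs list -H -o name -r "zones/$UUID"); do
    zfs rollback -r "$ds@pre-upgrade"
done

vmadm start "$UUID"
```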
At this point we will be taking steps which will potentially break your development zone. If you encounter problems, please don't hesitate to reach out for assistance. The approach we describe is to cleanly shut down services that are running from /opt/local, move /opt/local aside, and install the 2021Q4 x86_64 pkgsrc bootstrap bundle. We then reinstall as many packages as possible from the set that was previously manually installed, noting that some packages may have been dropped from the pkgsrc repository. First we need to determine which SMF services are running from /opt/local in order to shut them down cleanly. We save a list of SMF manifests as well as the SMF services and instances that were present. You may choose to further prune this list to select only services that were actually online at the time of upgrade. ``` cd /tmp for mf in $(pkg_admin dump | grep svc/manifest | cut -d' ' -f2); do echo \"Disabling services from $mf\" echo $mf >> /old-smf-manifests.list for svc in $(svccfg inventory $mf); do svcadm disable $svc echo $svc >> /disabled-services.list done done ``` Determining which SMF properties were set locally on SMF services or SMF instances vs. which are the shipped defaults is tricky in SmartOS. Doing a diff of the `svccfg -s <instance> listprop` against the output of: ``` svc:/instance> selectsnap initial [initial]svc:/instance> listprop ``` can provide some answers (omitting the 'general/enabled' property group in the comparison). As 'listprop' output adjusts to column width, stripping spaces and sorting with `sed -e 's/ \\+/ /g' | sort` will be necessary. Unfortunately, that diff won't include a list of the properties modified on the SMF service itself since SMF snapshots in SmartOS do not track properties set at the service level. To find those properties, it might be possible to logically compare the XML produced from: `svccfg -s <service> export` against the shipped XML manifest from /opt/local, included in the list of manifest files we found earlier. The pkgsrc package `xmlstarlet` provides a command `xml canonic`, which when run against an XML file, produces a canonical representation which could then be used to determine modified properties. `xmllint` may also be used to produce a view of a given pair of XML files in order to more easily compare them. NOTE: This step is revertable if no subsequent steps are taken. Edit these files: ``` /opt/local/etc/pkg_install.conf /opt/local/etc/pkgin/repositories.conf ``` And change any instance of `2018Q4` to `2021Q4`, and any instance of `pkgsrc.joyent.com` to `pkgsrc.smartos.org`. There should be one instance in each file. Here is a pre-upgrade view: ``` smartos-build(~)[0]% grep Q4 /opt/local/etc/pkg_install.conf PKG_PATH=https://pkgsrc.joyent.com/packages/SmartOS/2018Q4/x86_64/All smartos-build(~)[0]% grep Q4 /opt/local/etc/pkgin/repositories.conf https://pkgsrc.joyent.com/packages/SmartOS/2018Q4/x86_64/All smartos-build(~)[0]% ``` and a post-upgrade view: ``` smartos-build-2(~)[0]% grep Q4 /opt/local/etc/pkg_install.conf PKG_PATH=https://pkgsrc.smartos.org/packages/SmartOS/2021Q4/x86_64/All smartos-build-2(~)[0]% grep Q4 /opt/local/etc/pkgin/repositories.conf https://pkgsrc.smartos.org/packages/SmartOS/2021Q4/x86_64/All smartos-build-2(~)[0]% ``` NOTE: This step is NOT revertable once taken. A few upgrades first need to be installed explicitly, to prevent dependency tripping: ``` pkg_add -U libarchive pkg_install pkgin ``` Those will enable a 2021.4-savvy pkgin to perform the next step.
Now that we've bootstrapped, we'd like to" }, { "data": "``` pkgin upgrade ``` The output should look like this: ``` smartos-build-2(~)[0]% pfexec pkgadd -U libarchive pkginstall pkgin =========================================================================== The following directories are no longer being used by openssl-1.0.2p, and they can be removed if no other packages are using them: /opt/local/etc/openssl/certs libgpg-error-1.43 libgcrypt-1.9.4 libfontenc-1.1.4 libffi-3.4.2nb1 libfastjson-0.99.8nb1 libestr-0.1.11 libcares-1.18.1 libXi-1.8 libXft-2.3.4 libXext-1.3.4 libXdmcp-1.1.3 libXau-1.0.9 libX11-1.7.3.1 less-563 lcms2-2.12 jbigkit-2.1nb1 icu-70.1 http-parser-2.9.4 harfbuzz-3.1.2nb1 gtar-base-1.34 gsed-4.8nb1 grep-3.7 gmp-6.2.1nb2 gmake-4.3nb3 glib2-2.70.2nb1 git-gitk-2.34.1 git-2.34.1 giflib-5.2.1nb4 gettext-lib-0.21 gettext-asprintf-0.21 gettext-0.21 gcc7-libs-7.5.0nb5 gcc7-7.5.0nb5 gawk-5.1.1 freetype2-2.10.4 fontconfig-2.13.1nb5 findutils-4.8.0 expat-2.4.1 encodings-1.0.5 emacs26-nox11-26.3nb1 diffutils-3.7 db4-4.8.30nb1 cyrus-sasl-2.1.27nb2 curl-7.81.0 coreutils-9.0 cdrtools-3.02a10 bmake-20200524nb1 bison-3.8.2 binutils-2.37 automake-1.16.5 autoconf-2.71nb1 python27-2.7.18nb6 perl-5.34.0nb3 git-docs-2.34.1 git-contrib-2.34.1 git-base-2.34.1nb1 gettext-tools-0.21nb3 gettext-m4-0.21 p5-Net-SSLeay-1.90nb1 pcre2-10.39 7 packages to install: libXScrnSaver-1.2.3 brotli-1.0.9 blas-3.10.0 gcc10-10.3.0 lmdb-0.9.29 graphite2-1.3.14 python39-3.9.9nb1 26 to refresh, 125 to upgrade, 7 to install 726M to download, 595M to install proceed ? [Y/n] ``` After the install has completed, you should review the install output, and consult `/var/db/pkgin/pkg_install-err.log` to see if there are any packages which failed to install which may be important. We can now enable the SMF services that were previously disabled. If you had previously identified SMF properties that should be reset on your updated instances, you should set those properties on instances before enabling them. Similarly, if there were /opt/local/etc configuration files that need to be restored or merged from any changes you may have made in /opt/local.bak/etc, now is the time to do that. Recall that before upgrading, we saved a list of old SMF manifests in `/old-smf-manifests`. You should check that those manifest files still exist on your new /opt/local pkgsrc installation. If those manifests do not exist, then it's likely that the corresponding package does not exist in the 2021Q4 pkgsrc install, and that attempting to re-enable the SMF service post-upgrade will fail. In that case, the SMF service should be deleted using: ``` svccfg -s <instance> delete ``` Otherwise, the services can now be enabled using: ``` svcadm restart manifest-import for service in $(cat disabled-services.list) ; do svcadm enable -rs $service done ``` During this command, we may see warnings about `svc:/milestone/network:default` having a dependency on `svc:/network/physical`, which has multiple instances, but this warning can be ignored. Finally, we can compare which packages are now installed: ``` pkgin export | sort > /package.list.new /opt/local/bin/diff -y /package.list /package.list.new ``` Note that the packages normally installed by smartos-live's `configure` script might be missing at this point. When you next run `configure` in advance of doing a smartos-live build, they will be installed from http://us-central.manta.mnx.io/Joyent_Dev/public/releng/pkgsrc. 
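If the diff shows previously-installed packages that are now missing and that you still want, `pkgin import` can bulk-install from the list saved before the upgrade. This is a convenience sketch only: packages that were dropped from the 2021Q4 repository will simply be reported as not found and need to be handled by hand.

```bash
# Reinstall everything recorded in the pre-upgrade export.
# Entries that no longer exist in 2021Q4 are reported and skipped.
pkgin import /package.list
```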
At this point, you should be able to build a post-OS-8349 (2021.4) revision of smartos-live and repos. NOTE that illumos-extra must be updated concurrently with smartos-live. You may also reboot your dev zone and have it come up cleanly. Note that the following files in /etc will now lie to you: /etc/motd /etc/pkgsrc_version You may find it useful to manually update those files to correspond to the /opt/local 2021Q4 pkgsrc installation. To test this, start a fresh clone of smartos-live and build. For example: ``` $ git clone https://github.com/TritonDataCenter/smartos-live test $ cd test $ ./configure && gmake live $ ``` Once you're satisfied, you should go through and delete the snapshots that you created in the beginning. To do so, you would either use the `triton inst snapshot delete` or `vmadm delete-snapshot`. This section is a placeholder for issues users may encounter during upgrade. To date, no issues have been encountered." } ]
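To delete the snapshots mentioned above, you can reuse the names from the snapshot steps at the start of this guide (argument order here is from memory — double-check with `--help` before running):

```bash
# Triton-managed instance:
triton inst snapshot delete march-dev 2021.4-upgrade

# Stand-alone SmartOS, from the global zone:
vmadm delete-snapshot 79809c3b-6c21-4eee-ba85-b524bcecfdb8 2021.4-upgrade

# If you took a manual recursive zfs snapshot instead:
zfs destroy -r zones/79809c3b-6c21-4eee-ba85-b524bcecfdb8@pre-upgrade
```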
{ "category": "Runtime", "file_name": "dev-upgrade.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "THIRD PARTY OPEN SOURCE SOFTWARE NOTICE Please note we provide an open source software notice for the third party open source software along with this software and/or this software component contributed by Huawei (in the following just this SOFTWARE). The open source software licenses are granted by the respective right holders. Warranty Disclaimer THE OPEN SOURCE SOFTWARE IN THIS SOFTWARE IS DISTRIBUTED IN THE HOPE THAT IT WILL BE USEFUL, BUT WITHOUT ANY WARRANTY, WITHOUT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. SEE THE APPLICABLE LICENSES FOR MORE DETAILS. Copyright Notice and License Texts Software: libc 0.2.146 Copyright notice: Copyright (c) 2014-2020 The Rust Project Developers License: MIT or Apache License Version 2.0 Copyright (C) <year> <copyright holders> Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. 
\"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix" }, { "data": "\"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed." 
}, { "data": "You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this" }, { "data": "However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Software: log 0.4.18 Copyright notice: Copyright (c) 2014 The Rust Project Developers Copyright 2014-2015 The Rust Project Developers Copyright 2015 The Rust Project Developers License: MIT or Apache License Version 2.0 Please see above. Software: byteorder 1.4.3 Copyright notice: Copyright (c) 2015 Andrew Gallant License: MIT or Unlicense Please see above. This is free and unencumbered software released into the public domain. Anyone is free to copy, modify, publish, use, compile, sell, or distribute this software, either in source code form or as a compiled binary, for any purpose, commercial or non-commercial, and by any means. In jurisdictions that recognize copyright laws, the author or authors of this software dedicate any and all copyright interest in the software to the public domain. We make this dedication for the benefit of the public at large and to the detriment of our heirs and successors. We intend this dedication to be an overt act of relinquishment in perpetuity of all present and future rights to this software under copyright law. 
THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. For more information, please refer to <http://unlicense.org/> Software: serde 1.0.163 Copyright notice: Copyright (c) David Tolnay <[email protected]> Copyright (c) Erick Tryzelaar <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: serde_json 1.0.96 Copyright notice: Copyright (c) David Tolnay <[email protected]> Copyright (c) Erick Tryzelaar <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: anyhow 1.0.71 Copyright notice: Copyright (c) David Tolnay <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: thiserror 1.0 Copyright notice: Copyright (c) David Tolnay <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: vmm-sys-util 0.11.1 Copyright notice: Copyright 2019 Intel Corporation. All Rights Reserved. Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Portions Copyright 2017 The Chromium OS Authors. All rights reserved. Copyright 2017 The Chromium OS Authors. All rights" }, { "data": "Copyright (C) 2019 Alibaba Cloud Computing. All rights reserved. Copyright 2020 Amazon.com, Inc. or its affiliates. All Rights Reserved. Copyright 2018 The Chromium OS Authors. All rights reserved. Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. License: Apache License Version 2.0 or BSD 3-Clause Please see above. Software: libusb1-sys 0.6.4 Copyright notice: Copyright (c) 2015 David Cuddeback License: MIT Please see above. Copyright (c) <year> <owner>. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Software: kvm-ioctls 0.13.0 Copyright notice: Copyright 2018 Amazon.com, Inc. or its affiliates. 
All Rights Reserved. Portions Copyright 2017 The Chromium OS Authors. All rights reserved. Copyright 2017 The Chromium OS Authors. All rights reserved. Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. License: MIT or Apache License Version 2.0 Please see above. Software: kvm-bindings 0.6.0 Copyright notice: Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. License: The APACHE 2.0 License Please see above. Software: arc-swap 1.6.0 Copyright notice: Copyright (c) 2017 arc-swap developers License: MIT or Apache License Version 2.0 Please see above. Software: syn 2.0.18 Copyright notice: Copyright (c) David Tolnay <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: quote 1.0.28 Copyright notice: Copyright (c) David Tolnay <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: proc-macro2 1.0.59 Copyright notice: Copyright (c) David Tolnay <[email protected]> Copyright (c) Alex Crichton <[email protected]> License: MIT or Apache License Version 2.0 Please see above. Software: strum 0.24.1 Copyright notice: Copyright (c) 2019 Peter Glotfelty License: MIT Please see above. Software: strum_macros 0.24.3 Copyright notice: Copyright (c) 2019 Peter Glotfelty License: MIT Please see above. Software: vfio-bindings 0.3.1 Copyright notice: Copyright (c) 2019 Intel Corporation. All Rights Reserved. License: Apache License Version 2.0 or BSD 3-Clause License Please see above. Software: once_cell 1.18.0 Copyright notice: Copyright (c) Aleksey Kladov <[email protected]> License: MIT OR Apache-2.0 Please see above. Software: io-uring 0.6.0 Copyright notice: Copyright (c) tokio-rs License: MIT OR Apache-2.0 Please see above. Software: capng 0.2.3 Copyright notice: Copyright (C) 2020 Red Hat, Inc. All rights reserved. License: Apache License Version 2.0 or BSD 3-Clause License Please see above." } ]
{ "category": "Runtime", "file_name": "Third_Party_Open_Source_Software_Notice.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for bash Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. To load completions in your current shell session: source <(cilium-operator-generic completion bash) To load completions for every new session, execute once: cilium-operator-generic completion bash > /etc/bash_completion.d/cilium-operator-generic cilium-operator-generic completion bash > $(brew --prefix)/etc/bash_completion.d/cilium-operator-generic You will need to start a new shell for this setup to take effect. ``` cilium-operator-generic completion bash ``` ``` -h, --help help for bash --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-generic_completion_bash.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "oep-number: NDM 0002 title: Metrics Collection using Node-Disk-Manager authors: \"@akhilerm\" owners: \"@kmova\" \"@vishnuitta\" editor: \"@akhilerm\" creation-date: 2019-09-05 last-updated: 2020-03-26 status: provisional * * * * * This proposal brings out the design details to implement a metrics collector for block devices. The metrics will include static data like NDM assigned UUID, Device state along with continuously varying data like used capacity, temperature data etc. NDM currently stores all the data related to the block devices in etcd in a custom resource. This is not a good approach for continuously varying metrics. Also the metrics should be available on demand at an end point from where users can query and get relevant information. This will help in monitoring of the block devices rather at the pool or volume level. A cluster level exporter that exposes static and rarely changing metrics of block devices like device state and UUIDs A node level exporter that exposes continuously varying metrics like temperature and free space on the block device The exporter will be working only at the block level and will not provide metrics about pool or volumes that are built on top of these devices. For a high level view of storage on the cluster, data points like when the disk went offline, total capacity of the disk etc should be available to the cluster admin. For deep monitoring of the storage devices, metrics like IOPS, drive temperature etc are required and these data should be queried from each block device from each node. The implementation details of exporter for collecting block level metrics. The current implementation just stores all the data related to the block devices in etcd, which is difficult to retrieve by monitoring systems like prometheus. In the current implementation all the details related to the block devices are stored in etcd. Even information like temperature is stored in the custom resource. The data in etcd is updated only when a udev event happens on the system. Since the time at which udev events occur cannot be predicted and is not so frequent the metric data stored in etcd will be obsolete most of the time. Also, this approach cannot be used in cases of continuously varying metrics like temperature and free space on the drive. There will be 2 components to the exporter. One will be running at the cluster level and the other at node level. The components can be customized depending on the metrics you need to fetch from the cluster. This component of the exporter will run at the cluster level and there will be only one running instance of this component at any time in the cluster. The primary responsibility of this exporter is to take static / rarely changing data about the block devices from etcd and expose it as prometheus" }, { "data": "The static data like Model number, UUID etc will be cached in the exporter, and metrics like state(Offline/Online) of the device will be fetched from etcd every time a request comes at the end point for the metrics. The node level exporter runs as a daemon in each node and collects all the metrics of the block device like IOPS, temperature, free space etc. It exposes a rest endpoint which can be queried to get the data. Every time the exporter is queried, all the metrics related are fetched from the disk using SMART and Seachest libraries and returned. 
``` ++ ++ 2 ++ | | 1 | +>+ | | +>+ cluster level | | etcd | | Prometheus | | exporter +<+ | | | | | 3 ++ | +<+ | | | 4 | | ++--++-+--+ ++ | ^ | ^ | | | +--8-+ | | | | | | +--5-+ | 5 | | | | 8 +-+ +--v-+--+ | | | | | | | +--+ | | | | | Node1 | | Node2 | | | | | | +->+ exporter-d | | exporter-d | +--+-+ +--+-+ 6| ^ 6| ^ v |7 v |7 +-+-+ +-+-+ |::Disk | |::Disk | ++ ++ ``` Cluster Level exporter Queries the cluster level operator for static data about all block devices in the cluster Requests etcd for all blockdevices and their properties Response from etcd with all blockdevices Response from cluster level exporter to prometheus with static metrics Sample metrics when cluster exporter endpoint is queried ``` nodeblockdevice_state{blockdevicename=\"blockdevice-6a0ec0732d1f709810a3fbbde81fc3bb\",hostname=\"minikube\",nodename=\"minikube\",path=\"sda\"} 0 nodeerrorrequest_count 2 noderejectrequest_count 0 ``` Node Level exporter Queries each storage node for disk metrics exporter-d queries certain pages on the disk to get relevant metrics data The information from the pages is analysed using SMART and seachest libraries to get relevant metric about the disk The live metrics are send back to prometheus. Sample metrics when node level exporter is queried ``` seachestblockdevicecurrenttemperature_celsius{blockdevicename=\"blockdevice-6a0ec0732d1f709810a3fbbde81fc3bb\",hostname=\"minikube\",nodename=\"minikube\",path=\"sda\"} 38 seachestblockdevicecurrenttemperature_valid{blockdevicename=\"blockdevice-6a0ec0732d1f709810a3fbbde81fc3bb\",hostname=\"minikube\",nodename=\"minikube\",path=\"sda\"} 1 seachesterrorrequest_count 0 seachestrejectrequest_count 0 ``` NDM Exporter can be installed by following the YAML . Make sure that the exporter deployment and daemonset is installed in the same namespace as of OpenEBS-NDM. The NDM exporter exposes metrics such that it can be used along with prometheus node exporter. The `hostname`, `nodename` and `path` labels are made available in all metrics exposed by NDM exporter. This labels help to club together metrics from NDM exporter and node exporter, thus giving a complete details view of the block devices in the system. New collectors for fetching additional metrics can be added by following the steps described . The exporter should be able to expose both static and dynamic metrics of the block devices. The exporter should be also customizable such that depending on the type of metrics required, the cluster or node level exporter should be deployed. Owner acceptance of `Summary` and `Motivation` sections - YYYYMMDD Agreement on `Proposal` section - YYYYMMDD Date implementation started - YYYYMMDD First OpenEBS release where an initial version of this OEP was available - YYYYMMDD Version of OpenEBS where this OEP graduated to general availability - YYYYMMDD If this OEP was retired or superseded - YYYYMMDD NA NA" } ]
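To spot-check a node-level exporter by hand, you can scrape its metrics endpoint directly, since it is a plain REST endpoint as described above. The node IP and port below are placeholders (check the ndm-exporter DaemonSet/Service in your OpenEBS namespace for the real values); the `seachest` prefix matches the sample metrics shown above.

```bash
# Placeholders -- substitute values from your cluster.
NODE_IP=10.0.0.11
PORT=9101

# Dump only the Seachest block-device metrics from this node's exporter.
curl -s "http://${NODE_IP}:${PORT}/metrics" | grep '^seachest'
```

Because every sample carries the `hostname`, `nodename` and `path` labels, the same series can then be joined in Prometheus with node-exporter disk metrics for a combined per-device view.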
{ "category": "Runtime", "file_name": "20190905-ndm-exporter-integration.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "This plugin aims to help debugging or troubleshooting in CNI plugin development. ``` { \"cniVersion\": \"0.3.1\", \"name\": \"mynet\", \"plugins\": [ { \"type\": \"ptp\", \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"172.16.30.0/24\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } }, { \"type\": \"debug\", \"cniOutput\": \"/tmp/cni_output.txt\", \"addHooks\": [ [ \"sh\", \"-c\", \"ip link set $CNI_IFNAME promisc on\" ] ] }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true}, \"externalSetMarkChain\": \"KUBE-MARK-MASQ\" } ] } ``` `cniOutput` (string, optional): output CNI request into file. `addHooks` (string array, optional): commands executed in container network namespace at interface add. (note: but just execute it and does not catch command failure) `delHooks` (string array, optional): commands executed in container network namespace at interface delete. (note: but just execute it and does not catch command failure) `checkHooks` (string array, optional): commands executed in container network namespace at interface check. (note: but just execute it and does not catch command failure) ``` CmdAdd ContainerID: cnitool-20c433bb2b1d6ede56d6 Netns: /var/run/netns/cnitest IfName: eth0 Args: Path: /opt/cni/bin StdinData: {\"cniOutput\":\"/tmp/cni_output.txt\",\"cniVersion\":\"0.3.1\",\"name\":\"test\",\"prevResult\":{\"cniVersion\":\"0.3.1\",\"interfaces\":[{\"name\":\"veth92e295cc\",\"mac\":\"56:22:7f:b7:5b:75\"},{\"name\":\"eth0\",\"mac\":\"46:b3:f3:77:bf:21\",\"sandbox\":\"/var/run/netns/cnitest\"}],\"ips\":[{\"version\":\"4\",\"interface\":1,\"address\":\"10.1.1.2/24\",\"gateway\":\"10.1.1.1\"}],\"dns\":{\"nameservers\":[\"10.64.255.25\",\"8.8.8.8\"]}},\"type\":\"none\"} CmdDel ContainerID: cnitool-20c433bb2b1d6ede56d6 Netns: /var/run/netns/cnitest IfName: eth0 Args: Path: /opt/cni/bin StdinData: {\"cniOutput\":\"/tmp/cni_output.txt\",\"cniVersion\":\"0.3.1\",\"name\":\"test\",\"type\":\"none\"} ```" } ]
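The container IDs in the sample output above (`cnitool-…`) suggest the chain was exercised with `cnitool`. The following is a sketch of such a local test, not part of the plugin's own documentation — the config path, netns name and binary directory are assumptions for a typical setup.

```bash
# Assumes the conflist above is saved as /etc/cni/net.d/mynet.conflist
# and the plugin binaries (including `debug`) are in /opt/cni/bin.
ip netns add cnitest

CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
  cnitool add mynet /var/run/netns/cnitest

# Inspect what the debug plugin recorded, then tear the interface down.
cat /tmp/cni_output.txt
CNI_PATH=/opt/cni/bin NETCONFPATH=/etc/cni/net.d \
  cnitool del mynet /var/run/netns/cnitest
```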
{ "category": "Runtime", "file_name": "README.md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "% runc-spec \"8\" runc-spec - create a new specification file runc spec [option ...] The spec command creates the new specification file named config.json for the bundle. The spec generated is just a starter file. Editing of the spec is required to achieve desired results. For example, the newly generated spec includes an args parameter that is initially set to call the sh command when the container is started. Calling sh may work for an ubuntu container or busybox, but will not work for containers that do not include the sh binary. --bundle|-b path : Set path to the root of the bundle directory. --rootless : Generate a configuration for a rootless container. Note this option is entirely different from the global --rootless option. To run a simple \"hello-world\" container, one needs to set the args parameter in the spec to call hello. This can be done using sed(1), jq(1), or a text editor. The following commands will: create a bundle for hello-world; change the command to run in a container to /hello using jq(1); run the hello command in a new hello-world container named container1. mkdir hello cd hello docker pull hello-world docker export $(docker create hello-world) > hello-world.tar mkdir rootfs tar -C rootfs -xf hello-world.tar runc spec jq '.process.args |= [\"/hello\"]' < config.json > new.json mv -f new.json config.json runc run container1 In the run command above, container1 is the name for the instance of the container that you are starting. The name you provide for the container instance must be unique on your host. An alternative for generating a customized spec config is to use oci-runtime-tool; its sub-command oci-runtime-tool generate has lots of options that can be used to do any customizations as you want. See to get more information. When starting a container through runc, the latter usually needs root privileges. If not already running as root, you can use sudo(8), for example: sudo runc start container1 Alternatively, you can start a rootless container, which has the ability to run without root privileges. For this to work, the specification file needs to be adjusted accordingly. You can pass the --rootless option to this command to generate a proper rootless spec file. runc-run(8), runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-spec.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.4.2 `velero/velero:v1.4.2` https://velero.io/docs/v1.4/ https://velero.io/docs/v1.4/upgrade-to-1.4/ log a warning instead of erroring if an additional item returned from a plugin can't be found in the Kubernetes API (#2595, @skriss) Adjust restic default time out to 4 hours and base pod resource requests to 500m CPU/512Mi memory. (#2696, @nrb) capture version of the CRD prior before invoking the remapcrdversion backup item action (#2683, @ashish-amarnath) This tag was created in code, but has no associated docker image due to misconfigured building infrastructure. v1.4.2 fixes this. https://github.com/vmware-tanzu/velero/releases/tag/v1.4.0 `velero/velero:v1.4.0` https://velero.io/docs/v1.4/ https://velero.io/docs/v1.4/upgrade-to-1.4/ Added beta-level CSI support! Added custom CA certificate support Backup progress reporting Changed backup tarball format to support all versions of a given resource increment restic volumesnapshot count after successful pvb create (#2542, @ashish-amarnath) Add details of CSI volumesnapshotcontents associated with a backup to `velero backup describe` when the `EnableCSI` feature flag is given on the velero client. (#2448, @nrb) Allow users the option to retrieve all versions of a given resource (instead of just the preferred version) from the API server with the `EnableAPIGroupVersions` feature flag. (#2373, @brito-rafa) Changed backup tarball format to store all versions of a given resource, updated backup tarball format to 1.1.0. (#2373, @brito-rafa) allow feature flags to be passed from install CLI (#2503, @ashish-amarnath) sync backups' CSI API objects into the cluster as part of the backup sync controller (#2496, @ashish-amarnath) bug fix: in error location logging hook, if the item logged under the `error` key doesn't implement the `error` interface, don't return an error since this is a valid scenario (#2487, @skriss) bug fix: in CRD restore plugin, don't use runtime.DefaultUnstructuredConverter.FromUnstructured(...) to avoid conversion issues when float64 fields contain int values (#2484, @skriss) during backup deletion also delete CSI volumesnapshotcontents that were created as a part of the backup but the associated volumesnapshot object does not exist (#2480, @ashish-amarnath) If plugins don't support the `--features` flag, don't pass it to them. Also, update the standard plugin server to ignore unknown flags. (#2479, @skriss) At backup time, if a CustomResourceDefinition appears to have been created via the v1beta1 endpoint, retrieve it from the v1beta1 endpoint instead of simply changing the APIVersion. (#2478, @nrb) update container base images from ubuntu:bionic to" }, { "data": "(#2471, @skriss) bug fix: when a resource includes/excludes list contains unresolvable items, don't remove them from the list, so that the list doesn't inadvertently end up matching all* resources. 
(#2462, @skriss) Azure: add support for getting storage account key for restic directly from an environment variable (#2455, @jaygridley) Support to skip VSL validation for the backup having SnapshotVolumes set to false or created with `--snapshot-volumes=false` (#2450, @mynktl) report backup progress (number of items backed up so far out of an estimated total number of items) during backup in the logs and as status fields on the Backup custom resource (#2440, @skriss) bug fix: populate namespace in logs for backup errors (#2438, @skriss) during backup deletion also delete CSI volumesnapshots that were created as a part of the backup (#2411, @ashish-amarnath) bump Kubernetes module dependencies to v0.17.4 to get fix for https://github.com/kubernetes/kubernetes/issues/86149 (#2407, @skriss) bug fix: save PodVolumeBackup manifests to object storage even if the volume was empty, so that on restore, the PV is dynamically reprovisioned if applicable (#2390, @skriss) Adding new restoreItemAction for PVC to update the selected-node annotation (#2377, @mynktl) Added a --cacert flag to the install command to provide the CA bundle to use when verifying TLS connections to object storage (#2368, @mansam) Added a `--cacert` flag to the velero client describe, download, and logs commands to allow passing a path to a certificate to use when verifying TLS connections to object storage. Also added a corresponding client config option called `cacert` which takes a path to a certificate bundle to use as a default when `--cacert` is not specified. (#2364, @mansam) support setting a custom CA certificate on a BSL to use when verifying TLS connections (#2353, @mansam) adding annotations on backup CRD for k8s major, minor and git versions (#2346, @brito-rafa) When the EnableCSI feature flag is provided, upload CSI VolumeSnapshots and VolumeSnapshotContents to object storage as gzipped JSON. (#2323, @nrb) add CSI snapshot API types into default restore priorities (#2318, @ashish-amarnath) refactoring: wait for all informer caches to sync before running controllers (#2299, @skriss) refactor restore code to lazily resolve resources via discovery and eliminate second restore loop for instances of restored CRDs (#2248, @skriss) upgrade to go 1.14 and migrate from `dep` to go modules (#2214, @skriss) clarify the wording for restore describe for namespaces included" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.4.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> List all metrics for the operator ``` cilium-operator-alibabacloud metrics list [flags] ``` ``` -h, --help help for list -p, --match-pattern string Show only metrics whose names match matchpattern -o, --output string json| yaml| jsonpath='{}' -s, --server-address string Address of the operator API server (default \"localhost:9234\") ``` - Access metric status of the operator" } ]
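A usage sketch built only from the flags listed above; the metric-name pattern is a placeholder and the server address shown is the documented default.

```bash
# JSON output, restricted to metric names matching the given pattern,
# querying the operator API server on its default address.
cilium-operator-alibabacloud metrics list \
  -o json \
  -p cilium_operator \
  -s localhost:9234
```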
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud_metrics_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Use Velero with a storage provider secured by a self-signed certificate\" layout: docs If you are using an S3-Compatible storage provider that is secured with a self-signed certificate, connections to the object store may fail with a `certificate signed by unknown authority` message. To proceed, provide a certificate bundle when adding the storage provider. When using the `velero install` command, you can use the `--cacert` flag to provide a path to a PEM-encoded certificate bundle to trust. ```bash velero install \\ --plugins <PLUGINCONTAINERIMAGE [PLUGINCONTAINERIMAGE]> --provider <YOUR_PROVIDER> \\ --bucket <YOUR_BUCKET> \\ --secret-file <PATHTOFILE> \\ --cacert <PATHTOCA_BUNDLE> ``` Velero will then automatically use the provided CA bundle to verify TLS connections to that storage provider when backing up and restoring. To use the describe, download, or logs commands to access a backup or restore contained in storage secured by a self-signed certificate as in the above example, you must use the `--cacert` flag to provide a path to the certificate to be trusted. ```bash velero backup describe my-backup --cacert <PATHTOCA_BUNDLE> ``` In case you are using a custom S3-compatible server, you may encounter that the backup fails with an error similar to one below. ``` rpc error: code = Unknown desc = RequestError: send request failed caused by: Get https://minio.com:3000/k8s-backup-bucket?delimiter=%2F&list-type=2&prefix=: remote error: tls: alert(116) ``` Error 116 represents certificate required as seen here in . Velero as a client does not include its certificate while performing SSL handshake with the server. From , verifying client certificate is optional on the server. You will need to change this setting on the server to make it work. Note: The `--insecure-skip-tls-verify` flag is insecure and susceptible to man-in-the-middle attacks and meant to help your testing and developing scenarios in an on-premise environment. Using this flag in production is not recommended. Velero provides a way for you to skip TLS verification on the object store when using the or by passing the `--insecure-skip-tls-verify` flag with the following Velero commands, velero backup describe velero backup download velero backup logs velero restore describe velero restore log If true, the object store's TLS certificate will not be checked for validity before Velero connects to the object store or Restic repo. You can permanently skip TLS verification for an object store by setting `Spec.Config.InsecureSkipTLSVerify` to true in the CRD. Note that Velero's Restic integration uses Restic commands to do data transfer between object store and Kubernetes cluster disks. This means that when you specify `--insecure-skip-tls-verify` in Velero operations that involve interacting with Restic, Velero will add the Restic global command parameter `--insecure-tls` to Restic commands." } ]
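As a sketch of the CRD route mentioned above: the resource and namespace names assume a default `velero` install with a backup storage location named `default`; adjust both to match your cluster. As with the flag, this disables TLS verification and is only meant for test or on-prem setups.

```bash
# Permanently skip TLS verification for the 'default' backup storage location.
kubectl -n velero patch backupstoragelocation default \
  --type merge \
  -p '{"spec":{"config":{"insecureSkipTLSVerify":"true"}}}'
```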
{ "category": "Runtime", "file_name": "self-signed-certificates.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The API service lists and introspects pods and images. The API service is implemented with . The API service is designed to run without root privileges, and currently provides a read-only interface. The API service is optional for running pods, the start/stop/crash of the API service won't affect any pods or images. The API service listens for gRPC requests on the address and port specified by the `--listen` option. The default is to listen on the loopback interface on port number `15441`, equivalent to invoking `rkt api-service --listen=localhost:15441`. Specify the address `0.0.0.0` to listen on all interfaces. Typically, the API service will be run via a unit file similar to the one included in the . The interfaces are defined in the . Here is a small that illustrates how to use the API service. | Flag | Default | Options | Description | | | | | | | `--listen` | `localhost:15441` | An address to listen on | Address to listen for client API requests | See the table with ." } ]
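The unit file referenced above is not reproduced here; the following is only an illustrative sketch of what such a unit could look like, using the `--listen` flag documented in the table. The paths and unit name are assumptions for a typical systemd host.

```bash
# Illustrative only -- not the unit shipped with rkt.
cat > /etc/systemd/system/rkt-api.service <<'EOF'
[Unit]
Description=rkt api service

[Service]
ExecStart=/usr/bin/rkt api-service --listen=localhost:15441
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now rkt-api.service
```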
{ "category": "Runtime", "file_name": "api-service.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "layout: global title: Logging This page describes the basic logging function provided by Alluxio server processes (e.g., masters, workers, and etc.) and application processes utilizing Alluxio clients (e.g., Spark or MapReduce jobs running on Alluxio). Alluxio logging is implemented using {:target=\"_blank\"} and thus most of the configuration is done through modifying `log4j.properties`. The master and worker logs are useful to understand what the Alluxio Master and Workers are doing, especially when running into any issues. If you do not understand the error messages, search for them in the {:target=\"_blank\"}, in the case the problem has been discussed before. You can also join our and seek help there. You can find more details about the Alluxio server logs . The client-side logs are helpful when Alluxio service is running but the client cannot connect to the servers. Alluxio client emits logging messages through log4j, so the location of the logs is determined by the client side log4j configuration used by the application. You can find more details about the client-side logs . The user logs in `${ALLUXIO_HOME}/logs/user/` are the logs from running Alluxio shell. Each user will have separate log files. Log files for each individual Alluxio server process (e.g., master, worker, FUSE, proxy) can be found under `${ALLUXIO_HOME}/logs/`. Each process should have two corresponding files ending with `.log` or `.out`, such as `worker.log` and `worker.out` for the worker process. `jobworker.log`, `jobworker.out` and `user/user_${USER}.log`. Files suffixed with `.log` like `master.log` or `worker.log` are generated by `log4j`, logging the events that Alluxio system is recording through JVM. These files are typically the main target for users to investigate logs. Files suffixed with `.out` like `master.out` or `worker.out` are the redirection of `stdout` and `stderr` of the corresponding process. Fatal error messages (e.g., killed by the OS), will most likely go to these files. The log location can be customized by setting environment variable `ALLUXIOLOGSDIR`. See the for more information. By default, the `*.log` files rotate. For example this is the default log4j configuration for master.log: ```properties log4j.appender.MASTER_LOGGER=org.apache.log4j.RollingFileAppender log4j.appender.MASTER_LOGGER.File=${alluxio.logs.dir}/master.log log4j.appender.MASTER_LOGGER.MaxFileSize=10MB log4j.appender.MASTER_LOGGER.MaxBackupIndex=100 log4j.appender.MASTER_LOGGER.layout=org.apache.log4j.PatternLayout log4j.appender.MASTER_LOGGER.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n ``` However, the `*.out` files do not rotate. So it makes sense to regularly check the size of these files, and clean them up if necessary. Log files for the Alluxio client utilized by different applications are located with their respective application logs. Please check out particular compute frameworks on where their logs may be" }, { "data": "Here are the documentation to configure individual application logs including , , . For the , the user log is located at `${ALLUXIOHOME}/logs/user/user${user_name}.log`. 
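As noted above, the `.out` files do not rotate, so they can grow without bound. One low-tech way to keep them in check is to truncate them in place — a sketch only; whether you do this by hand, from cron, or via logrotate's `copytruncate` is up to you:

```bash
# Truncate oversized .out files in place. Do not delete them: the running
# process keeps the file descriptor open, so removing the file would not
# free the space and new output would become invisible.
for f in "${ALLUXIO_HOME}"/logs/*.out; do
  [ -f "$f" ] && : > "$f"
done
```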
Alluxio uses the following five logging levels: `TRACE`: verbose information, only useful for debugging a certain method or class `DEBUG`: fine-grained information, most useful for debugging purposes `INFO`: messages that highlight the status or progress `WARN`: potentially harmful events that users may need to know about, but the process will still continue running `ERROR`: system errors that users should pay attention to By default, Alluxio server processes write logs at `INFO` level, which includes all events at `INFO`, `WARN` and `ERROR` levels. You can modify `${ALLUXIO_HOME}/conf/log4j.properties` to customize logging levels and restart corresponding server processes to apply new changes. This is the recommended way to modify logging configurations. For example, to modify the level for all logs to `DEBUG`, change the `rootLogger` level by modifying the first line of `log4j.properties` as the following: ```properties log4j.rootLogger=DEBUG, ${alluxio.logger.type}, ${alluxio.remote.logger.type} ``` To modify the logging level for a particular Java class (e.g., set `alluxio.client.file.FileSystemContext` to `DEBUG`), add a new line at the end of this file: ```properties log4j.logger.alluxio.client.file.FileSystemContext=DEBUG ``` To modify the logging level for a package (e.g., set all classes under `alluxio` to `DEBUG`), add a new line in the end of this file as below. This can be helpful when you do not know what the target classes are, or many classes are relevant. ```properties log4j.logger.alluxio=DEBUG ``` An alternative way to modify logging configurations is use the `logLevel` command. This allows someone to modify the configuration at runtime without needing to restart processes. This is not the recommended way as any modifications will not be persisted across restart, and causes a configuration mismatch between the running process and its `log4j.properties` file. See the for the command options. For example, the following command sets the logger level of the class `alluxio.underfs.hdfs.HdfsUnderFileSystem` to `DEBUG` on master as well as a worker at `192.168.100.100:30000`: ```shell $ ./bin/alluxio conf log --name=alluxio.underfs.hdfs.HdfsUnderFileSystem \\ --target=master,192.168.100.100:30000 --level=DEBUG ``` And the following command returns the log level of the class `alluxio.underfs.hdfs.HdfsUnderFileSystem` among all the workers: ```shell $ ./bin/alluxio conf log --name=alluxio.underfs.hdfs.HdfsUnderFileSystem --target=workers ``` You can also update the log level at a package level. For example, you can update the log level of all classes in `alluxio.underfs` package with the following command: ```shell $ ./bin/alluxio conf log --name=alluxio.underfs --target=workers --level=DEBUG ``` This works because log4j loggers will inherit the log level from their" }, { "data": "In this case `alluxio.underfs.hdfs.HdfsUnderFileSystem` inherits the log level if it is set on `alluxio.underfs` or `alluxio.underfs.hdfs`. Furthermore, you can turn on Alluxio debug logging when you are troubleshooting a certain issue in a running cluster, and turn it off when you are done. ```shell $ ./bin/alluxio conf log --name=alluxio --level=DEBUG ``` ```shell $ ./bin/alluxio conf log --name=alluxio --level=INFO ``` Finally, if your Alluxio deployment uses custom web ports (e.g. `alluxio.master.web.port` is different from 19999, or `alluxio.worker.web.port` is different from 30000), you can use the format `host:port:role` for your target. 
`role` can be one of `master` or `worker` or `jobmaster` or `jobworker`. For example, if your master running on `10.10.10.10` has `alluxio.master.web.port=2181` configured, you would use: ```shell $ ./bin/alluxio conf log --name=alluxio --target=10.10.10.10:2181:master --level=DEBUG ``` If your worker is running on `127.0.0.1` with `alluxio.worker.web.port=25252` configured, you would use: ```shell $ ./bin/alluxio conf log --name=alluxio --target=127.0.0.1:25252:worker --level=DEBUG ``` Add the following line to `conf/allulxio-env.sh` to enable logging GC events for server processes in log files with `.out` suffix like `master.out` and `worker.out`: ```sh ALLUXIOJAVAOPTS+=\" -XX:+PrintGCDetails -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps\" ``` `ALLUXIOJAVAOPTS` is included in Java VM options for all Alluxio server processes. Alternatively, modify `ALLUXIOMASTERJAVAOPTS`, `ALLUXIOWORKERJAVAOPTS` to turn on GC for each individual process. Set in `conf/log4j.properties`: ```properties log4j.logger.alluxio.fuse.AlluxioJniFuseFileSystem=DEBUG ``` You will see debug logs at both the start and end of each FUSE API call with its arguments and result in `logs/fuse.log`: ``` 2020-03-03 14:33:35,129 DEBUG AlluxioJniFuseFileSystem - Enter: chmod(path=/aaa,mode=100644) 2020-03-03 14:33:35,131 DEBUG AlluxioJniFuseFileSystem - Exit (0): chmod(path=/aaa,mode=100644) in 2 ms 2020-03-03 14:33:35,132 DEBUG AlluxioJniFuseFileSystem - Enter: getattr(path=/aaa) 2020-03-03 14:33:35,135 DEBUG AlluxioJniFuseFileSystem - Exit (0): getattr(path=/aaa) in 3 ms 2020-03-03 14:33:35,138 DEBUG AlluxioJniFuseFileSystem - Enter: getattr(path=/._aaa) 2020-03-03 14:33:35,140 DEBUG AlluxioJniFuseFileSystem - Failed to get info of /._aaa, path does not exist or is invalid 2020-03-03 14:33:35,140 DEBUG AlluxioJniFuseFileSystem - Exit (-2): getattr(path=/._aaa) in 2 ms ``` Add the following to your application-side `log4j.properties` to capture RPCs between the Alluxio client and FileSystem Master: ```properties log4j.logger.alluxio.client.file.FileSystemMasterClient=DEBUG ``` Similarly, capture lower-level RPCs between Alluxio client and Block Master: ```properties log4j.logger.alluxio.client.block.BlockMasterClient=DEBUG ``` You will see debug logs at the beginning and end of each RPC with its arguments and result in the client logs like the following: ``` 2020-03-03 15:56:40,115 DEBUG FileSystemMasterClient - Enter: GetStatus(path=/.DS_Store,options=loadMetadataType: ONCE commonOptions { syncIntervalMs: -1 ttl: -1 ttlAction: DELETE } ) 2020-03-03 15:56:40,117 DEBUG FileSystemMasterClient - Exit (ERROR): GetStatus(path=/.DS_Store,options=loadMetadataType: ONCE commonOptions { syncIntervalMs: -1 ttl: -1 ttlAction: DELETE } ) in 2 ms: alluxio.exception.status.NotFoundException: Path \"/.DS_Store\" does not exist. ```" } ]
{ "category": "Runtime", "file_name": "Logging.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark schedule get\" layout: docs Get schedules Get schedules ``` ark schedule get [flags] ``` ``` -h, --help help for get --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with schedules" } ]
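A usage sketch built only from the flags listed above (the label key/value is a placeholder):

```bash
# Only schedules carrying a given label, with their labels shown:
ark schedule get --selector app=nginx --show-labels

# Full schedule objects rendered as YAML:
ark schedule get -o yaml
```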
{ "category": "Runtime", "file_name": "ark_schedule_get.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: S3 API Alluxio supports a that is compatible with the basic operations of the Amazon {:target=\"_blank\"}. The Alluxio S3 API should be used by applications designed to communicate with an S3-like storage and would benefit from the other features provided by Alluxio, such as data caching, data sharing with file system based applications, and storage system abstraction (e.g., using Ceph instead of S3 as the backing store). For example, a simple application that downloads reports generated by analytic tasks can use the S3 API instead of the more complex file system API. Only top-level Alluxio directories are treated as buckets by the S3 API. Hence the root directory of the Alluxio filesystem is not treated as an S3 bucket. Any root-level objects (eg: `alluxio://file`) will be inaccessible through the Alluxio S3 API. To treat sub-directories as a bucket, the separator `:` must be used in the bucket name (eg: `s3://sub:directory:bucket/file`). Note that this is purely a convenience feature and hence is not returned by API Actions such as ListBuckets. Alluxio uses `/` as a reserved separator. Therefore, any S3 paths with objects or folders named `/` (eg: `s3://example-bucket//`) will cause undefined behavior. For additional limitations on object key names please check this page: {:target=\"_blank\"} is not supported in the Alluxio S3 API. Therefore, S3 clients must utilize {:target=\"_blank\"} (i.e: `http://s3.amazonaws.com/{bucket}/{object}` and NOT `http://{bucket}.s3.amazonaws.com/{object}`). As described in the AWS S3 docs for {:target=\"_blank\"}: Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer or use versioning instead. Note that at the moment the Alluxio S3 API does not support object versioning Alluxio S3 will overwrite the existing key and the temporary directory for multipart upload. All sub-directories in Alluxio will be returned in ListObjects(V2) as 0-byte folders. This behavior is in accordance with if you used the AWS S3 console to create all parent folders for each object. User-defined tags on buckets & objects are limited to 10 and obey the {:target=\"_blank\"}. Set the property key `alluxio.proxy.s3.tagging.restrictions.enabled=false` to disable this behavior. The maximum size for user-defined metadata in PUT-requests is 2KB by default in accordance with {:target=\"_blank\"}. Set the property key `alluxio.proxy.s3.header.metadata.max.size` to change this behavior. The S3 API leverages the , introducing an additional network hop for Alluxio clients. For optimal performance, it is recommended to run the proxy server and an Alluxio worker on each compute node. It is also recommended to put all the proxy servers behind a load balancer. <table class=\"table table-striped\"> <tr> <th>Header</th> <th>Content</th> <th>Description</th> </tr> <tr> <td><a href=\"https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html\" target=\"_blank\">Authorization</a></td> <td>AWS4-HMAC-SHA256 Credential=<b>{user}</b>/..., SignedHeaders=..., Signature=...</td> <td>There is currently no support for access & secret keys in the Alluxio S3 API. The only supported authentication scheme is the SIMPLE authentication type. 
By default, the user that is used to perform any operations is the user that was used to launch the Alluxio proxy process. <br/><br/> Therefore this header is used exclusively to specify an Alluxio ACL username to perform an operation with. In order to remain compatible with other S3 clients, the header is still expected to follow the <a" }, { "data": "target=\"_blank\">AWS Signature Version 4</a> format. <br/><br/> When supplying an access key to an S3 client, put the intended Alluxio ACL username. The secret key is unused so you may use any dummy value.</td> </tr> </table> The following table describes the support status for current {:target=\"_blank\"}: <table class=\"table table-striped\"> <tr> <th>S3 API Action</th> <th>Supported Headers</th> <th>Supported Query Parameters</th> </tr> {% for item in site.data.table.s3-api-supported-actions %} <tr> <td><a href=\"https://docs.aws.amazon.com/AmazonS3/latest/API/API{{ item.action }}.html\" target=\"blank\">{{ item.action }}</a></td> <td> {% assign headers = item.headers | split: \"|\" %} {% if headers.size == 0 %} N/A {% else %} <ul style=\"list-style-type:none;margin:0;padding:0\"> {% for header in headers %} {% if forloop.last %} <li>{{ header }}</li> {% else %} <li>{{ header }},</li> {% endif %} {% endfor %} </ul> {% endif %} </td> <td> {% assign params = item.queryParams | split: \"|\" %} {% if params.size == 0 %} N/A {% else %} <ul style=\"list-style-type:none;margin:0;padding:0\"> {% for param in params %} {% if forloop.last %} <li>{{ param }}</li> {% else %} <li>{{ param }},</li> {% endif %} {% endfor %} </ul> {% endif %} </td> </tr> {% endfor %} </table> The following table contains the configurable which pertain to the Alluxio S3 API. <table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.table.common-configuration %} {% if item.propertyName contains \"alluxio.proxy.s3\" %} <tr> <td><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> {{ item.propertyName }}</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.table.en.common-configuration[item.propertyName] }}</td> </tr> {% endif %} {% endfor %} </table> {% navtabs s3apiactions %} {% navtab AWS CLI %} You can use the {:target=\"_blank\"} to send S3 API requests to the Alluxio S3 API. Note that you will have to provide the `--endpoint` parameter to specify the location of the Alluxio S3 REST API with the server's base URI included (i.e: `--endpoint \"http://{alluxio.proxy.web.hostname}:{alluxio.proxy.web.port}/api/v1/s3/\"`). As a pre-requisite for operations which involve the `Authorization` header you may need to {:target=\"_blank\"}. See the for details on how Alluxio uses this header ```shell $ aws configure --profile alluxio-s3 AWS Access Key ID [None]: {user} AWS Secret Access Key [None]: {dummy value} Default region name [None]: Default output format [None]: ``` {% endnavtab %} {% navtab REST Clients %} You can directly use any HTTP client to send S3 API requests to the Alluxio S3 API. Note that the base URI for the Alluxio S3 API's REST server is `/api/v1/s3/` (i.e: your requests should be directed to `\"http://{alluxio.proxy.web.hostname}:{alluxio.proxy.web.port}/api/v1/s3/\"`). At the moment, access key and secret key validation does not exist for the Alluxio S3 API. Therefore the is used purely to specify the intended user to perform a request. The header follows the {:target=\"_blank\"} format. ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... 
SignedHeaders=... Signature=...\" ... ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs abortmultipartupload %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"multipartcopy.txt6367cf96-ea4e-4447-b931-c5bc91200375/\", \"LastModified\": \"2022-05-03T13:00:13.429000+00:00\", \"Size\": 0 }, { \"Key\": \"multipartcopy.txt6367cf96-ea4e-4447-b931-c5bc91200375/1\", \"LastModified\": \"2022-05-03T13:00:13.584000+00:00\", \"Size\": 27040 }, { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:55:01.925000+00:00\", \"Size\": 27040 } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api abort-multipart-upload \\ --bucket=testbucket --key=multipart_copy.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 $ % aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\":" }, { "data": "\"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:45:17 GMT Content-Type: application/xml Content-Length: 583 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Marker/> <Prefix/> <IsTruncated>false</IsTruncated> <Name>testbucket</Name> <Contents> <Key>multipart.txt_6367cf96-ea4e-4447-b931-c5bc91200375/</Key> <Size>0</Size> <LastModified>2022-05-03T16:44:17.490Z</LastModified> </Contents> <Contents> <Key>multipart.txt_6367cf96-ea4e-4447-b931-c5bc91200375/1</Key> <Size>27040</Size> <LastModified>2022-05-03T16:44:17.715Z</LastModified> </Contents> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X DELETE \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt?uploadId=6367cf96-ea4e-4447-b931-c5bc91200375\" HTTP/1.1 204 No Content Date: Tue, 03 May 2022 23:45:30 GMT Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:45:36 GMT Content-Type: application/xml Content-Length: 318 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Marker/> <Prefix/> <IsTruncated>false</IsTruncated> <Name>testbucket</Name> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs completemultipartupload %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api complete-multipart-upload \\ --bucket=testbucket --key=multipart.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 \\ --multipart-upload=\"Parts=[{PartNumber=1},{PartNumber=2}]\" { \"Location\": \"/testbucket/multipart.txt\", \"Bucket\": \"testbucket\", \"Key\": \"multipart.txt\", \"ETag\": \"911df44b7ff57801ca8d74568e4ebfbe\" } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api head-object \\ --bucket=testbucket --key=multipart.txt { \"LastModified\": \"2022-05-03T20:01:43+00:00\", \"ContentLength\": 27040, \"ETag\": \"0cc175b9c0f1b6a831c399e269772661\", \"ContentType\": \"application/octet-stream\", \"Metadata\": {} } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ cat complete_upload.xml <CompleteMultipartUpload xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Part> <PartNumber>1</PartNumber> </Part> <Part> <PartNumber>2</PartNumber> </Part> </CompleteMultipartUpload> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -H \"Content-Type: application/xml\" -d \"@complete_upload.xml\" \\ -X POST \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt?uploadId=6367cf96-ea4e-4447-b931-c5bc91200375\" Date: Tue, 03 May 2022 23:59:17 GMT Content-Type: text/xml;charset=utf-8 Transfer-Encoding: chunked Server: Jetty(9.4.43.v20210629) <CompleteMultipartUploadResult> <Location>/testbucket/multipart.txt</Location> <Bucket>testbucket</Bucket> <Key>multipart.txt</Key> <ETag>911df44b7ff57801ca8d74568e4ebfbe</ETag> <Code/> <Message/> </CompleteMultipartUploadResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ --head \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt\" HTTP/1.1 200 OK Date: Wed, 04 May 2022 00:00:40 GMT Last-Modified: Tue, 03 May 2022 23:59:18 GMT ETag: 0cc175b9c0f1b6a831c399e269772661 Content-Type: application/octet-stream Content-Length: 27040 Server: Jetty(9.4.43.v20210629) ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs copy_object %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api copy-object \\ --copy-source=testbucket/test.txt --bucket=testbucket --key=test_copy.txt { \"CopyObjectResult\": { \"ETag\": \"911df44b7ff57801ca8d74568e4ebfbe\", \"LastModified\": \"2022-05-03T11:37:16.015000+00:00\" } } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:35:59.243000+00:00\", \"Size\": 27040 }, { \"Key\": \"test_copy.txt\", \"LastModified\": \"2022-05-03T11:37:16.185000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -H \"x-amz-copy-source: testbucket/test.txt\" \\ -X PUT http://localhost:39999/api/v1/s3/testbucket/test_copy.txt HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:50:07 GMT Content-Type: application/xml Content-Length: 135 Server: Jetty(9.4.43.v20210629) <CopyObjectResult> <ETag>911df44b7ff57801ca8d74568e4ebfbe</ETag> <LastModified>2022-05-03T14:50:07.781Z</LastModified> </CopyObjectResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:50:26 GMT Content-Type: application/xml Content-Length: 434 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Contents> <Key>test_copy.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:50:07.790Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs create_bucket %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api create-bucket \\ --bucket=testbucket $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-buckets { \"Buckets\": [ { \"Name\": \"testbucket\", \"CreationDate\": \"2022-05-03T11:32:34.156000+00:00\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X PUT http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:35:05 GMT Content-Length: 0 Server:" }, { "data": "$ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/ HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:35:23 GMT Content-Type: application/xml Content-Length: 161 Server: Jetty(9.4.43.v20210629) <ListAllMyBucketsResult> <Buckets> <Bucket> <Name>testbucket</Name> <CreationDate>2022-05-03T14:34:56.420Z</CreationDate> </Bucket> </Buckets> </ListAllMyBucketsResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs createmultipartupload %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api create-multipart-upload \\ --bucket=testbucket --key=multipart.txt { \"Bucket\": \"testbucket\", \"Key\": \"multipart.txt\", \"UploadId\": \"6367cf96-ea4e-4447-b931-c5bc91200375\" } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X POST \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt?uploads\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:41:26 GMT Content-Type: application/xml Content-Length: 147 Server: Jetty(9.4.43.v20210629) <InitiateMultipartUploadResult> <Bucket>testbucket</Bucket> <Key>multipart.txt</Key> <UploadId>6367cf96-ea4e-4447-b931-c5bc91200375</UploadId> </InitiateMultipartUploadResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs delete_bucket %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-buckets { \"Buckets\": [ { \"Name\": \"tempbucket\", \"CreationDate\": \"2022-05-03T11:55:58.134000+00:00\" }, { \"Name\": \"testbucket\", \"CreationDate\": \"2022-05-03T11:32:34.156000+00:00\" } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api delete-bucket \\ --bucket=tempbucket $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-buckets { \"Buckets\": [ { \"Name\": \"testbucket\", \"CreationDate\": \"2022-05-03T11:32:34.156000+00:00\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/ HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:43:20 GMT Content-Type: application/xml Content-Length: 254 Server: Jetty(9.4.43.v20210629) <ListAllMyBucketsResult> <Buckets> <Bucket> <Name>tempbucket</Name> <CreationDate>2022-05-03T14:43:03.651Z</CreationDate> </Bucket> <Bucket> <Name>testbucket</Name> <CreationDate>2022-05-03T14:34:56.420Z</CreationDate> </Bucket> </Buckets> </ListAllMyBucketsResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X DELETE http://localhost:39999/api/v1/s3/tempbucket HTTP/1.1 204 No Content Date: Tue, 03 May 2022 21:43:25 GMT Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/ HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:43:28 GMT Content-Type: application/xml Content-Length: 161 Server: Jetty(9.4.43.v20210629) <ListAllMyBucketsResult> <Buckets> <Bucket> <Name>testbucket</Name> <CreationDate>2022-05-03T14:34:56.420Z</CreationDate> </Bucket> </Buckets> </ListAllMyBucketsResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs deletebuckettagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-bucket-tagging \\ --bucket=testbucket { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api delete-bucket-tagging \\ --bucket=testbucket $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-bucket-tagging \\ --bucket=testbucket { \"TagSet\": [] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:07 GMT Content-Type: application/xml Content-Length: 124 Server: Jetty(9.4.43.v20210629) <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X DELETE \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 204 No Content Date: Tue, 03 May 2022 23:32:26 GMT Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:32:27 GMT Content-Type: application/xml Content-Length: 28 Server: Jetty(9.4.43.v20210629) <Tagging><TagSet/></Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs delete_object %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"temp.txt\", \"LastModified\": \"2022-05-03T11:55:01.925000+00:00\", \"Size\": 27040 }, { \"Key\": \"test.txt\", \"LastModified\":" }, { "data": "\"Size\": 27040 } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api delete-object \\ --bucket=testbucket --key=temp.txt $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:55:01.925000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:59:27 GMT Content-Type: application/xml Content-Length: 540 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>temp.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:50:07.790Z</LastModified> </Contents> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X DELETE http://localhost:39999/api/v1/s3/testbucket/temp.txt HTTP/1.1 204 No Content Date: Tue, 03 May 2022 22:01:56 GMT Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 22:01:59 GMT Content-Type: application/xml Content-Length: 318 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs delete_objects %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=tempbucket { \"Contents\": [ { \"Key\": \"foo.txt\", \"LastModified\": \"2022-05-03T11:57:00.767000+00:00\", \"Size\": 27040 }, { \"Key\": \"temp.txt\", \"LastModified\": \"2022-05-03T11:56:11.245000+00:00\", \"Size\": 27040 }, { \"Key\": \"temp2.txt\", \"LastModified\": \"2022-05-03T11:56:31.414000+00:00\", \"Size\": 27040 } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api delete-objects \\ --bucket=tempbucket --delete=\"Objects=[{Key=temp.txt},{Key=temp2.txt}]\" $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=tempbucket { \"Contents\": [ { \"Key\": \"foo.txt\", \"LastModified\": \"2022-05-03T11:57:00.767000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/tempbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:59:27 GMT Content-Type: application/xml Content-Length: 540 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>foo.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:59:05.906Z</LastModified> </Contents> <Contents> <Key>temp.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:58:58.204Z</LastModified> </Contents> <Contents> <Key>temp2.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:59:01.987Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>tempbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> $ cat delete.xml <Delete xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Object> <Key>temp.txt</Key> </Object> <Object> <Key>temp2.txt</Key> </Object> <Quiet>false</Quiet> </Delete> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -H \"Content-Type: application/xml\" \\ -X POST --data \"@delete.xml\" \"http://localhost:39999/api/v1/s3/testbucket?delete\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 22:56:17 GMT Content-Type: application/xml Content-Length: 208 Server: Jetty(9.4.43.v20210629) <DeleteResult> <Deleted> <Key>temp2.txt</Key> <DeleteMarker/> <DeleteMarkerVersionId/> <VersionId/> </Deleted> <Deleted> <Key>temp.txt</Key> <DeleteMarker/> <DeleteMarkerVersionId/> <VersionId/> </Deleted> </DeleteResult> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/tempbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 22:28:31 GMT Content-Type: application/xml Content-Length: 317 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>foo.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:59:05.906Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>tempbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs deleteobjecttagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object-tagging \\ --bucket=testbucket --key=test.txt { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api delete-object-tagging \\ --bucket=testbucket --key=test.txt $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object-tagging \\ --bucket=testbucket --key=test.txt { \"TagSet\": [] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:07 GMT Content-Type: application/xml Content-Length: 124 Server:" }, { "data": "<Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X DELETE \"http://localhost:39999/api/v1/s3/testbucket/test.txt?tagging\" HTTP/1.1 204 No Content Date: Tue, 03 May 2022 23:37:46 GMT Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket/test.txt?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:37:47 GMT Content-Type: application/octet-stream Content-Length: 28 Server: Jetty(9.4.43.v20210629) <Tagging><TagSet/></Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs getbuckettagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-bucket-tagging \\ --bucket=testbucket { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:07 GMT Content-Type: application/xml Content-Length: 124 Server: Jetty(9.4.43.v20210629) <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs get_object %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object \\ --bucket=testbucket --key=test.txt /tmp/test.txt { \"LastModified\": \"2022-05-03T18:55:01+00:00\", \"ContentLength\": 27040, \"ETag\": \"0cc175b9c0f1b6a831c399e269772661\", \"ContentType\": \"application/octet-stream\", \"Metadata\": {} } $ stat /tmp/test.txt File: /tmp/test.txt Size: 27040 Blocks: 56 IO Block: 4096 regular file ... ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket/test.txt HTTP/1.1 200 OK Date: Tue, 03 May 2022 22:59:43 GMT Last-Modified: Tue, 03 May 2022 21:47:36 GMT ETag: 0cc175b9c0f1b6a831c399e269772661 Content-Type: application/octet-stream Content-Length: 27040 Server: Jetty(9.4.43.v20210629) ................. file contents ................. ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs getobjecttagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object-tagging \\ --bucket=testbucket --key=test.txt { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:07 GMT Content-Type: application/xml Content-Length: 124 Server: Jetty(9.4.43.v20210629) <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs head_bucket %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api head-bucket \\ --bucket=testbucket ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ --head http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 15 Nov 2022 04:49:12 GMT Content-Type: application/xml Content-Length: 0 Server: Jetty(9.4.43.v20210629) ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs head_object %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api head-object \\ --bucket=testbucket --key=test.txt { \"LastModified\": \"2022-05-03T18:55:01+00:00\", \"ContentLength\": 27040, \"ETag\": \"0cc175b9c0f1b6a831c399e269772661\", \"ContentType\": \"application/octet-stream\", \"Metadata\": {} } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ --head http://localhost:39999/api/v1/s3/testbucket/test.txt HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:54:22 GMT Last-Modified: Tue, 03 May 2022 21:47:36 GMT ETag: 0cc175b9c0f1b6a831c399e269772661 Content-Type: application/octet-stream Content-Length: 27040 Server:" }, { "data": "``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs list_buckets %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-buckets { \"Buckets\": [ { \"Name\": \"testbucket\", \"CreationDate\": \"2022-05-03T11:32:34.156000+00:00\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/ HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:35:23 GMT Content-Type: application/xml Content-Length: 161 Server: Jetty(9.4.43.v20210629) <ListAllMyBucketsResult> <Buckets> <Bucket> <Name>testbucket</Name> <CreationDate>2022-05-03T14:34:56.420Z</CreationDate> </Bucket> </Buckets> </ListAllMyBucketsResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs list_objects %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:35:59.243000+00:00\", \"Size\": 27040 }, { \"Key\": \"test_copy.txt\", \"LastModified\": \"2022-05-03T11:37:16.185000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:50:26 GMT Content-Type: application/xml Content-Length: 434 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Contents> <Key>test_copy.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:50:07.790Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs list_uploads %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint=\"http://localhost:39999/api/v1/s3\" s3api list-multipart-uploads --bucket \"testbucket\" { \"Uploads\": [ { \"UploadId\": \"c4cddf71-914a-4cee-b2af-8cfb7def7d04\", \"Key\": \"object\", \"Initiated\": \"2022-07-01T11:21:14.738000+00:00\" }, { \"UploadId\": \"6367cf96-ea4e-4447-b931-c5bc91200375\", \"Key\": \"object\", \"Initiated\": \"2022-07-01T11:18:25.290000+00:00\" }, { \"UploadId\": \"e111c33b-5c18-4ecd-b543-2849cdbbf22b\", \"Key\": \"object2\", \"Initiated\": \"2022-07-01T11:21:25.182000+00:00\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?uploads\" HTTP/1.1 200 OK Date: Fri, 01 Jul 2022 18:23:43 GMT Content-Type: application/xml Content-Length: 499 Server: Jetty(9.4.46.v20220331) <ListMultipartUploadsResult> <Bucket>testbucket</Bucket> <Upload> <Key>object</Key> <UploadId>c4cddf71-914a-4cee-b2af-8cfb7def7d04</UploadId> <Initiated>2022-07-01T11:21:14.738Z</Initiated> </Upload> <Upload> <Key>object</Key> <UploadId>6367cf96-ea4e-4447-b931-c5bc91200375</UploadId> <Initiated>2022-07-01T11:18:25.290Z</Initiated> </Upload> <Upload> <Key>object2</Key> <UploadId>e111c33b-5c18-4ecd-b543-2849cdbbf22b</UploadId> <Initiated>2022-07-01T11:21:25.182Z</Initiated> </Upload> </ListMultipartUploadsResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs listobjectsv2 %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects-v2 \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:35:59.243000+00:00\", \"Size\": 27040 }, { \"Key\": \"test_copy.txt\", \"LastModified\": \"2022-05-03T11:37:16.185000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?list-type=2\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:56:20 GMT Content-Type: application/xml Content-Length: 438 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>true</version2> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Contents> <Key>test_copy.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:50:07.790Z</LastModified> </Contents> <IsTruncated>false</IsTruncated> <Prefix/> <KeyCount>2</KeyCount> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs list_parts %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-parts \\ --bucket=testbucket --key=multipart.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 { \"Parts\": [ { \"PartNumber\": 1, \"LastModified\": \"2022-05-03T12:56:27.775000+00:00\", \"ETag\": \"\", \"Size\": 27040 } ], \"ChecksumAlgorithm\": null, \"Initiator\": null, \"Owner\": null, \"StorageClass\": \"STANDARD\" } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt?uploadId=6367cf96-ea4e-4447-b931-c5bc91200375\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:49:05 GMT Content-Type: application/octet-stream Content-Length: 314 Server: Jetty(9.4.43.v20210629) <ListPartsResult> <Bucket>/testbucket</Bucket> <Key>multipart.txt</Key> <UploadId>6367cf96-ea4e-4447-b931-c5bc91200375</UploadId> <StorageClass>STANDARD</StorageClass> <IsTruncated>false</IsTruncated> <Part> <PartNumber>1</PartNumber>" }, { "data": "<ETag></ETag> <Size>27040</Size> </Part> </ListPartsResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs putbuckettagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-bucket-tagging \\ --bucket=testbucket { \"TagSet\": [] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api put-bucket-tagging \\ --bucket=testbucket --tagging='TagSet=[{Key=key1,Value=val1},{Key=key2,Value=val2}]' $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-bucket-tagging \\ --bucket=testbucket { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:30:25 GMT Content-Type: application/xml Content-Length: 28 Server: Jetty(9.4.43.v20210629) <Tagging><TagSet/></Tagging> $ cat tags.xml <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -H \"Content-Type: application/xml\" \\ -X PUT \"http://localhost:39999/api/v1/s3/testbucket?tagging\" --data-binary \"@tags.xml\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:05 GMT Content-Length: 0 Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:31:07 GMT Content-Type: application/xml Content-Length: 124 Server: Jetty(9.4.43.v20210629) <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs put_object %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api put-object \\ --bucket=testbucket --key=test.txt --body=\"${ALLUXIO_HOME}/LICENSE\" { \"ETag\": \"911df44b7ff57801ca8d74568e4ebfbe\" } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-objects \\ --bucket=testbucket { \"Contents\": [ { \"Key\": \"test.txt\", \"LastModified\": \"2022-05-03T11:35:59.243000+00:00\", \"Size\": 27040 } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X PUT http://localhost:39999/api/v1/s3/testbucket/test.txt -T \"${ALLUXIO_HOME}/LICENSE\" HTTP/1.1 100 Continue HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:47:36 GMT ETag: 911df44b7ff57801ca8d74568e4ebfbe Content-Length: 0 Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET http://localhost:39999/api/v1/s3/testbucket HTTP/1.1 200 OK Date: Tue, 03 May 2022 21:47:44 GMT Content-Type: application/xml Content-Length: 318 Server: Jetty(9.4.43.v20210629) <ListBucketResult> <version2>false</version2> <Contents> <Key>test.txt</Key> <Size>27040</Size> <LastModified>2022-05-03T14:47:36.600Z</LastModified> </Contents> <Marker/> <IsTruncated>false</IsTruncated> <Prefix/> <Name>testbucket</Name> <MaxKeys>1000</MaxKeys> <EncodingType>url</EncodingType> </ListBucketResult> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs putobjecttagging %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object-tagging \\ --bucket=testbucket --key=test.txt { \"TagSet\": [] } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api put-object-tagging \\ --bucket=testbucket --key=test.txt --tagging='TagSet=[{Key=key1,Value=val1},{Key=key2,Value=val2}]' $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api get-object-tagging \\ --bucket=testbucket --key=test.txt { \"TagSet\": [ { \"Key\": \"key1\", \"Value\": \"val1\" }, { \"Key\": \"key2\", \"Value\": \"val2\" } ] } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket/test.txt?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:30:25 GMT Content-Type: application/xml Content-Length: 28 Server: Jetty(9.4.43.v20210629) <Tagging><TagSet/></Tagging> $ cat tags.xml <Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -H \"Content-Type: application/xml\" \\ -X PUT \"http://localhost:39999/api/v1/s3/testbucket/test.txt?tagging\" --data-binary \"@tags.xml\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:35:28 GMT Content-Length: 0 Server: Jetty(9.4.43.v20210629) $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -X GET \"http://localhost:39999/api/v1/s3/testbucket/test.txt?tagging\" HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:35:58 GMT Content-Type: application/octet-stream Content-Length: 126 Server:" }, { "data": "<Tagging> <TagSet> <Tag> <Key>key1</Key> <Value>val1</Value> </Tag> <Tag> <Key>key2</Key> <Value>val2</Value> </Tag> </TagSet> </Tagging> ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs upload_part %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api upload-part \\ --bucket=testbucket --key=multipart.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 --part-number=1 --body=\"${ALLUXIO_HOME}/LICENSE\" { \"ETag\": \"911df44b7ff57801ca8d74568e4ebfbe\" } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-parts \\ --bucket=testbucket --key=multipart.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 { \"Parts\": [ { \"PartNumber\": 1, \"LastModified\": \"2022-05-03T12:56:27.775000+00:00\", \"ETag\": \"\", \"Size\": 27040 } ], \"ChecksumAlgorithm\": null, \"Initiator\": null, \"Owner\": null, \"StorageClass\": \"STANDARD\" } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... 
Signature=...\" \\ -X PUT -T \"${ALLUXIO_HOME}/LICENSE\" \"http://localhost:39999/api/v1/s3/testbucket/multipart.txt?uploadId=6367cf96-ea4e-4447-b931-c5bc91200375&partNumber=1\" HTTP/1.1 100 Continue HTTP/1.1 200 OK Date: Tue, 03 May 2022 23:51:19 GMT ETag: \"911df44b7ff57801ca8d74568e4ebfbe\" Content-Length: 0 Server: Jetty(9.4.43.v20210629) $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-parts \\ --bucket=testbucket --key=multipart_copy.txt --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 { \"Parts\": [ { \"PartNumber\": 1, \"LastModified\": \"2022-05-03T13:00:13.584000+00:00\", \"ETag\": \"\", \"Size\": 27040 } ], \"ChecksumAlgorithm\": null, \"Initiator\": null, \"Owner\": null, \"StorageClass\": \"STANDARD\" } ``` {% endnavtab %} {% endnavtabs %} See {:target=\"_blank\"} on AWS {% navtabs uploadpartcopy %} {% navtab AWS CLI %} ```shell $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api upload-part-copy \\ --bucket=testbucket --key=object --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 --part-number=1 --copy-source=testbucket/object { \"CopyPartResult\": { \"ETag\": \"0cc175b9c0f1b6a831c399e269772661\" } } $ aws --profile alluxio-s3 --endpoint \"http://localhost:39999/api/v1/s3/\" s3api list-parts \\ --bucket=testbucket --key=object --upload-id=6367cf96-ea4e-4447-b931-c5bc91200375 { \"Parts\": [ { \"PartNumber\": 1, \"LastModified\": \"2022-07-01T11:27:48.942000+00:00\", \"ETag\": \"\", \"Size\": 1 } ], \"ChecksumAlgorithm\": null, \"Initiator\": null, \"Owner\": null, \"StorageClass\": \"STANDARD\" } ``` {% endnavtab %} {% navtab REST Clients %} ```shell $ curl -i -H \"Authorization: AWS4-HMAC-SHA256 Credential=testuser/... SignedHeaders=... Signature=...\" \\ -H \"x-amz-copy-source: testbucket/object\" \\ -X PUT 'http://localhost:39999/api/v1/s3/testbucket/object?uploadId=6367cf96-ea4e-4447-b931-c5bc91200375&partNumber=1' HTTP/1.1 200 OK Date: Fri, 01 Jul 2022 18:31:34 GMT Content-Type: application/xml Content-Length: 78 Server: Jetty(9.4.46.v20220331) <CopyPartResult><ETag>0cc175b9c0f1b6a831c399e269772661</ETag></CopyPartResult> ``` {% endnavtab %} {% endnavtabs %} Tested for Python 2.7. Please note you have to install boto package first. ```shell $ pip install boto ``` ```python import boto import boto.s3.connection conn = boto.connect_s3( awsaccesskey_id = '', awssecretaccess_key = '', host = 'localhost', port = 39999, path = '/api/v1/s3', is_secure=False, calling_format = boto.s3.connection.OrdinaryCallingFormat(), ) ``` By default, authenticating with no accesskeyid uses the user that was used to launch the proxy as the user performing the file system actions. Set the ```awsaccesskey_id``` to a different username to perform the actions under a different user. ```python bucketName = 'bucket-for-testing' bucket = conn.create_bucket(bucketName) ``` Authenticating as a user is necessary to have buckets returned by this operation. ```python conn = boto.connect_s3( awsaccesskey_id = 'testuser', awssecretaccess_key = '', host = 'localhost', port = 39999, path = '/api/v1/s3', is_secure=False, calling_format = boto.s3.connection.OrdinaryCallingFormat(), ) conn.getallbuckets() ``` ```python smallObjectKey = 'small.txt' smallObjectContent = 'Hello World!' key = bucket.new_key(smallObjectKey) key.setcontentsfrom_string(smallObjectContent) ``` ```python assert smallObjectContent == key.getcontentsas_string() ``` Create a 8MB file on local file system. 
```shell $ dd if=/dev/zero of=8mb.data bs=1048576 count=8 ``` Then use python S3 client to upload this as an object ```python largeObjectKey = 'large.txt' largeObjectFile = '8mb.data' key = bucket.new_key(largeObjectKey) with open(largeObjectFile, 'rb') as f: key.setcontentsfrom_file(f) with open(largeObjectFile, 'rb') as f: largeObject = f.read() ``` ```python assert largeObject == key.getcontentsas_string() ``` ```python bucket.delete_key(smallObjectKey) bucket.delete_key(largeObjectKey) ``` ```python mp = bucket.initiatemultipartupload(largeObjectKey) ``` ```python import math, os from filechunkio import FileChunkIO sourceSize = os.stat(largeObjectFile).st_size chunkSize = 1048576 chunkCount = int(math.ceil(sourceSize / float(chunkSize))) for i in range(chunkCount): offset = chunkSize * i bytes = min(chunkSize, sourceSize - offset) with FileChunkIO(largeObjectFile, 'r', offset=offset, bytes=bytes) as fp: mp.uploadpartfromfile(fp, partnum=i + 1) ``` ```python mp.complete_upload() ``` Non-completed uploads can be aborted. ```python mp.cancel_upload() ``` ```python bucket.delete_key(largeObjectKey) conn.delete_bucket(bucketName) ```" } ]
{ "category": "Runtime", "file_name": "S3-API.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "target-version: release-1.10 AWS server side encryption SSE-S3 support for RGW The S3 protocol supports three different types of : SSE-C, SSE-KMS and SSE-S3. For the last two RGW server need to configure with external services such as . Currently Rook configure RGW with `SSE-KMS` options to handle the S3 requests with the `sse:kms` header. Recently the support for handling the `sse:s3` was added to RGW, so Rook will now provide the option to configure RGW with `sse:s3`. The `sse:s3` is supported only from Ceph v17 an onwards, so this feature can only be enabled for Quincy or newer. Configure RGW with `SSE-S3` options, so that RGW can handle request with `sse:s3` headers. Introducing new field `s3` in `SecuritySpec` which defines `AWS-SSE-S3` support for RGW. ```yaml security: kms: .. s3: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR: https://vault.default.svc.cluster.local:8200 VAULTSECRETENGINE: transit VAULTAUTHMETHOD: token tokenSecretName: rook-vault-token ``` The values for this configuration will be available in of `CephObjectStoreSpec`, so depending on the above option RGW can be configured with `SSE-KMS` and `SSE-S3` options. These two options can be configured independently and they both are mutually exclusive in Rook and Ceph level. Before this design is implemented, the user could manually set below options from the toolbox pod to configure with `SSE-S3` options for RGW. ``` ceph config set <ceph auth user for rgw> rgwcryptsses3backend vault ceph config set <ceph auth user for rgw> rgwcryptsses3vaultsecretengine <kv|transit> # https://docs.ceph.com/en/latest/radosgw/vault/#vault-secrets-engines ceph config set <ceph auth user for rgw> rgwcryptsses3vault_addr <vault address> ceph config set <ceph auth user for rgw> rgwcryptsses3vault_auth token ceph config set <ceph auth user for rgw> rgwcryptsses3vaulttokenfile <location of file containing token for accessing vault> ``` The `ceph auth user for rgw` is the (ceph client user)[https://docs.ceph.com/en/latest/rados/operations/user-management/#user-management] who has permissions change the settings for RGW server. This can be listed from `ceph auth ls` command and in Rook ceph client user for RGW will always begins with `client.rgw`, followed by `store` name." } ]
{ "category": "Runtime", "file_name": "ceph-sse-s3.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "The `Master` metric listening port is the service port, and the metric monitoring module is enabled by default. The `Blobstore` metric listening port is the service port, and the service custom monitoring metric items are enabled by default. Public metric items need to be opened by modifying the configuration file. : Other modules need to configure the metric listening port, which is disabled by default. `exporterPort`: Metric listening port. `consulAddr`: Consul registration server address. If set, the automatic discovery service of the CubeFS node exporter can be realized in conjunction with the Prometheus automatic discovery mechanism. If not set, the Consul automatic registration service will not be enabled. `consulMeta`: Consul metadata configuration. Non-required item, set metadata information when registering with Consul. `ipFilter`: Regular expression-based filter. Non-required item, default is empty. Exposed to Consul, used when the machine has multiple IPs. Supports forward and reverse filtering. `enablePid`: Whether to report partition id, default is false; if you want to display the information of dp or mp in the cluster, you can configure it as true. ```json { \"exporterPort\": 9505, \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"consulMeta\": \"k1=v1;k2=v2\", \"ipFilter\": \"10.17.*\", \"enablePid\": \"false\" } ``` Requesting the corresponding metric listening interface of the service can obtain monitoring metrics, such as `curl localhost:port/metrics` For `Master`, `MetaNode`, `DataNode`, and `ObjectNode`, there are two ways to implement metric collection: Configure the Consul address of Prometheus (or the Consul address that supports the Prometheus standard syntax). After the configuration takes effect, Prometheus will actively pull monitoring metrics. If the Consul address is not configured, the following example is used: Modify the Prometheus YAML configuration file and add the metric collection source ```yaml job_name: 'cubefs01' filesdconfigs: files: [ '/home/service/app/prometheus/cubefs01/*.yml' ] refresh_interval: 10s ``` Access the exporter and create an exporter file under the above configuration directory. Taking `Master` as an example, create the `master_exporter.yaml` file ```yaml targets: [ 'master1_ip:17010' ] labels: cluster: cubefs01 ``` After the configuration is complete, start Prometheus. The relevant services of the `Erasure Coding Subsystem (Blobstore)` currently only support the second method of collecting metrics." } ]
{ "category": "Runtime", "file_name": "collect.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "``` bash curl -v \"http://10.196.59.198:17010/dataPartition/create?count=400&name=test\" ``` Creates a specified number of data shards. Parameter List | Parameter | Type | Description | |--|--|| | count | int | Number of data shards to create | | name | string | Volume name | ``` bash curl -v \"http://10.196.59.198:17010/dataPartition/get?id=100\" | python -m json.tool ``` Displays detailed information about the data shard, including the number of replicas, volume information, etc. Parameter List | Parameter | Type | Description | |--|--|| | id | uint64 | Data shard ID | Response Example ``` json { \"PartitionID\": 100, \"LastLoadedTime\": 1544082851, \"ReplicaNum\": 3, \"Status\": 2, \"Replicas\": {}, \"Hosts\": {}, \"Peers\": {}, \"Zones\": {}, \"MissingNodes\": {}, \"VolName\": \"test\", \"VolID\": 2, \"FileInCoreMap\": {}, \"FilesWithMissingReplica\": {} } ``` ``` bash curl -v \"http://10.196.59.198:17010/dataPartition/decommission?id=13&addr=10.196.59.201:17310\" ``` Removes a replica of the data shard and creates a new replica. Parameter List | Parameter | Type | Description | |--|--|--| | id | uint64 | Data shard ID | | addr | string | Address of the replica to be removed | ``` bash curl -v \"http://10.196.59.198:17010/dataPartition/load?id=1\" ``` Sends a task to compare the replica files for each replica of the data shard, and then asynchronously checks whether the file CRC on each replica is consistent. Parameter List | Parameter | Type | Description | |--|--|| | id | uint64 | Data shard ID |" } ]
{ "category": "Runtime", "file_name": "data-partition.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Submariner Member Request about: Request Submariner GitHub org Membership title: 'REQUEST: New Membership request for <your-GH-handle>' labels: member-request assignees: '' e.g. (at)example_user [ ] I have reviewed the contributor roles guidelines (https://submariner.io/community/contributor-roles/) [ ] I have enabled 2FA on my GitHub account (https://github.com/settings/security) [ ] I have subscribed to the submariner-dev e-mail list (https://groups.google.com/forum/#!forum/submariner-dev) [ ] I have multiple contributions to Submariner that meet the requirements listed in the community membership guidelines [ ] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines [ ] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application (at)sponsor-1 (at)sponsor-2 Multiple contributions to Submariner that meet the requirements listed in the community membership guidelines" } ]
{ "category": "Runtime", "file_name": "membership.md", "project_name": "Submariner", "subcategory": "Cloud Native Network" }
[ { "data": "- https://github.com/heptio/ark/tree/v0.5.1 If a Service is headless, retain ClusterIP = None when backing up and restoring. Use the specified --label-selector when listing backups, schedules, and restores. Restore namespace mapping functionality that was accidentally broken in 0.5.0. Always include namespaces in the backup, regardless of the --include-cluster-resources setting. https://github.com/heptio/ark/tree/v0.5.0 The backup tar file format has changed. Backups created using previous versions of Ark cannot be restored using v0.5.0. When backing up one or more specific namespaces, cluster-scoped resources are no longer backed up by default, with the exception of PVs that are used within the target namespace(s). Cluster-scoped resources can still be included by explicitly specifying `--include-cluster-resources`. Add customized user-agent string for Ark CLI Switch from glog to logrus Exclude nodes from restoration Add a FAQ Record PV availability zone and use it when restoring volumes from snapshots Back up the PV associated with a PVC Add `--include-cluster-resources` flag to `ark backup create` Add `--include-cluster-resources` flag to `ark restore create` Properly support resource restore priorities across cluster-scoped and namespace-scoped resources Support `ark create ...` and `ark get ...` Make ark run as cluster-admin Add pod exec backup hooks Support cross-compilation & upgrade to go 1.9 Make config change detection more robust" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.5.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Authnode provides a general authentication & authorization service among CubeFS nodes. Client, Master, Meta and Data node are required to be authenticated and authorized before any resource access in another node. Initially, each node (Auth, Client, Master, Meta or Data node) is launched with a secure key which is distributed by a authenticated person (for instance, cluster admin). With a valid key, a node can be identified in Authnode service and granted a ticket for resource access. The overall work flow is: key creation and distribution > ticket retrieval with key > resource access with ticket. Key: a bit of secret shared data between a node and Authnode that asserts identity of a client. Ticket: a bit of data that cryptographically asserts identity and authorization (through a list of capabilities) for a service for a period of time. Capability: a capability is defined in the format of node:object:action where node refers to a service node (such as auth, master, meta or data), object refers to the resource and action refers to permitted activities (such as read, write and access). See examples below. Capability Example | Capability | Specifications | |--|| | auth:createkey:access | Access permission for createkey in Authnode | | master:\\:\\ | Any permissions for any objects in Master node | | \\:\\:\\* | Any permissions for any objects in any nodes | cfs-authtool is a client-side utility of Authnode, providing key managements (creation, view and modification) and ticket retrieval in and from Authnode keystore. In particular, Each key is associated with an entity name or id, secret key string, creation time, role and capability specification. Key structure | Key | Type | Description | ||--|| | id | string | Unique key identifier composed of letters and digits | | key | string | Base64 encoded secret key | | role | string | The role of the key (either client or service) | | caps | string | The capabilities of the key | Use the following commands to build client side tool for Authnode: ```bash $ git clone http://github.com/cubefs/cubefs.git $ cd cubefs $ make build ``` If successful, the tool cfs-authtool can be found in build/bin. cfs-authtool ticket -host=AuthNodeAddress [-keyfile=Keyfile] [-output=TicketOutput] [-https=TrueOrFalse -certfile=AuthNodeCertfile] TicketService Service cfs-authtool api -host=AuthNodeAddress -ticketfile=TicketFile [-data=RequestDataFile] [-output=KeyFile] [-https=TrueOrFalse -certfile=AuthNodeCertfile] Service Request cfs-authtool authkey [-keylen=KeyLength] TicketService := [getticket] Service := [AuthService | MasterService | MetaService | DataService] Request := [createkey | deletekey | getkey | addcaps | deletecaps | getcaps | addraftnode | removeraftnode] Authnode use JSON as configuration file" }, { "data": "Properties | Key | Type | Description | Mandatory | |-|--||--| | role | string | Role of process and must be set to master | Yes | | ip | string | host ip | Yes | | port | string | Http port which api service listen on | Yes | | prof | string | golang pprof port | Yes | | id | string | identy different master node | Yes | | peers | string | the member information of raft group | Yes | | logDir | string | Path for log file storage | Yes | | logLevel | string | Level operation for logging. Default is error. | No | | retainLogs | string | the number of raft logs will be retain. | Yes | | walDir | string | Path for raft log file storage. 
| Yes | | storeDir | string | Path for RocksDB file storage,path must be exist | Yes | | clusterName | string | The cluster identifier | Yes | | exporterPort | int | The prometheus exporter port | No | | authServiceKey | string | The secret key used for authentication of AuthNode | Yes | | authRootKey | string | The secret key used for key derivation (session and client secret key) | Yes | | enableHTTPS | bool | Option whether enable HTTPS protocol | No | Example: ```json { \"role\": \"authnode\", \"ip\": \"192.168.0.14\", \"port\": \"8080\", \"prof\":\"10088\", \"id\":\"1\", \"peers\": \"1:192.168.0.14:8080,2:192.168.0.15:8081,3:192.168.0.16:8082\", \"logDir\": \"/export/Logs/authnode\", \"logLevel\":\"info\", \"retainLogs\":\"100\", \"walDir\":\"/export/Data/authnode/raft\", \"storeDir\":\"/export/Data/authnode/rocksdbstore\", \"exporterPort\": 9510, \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"clusterName\":\"test\", \"authServiceKey\":\"9h/sNq4+5CUAyCnAZM927Y/gubgmSixh5hpsYQzZG20=\", \"authRootKey\":\"wbpvIcHT/bLxLNZhfo5IhuNtdnw1n8kom+TimS2jpzs=\", \"enableHTTPS\":false } ``` Run the command: ```bash $ ./cfs-authtool authkey ``` If successful, two key files can be generated `authroot.json` and `authservice.json` under current directory. They represent authServiceKey and authRootKey respectively. example `authservice.json` : ```json { \"id\": \"AuthService\", \"key\": \"9h/sNq4+5CUAyCnAZM927Y/gubgmSixh5hpsYQzZG20=\", \"create_ts\": 1573801212, \"role\": \"AuthService\", \"caps\": \"{\\\"*\\\"}\" } ``` Edit `authnode.json` in docker/conf as following: `authRootKey`: use the value of `key` in `authroot.json` `authServiceKey`: use the value of `key` in `authService.json` example `authnode.json` : ```json { \"role\": \"authnode\", \"ip\": \"192.168.0.14\", \"port\": \"8080\", \"prof\":\"10088\", \"id\":\"1\", \"peers\": \"1:192.168.0.14:8080,2:192.168.0.15:8081,3:192.168.0.16:8082\", \"retainLogs\":\"2\", \"logDir\": \"/export/Logs/authnode\", \"logLevel\":\"info\", \"walDir\":\"/export/Data/authnode/raft\", \"storeDir\":\"/export/Data/authnode/rocksdbstore\", \"exporterPort\": 9510, \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"clusterName\":\"test\", \"authServiceKey\":\"9h/sNq4+5CUAyCnAZM927Y/gubgmSixh5hpsYQzZG20=\", \"authRootKey\":\"wbpvIcHT/bLxLNZhfo5IhuNtdnw1n8kom+TimS2jpzs=\", \"enableHTTPS\":false } ``` In directory docker/authnode, run the following command to start a Authnode cluster. 
```bash $ docker-compose up -d ``` Get Authnode ticket using authServiceKey: ```bash $ ./cfs-authtool ticket -host=192.168.0.14:8080 -keyfile=authservice.json -output=ticket_auth.json getticket AuthService ``` example `ticket_auth.json` : ```json { \"id\": \"AuthService\", \"session_key\": \"A9CSOGEN9CFYhnFnGwSMd4WFDBVbGmRNjaqGOhOinJE=\", \"service_id\": \"AuthService\", \"ticket\": \"RDzEiRLX1xjoUyp2TDFviE/eQzXGlPO83siNJ3QguUrtpwiHIA3PLv4edyKzZdKcEb3wikni8UxBoIJRhKzS00+nB7/9CjRToAJdT9Glhr24RyzoN8psBAk82KEDWJhnl+Y785Av3f8CkNpKv+kvNjYVnNKxs7f3x+Ze7glCPlQjyGSxqARyLisoXoXbiE6gXR1KRT44u7ENKcUjWZ2ZqKEBML9U4h0o58d3IWT+n4atWKtfaIdp6zBIqnInq0iUueRzrRlFEhzyrvi0vErw+iU8w3oPXgTi+um/PpUyto20c1NQ3XbnkWZb/1ccx4U0\" } ``` Create admin using Authnode ticket: ```bash $ ./cfs-authtool api -host=192.168.0.14:8080 -ticketfile=ticketauth.json -data=dataadmin.json -output=key_admin.json AuthService createkey ``` example `data_admin.json` : ```json { \"id\": \"admin\", \"role\": \"service\", \"caps\": \"{\\\"API\\\":[\\\"::*\\\"]}\" } ``` Get Authnode ticket using admin key: ```bash $ ./cfs-authtool ticket -host=192.168.0.14:8080 -keyfile=keyadmin.json -output=ticketadmin.json getticket AuthService ``` Create key for Master ```bash $ ./cfs-authtool api -host=192.168.0.14:8080 -ticketfile=ticketadmin.json -data=datamaster.json -output=key_master.json AuthService createkey ``` example `data_master.json` : ```json { \"id\": \"MasterService\", \"role\": \"service\", \"caps\": \"{\\\"API\\\":[\\\"::*\\\"]}\" } ``` Specifications: | Key | Description | ||| | id | will set Client ID | | role | will set the role of id | | caps | will set the capabilities of id | Edit `master.json as` following: `masterServiceKey`: use the value of `key` in `key_master.json` Create key for Client ```bash $ ./cfs-authtool api -host=192.168.0.14:8080 -ticketfile=ticketadmin.json -data=dataclient.json -output=key_client.json AuthService createkey ``` example `data_client.json`: ```json { \"id\": \"ltptest\", \"role\": \"client\", \"caps\": \"{\\\"API\\\":[\\\"::\\\"], \\\"Vol\\\":[\\\"::\\\"]}\" } ``` Edit `client.json` as following: `clientKey`: use the value of `key` in `key_client.json` example `client.json` ```json { \"masterAddr\": \"192.168.0.11:17010,192.168.0.12:17010,192.168.0.13:17010\", \"mountPoint\": \"/cfs/mnt\", \"volName\": \"ltptest\", \"owner\": \"ltptest\", \"logDir\": \"/cfs/log\", \"logLevel\": \"info\", \"consulAddr\": \"http://192.168.0.101:8500\", \"exporterPort\": 9500, \"profPort\": \"17410\", \"authenticate\": true, \"ticketHost\":" }, { "data": "\"clientKey\": \"jgBGSNQp6mLbu7snU8wKIdEkytzl+pO5/OZOJPpIgH4=\", \"enableHTTPS\": \"false\" } ``` Specifications: | Key | Description | |--|| | authenticate | will enable authentication flow if set true | | ticketHost | will set the IP/URL of Authnode cluster | | clientKey | will set the key generated by Authnode | | enableHTTPS | will enable HTTPS if set true | Run the following to launch CubeFS cluster with AuthNode enabled: ```bash $ docker/run_docker.sh -r -d /data/disk ``` To prevent MITM (Man In The Middle) attacks, HTTPS is required for the communication between client and service. The following steps show the generation of self-sign a certificate with a private (.key) and public key. 
Generating Key and Self Signed Cert: ```bash $ openssl req \\ -x509 \\ -nodes \\ -newkey rsa:2048 \\ -keyout server.key \\ -out server.crt \\ -days 3650 \\ -subj \"/C=GB/ST=China/L=Beijing/O=jd.com/OU=Infra/CN=*\" ``` Specifications: `server.crt`: `AuthNode` public certificate needed to be sent to `Client` `server.key`: `AuthNode` private key needed to be securely placed in `/app` folder in `Authnode` For easy deployment, current implementation of AuthNode uses TLS option insecureskipverify and tls.RequireAndVerifyClientCert, which would skip secure verification of both client and server. For environment with high security command, these options should be turned off. Master has numerous APIs, such as creating and deleting volumes, so it is necessary to authenticate access to the master API to improve cluster security. With the excellent authentication capability of authnode, we have optimized the existing authentication mechanism to achieve the goal of simplifying the authentication process. example `master.json` ```json { \"role\": \"master\", \"ip\": \"127.0.0.1\", \"listen\": \"17010\", \"prof\":\"17020\", \"id\":\"1\", \"peers\": \"1:127.0.0.1:17010,2:127.0.0.2:17010,3:127.0.0.3:17010\", \"retainLogs\":\"20000\", \"logDir\": \"/cfs/master/log\", \"logLevel\":\"info\", \"walDir\":\"/cfs/master/data/wal\", \"storeDir\":\"/cfs/master/data/store\", \"consulAddr\": \"http://consul.prometheus-cfs.local\", \"clusterName\":\"cubefs01\", \"metaNodeReservedMem\": \"1073741824\", \"masterServiceKey\": \"jgBGSNQp6mLbu7snU8wKIdEkytzl+pO5/OZOJPpIgH4=\", \"authenticate\": true, \"authNodeHost\": \"192.168.0.14:8080,192.168.0.15:8081,192.168.0.16:8082\", \"authNodeEnableHTTPS\": false } ``` Properties | Key | Description | |--|--| | authenticate | will enable Master API Authentication if set true | | authNodeHost | will set the IP/URL of Authnode cluster | | authNodeEnableHTTPS | will enable HTTPS if set true | When accessing the Master API, the parameter clientIDKey for authentication must be included. When using authtool to create a key, authidkey is generated. This key will be used as the clientIDKey when accessing the master API. Access the master API via HTTP and write the parameter clientIDKey, such as expanding a volume: ```bash curl --location 'http://127.0.0.1:17010/vol/update?name=ltptest&authKey=0e20229116d5a9a4a9e876806b514a85&capacity=100&clientIDKey=eyJpZCI6Imx0cHRlc3QiLCJhdXRoX2tleSI6ImpnQkdTTlFwNm1MYnU3c25VOHdLSWRFa3l0emwrcE81L09aT0pQcElnSDQ9In0=' ``` Access the master API via cfs-cli and write the clientIDKey to configuration file .cfs-cli.json, so that any cluster management command is authenticated for permissions. example `.cfs-cli.json` ```json { \"masterAddr\": [ \"127.0.0.1:17010\", \"127.0.0.2:17010\", \"127.0.0.3:17010\" ], \"timeout\": 60, \"clientIDKey\": \"eyJpZCI6Imx0cHRlc3QiLCJhdXRoX2tleSI6ImpnQkdTTlFwNm1MYnU3c25VOHdLSWRFa3l0emwrcE81L09aT0pQcElnSDQ9In0=\" } ``` When datanode and metanode are started, they will respectively call the AddDataNode and AddMetaNode APIs, so it is also necessary to prepare serviceIDKey for them. Similarly, use authtool to create keys for datanode and metanode respectively, and write the key as the serviceIDKey to the configuration file. When they start, permission authentication will be performed. 
```bash $ ./cfs-authtool api -host=192.168.0.14:8080 -ticketfile=ticketadmin.json -data=data_datanode.json -output=key_datanode.json AuthService createkey ``` example `data_datanode.json` : ```json { \"id\": \"DatanodeService\", \"role\": \"service\", \"caps\": \"{\\\"API\\\":[\\\"::*\\\"]}\" } ``` Edit `datanode.json` as following: `serviceIDKey`: use the value of `authidkey` in `key_datanode.json` ```bash $ ./cfs-authtool api -host=192.168.0.14:8080 -ticketfile=ticketadmin.json -data=data_metanode.json -output=key_metanode.json AuthService createkey ``` example `data_metanode.json` : ```json { \"id\": \"MetanodeService\", \"role\": \"service\", \"caps\": \"{\\\"API\\\":[\\\"::*\\\"]}\" } ``` Edit `metanode.json` as following: `serviceIDKey`: use the value of `authidkey` in `key_metanode.json`" } ]
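The capability strings used throughout the key examples above follow the `node:object:action` format introduced at the top of this document. As an illustration of that matching semantics only (this is not CubeFS code, and the helper names are hypothetical), a minimal Go sketch could look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// capability mirrors the node:object:action format described above.
type capability struct {
	node, object, action string
}

// parse splits a capability string such as "auth:createkey:access".
func parse(s string) capability {
	parts := strings.SplitN(s, ":", 3)
	for len(parts) < 3 {
		parts = append(parts, "")
	}
	return capability{node: parts[0], object: parts[1], action: parts[2]}
}

// match treats "*" as a wildcard, as in the capability examples above.
func match(pattern, value string) bool {
	return pattern == "*" || pattern == value
}

// allows reports whether any granted capability covers the requested one.
func allows(granted []string, request string) bool {
	req := parse(request)
	for _, g := range granted {
		c := parse(g)
		if match(c.node, req.node) && match(c.object, req.object) && match(c.action, req.action) {
			return true
		}
	}
	return false
}

func main() {
	granted := []string{"auth:createkey:access", "master:*:*"}
	fmt.Println(allows(granted, "master:vol:create")) // true
	fmt.Println(allows(granted, "data:blob:read"))    // false
}
```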
{ "category": "Runtime", "file_name": "authnode.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Restore Resource Modifiers\" layout: docs Velero provides a generic ability to modify the resources during restore by specifying json patches. The json patches are applied to the resources before they are restored. The json patches are specified in a configmap and the configmap is referenced in the restore command. Creating resource Modifiers Below is the two-step of using resource modifiers to modify the resources during restore. Creating resource modifiers configmap You need to create one configmap in Velero install namespace from a YAML file that defined resource modifiers. The creating command would be like the below: ```bash kubectl create cm <configmap-name> --from-file <yaml-file> -n velero ``` Creating a restore reference to the defined resource policies You can create a restore with the flag `--resource-modifier-configmap`, which will apply the defined resource modifiers to the current restore. The creating command would be like the below: ```bash velero restore create --resource-modifier-configmap <configmap-name> ``` YAML template Yaml template: ```yaml version: v1 resourceModifierRules: conditions: groupResource: persistentvolumeclaims resourceNameRegex: \"^mysql.*$\" namespaces: bar foo labelSelector: matchLabels: foo: bar patches: operation: replace path: \"/spec/storageClassName\" value: \"premium\" operation: remove path: \"/metadata/labels/test\" ``` The above configmap will apply the JSON Patch to all the PVCs in the namespaces bar and foo with name starting with mysql and match label `foo: bar`. The JSON Patch will replace the storageClassName with \"premium\" and remove the label \"test\" from the PVCs. Note that the Namespace here is the original namespace of the backed up resource, not the new namespace where the resource is going to be restored. You can specify multiple JSON Patches for a particular resource. The patches will be applied in the order specified in the configmap. A subsequent patch is applied in order and if multiple patches are specified for the same path, the last patch will override the previous patches. You can specify multiple resourceModifierRules in the configmap. The rules will be applied in the order specified in the configmap. add remove replace move copy test (covered below) The `test` operation can be used to check if a particular value is present in the resource. If the value is present, the patch will be applied. If the value is not present, the patch will not be applied. This can be used to apply a patch only if a particular value is present in the resource. For example, if you wish to change the storage class of a PVC only if the PVC is using a particular storage class, you can use the following configmap. 
```yaml version: v1 resourceModifierRules: conditions: groupResource: persistentvolumeclaims resourceNameRegex:" }, { "data": "\"^mysql.*$\" namespaces: bar foo patches: operation: test path: \"/spec/storageClassName\" value: \"premium\" operation: replace path: \"/spec/storageClassName\" value: \"standard\" ``` ```yaml version: v1 resourceModifierRules: conditions: groupResource: deployments.apps resourceNameRegex: \"^test-.*$\" namespaces: bar foo patches: operation: add path: \"/spec/template/spec/containers/0\" value: \"{\\\"name\\\": \\\"nginx\\\", \\\"image\\\": \\\"nginx:1.14.2\\\", \\\"ports\\\": [{\\\"containerPort\\\": 80}]}\" operation: copy from: \"/spec/template/spec/containers/0\" path: \"/spec/template/spec/containers/1\" ``` Note: The design and approach are inspired by kubectl's support for updating a container's image using a JSON patch with positional arrays, for example: kubectl patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' Before creating the resource modifier yaml, you can try it out using the kubectl patch command. The same commands should work as-is. You can modify a resource using JSON Merge Patch ```yaml version: v1 resourceModifierRules: conditions: groupResource: pods namespaces: ns1 mergePatches: patchData: | { \"metadata\": { \"annotations\": { \"foo\": null } } } ``` The above configmap will apply the Merge Patch to all the pods in namespace ns1 and remove the annotation `foo` from the pods. Both json and yaml format are supported for the patchData. For more details, please refer to the JSON Merge Patch specification. You can modify a resource using Strategic Merge Patch ```yaml version: v1 resourceModifierRules: conditions: groupResource: pods resourceNameRegex: \"^my-pod$\" namespaces: ns1 strategicPatches: patchData: | { \"spec\": { \"containers\": [ { \"name\": \"nginx\", \"image\": \"repo2/nginx\" } ] } } ``` The above configmap will apply the Strategic Merge Patch to the pod with name my-pod in namespace ns1 and update the image of container nginx to `repo2/nginx`. Both json and yaml format are supported for the patchData. For more details, please refer to the Kubernetes documentation on strategic merge patch. A new field `matches` is added in conditions to support conditional patches. Example of matches in conditions ```yaml version: v1 resourceModifierRules: conditions: groupResource: persistentvolumeclaims.storage.k8s.io matches: path: \"/spec/storageClassName\" value: \"premium\" mergePatches: patchData: | { \"metadata\": { \"annotations\": { \"foo\": null } } } ``` The above configmap will apply the Merge Patch to all the PVCs in all namespaces with storageClassName premium and remove the annotation `foo` from the PVCs. You can specify multiple rules in the `matches` list. The patch will be applied only if all the matches are satisfied. The user can specify a wildcard for groupResource in the conditions' struct. This will allow the user to apply the patches for all the resources of a particular group or all resources in all groups. For example, `*.apps` will apply to all the resources in the `apps` group, `*` will apply to all the resources in the core group, and `*.*` will apply to all the resources in all groups. If both `*.groupName` and `namespaces` are specified, the patches will be applied to all the namespaced resources in this group in the specified namespaces and all the cluster resources in this group." } ]
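To see what the first example's patches do to a resource outside of a restore, you can apply the same operations with any RFC 6902 implementation. The sketch below uses the evanphx/json-patch Go library purely for illustration; it is not Velero's internal code path, and the trimmed-down PVC document is made up for the example.

```go
package main

import (
	"fmt"

	jsonpatch "github.com/evanphx/json-patch/v5"
)

func main() {
	// A trimmed-down PVC document, for illustration only.
	pvc := []byte(`{"metadata":{"labels":{"test":"x","foo":"bar"}},"spec":{"storageClassName":"gp2"}}`)

	// The same two operations used in the first resource modifier example above.
	patch, err := jsonpatch.DecodePatch([]byte(`[
		{"op": "replace", "path": "/spec/storageClassName", "value": "premium"},
		{"op": "remove", "path": "/metadata/labels/test"}
	]`))
	if err != nil {
		panic(err)
	}

	out, err := patch.Apply(pvc)
	if err != nil {
		panic(err)
	}
	// storageClassName is now "premium" and the "test" label is gone.
	fmt.Println(string(out))
}
```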
{ "category": "Runtime", "file_name": "restore-resource-modifiers.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"How Velero Works\" layout: docs Each Velero operation -- on-demand backup, scheduled backup, restore -- is a custom resource, defined with a Kubernetes and stored in . Velero also includes controllers that process the custom resources to perform backups, restores, and all related operations. You can back up or restore all objects in your cluster, or you can filter objects by type, namespace, and/or label. Velero is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades). The backup operation: Uploads a tarball of copied Kubernetes objects into cloud object storage. Calls the cloud provider API to make disk snapshots of persistent volumes, if specified. You can optionally specify hooks to be executed during the backup. For example, you might need to tell a database to flush its in-memory buffers to disk before taking a snapshot. . Note that cluster backups are not strictly atomic. If Kubernetes objects are being created or edited at the time of backup, they might not be included in the backup. The odds of capturing inconsistent information are low, but it is possible. The schedule operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression. Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. The restore operation allows you to restore all of the objects and persistent volumes from a previously created backup. You can also restore only a filtered subset of objects and persistent volumes. Velero supports multiple namespace remapping--for example, in a single restore, objects in namespace \"abc\" can be recreated under namespace \"def\", and the objects in namespace \"123\" under \"456\". The default name of a restore is `<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. You can also specify a custom name. A restored object also includes a label with key `velero.io/restore-name` and value `<RESTORE NAME>`. By default, backup storage locations are created in read-write mode. However, during a restore, you can configure a backup storage location to be in read-only mode, which disables backup creation and deletion for the storage" }, { "data": "This is useful to ensure that no backups are inadvertently created or deleted during a restore scenario. When you run `velero backup create test-backup`: The Velero client makes a call to the Kubernetes API server to create a `Backup` object. The `BackupController` notices the new `Backup` object and performs validation. The `BackupController` begins the backup process. It collects the data to back up by querying the API server for resources. The `BackupController` makes a call to the object storage service -- for example, AWS S3 -- to upload the backup file. By default, `velero backup create` makes disk snapshots of any persistent volumes. You can adjust the snapshots by specifying additional flags. Run `velero backup create --help` to see available flags. Snapshots can be disabled with the option `--snapshot-volumes=false`. ![19] Velero backs up resources using the Kubernetes API server's preferred version for each group/resource. 
When restoring a resource, this same API group/version must exist in the target cluster in order for the restore to be successful. For example, if the cluster being backed up has a `gizmos` resource in the `things` API group, with group/versions `things/v1alpha1`, `things/v1beta1`, and `things/v1`, and the server's preferred group/version is `things/v1`, then all `gizmos` will be backed up from the `things/v1` API endpoint. When backups from this cluster are restored, the target cluster must have the `things/v1` endpoint in order for `gizmos` to be restored. Note that `things/v1` does not need to be the preferred version in the target cluster; it just needs to exist. When you create a backup, you can specify a TTL by adding the flag `--ttl <DURATION>`. If Velero sees that an existing backup resource is expired, it removes: The backup resource The backup file from cloud object storage All PersistentVolume snapshots All associated Restores Velero treats object storage as the source of truth. It continuously checks to see that the correct backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding backup resource in the Kubernetes API, Velero synchronizes the information from object storage to Kubernetes. This allows restore functionality to work in a cluster migration scenario, where the original backup objects do not exist in the new cluster. Likewise, if a backup object exists in Kubernetes but not in object storage, it will be deleted from Kubernetes since the backup tarball no longer exists." } ]
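The naming and TTL rules described above are easy to reproduce. The following Go sketch shows the `<SCHEDULE NAME>-<TIMESTAMP>` convention (YYYYMMDDhhmmss) and a simple expiry check; the helper names are hypothetical and not part of Velero:

```go
package main

import (
	"fmt"
	"time"
)

// scheduledBackupName follows the <SCHEDULE NAME>-<TIMESTAMP> convention
// described above, with the timestamp formatted as YYYYMMDDhhmmss.
func scheduledBackupName(schedule string, t time.Time) string {
	return fmt.Sprintf("%s-%s", schedule, t.Format("20060102150405"))
}

// expired reports whether a backup created at 'created' with the given TTL
// should be removed, as described in the TTL section above.
func expired(created time.Time, ttl time.Duration, now time.Time) bool {
	return now.After(created.Add(ttl))
}

func main() {
	now := time.Date(2024, 5, 1, 12, 0, 0, 0, time.UTC)
	fmt.Println(scheduledBackupName("daily-backup", now)) // daily-backup-20240501120000

	created := now.Add(-31 * 24 * time.Hour)
	fmt.Println(expired(created, 30*24*time.Hour, now)) // true
}
```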
{ "category": "Runtime", "file_name": "about.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Bug report about: Create a bug report title: \"[BUG]\" labels: [\"kind/bug\", \"require/qa-review-coverage\", \"require/backport\"] assignees: '' <!--A clear and concise description of the bug.--> <!--Please provide the steps to reproduce the case--> <!--A clear and concise description of what you expected to happen.--> <!--PLEASE provide a support bundle when the issue happens. You can generate a support bundle using the link at the footer of the Longhorn UI. Check . Then, attach to the issue or send to [email protected] --> <!-- Suggest checking the doc of the best practices of using Longhorn. --> Longhorn version: Impacted volume (PV): <!-- PLEASE specify the volume name to better identify the cause --> Installation method (e.g. Rancher Catalog App/Helm/Kubectl): Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version: Number of control plane nodes in the cluster: Number of worker nodes in the cluster: Node config OS type and version: Kernel version: CPU per node: Memory per node: Disk type (e.g. SSD/NVMe/HDD): Network bandwidth between the nodes (Gbps): Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal): Number of Longhorn volumes in the cluster: <!-- Please add any other context about the problem here. -->" } ]
{ "category": "Runtime", "file_name": "bug.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Internals sidebar_position: 4 slug: /internals This article introduces implementation details of JuiceFS, use this as a reference if you'd like to contribute. The content below is based on JuiceFS v1.0.0, metadata version v1. Before digging into source code, you should read . High level concepts: File system: i.e. JuiceFS Volume, represents a separate namespace. Files can be moved freely within the same file system, while data copies are required between different file systems. Metadata engine: A supported database instance of your choice, that stores and manages file system metadata. There are three categories of metadata engines currently supported by JuiceFS. Redis: Redis and various protocol-compatible services SQL: MySQL, PostgreSQL, SQLite, etc. TKV: TiKV, BadgerDB, etc. Datastore: Object storage service that stores and manages file system data, such as Amazon S3, Aliyun OSS, etc. It can also be served by other storage systems that are compatible with object storage semantics, such as local file systems, Ceph RADOS, TiKV, etc. Client: can be in various forms, such as mount process, S3 gateway, WebDAV server, Java SDK, etc. File: refers to all types of files in general in this documentation, including regular files, directory files, link files, device files, etc. Directory: is a special kind of file used to organize the tree structure, and its contents are an index to a set of other files. Low level concepts (learn more at ): Chunk: Logical concept, file is split into 64MiB chunks, allowing fast lookups during file reads; Slice: Logical concept, basic unit for file writes. Block's purpose is to improve read speed, and slice exists to improve file edits and random writes. All file writes are assigned a new or existing slice, and when file is read, what application sees is the consolidated view of all slices. Block: A chunk contains one or more blocks (4MiB by default), block is the basic storage unit in object storage. JuiceFS Client reads multiple blocks concurrently which greatly improves read performance. Apart from this, block is also the basic storage unit on disk cache, so this design improves cache eviction efficiency. Apart from this, block is immutable, all file edits is achieved through new blocks: after file edit, new blocks are uploaded to object storage, and new slices are appended to the slice list in the corresponding file metadata; Assuming you're already familiar with Go, as well as , this is the overall code structure: is the top-level entrance, all JuiceFS functionalities is rooted here, e.g. 
the `juicefs format` command resides in `cmd/format.go`; `pkg` is the actual implementation: `pkg/fuse/fuse.go` provides the abstract FUSE API; `pkg/vfs` contains the actual FUSE implementation, metadata requests are handled in `pkg/meta`, read requests are handled in `pkg/vfs/reader.go` and write requests are handled by `pkg/vfs/writer.go`; `pkg/meta` directory is the implementation of all metadata engines, where: `pkg/meta/interface.go` is the interface definition for all types of metadata engines `pkg/meta/redis.go` is the interface implementation of Redis database `pkg/meta/sql.go` is the interface definition and general interface implementation of relational database, and the implementation of specific databases is in a separate file (for example, the implementation of MySQL is in `pkg/meta/sql_mysql.go`) `pkg/meta/tkv.go` is the interface definition and general interface implementation of the KV database, and the implementation of a specific database is in a separate file (for example, the implementation of TiKV is in `pkg/meta/tkv_tikv.go`) `pkg/object` contains all object storage integration code; `sdk/java` is the Hadoop Java SDK, it uses `sdk/java/libjfs` through" }, { "data": "JuiceFS implements a userspace file system based on FUSE (Filesystem in Userspace), and the `libfuse` library provides two APIs: a high-level API and a low-level API, where the high-level API is based on file name and path, and the low-level API is based on inode. JuiceFS is implemented based on the low-level API (in fact JuiceFS does not depend on `libfuse`, but uses `go-fuse` instead), because this is the same set of APIs used by kernel VFS when interacting with FUSE. If JuiceFS were to use the high-level API, it would have to implement the VFS tree within `libfuse`, and then expose a path based API. This method works better for systems that already expose path based APIs (e.g. HDFS, S3). If metadata itself implements the file / directory tree based on inode, the inode → path → inode conversions will have an impact on performance (this is the reason why the FUSE API for HDFS doesn't perform well). JuiceFS metadata directly implements the file tree and API based on inode, so naturally it uses the FUSE low-level API. File systems are usually organized in a tree structure, where nodes represent files and edges represent directory containment relationships. There are more than ten metadata structures in JuiceFS. Most of them are used to maintain the organization of the file tree and properties of individual nodes, while the rest are used to manage system configuration, client sessions, asynchronous tasks, etc. All metadata structures are described below. It is created when the `juicefs format` command is executed, and some of its fields can be modified later by the `juicefs config` command. The structure is specified as follows. 
```go type Format struct { Name string UUID string Storage string Bucket string AccessKey string `json:\",omitempty\"` SecretKey string `json:\",omitempty\"` SessionToken string `json:\",omitempty\"` BlockSize int Compression string `json:\",omitempty\"` Shards int `json:\",omitempty\"` HashPrefix bool `json:\",omitempty\"` Capacity uint64 `json:\",omitempty\"` Inodes uint64 `json:\",omitempty\"` EncryptKey string `json:\",omitempty\"` KeyEncrypted bool `json:\",omitempty\"` TrashDays int `json:\",omitempty\"` MetaVersion int `json:\",omitempty\"` MinClientVersion string `json:\",omitempty\"` MaxClientVersion string `json:\",omitempty\"` EnableACL bool } ``` Name: name of the file system, specified by the user when formatting UUID: unique ID of the file system, automatically generated by the system when formatting Storage: short name of the object storage used to store data, such as `s3`, `oss`," }, { "data": "Bucket: the bucket path of the object storage AccessKey: access key used to access the object storage SecretKey: secret key used to access the object storage SessionToken: session token used to access the object storage, as some object storage supports the use of temporary token to obtain permission for a limited time BlockSize: size of the data block when splitting the file (the default is 4 MiB) Compression: compression algorithm that is executed before uploading data blocks to the object storage (the default is no compression) Shards: number of buckets in the object storage, only one bucket by default; when Shards > 1, data objects will be randomly hashed into Shards buckets HashPrefix: whether to set a hash prefix for the object name, false by default Capacity: quota limit for the total capacity of the file system Inodes: quota limit for the total number of files in the file system EncryptKey: the encrypted private key of the data object, which can be used only if the data encryption function is enabled KeyEncrypted: whether the saved key is encrypted or not, by default the SecretKey, EncryptKey and SessionToken will be encrypted TrashDays: number of days the deleted files are kept in trash, the default is 1 day MetaVersion: the version of the metadata structure, currently V1 (V0 and V1 are the same) MinClientVersion: the minimum client version allowed to connect, clients earlier than this version will be denied MaxClientVersion: the maximum client version allowed to connect EnableACL: enable ACL or not This structure is serialized into JSON format and stored in the metadata engine. Maintains the value of each counter in the system and the start timestamps of some background tasks, specifically usedSpace: used capacity of the file system totalInodes: number of used files in the file system nextInode: the next available inode number (in Redis, the maximum inode number currently in use) nextChunk: the next available sliceId (in Redis, the largest sliceId currently in use) nextSession: the maximum SID (sessionID) currently in use nextTrash: the maximum trash inode number currently in use nextCleanupSlices: timestamp of the last check on the cleanup of residual slices lastCleanupSessions: timestamp of the last check on the cleanup of residual stale sessions lastCleanupFiles: timestamp of the last check on the cleanup of residual files lastCleanupTrash: timestamp of the last check on the cleanup of trash Records the session IDs of clients connected to this file system and their timeouts. 
Each client sends a heartbeat message to update the timeout, and those who have not updated for a long time will be automatically cleaned up by other clients. :::tip Read-only clients cannot write to the metadata engine, so their sessions will not be recorded. ::: Records specific metadata of the client session so that it can be viewed with the `juicefs status` command. This is specified as ```go type SessionInfo struct { Version string // JuiceFS version HostName string // Host name MountPoint string // path to mount point. S3 gateway and WebDAV server are \"s3gateway\" and \"webdav\" respectively ProcessID int // Process ID } ``` This structure is serialized into JSON format and stored in the metadata engine. Records attribute information of each file, as follows ```go type Attr struct { Flags uint8 // reserved flags Typ uint8 // type of a node Mode uint16 // permission mode Uid uint32 // owner id Gid uint32 // group id of owner Rdev uint32 // device number Atime int64 // last access time Mtime int64 // last modified time Ctime int64 // last change time for meta Atimensec uint32 // nanosecond part of atime Mtimensec uint32 // nanosecond part of mtime Ctimensec uint32 // nanosecond part of ctime Nlink uint32 // number of links (sub-directories or hardlinks) Length uint64 // length of regular file Parent Ino // inode of parent; 0 means tracked by parentKey (for hardlinks) Full bool // the attributes are completed or not KeepCache bool // whether to keep the cached page or not AccessACL uint32 // access ACL id (identical ACL rules share the same access ACL ID.) DefaultACL uint32 // default ACL id (default ACL and the access ACL share the same cache and store) } ``` There are a few fields that need clarification. Atime/Atimensec: See Nlink Directory file: initial value is 2 ('.' and" }, { "data": "add 1 for each subdirectory Other files: initial value is 1, add 1 for each hard link created Length Directory file: fixed at 4096 Soft link (symbolic link) file: the string length of the path to which the link points Other files: the length of the actual content of the file This structure is usually encoded in binary format and stored in the metadata engine. Records information on each edge in the file tree, as follows ``` parentInode, name -> type, inode ``` where parentInode is the inode number of the parent directory, and the others are the name, type, and inode number of the child files, respectively. Records the parent directory of some files. The parent directory of most files is recorded in the Parent field of the attribute; however, for files that have been created with hard links, there may be more than one parent directory, so the Parent field is set to 0, and all parent inodes are recorded independently, as follows ``` inode -> parentInode, links ``` where links is the count of the parentInode, because multiple hard links can be created in the same directory, and these hard links share one inode. Records information on each Chunk, as follows ``` inode, index -> []Slices ``` where inode is the inode number of the file to which the Chunk belongs, and index is the number of all Chunks in the file, starting from 0. The Chunk value is an array of Slices. Each Slice represents a piece of data written by the client, and is appended to this array in the order of writing time. When there is an overlap between different Slices, the later Slice is used. 
```go type Slice struct { Pos uint32 // offset of the Slice in the Chunk ID uint64 // ID of the Slice, globally unique Size uint32 // size of the Slice Off uint32 // offset of valid data in this Slice Len uint32 // size of valid data in this Slice } ``` This structure is encoded and saved in binary format, taking up 24 bytes. Records the reference count of a Slice, as follows ``` sliceId, size -> refs ``` Since the reference count of most Slices is 1, to reduce the number of related entries in the database, the actual value minus 1 is used as the stored count value in Redis and TKV. In this way, most of the Slices have a refs value of 0, and there is no need to create related entries in the database. Records the location of the softlink file, as follows ``` inode -> target ``` Records extended attributes (Key-Value pairs) of a file, as follows ``` inode, key -> value ``` Records BSD locks (flock) of a file, specifically. ``` inode, sid, owner -> ltype ``` where `sid` is the client session ID, `owner` is a string of numbers, usually associated with a process, and `ltype` is the lock type, which can be 'R' or 'W'. Record POSIX record locks (fcntl) of a file, specifically ``` inode, sid, owner -> []plockRecord ``` Here plock is a more fine-grained lock that can only lock a certain segment of the file. ```go type plockRecord struct { ltype uint32 // lock type pid uint32 // process ID start uint64 // start position of the lock end uint64 // end position of the lock } ``` This structure is encoded and stored in binary format, taking up 24 bytes. Records the list of files to be cleaned. It is needed as data cleanup of files is an asynchronous and potentially time-consuming operation that can be interrupted by other" }, { "data": "``` inode, length -> expire ``` where length is the length of the file and expire is the time when the file was deleted. Records delayed deleted Slices. When the Trash feature is enabled, old Slices deleted by the Slice Compaction will be kept for the same amount of time as the Trash configuration, to be available for data recovery if necessary. ``` sliceId, deleted -> []slice ``` where sliceId is the ID of the new slice after compaction, deleted is the timestamp of the compaction, and the mapped value is the list of all old slices that were compacted. Each slice only encodes its ID and size. ```go type slice struct { ID uint64 Size uint32 } ``` This structure is encoded and stored in binary format, taking up 12 bytes. Records the list of files that need to be kept temporarily during the session. If a file is still open when it is deleted, the data cannot be cleaned up immediately, but needs to be held temporarily until the file is closed. ``` sid -> []inode ``` where `sid` is the session ID and the mapped value is the list of temporarily undeleted file inodes. The common format of keys in Redis is `${prefix}${JFSKey}`, where In standalone mode the prefix is an empty string, while in cluster mode it is a database number enclosed in curly braces, e.g. \"{10}\" JFSKey is the Key of different data structures in JuiceFS, which are listed in the subsequent subsections In Redis Keys, integers (including inode numbers) are represented as decimal strings if not otherwise specified. Key: `setting` Value Type: String Value: file system formatting information in JSON format Key: counter name Value Type: String Value: value of the counter, which is actually an integer Key: `allSessions` Value Type: Sorted Set Value: all non-read-only sessions connected to this file system. 
In Set, Member: session ID Score: timeout point of this session Key: `sessionInfos` Value Type: Hash Value: basic meta-information on all non-read-only sessions. In Hash, Key: session ID Value: session information in JSON format Key: `i${inode}` Value Type: String Value: binary encoded file attribute Key: `d${inode}` Value Type: Hash Value: all directory entries in this directory. In Hash, Key: file name Value: binary encoded file type and inode number Key: `p${inode}` Value Type: Hash Value: all parent inodes of this file. in Hash. Key: parent inode Value: count of this parent inode Key: `c${inode}_${index}` Value Type: List Value: list of Slices, each Slice is binary encoded with 24 bytes Key: `sliceRef` Value Type: Hash Value: the count value of all Slices to be recorded. In Hash, Key: `k${sliceId}_${size}` Value: reference count of this Slice minus 1 (if the reference count is 1, the corresponding entry is generally not created) Key: `s${inode}` Value Type: String Value: path that the symbolic link points to Key: `x${inode}` Value Type: Hash Value: all extended attributes of this file. In Hash, Key: name of the extended attribute Value: value of the extended attribute Key: `lockf${inode}` Value Type: Hash Value: all flocks of this file. In Hash, Key: `${sid}_${owner}`, owner in hexadecimal Value: lock type, can be 'R' or 'W' Key: `lockp${inode}` Value Type: Hash Value: all plocks of this file. In Hash, Key: `${sid}_${owner}`, owner in hexadecimal Value: array of bytes, where every 24 bytes corresponds to a Key`delfiles` Value Type: Sorted Set Value: list of all files to be" }, { "data": "In Set, Member: `${inode}:${length}` Score: the timestamp when this file was added to the set Key: `delSlices` Value Type: Hash Value: all Slices to be cleaned. In Hash, Key: `${sliceId}_${deleted}` Value: array of bytes, where every 12 bytes corresponds to a Key: `session${sid}` Value Type: List Value: list of files temporarily reserved in this session. In List, Member: inode number of the file Metadata is stored in different tables by type, and each table is named with `jfs` followed by its specific structure name to form the table name, e.g. `jfsnode`. Some tables use `Id` with the `bigserial` type as primary keys to ensure that each table has a primary key, and the `Id` columns do not contain actual information. ```go type setting struct { Name string `xorm:\"pk\"` Value string `xorm:\"varchar(4096) notnull\"` } ``` There is only one entry in this table with \"format\" as Name and file system formatting information in JSON as Value. ```go type counter struct { Name string `xorm:\"pk\"` Value int64 `xorm:\"notnull\"` } ``` ```go type session2 struct { Sid uint64 `xorm:\"pk\"` Expire int64 `xorm:\"notnull\"` Info []byte `xorm:\"blob\"` } ``` There is no separate table for this, but it is recorded in the `Info` column of `session2`. ```go type node struct { Inode Ino `xorm:\"pk\"` Type uint8 `xorm:\"notnull\"` Flags uint8 `xorm:\"notnull\"` Mode uint16 `xorm:\"notnull\"` Uid uint32 `xorm:\"notnull\"` Gid uint32 `xorm:\"notnull\"` Atime int64 `xorm:\"notnull\"` Mtime int64 `xorm:\"notnull\"` Ctime int64 `xorm:\"notnull\"` Nlink uint32 `xorm:\"notnull\"` Length uint64 `xorm:\"notnull\"` Rdev uint32 Parent Ino AccessACLId uint32 `xorm:\"'accessaclid'\"` DefaultACLId uint32 `xorm:\"'defaultaclid'\"` } ``` Most of the fields are the same as , but the timestamp precision is lower, i.e., Atime/Mtime/Ctime are in microseconds. 
```go type edge struct { Id int64 `xorm:\"pk bigserial\"` Parent Ino `xorm:\"unique(edge) notnull\"` Name []byte `xorm:\"unique(edge) varbinary(255) notnull\"` Inode Ino `xorm:\"index notnull\"` Type uint8 `xorm:\"notnull\"` } ``` There is no separate table for this. All `Parent`s are found based on the `Inode` index in `edge`. ```go type chunk struct { Id int64 `xorm:\"pk bigserial\"` Inode Ino `xorm:\"unique(chunk) notnull\"` Indx uint32 `xorm:\"unique(chunk) notnull\"` Slices []byte `xorm:\"blob notnull\"` } ``` Slices are an array of bytes, and each corresponds to 24 bytes. ```go type sliceRef struct { Id uint64 `xorm:\"pk chunkid\"` Size uint32 `xorm:\"notnull\"` Refs int `xorm:\"notnull\"` } ``` ```go type symlink struct { Inode Ino `xorm:\"pk\"` Target []byte `xorm:\"varbinary(4096) notnull\"` } ``` ```go type xattr struct { Id int64 `xorm:\"pk bigserial\"` Inode Ino `xorm:\"unique(name) notnull\"` Name string `xorm:\"unique(name) notnull\"` Value []byte `xorm:\"blob notnull\"` } ``` ```go type flock struct { Id int64 `xorm:\"pk bigserial\"` Inode Ino `xorm:\"notnull unique(flock)\"` Sid uint64 `xorm:\"notnull unique(flock)\"` Owner int64 `xorm:\"notnull unique(flock)\"` Ltype byte `xorm:\"notnull\"` } ``` ```go type plock struct { Id int64 `xorm:\"pk bigserial\"` Inode Ino `xorm:\"notnull unique(plock)\"` Sid uint64 `xorm:\"notnull unique(plock)\"` Owner int64 `xorm:\"notnull unique(plock)\"` Records []byte `xorm:\"blob notnull\"` } ``` Records is an array of bytes, and each corresponds to 24 bytes. ```go type delfile struct { Inode Ino `xorm:\"pk notnull\"` Length uint64 `xorm:\"notnull\"` Expire int64 `xorm:\"notnull\"` } ``` ```go type delslices struct { Id uint64 `xorm:\"pk chunkid\"` Deleted int64 `xorm:\"notnull\"` Slices []byte `xorm:\"blob notnull\"` } ``` Slices is an array of bytes, and each corresponds to 12 bytes. ```go type sustained struct { Id int64 `xorm:\"pk bigserial\"` Sid uint64 `xorm:\"unique(sustained) notnull\"` Inode Ino `xorm:\"unique(sustained) notnull\"` } ``` The common format of keys in TKV (Transactional Key-Value Database) is `${prefix}${JFSKey}`, where prefix is used to distinguish between different file systems, usually `${VolumeName}0xFD`, where `0xFD` is used as a special byte to handle cases when there is an inclusion relationship between different file system" }, { "data": "In addition, for databases that are not shareable (e.g. BadgerDB), the empty string is used as prefix. JFSKey is the JuiceFS Key for different data types, which is listed in the following subsections. In TKV's Keys, all integers are stored in encoded binary form. inode and counter value occupy 8 bytes and are encoded with small endian. SID, sliceId and timestamp occupy 8 bytes and are encoded with big endian. ``` setting -> file system formatting information in JSON format ``` ``` C${name} -> counter value ``` ``` SE${sid} -> timestamp ``` ``` SI${sid} -> session information in JSON format ``` ``` A${inode}I -> encoded Attr ``` ``` A${inode}D${name} -> encoded {type, inode} ``` ``` A${inode}P${parentInode} -> counter value ``` ``` A${inode}C${index} -> Slices ``` where index takes up 4 bytes and is encoded with big endian. Slices is an array of bytes, one per 24 bytes. ``` K${sliceId}${size} -> counter value ``` where size takes up 4 bytes and is encoded with big endian. ``` A${inode}S -> target ``` ``` A${inode}X${name} -> xattr value ``` ``` F${inode} -> flocks ``` where flocks is an array of bytes, one flock per 17 bytes. 
```go type flock struct { sid uint64 owner uint64 ltype uint8 } ``` ``` P${inode} -> plocks ``` where plocks is an array of bytes and the corresponding plock is variable-length. ```go type plock struct { sid uint64 owner uint64 size uint32 records []byte } ``` where size is the length of the records array and every 24 bytes in records corresponds to one . ``` D${inode}${length} -> timestamp ``` where length takes up 8 bytes and is encoded with big endian. ``` L${timestamp}${sliceId} -> slices ``` where slices is an array of bytes, and one corresponds to 12 bytes. ``` SS${sid}${inode} -> 1 ``` Here the Value value is only used as a placeholder. According to the design of , only the direct children of each directory are recorded in the metadata engine. When an application provides a path to access a file, JuiceFS needs to look it up level by level. Now suppose the application wants to open the file `/dir1/dir2/testfile`, then it needs to search for the entry with name \"dir1\" in the Edge structure of the root directory (inode number is fixed to 1) and get its inode number N1 search for the entry with the name \"dir2\" in the Edge structure of N1 and get its inode number N2 search for the entry with the name \"testfile\" in the Edge structure of N2, and get its inode number N3 search for the structure corresponding to N3 to get the attributes of the file Failure in any of the above steps will result in the file pointed to by that path not being found. From the previous section, we know how to find the file based on its path and get its attributes. The metadata related to the contents of the file can be found based on the inode and size fields in the file properties. Now suppose a file has an inode of 100 and a size of 160 MiB, then the file has `(size-1) / 64 MiB + 1 = 3` Chunks, as follows. ``` File: | | | | Chunk: |< Chunk 0 >|< Chunk 1 >|<-- Chunk 2 -->| ``` In standalone Redis, this means that there are 3 ," }, { "data": "`c1001` and `c100_2`, each corresponding to a list of Slices. These Slices are mainly generated when the data is written and may overwrite each other or may not fill the Chunk completely, so you need to traverse this list of Slices sequentially and reconstruct the latest version of the data distribution before using it, so that the part covered by more than one Slice is based on the last added Slice the part that is not covered by Slice is automatically zeroed, and is represented by sliceId = 0 truncate Chunk according to file size Now suppose there are 3 Slices in Chunk 0 ```go Slice{pos: 10M, id: 10, size: 30M, off: 0, len: 30M} Slice{pos: 20M, id: 11, size: 16M, off: 0, len: 16M} Slice{pos: 16M, id: 12, size: 10M, off: 0, len: 10M} ``` It can be illustrated as follows (each '_' denotes 2 MiB) ``` Chunk: | | Slice 10: | _| Slice 11: | | Slice 12: | _| New List: | | | | | | | 0 10 12 11 10 0 ``` The reconstructed new list contains and only contains the latest data distribution for this Chunk as follows ```go Slice{pos: 0, id: 0, size: 10M, off: 0, len: 10M} Slice{pos: 10M, id: 10, size: 30M, off: 0, len: 6M} Slice{pos: 16M, id: 12, size: 10M, off: 0, len: 10M} Slice{pos: 26M, id: 11, size: 16M, off: 6M, len: 10M} Slice{pos: 36M, id: 10, size: 30M, off: 26M, len: 4M} Slice{pos: 40M, id: 0, size: 24M, off: 0, len: 24M} // can be omitted ``` Block is the basic unit for JuiceFS to manage data. Its size is 4 MiB by default, and can be changed only when formatting a file system, within the interval [64 KiB, 16 MiB]. 
Each Block is an object in the object storage after upload, and is named in the format `${fsname}/chunks/${hash}/${basename}`, where fsname is the file system name \"chunks\" is a fixed string representing the data object of JuiceFS hash is the hash value calculated from basename, which plays a role in isolation management basename is the valid name of the object in the format of `${sliceId}${index}${size}`, where sliceId is the ID of the Slice to which the object belongs, and each Slice in JuiceFS has a globally unique ID index is the index of the object in the Slice it belongs to, by default a Slice can be split into at most 16 Blocks, so its value range is [0, 16) size is the size of the Block, and by default it takes the value of (0, 4 MiB] Currently there are two hash algorithms, and both use the sliceId in basename as the parameter. Which algorithm will be chosen to use follows the of the file system. ```go func hash(sliceId int) string { if HashPrefix { return fmt.Sprintf(\"%02X/%d\", sliceId%256, sliceId/1000/1000) } return fmt.Sprintf(\"%d/%d\", sliceId/1000/1000, sliceId/1000) } ``` Suppose a file system named `jfstest` is written with a continuous 10 MiB of data and internally given a SliceID of 1 with HashPrefix disabled, then the following three objects will be generated in the object" }, { "data": "``` jfstest/chunks/0/0/104194304 jfstest/chunks/0/0/114194304 jfstest/chunks/0/0/122097152 ``` Similarly, now taking the 64 MiB chunk in the previous section as an example, its actual data distribution is as follows ``` 0 ~ 10M: Zero 10 ~ 16M: 1004194304, 1014194304(0 ~ 2M) 16 ~ 26M: 1204194304, 1214194304, 1222097152 26 ~ 36M: 1114194304(2 ~ 4M), 1124194304, 1134194304 36 ~ 40M: 1064194304(2 ~ 4M), 1072097152 40 ~ 64M: Zero ``` According to this, the client can quickly find the data needed for the application. For example, reading 8 MiB data at offset 10 MiB location will involve 3 objects, as follows Read the entire object from `1004194304`, corresponding to 0 to 4 MiB of the read data Read 0 to 2 MiB from `1014194304`, corresponding to 4 to 6 MiB of the read data Read 0 to 2 MiB from `1204194304`, corresponding to 6 to 8 MiB of the read data To facilitate obtaining the list of objects of a certain file, JuiceFS provides the `info` command, e.g. `juicefs info /mnt/jfs/test.tmp`. ```bash objects: +++-++-+ | chunkIndex | objectName | size | offset | length | +++-++-+ | 0 | | 10485760 | 0 | 10485760 | | 0 | jfstest/chunks/0/0/1004194304 | 4194304 | 0 | 4194304 | | 0 | jfstest/chunks/0/0/1014194304 | 4194304 | 0 | 2097152 | | 0 | jfstest/chunks/0/0/1204194304 | 4194304 | 0 | 4194304 | | 0 | jfstest/chunks/0/0/1214194304 | 4194304 | 0 | 4194304 | | 0 | jfstest/chunks/0/0/1222097152 | 2097152 | 0 | 2097152 | | 0 | jfstest/chunks/0/0/1114194304 | 4194304 | 2097152 | 2097152 | | 0 | jfstest/chunks/0/0/1124194304 | 4194304 | 0 | 4194304 | | 0 | jfstest/chunks/0/0/1134194304 | 4194304 | 0 | 4194304 | | 0 | jfstest/chunks/0/0/1064194304 | 4194304 | 2097152 | 2097152 | | 0 | jfstest/chunks/0/0/1072097152 | 2097152 | 0 | 2097152 | | ... | ... | ... | ... | ... | +++-++-+ ``` The empty objectName in the table means a file hole and is read as 0. As you can see, the output is consistent with the previous analysis. It is worth mentioning that the 'size' here is size of the original data in the Block, rather than that of the actual object in object storage. The original data is written directly to object storage by default, so the 'size' is equal to object size. 
However, when data compression or data encryption is enabled, the size of the actual object will change and may no longer be the same as the 'size'. You can configure the compression algorithm (supporting `lz4` and `zstd`) with the `--compress <value>` parameter when formatting a file system, so that all data blocks of this file system will be compressed before uploading to object storage. The object name remains the same as default, and the content is the result of the compression algorithm, without any other meta information. Therefore, the compression algorithm in the is not allowed to be modified, otherwise it will cause the failure of reading existing data. The RSA private key can be configured to enable when formatting a file system with the `--encrypt-rsa-key <value>` parameter, which allows all data blocks of this file system to be encrypted before uploading to the object storage. The object name is still the same as default, while its content becomes a header plus the result of the data encryption algorithm. The header contains a random seed and the symmetric key used for decryption, and the symmetric key itself is encrypted with the RSA private key. Therefore, it is not allowed to modify the RSA private key in the , otherwise reading existing data will fail." } ]
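The slice-overlap rules described in the chunk example above (later slices win, uncovered ranges read as zeros) can be reproduced with a small sketch. This is a simplified illustration, not the actual JuiceFS implementation:

```go
package main

import "fmt"

// sliceWrite mirrors the pos/id/len fields of the chunk example above,
// expressed in whole MiB for brevity (id 0 stands for a hole read as zeros).
type sliceWrite struct {
	posMiB, id, lenMiB int
}

// flatten replays the writes in order, later slices overwriting earlier ones,
// and returns which slice id owns each MiB of a 64 MiB chunk.
func flatten(writes []sliceWrite) [64]int {
	var owner [64]int
	for _, w := range writes {
		for i := 0; i < w.lenMiB && w.posMiB+i < len(owner); i++ {
			owner[w.posMiB+i] = w.id
		}
	}
	return owner
}

func main() {
	// The three slices from the example: 10, then 11, then 12.
	owner := flatten([]sliceWrite{
		{posMiB: 10, id: 10, lenMiB: 30},
		{posMiB: 20, id: 11, lenMiB: 16},
		{posMiB: 16, id: 12, lenMiB: 10},
	})
	// Sample a few offsets; matches the reconstructed list in the text.
	for _, off := range []int{5, 12, 20, 30, 38, 50} {
		fmt.Printf("offset %2d MiB -> slice %d\n", off, owner[off])
	}
}
```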
{ "category": "Runtime", "file_name": "internals.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Add a multicast group. Add a multicast group to the node. ``` cilium-dbg bpf multicast group add <group> [flags] ``` ``` The following command adds group 229.0.0.1 to BPF multicast map of the node: cilium-dbg bpf multicast group add 229.0.0.1 ``` ``` -h, --help help for add ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the multicast groups." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast_group_add.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "name: Feature request about: Suggest an idea for this project title: '' labels: kind/feature assignees: '' Describe the problem/challenge you have <!-- A description of the current limitation/problem/challenge that you are experiencing.--> Describe the solution you'd like <!-- A description of what you want to happen. --> Anything else you would like to add? <!-- Anything else which is relevant and may help us understand the issue. -->" } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Setting cluster zones can prevent the entire cluster from becoming unavailable due to a single partition failure. When each node starts, the cell set will automatically join the partition. ``` bash $ cfs-cli zone list ``` If you accidentally set the volume partition incorrectly and want to change the partition: ``` bash $ cfs-cli volume update {volume name} --zone-name={zone name} ``` Most parameters in the cluster have default values, and the default zone name is \"default\". It should be noted that there must be enough datanodes and metanodes in a partition at the same time, otherwise, when creating a volume in the partition, either the data partition initialization will fail or the metadata partition initialization will fail. Each zone will have several nodesets, and the default capacity of each nodeset is 18 nodes. Because CubeFS has implemented multi-raft, each node has started a raft server process, and each raft server manages m raft instances on the node. If the other replicas of these raft instances are distributed on n nodes and the raft instances send raft heartbeats between them, the heartbeats will be transmitted between n nodes. As the cluster scales up, n will become relatively large. Through the nodeset restriction, the heartbeats are relatively independent within the nodeset, avoiding the heartbeat storm problem at the cluster level. We use the multi-raft and nodeset mechanisms together to avoid the problem of raft heartbeat storms. The dp/mp is evenly distributed in the ns. Each time a dp/mp is created, it will start polling from the ns where the previous dp/mp was located to find an available ns for creation. For dp/mp with 3 replicas, dp/mp will only select the ns when there are at least 3 available nodes in the ns. `count(ns)>= 18*n + 3`" } ]
{ "category": "Runtime", "file_name": "zone.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- toc --> - - - <!-- /toc --> Kanister uses a Kubernetes custom controller that makes use of Kopia as a primary backup and restore tool. The detailed design of the customer controller can be found The custom controller called as repository server controller currently uses Kopia CLI to perform the Kopia operations. All the operations are executed inside a pod using the `kubectl exec` function. We can get following benefits if we start using Kopia SDK instead of CLI: Better error handling Dependency management The goal over here is to start the Kopia server by executing a pre-built binary. The binary would execute most of the operations to start the server using kopia SDK and reduce the dependency on Kopia CLI and gain more flexibility over the operations Implement a library that wraps the SDK functions provided by Kopia to connect to underlying storage providers - S3, Azure, GCP, Filestore Build another library on top of the library implemented in #1 that can be used to perform repository operations Modify the repository server controller to run a pod that executes a custom image. The binary would take all the necessary steps to make the Kopia server ready using Kopia SDK and Kopia CLI. ```go package storage import ( \"context\" \"github.com/kopia/kopia/repo/blob\" ) type StorageType string const ( TypeS3 StorageType = \"S3\" TypeAzure StorageType = \"Azure\" TypeFileStore StorageType = \"FileStore\" TypeGCP StorageType = \"GCP\" ) type Storage interface { Connect() (blob.Storage, error) SetOptions(context.Context, map[string]string) WithCreate(bool) } func New(storageType StorageType) Storage { switch storageType { case TypeS3: return &s3Storage{} case TypeFileStore: return &fileSystem{} case TypeAzure: return &azureStorage{} case TypeGCP: return &gcpStorage{} default: return nil } } ``` This pkg would be a wrapper over the `storage pkg` built above and the Kopia repositoy pkg" }, { "data": "provided by Kopia SDK ```go package repository type Repository struct { st storage.Storage password string configFile string storageType storage.StorageType } // Create repository using Kopia SDK func (r Repository) Create(opts repo.NewRepositoryOptions) (err error) { storage, err := r.st.Connect() if err != nil { return err } return repo.Initialize(context.Background(), storage, opts, r.password) } // Connect to the repository using Kopia SDK func (r Repository) Connect(opts repo.ConnectOptions) (err error) { storage, err := r.st.Connect() if err != nil { return err } return repo.Connect(context.Background(), r.configFile, storage, r.password, opts) } // Connect to the repository by providing a config file func (r *Repository) ConnectUsingFile() error { repoConfig := repositoryConfigFileName(r.configFile) if _, err := os.Stat(repoConfig); os.IsNotExist(err) { return errors.New(\"failed find kopia configuration file\") } _, err := repo.Open(context.Background(), repoConfig, r.password, &repo.Options{}) return err } func repositoryConfigFileName(configFile string) string { if configFile != \"\" { return configFile } return filepath.Join(os.Getenv(\"HOME\"), \".config\", \"kopia\", \"repository.config\") } ``` Above diagram explains the current workflow repository server controller uses to start the Kopia repository server. All the commands are executed from the controller pod inside the respoitory server pod using `kube.exec`. 
The repository server pod created by the controller uses the `kanister-tools` image. As shown in the figure, we will build a custom image that implements the following workflow: Start the Kopia repository server in `--async-repo-connect` mode, which means the server is started asynchronously without first connecting to the repository. The existing approach starts the server only after the connection to the Kopia repository has succeeded. The Kopia SDK currently does not have an exported function to start the Kopia server, so we would still use the Kopia CLI to start the server. Check the Kopia server status using the Kopia SDK wrappers explained in the section above. Connect to the Kopia repository using the Kopia SDK wrappers. Add or update server users using the Kopia SDK wrappers. Refresh the server using the Kopia SDK wrappers." } ]
{ "category": "Runtime", "file_name": "replace-CLI-with-SDK.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 2 paginationnext: getting-started/fordistributed description: Learn how to use JuiceFS in standalone mode, combining object storage and databases for efficient file system management. The JuiceFS file system is driven by both and . In addition to object storage, it also supports using local disks, WebDAV, and HDFS as underlying storage options. Therefore, you can create a standalone file system using local disks and SQLite database to get a quick overview of how JuiceFS works. For Linux distributions and macOS users, the JuiceFS client can be quickly installed using a one-click installation script: ```shell curl -sSL https://d.juicefs.com/install | sh - ``` For other operating systems and installation methods, please refer to . Once installed successfully, executing the `juicefs` command in the terminal will return a help message regardless of the operating system. The JuiceFS client provides a command to create a file system as follows: ```shell juicefs format [command options] META-URL NAME ``` To format a file system, three types of information are required: [command options]: Sets up the storage medium for the file system; local disk will be used by default, and the default path is `\"$HOME/.juicefs/local\"`, `\"/var/jfs\"` or `\"C:/jfs/local\"`. META-URL: Sets up the metadata engine, typically a URL or the file path of a database. NAME: The name of the file system. :::tip JuiceFS supports a wide range of storage media and metadata storage engines. See and . ::: As an example on a Linux system, the following command creates a file system named `myjfs`: ```shell juicefs format sqlite3://myjfs.db myjfs ``` Upon completion, an output similar to the following will be returned: ```shell {1,4} 2021/12/14 18:26:37.666618 juicefs[40362] <INFO>: Meta address: sqlite3://myjfs.db [xorm] [info] 2021/12/14 18:26:37.667504 PING DATABASE sqlite3 2021/12/14 18:26:37.674147 juicefs[40362] <WARNING>: The latency to database is too high: 7.257333ms 2021/12/14 18:26:37.675713 juicefs[40362] <INFO>: Data use file:///Users/herald/.juicefs/local/myjfs/ 2021/12/14 18:26:37.689683 juicefs[40362] <INFO>: Volume is formatted as {Name:myjfs UUID:d5bdf7ea-472c-4640-98a6-6f56aea13982 Storage:file Bucket:/Users/herald/.juicefs/local/ AccessKey: SecretKey: BlockSize:4096 Compression:none Shards:0 Partitions:0 Capacity:0 Inodes:0 EncryptKey:} ``` As you can see from the output, the file system uses SQLite as the metadata storage engine. The database file is located in the current directory with the file name `myjfs.db`, which creates a table to store all the metadata of the file system `myjfs`. Since no storage-related options are specified in this example, the local disk is used as the storage medium by default. According to the output, the file system storage path is" }, { "data": "The JuiceFS client provides a command to mount file systems in the following format: ```shell juicefs mount [command options] META-URL MOUNTPOINT ``` Similar to the command of creating a file system, the following information is also required to mount a file system: `[command options]`: Specifies file system-related options. For example, `-d` enables background mounts. `META-URL`: Sets up the metadata storage, typically a URL or file path of a database. `MOUNTPOINT`: Specifies a mount point of the file system. :::tip The mount point (`MOUNTPOINT`) on Windows systems should use a disk letter that is not yet occupied, such as `Z:` or `Y:`. 
::: :::note As SQLite is a single-file database, please pay attention to the path of the database file when mounting it. JuiceFS supports both relative and absolute paths. ::: The following command mounts the `myjfs` file system to the `~/jfs` folder: ```shell juicefs mount sqlite3://myjfs.db ~/jfs ``` The client mounts the file system in the foreground by default. As you can see in the above image, the program keeps running in the current terminal. To unmount the file system, press <kbd>Ctrl</kbd> + <kbd>C</kbd> or close the terminal window. To keep the file system mounted in the background, specify the `-d` or `--background` option when mounting. This allows you to mount the file system in the daemon: ```shell juicefs mount sqlite3://myjfs.db ~/jfs -d ``` Next, any files stored in the mount point `~/jfs` will be split into specific blocks according to and stored in the `$HOME/.juicefs/local/myjfs` directory; the corresponding metadata will be stored in the `myjfs.db` database. In the end, the mount point `~/jfs` can be unmounted by executing the following command: ```shell juicefs umount ~/jfs ``` The above exercise only helps you to have a quick experience with JuiceFS locally and gives you a basic overview of how JuiceFS works. For a more practical example, consider using SQLite to store metadata as demonstrated, but replace the local storage with \"object storage.\" Object Storage is a web storage service based on the HTTP protocol that offers simple APIs for access. It has a flat structure and is easy to scale and cost-effective, particularly suitable for storing large amounts of unstructured data. Almost all mainstream cloud computing platforms provide object storage services, such as Amazon S3, Alibaba Cloud OSS, and Backblaze B2. JuiceFS supports almost all object storage services, see . In general, only two steps are required to create an object storage: Create a Bucket and get the Endpoint" }, { "data": "Create the Access Key ID and Access Key Secret, which serve as the access keys for the Object Storage API. Taking AWS S3 as an example, the created resources would look like the following: Bucket Endpoint: `https://myjfs.s3.us-west-1.amazonaws.com` Access Key ID: `ABCDEFGHIJKLMNopqXYZ` Access Key Secret: `ZYXwvutsrqpoNMLkJiHgfeDCBA` :::note The process of creating an object storage may vary slightly from platform to platform, so it is recommended to check the help manual of the corresponding cloud platform. In addition, some platforms may provide different Endpoint addresses for internal and external networks. Please choose the external network access for your application. This document illustrates accessing object storage from a local environment. ::: To create a JuiceFS file system using SQLite and Amazon S3 object storage: :::note If the `myjfs.db` file already exists, delete it first and then execute the following command. ::: ```shell juicefs format --storage s3 \\ --bucket https://myjfs.s3.us-west-1.amazonaws.com \\ --access-key ABCDEFGHIJKLMNopqXYZ \\ --secret-key ZYXwvutsrqpoNMLkJiHgfeDCBA \\ sqlite3://myjfs.db myjfs ``` The command above creates a file system using the same database name and file system name with the object storage options provided. `--storage`: Specifies the storage type, such ase `oss` and `s3`. `--bucket`: Specifies the Endpoint address of the object storage. `--access-key`: Specifies the Object Storage Access Key ID. `--secret-key`: Specifies the Object Storage Access Key Secret. 
Once created, you can mount the file system: ```shell juicefs mount sqlite3://myjfs.db ~/jfs ``` The mount command is exactly the same as using the local storage because JuiceFS has already written the metadata of the object storage to the `myjfs.db` database, so there is no need to provide it again when mounting. Compared with using local disks, the combination of SQLite and object storage is more practical. From an application perspective, this approach is equivalent to plugging an object storage with almost unlimited capacity into your local computer, allowing you to use cloud storage as a local disk. Further, all the data of the file system is stored in the cloud-based object storage, so the `myjfs.db` database can be copied to other computers where JuiceFS clients are installed for mounting, reading, and writing. That is, any computer that can read the metadata database can mount and read/write the file system. Obviously, it is difficult for a single file database like SQLite to be accessed by multiple computers at the same time. If SQLite is replaced by databases like Redis, PostgreSQL, and MySQL, which can be accessed by multiple computers at the same time through the network, it is possible to achieve distributed reads and writes on the JuiceFS file system." } ]
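For example, to pick up the closing point above, the same file system can be mounted on a second computer simply by copying the SQLite metadata file there; the host name and paths below are placeholders for this sketch:

```shell
# On the original machine: unmount first so the SQLite file is not being written to
juicefs umount ~/jfs

# Copy the metadata database to another computer that has the JuiceFS client installed
scp myjfs.db user@other-host:~/

# On the other computer: mount the same file system (the object storage settings
# were written into the metadata during `juicefs format`, so only the metadata
# URL is needed here)
juicefs mount sqlite3://myjfs.db ~/jfs -d
```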
{ "category": "Runtime", "file_name": "standalone.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Security Rook provides security for CephNFS server clusters through two high-level features: and . !!! attention All features in this document are experimental and may not support upgrades to future versions. !!! attention Some configurations of these features may break the ability to . The NFS CSI driver may not be able to mount exports for pods when ID mapping is configured. User ID mapping allows the NFS server to map connected NFS client IDs to a different user domain, allowing NFS clients to be associated with a particular user in your organization. For example, users stored in LDAP can be associated with NFS users and vice versa. is the System Security Services Daemon. It can be used to provide user ID mapping from a number of sources including LDAP, Active Directory, and FreeIPA. Currently, only LDAP has been tested. !!! attention The Ceph container image must have the `sssd-client` package installed to support SSSD. This package is included in `quay.io/ceph/ceph` in v17.2.4 and newer. For older Ceph versions you may build your own Ceph image which adds `RUN yum install sssd-client && yum clean all`. SSSD requires a configuration file in order to configure its connection to the user ID mapping system (e.g., LDAP). The file follows the `sssd.conf` format documented in its . Methods of providing the configuration file are documented in the . Recommendations: The SSSD sidecar only requires the namespace switch (a.k.a. \"nsswitch\" or \"nss\"). We recommend enabling only the `nss` service to lower CPU usage. NFS-Ganesha does not require user enumeration. We recommend leaving this option unset or setting `enumerate = false` to speed up lookups and reduce RAM usage. NFS exports created via documented methods do not require listing all members of groups. We recommend setting `ignoregroupmembers = true` to speed up LDAP lookups. Only customized exports that set `manage_gids` need to consider this option. A sample `sssd.conf` file is shown below. ```ini [sssd] services = nss domains = default configfileversion = 2 [nss] filter_users = root [domain/default] id_provider = ldap ldap_uri = ldap://server-address.example.net ldapsearchbase = dc=example,dc=net ldapdefaultbind_dn = cn=admin,dc=example,dc=net ldapdefaultauthtok_type = password ldapdefaultauthtok = my-password ldapusersearch_base = ou=users,dc=example,dc=net ldapgroupsearch_base = ou=groups,dc=example,dc=net ldapaccessfilter = memberOf=cn=rook,ou=groups,dc=example,dc=net enumerate = false ignoregroupmembers = true ``` The SSSD configuration file may be omitted from the CephNFS spec if desired. In this case, Rook will not set `/etc/sssd/sssd.conf` in any way. This allows you to manage the `sssd.conf` file yourself however you wish. For example, you may build it into your custom Ceph container image, or use the to securely add the file via annotations on the CephNFS spec (passed to the NFS server pods). User authentication allows NFS clients and the Rook CephNFS servers to authenticate with each other to ensure security. Kerberos is the authentication mechanism natively supported by NFS-Ganesha. With NFSv4, individual users are authenticated and not merely client" }, { "data": "Kerberos authentication requires configuration files in order for the NFS-Ganesha server to authenticate to the Kerberos server (KDC). The requirements are two-parted: one or more kerberos configuration files that configures the connection to the Kerberos server. This file follows the `krb5.conf` format documented in its . 
a keytab file that provides credentials for the that NFS-Ganesha will use to authenticate with the Kerberos server. a kerberos domain name which will be used to map kerberos credentials to uid/gid that NFS-Ganesha will use to authenticate with the Methods of providing the configuration files are documented in the . Recommendations: Rook configures Kerberos to log to stderr. We suggest removing logging sections from config files to avoid consuming unnecessary disk space from logging to files. A sample Kerberos config file is shown below. ```ini [libdefaults] default_realm = EXAMPLE.NET [realms] EXAMPLE.NET = { kdc = kdc.example.net:88 admin_server = kdc.example.net:749 } [domain_realm] .example.net = EXAMPLE.NET example.net = EXAMPLE.NET ``` The Kerberos config files (`configFiles`) may be omitted from the Ceph NFS spec if desired. In this case, Rook will not add any config files to `/etc/krb5.conf.rook/`, but it will still configure Kerberos to load any config files it finds there. This allows you to manage these files yourself however you wish. Similarly, the keytab file (`keytabFile`) may be omitted from the CephNFS spec if desired. In this case, Rook will not set `/etc/krb5.keytab` in any way. This allows you to manage the `krb5.keytab` file yourself however you wish. As an example for either of the above cases, you may build files into your custom Ceph container image or use the to securely add files via annotations on the CephNFS spec (passed to the NFS server pods). The Kerberos service principal used by Rook's CephNFS servers to authenticate with the Kerberos server is built up from 3 components: the configured from `spec.security.kerberos.principalName` that acts as the service name the hostname of the server on which NFS-Ganesha is running which is in turn built up from the namespace and name of the CephNFS resource, joined by a hyphen. e.g., `rooknamespace-nfsname` the realm as configured by the kerberos config file(s) from `spec.security.kerberos.configFiles` The full service principal name is constructed as `<principalName>/<namespace>-<name>@<realm>`. For ease of scaling up or down CephNFS clusters, this principal is used for all servers in the CephNFS cluster. Users must add this service principal to their Kerberos server configuration. !!! example For a CephNFS named \"fileshare\" in the \"business-unit\" Kubernetes namespace that has a `principalName` of \"sales-apac\" and where the Kerberos realm is \"EXAMPLE.NET\", the full principal name will be `sales-apac/[email protected]`. !!! advanced `spec.security.kerberos.principalName` corresponds directly to NFS-Ganesha's NFS_KRB5:PrincipalName config. See the for more details. The kerberos domain name is used to setup the domain name in /etc/idmapd.conf. This domain name is used by idmap to map the kerberos credential to the user uid/gid. Without this configured, NFS-Ganesha will be unable to map the Kerberos principal to an uid/gid and will instead use the configured anonuid/anongid (default: -2) when accessing the local filesystem." } ]
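Using the documented example principal `sales-apac/[email protected]`, the sketch below shows one possible way to create that principal and export its keytab on an MIT Kerberos KDC, and then stage the keytab in the cluster as a Secret. The `kadmin` workflow, admin principal, and the Secret name/namespace are assumptions for illustration only; how the keytab actually reaches `/etc/krb5.keytab` depends on how you configure the CephNFS spec, as described above.

```bash
# On the KDC: create the service principal shared by all servers in the CephNFS cluster
kadmin -p admin/[email protected] -q "addprinc -randkey sales-apac/[email protected]"

# Export its keys to a keytab file
kadmin -p admin/[email protected] -q "ktadd -k ./krb5.keytab sales-apac/[email protected]"

# One way to make the keytab available inside Kubernetes (hypothetical Secret name/namespace)
kubectl -n rook-ceph create secret generic fileshare-keytab --from-file=krb5.keytab=./krb5.keytab
```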
{ "category": "Runtime", "file_name": "nfs-security.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "If your distribution packages rkt, then you should generally use their version. However, if you need a newer version, you may choose to manually install the rkt-provided rpm and deb packages. rkt is available in the and can be installed using pacman: ``` sudo pacman -S rkt ``` rkt is available in the for CentOS 7. However, this is due to pending systemd upgrade issues. rkt is an integral part of Container Linux, installed with the operating system. The lists the version of rkt available in each Container Linux release channel. If the version of rkt included in Container Linux is too old, it's fairly trivial to fetch the desired version . rkt is currently packaged in (unstable). ``` sudo apt-get install rkt ``` If you don't run sid, or wish for a newer version, you can . Since Fedora version 24, rkt packages are available in the main repository. We recommend using recent Fedora releases or a manually installed package in order to have an up-to-date rkt binary. ``` sudo dnf install rkt ``` rkt's entry in the tracks packaging work for this distribution. rkt does not work with the SELinux policies currently shipped with Fedora versions 24 and 25. As a workaround, SELinux can be temporarily disabled: ``` sudo setenforce Permissive ``` Or permanently disabled by editing `/etc/selinux/config`: ``` SELINUX=permissive ``` Fedora uses to dynamically define firewall zones. rkt is . The default firewalld rules may interfere with the network connectivity of rkt pods. To work around this, add a firewalld rule to allow pod traffic: ``` sudo firewall-cmd --add-source=172.16.28.0/24 --zone=trusted ``` 172.16.28.0/24 is the subnet of the . The command must be adapted when rkt is configured to use a with a different subnet. rkt is and available via portage. ``` sudo emerge rkt ``` On NixOS enable rkt by adding the following line in `/etc/nixos/configuration.nix`: ``` virtualisation.rkt.enable = true; ``` Using the nix package manager on another OS you can use: ``` nix-env -iA nixpkgs.rkt ``` The source for the rkt.nix expression can be found on rkt is available in the project on openSUSE Build Service. Before installing, the appropriate repository needs to be added (usually Tumbleweed or Leap): ``` sudo zypper ar -f obs://Virtualization:containers/openSUSETumbleweed/ virtualizationcontainers sudo zypper ar -f obs://Virtualization:containers/openSUSELeap42.1/ virtualization_containers ``` Install rkt using zypper: ``` sudo zypper in rkt ``` rkt is not packaged currently in Ubuntu. Instead, install manually using the . rkt is available in the for the Void Linux distribution. The source for these packages is hosted on . As part of the rkt build process, rpm and deb packages are built. If you need to use the latest rkt version, or your distribution does not bundle rkt, these are available. Currently the rkt upstream project does not maintain its own repository, so users of these packages must upgrade manually. 
``` gpg --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E wget https://github.com/rkt/rkt/releases/download/v1.30.0/rkt-1.30.0-1.x86_64.rpm wget https://github.com/rkt/rkt/releases/download/v1.30.0/rkt-1.30.0-1.x86_64.rpm.asc gpg --verify rkt-1.30.0-1.x86_64.rpm.asc sudo rpm -Uvh rkt-1.30.0-1.x86_64.rpm ``` ``` gpg --recv-key 18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E wget https://github.com/rkt/rkt/releases/download/v1.30.0/rkt_1.30.0-1_amd64.deb wget https://github.com/rkt/rkt/releases/download/v1.30.0/rkt_1.30.0-1_amd64.deb.asc gpg --verify rkt_1.30.0-1_amd64.deb.asc sudo dpkg -i rkt_1.30.0-1_amd64.deb ```" } ]
{ "category": "Runtime", "file_name": "distributions.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete a multicast group. Delete a multicast group from the node. ``` cilium-dbg bpf multicast group delete <group> [flags] ``` ``` The following command deletes group 229.0.0.1 from BPF multicast map of the node: cilium-dbg bpf multicast group delete 229.0.0.1 ``` ``` -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the multicast groups." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast_group_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "The tests must be run on an actual Kubernetes cluster. At the moment, we require the cluster to be created using and the provided , which you can do by following the instructions below. We use Vagrant to provision two Virtual Machines (one Kubernetes control-plane Node and one worker Node). The required software is installed on each machine with . By default the Vagrantfile uses but you should be able to edit the file to use your favorite Vagrant provider. We require the following to be installed on your host machine: `vagrant` (`>= 2.0.0`) `ansible` (`>= 2.4.0`) `virtualbox` (See supported versions ). You can install all dependencies with `sudo apt install vagrant ansible virtualbox`. You can install all the dependencies with : `brew install --cask virtualbox` `brew install --cask vagrant` `brew install ansible` If an action is required on your part, `brew` will let you know in its log messages. Use the following Bash scripts to manage the Kubernetes Nodes with Vagrant: `./infra/vagrant/provision.sh`: create the required VMs and provision them `./infra/vagrant/push_antrea.sh`: load Antrea Docker image to each Node, along with the Antrea deployment YAML `./infra/vagrant/suspend.sh`: suspend all Node VMs `./infra/vagrant/resume.sh`: resume all Node VMs `./infra/vagrant/destroy.sh`: destoy all Node VMs, you will need to run `provision.sh` again to create a new cluster Note that `./infra/vagrant/provision.sh` can take a while to complete but it only needs to be run once. To test Antrea IPv6 support, an IPv6-only cluster can be created, by provisioning a private IPv6 network to connect Kubernetes Nodes, instead of a private IPv4 network. You simply need to invoke `./infra/vagrant/provision.sh` with `--ip-family v6`. This option can be used even if the host machine does not support IPv6 itself. Note however that the Nodes do not have public IPv6 connectivity; they can still connect to the Internet using IPv4, which means that Docker images can be pulled without issue. Similarly, Pods (which only support IPv6) cannot connect to the Internet. To avoid issues when running Kubernetes conformance tests, we configure a proxy on the control-plane Node for all DNS traffic. While CoreDNS will reply to cluster local DNS queries directly, all other queries will be forwarded to the proxy over IPv6, and the proxy will then forward them to the default resolver for the Node (this time over IPv4). This means that all DNS queries from the Pods should succeed, even though the returned public IP addresses (IPv4 and / or IPv6) are not accessible. You may need more recent versions of the dependencies (virtualbox, vagrant, ansible) than the ones listed above when creating an IPv6 cluster. The following versions were tested successfully: `vagrant 2.2.14` `ansible 2.9.18` `virtualbox 5.2` You can SSH into any of the Node VMs using `vagrant ssh [Node name]` (must be run from the `infra/vagrant` directory. The control-plane Node is named `k8s-node-control-plane` and the worker Nodes are named `k8s-node-worker-<N>` (for a single worker Node, the name is `k8s-node-worker-1`. `kubectl` is installed on all the" }, { "data": "The file for the cluster can also be found locally on your machine at `./infra/vagrant/playbook/kube/config`. If you install locally and set the `KUBECONFIG` environment variable to the absolute path of this kubeconfig file, you can run commands against your test cluster created with Vagrant. 
For example: ```bash cd <directory containing this README file> export KUBECONFIG=`pwd`/infra/vagrant/playbook/kube/config kubectl cluster-info ``` With recent versions of VirtualBox (> 6.1.26), you may see the following error when running `./infra/vagrant/provision.sh`: ```text The IP address configured for the host-only network is not within the allowed ranges. Please update the address used to be within the allowed ranges and run the command again. Address: 192.168.77.100 Ranges: 192.168.56.0/21 Valid ranges can be modified in the /etc/vbox/networks.conf file. For more information including valid format see: https://www.virtualbox.org/manual/ch06.html#network_hostonly ``` To workaround this issue, you can either: downgrade your VirtualBox version to 6.1.26 create a `/etc/vbox/networks.conf` file with the following contents: ```text 192.168.77.0/24 ``` Make sure that your cluster was provisioned and that the Antrea build artifacts were pushed to all the Nodes. You can then run the tests from the top-level directory with `go test -v -timeout=30m antrea.io/antrea/test/e2e` (the `-v` enables verbose output). If you are running the test for the first time and are using the scripts we provide under `infra/vagrant` to provision your Kubernetes cluster, you will therefore need the following steps: `./infra/vagrant/provision.sh` `make` `./infra/vagrant/push_antrea.sh` `go test -v -timeout=30m antrea.io/antrea/test/e2e` If you need to test an updated version of Antrea, just run `./infra/vagrant/push_antrea.sh` and then run the tests again. If you already have a K8s cluster, these steps should be followed to run the e2e tests. First, you should provide the ssh information for each Node in the cluster. Here is an example: ```text Host <Control-Plane-Node> HostName <Control-Plane-IP> Port 22 user ubuntu IdentityFile /home/ubuntu/.ssh/id_rsa Host <Worker-Node> HostName <Worker-Node-IP> Port 22 user ubuntu IdentityFile /home/ubuntu/.ssh/id_rsa ``` Make sure the `Host` entry for each Node matches the K8s Node name. The `Port` is the port used by the ssh service on the Node. Besides, you should add the public key to `authorized_keys` of each Node and set `PubkeyAuthentication` of ssh service to `yes`. Second, the kubeconfig of the cluster should be copied to the right location, e.g. `$HOME/.kube/config` or the path specified by `-remote.kubeconfig`. Third, the `antrea.yml` (and `antrea-windows.yml` if the cluster has Windows Nodes) should be put under the `$HOME` directory of the control-plane Node. Now you can start e2e tests using the command below: ```bash go test -v antrea.io/antrea/test/e2e -provider=remote ``` You can specify ssh and kubeconfig locations with `-remote.sshconfig` and `-remote.kubeconfig`. The default location of `-remote.sshconfig` is `$HOME/.ssh/config` and the default location of `-remote.kubeconfig` is `$HOME/.kube/config`. The simplest way is to run the following command: ```bash ./ci/kind/test-e2e-kind.sh [options] ``` It will set up a two worker Node Kind cluster to run the e2e tests, and destroy the cluster after the tests stop (succeed or fail). `kubectl` needs to be present in your `PATH` to set up the test cluster. For more information on the usage of this script and the options, run: ```bash" }, { "data": "--help ``` You can also run the e2e tests with an existing Kind cluster. Refer to this for instructions on how to create a Kind cluster and use Antrea as the CNI. You need at least one control-plane Node and one worker Node. 
Before running the Go e2e tests, you will also need to copy the Antrea manifest to the control-plane Docker container: ```bash ./hack/generate-manifest.sh | docker exec -i kind-control-plane dd of=/root/antrea.yml go test -timeout=75m -v antrea.io/antrea/test/e2e -provider=kind ``` The default timeout of `go test` is . If you encounter any timeout issue during e2e, you can try to increase timeout first. Some cases take more than 10 minutes. eg: `go test -v -timeout=20m antrea.io/antrea/test/e2e -run=TestAntreaPolicy -provider=kind`. `generate-manifest.sh` supports generating the Antrea manifest with different Antrea configurations. Run `./hack/generate-manifest.sh --help` to see the supported config options. As part of code development, if you want to run the tests with local changes, then make the code changes on the local repo and . You can load the new image into the kind cluster using the command below: ```bash kind load docker-image antrea/antrea-controller-ubuntu:latest antrea/antrea-agent-ubuntu:latest --name <kindclustername> ``` By default, if a test case fails, we write some useful debug information to a temporary directory on disk. This information includes the detailed description (obtained with `kubectl describe`) and the logs (obtained with `kubectl logs`) of each Antrea Pod at the time the test case exited. When running the tests in verbose mode (i.e. with `-v`), the test logs will tell you the location of that temporary directory. You may also choose your own directory using `--logs-export-dir`. For example: ```bash mkdir antrea-test-logs go test -count=1 -v -run=TestDeletePod antrea.io/antrea/test/e2e --logs-export-dir `pwd`/antrea-test-logs ``` If the user provides a log directory which was used for a previous run, existing contents (subdirectories for each test case) will be overridden. By default the description and logs for Antrea Pods are only written to disk if a test fails. You can choose to dump this information unconditionally with `--logs-export-on-success`. The Prometheus integration tests can be run as part of the e2e tests when enabled explicitly. To load Antrea into the cluster with Prometheus enabled, use: `./infra/vagrant/push_antrea.sh --prometheus` To run the Prometheus tests within the e2e suite, use: `go test -v antrea.io/antrea/test/e2e --prometheus` To run all benchmarks, without the standard e2e tests: ```bash go test -v -timeout=30m -run=XXX -bench=. \\ antrea.io/antrea/test/e2e \\ -perf.http.concurrency=16 ``` The above command uses `-run=XXX` to deselect all `Test*` tests and uses `-bench=.` to select all `Benchmark*` tests. Since performance tests take a while to complete, you need to extend the timeout duration `-timeout` from the default `10m` to a longer one like `30m`. If you would like to run the performance tests in a different scale, you could run: ```bash go test -v -timeout=30m -run=XXX -bench=BenchmarkCustomize \\ antrea.io/antrea/test/e2e \\ -perf.http.requests=5000 \\ -perf.http.policy_rules=1000 \\ -perf.http.concurrency=16 ``` All flags of performance tests includes: `performance.http.concurrency (int)`: Number of allowed concurrent http requests (default 1) `performance.http.requests (int)`: Total Number of http requests `performance.http.policy_rules (int)`: Number of CIDRs in the network policy `performance.realize.timeout (duration)`: Timeout of the realization of network policies (default 5m0s)" } ]
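Putting the remote-provider options documented earlier in this README together, a single test can be run against an existing cluster roughly as follows; the test name and file locations are just examples, and the flags are the ones documented above (`-provider`, `-remote.sshconfig`, `-remote.kubeconfig`, `-run`):

```bash
go test -v -timeout=30m antrea.io/antrea/test/e2e \
  -provider=remote \
  -remote.sshconfig=$HOME/.ssh/config \
  -remote.kubeconfig=$HOME/.kube/config \
  -run=TestDeletePod
```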
{ "category": "Runtime", "file_name": "README.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" } ]
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "`--writable-tmpfs` can be used with `singularity build` to run the `%test` section of the build with a ephemeral tmpfs overlay, permitting tests that write to the container filesystem. `--compat` flag for actions is a new short-hand to enable a number of options that increase OCI/Docker compatibility. Infers `--containall, --no-init, --no-umask, --writable-tmpfs`. Does not use user, uts, or network namespaces as these may not be supported on many installations. `--no-https` now applies to connections made to library services specified in `--library://<hostname>/...` URIs. The experimental `--nvccli` flag will use `nvidia-container-cli` to setup the container for Nvidia GPU operation. Singularity will not bind GPU libraries itself. Environment variables that are used with Nvidia's `docker-nvidia` runtime to configure GPU visibility / driver capabilities & requirements are parsed by the `--nvccli` flag from the environment of the calling user. By default, the `compute` and `utility` GPU capabilities are configured. The `use nvidia-container-cli` option in `singularity.conf` can be set to `yes` to always use `nvidia-container-cli` when supported. `--nvccli` is not supported in the setuid workflow, and it requires being used in combination with `--writable` in user namespace mode. Please see documentation for more details. A new `--mount` flag and `SINGULARITY_MOUNT` environment variable can be used to specify bind mounts in `type=bind,source=<src>,destination=<dst>[,options...]` format. This improves CLI compatibility with other runtimes, and allows binding paths containing `:` and `,` characters (using CSV style escaping). Perform concurrent multi-part downloads for `library://` URIs. Uses 3 concurrent downloads by default, and is configurable in `singularity.conf` or via environment variables. Building Singularity from source requires go >=1.16. We now aim to support the two most recent stable versions of Go. This corresponds to the Go and , ensuring critical bug fixes and security patches are available for all supported language versions. However, rpm and debian packaging apply patches to support older native go installations. LABELs from Docker/OCI images are now inherited. This fixes a longstanding regression from Singularity 2.x. Note that you will now need to use `--force` in a build to override a label that already exists in the source Docker/OCI container. Instances are no longer created with an IPC namespace by default. An IPC namespace can be specified with the `-i|--ipc` flag. `--bind`, `--nv` and `--rocm` options for `build` command can't be set through environment variables `SINGULARITYBIND`, `SINGULARITYBINDPATH`, `SINGULARITY_NV`, `SINGULARITY_ROCM` anymore due to side effects reported by users in this , they must be explicitely requested via command line. `--nohttps` flag has been deprecated in favour of `--no-https`. The old flag is still accepted, but will display a deprecation warning. Removed `--nonet` flag, which was intended to disable networking for in-VM execution, but has no effect. Paths for `cryptsetup`, `go`, `ldconfig`, `mksquashfs`, `nvidia-container-cli`, `unsquashfs` are now found at build time by `mconfig` and written into `singularity.conf`. The path to these executables can be overridden by changing the value in `singularity.conf`. If the path for any of them other than `cryptsetup` or `ldconfig` is not set in `singularity.conf` then the executable will be found by searching `$PATH`. 
When calling `ldconfig` to find GPU libraries, singularity will not fall back to `/sbin/ldconfig` if the `ldconfig` on `$PATH` errors. If installing in a Guix/Nix on environment on top of a standard host distribution you must set `ldconfig path = /sbin/ldconfig` to use the host distribution `ldconfig` to find GPU" }, { "data": "Example log-plugin rewritten as a CLI callback that can log all commands executed, instead of only container execution, and has access to command arguments. The bundled reference CNI plugins are updated to v1.0.1. The `flannel` plugin is no longer included, as it is maintained as a separate plugin at: <https://github.com/flannel-io/cni-plugin>. If you use the flannel CNI plugin you should install it from this repository. `--nv` will not call `nvidia-container-cli` to find host libraries, unless the new experimental GPU setup flow that employs `nvidia-container-cli` for all GPU related operations is enabled (see above). If a container is run with `--nvccli` and `--contain`, only GPU devices specified via the `NVIDIAVISIBLEDEVICES` environment variable will be exposed within the container. Use `NVIDIAVISIBLEDEVICES=all` to access all GPUs inside a container run with `--nvccli`. Build `--bind` option allows to set multiple bind mount without specifying the `--bind` option for each bindings. The behaviour of the `allow container` directives in `singularity.conf` has been modified, to support more intuitive limitations on the usage of SIF and non-SIF container images. If you use these directives, _you may need to make changes to singularity.conf to preserve behaviour_. A new `allow container sif` directive permits or denies usage of unencrypted SIF images, irrespective of the filesystem(s) inside the SIF. The `allow container encrypted` directive permits or denies usage of SIF images with an encrypted root filesystem. The `allow container squashfs/extfs` directives in `singularity.conf` permit or deny usage of bare SquashFS and EXT image files only. The effect of the `allow container dir` directive is unchanged. Fix the oras contexts to avoid hangs upon failed pushed to Harbor registry. Added seccomp, cryptsetup, devscripts & correct go version test to debian packaging. Additional changes include dependency updates for the SIF module (to v2.0.0), and migration to maintained versions of other modules. There is no change to functionality, on-disk SIF format etc. Fix regression introduced in 3.8.1 that caused bind mounts without a destination to be added twice. Fix regression when files `source`d from `%environment` contain `\\` escaped shell builtins (fixes issue with `source` of conda profile.d script). The `oci` commands will operate on systems that use the v2 unified cgroups hierarchy. `singularity delete` will use the correct library service when the hostname is specified in the `library://` URI. `singularity build` will use the correct library service when the hostname is specified in the `library://` URI / definition file. Call `debootstrap` with correct Debian arch when it is not identical to the value of `runtime.GOARCH`. E.g. `ppc64el -> ppc64le`. When destination is ommitted in `%files` entry in definition file, ensure globbed files are copied to correct resolved path. Return an error if `--tokenfile` used for `remote login` to an OCI registry, as this is not supported. Ensure repeated `remote login` to same URI does not create duplicate entries in `~/.singularity/remote.yaml`. Properly escape single quotes in Docker `CMD` / `ENTRYPOINT` translation. 
Use host uid when choosing unsquashfs flags, to avoid selinux xattr errors with `--fakeroot` on non-EL/Fedora distributions with recent squashfs-tools. Updated the modified golang-x-crypto module with the latest upstream version. Allow escaped `\\$` in a SINGULARITYENV_ var to set a literal `$` in a container env var. Also allow escaped commas and colons in the source bind path. Handle absolute symlinks correctly in multi-stage build `%copy from` blocks. Fix incorrect reference in sandbox restrictive permissions" }, { "data": "Prevent garbage collection from closing the container image file descriptor. Update to Arch Linux pacman.conf URL and remove file size verification. Avoid panic when mountinfo line has a blank field. :warning: Go module was renamed from `github.com/sylabs/singularity` to `github.com/hpcng/singularity` A new `overlay` command allows creation and addition of writable overlays. Administrators can allow named users/groups to use specific CNI network configurations. Managed by directives in `singularity.conf`. The `build` command now honors `--nv`, `--rocm`, and `--bind` flags, permitting builds that require GPU access or files bound in from the host. A library service hostname can be specified as the first component of a `library://` URL. Singularity is now relocatable for unprivileged installations only. Respect http proxy server environment variables in key operations. When pushing SIF images to `oras://` endpoints, work around Harbor & GitLab failure to accept the `SifConfigMediaType`. Avoid a `setfsuid` compilation warning on some gcc versions. Fix a crash when silent/quiet log levels used on pulls from `shub://` and `http(s)://` URIs. Wait for dm device to appear when mounting an encrypted container rootfs. Accommodate ppc64le pageSize in TestCgroups and disable -race. Fix Debian packaging Testing changes are not generally itemized. However, developers and contributors should note that this release has modified the behavior of `make test` for ease of use: `make test` runs limited unit and integration tests that will not require docker hub credentials. `make testall` runs the full unit/integration/e2e test suite that requires docker credentials to be set with `E2EDOCKERUSERNAME` and `E2EDOCKERPASSWORD` environment variables. : Due to incorrect use of a default URL, singularity action commands (run/shell/exec) specifying a container using a library:// URI will always attempt to retrieve the container from the default remote endpoint (cloud.sylabs.io) rather than the configured remote endpoint. An attacker may be able to push a malicious container to the default remote endpoint with a URI that is identical to the URI used by a victim with a non-default remote endpoint, thus executing the malicious container. : A dependency used by Singularity to extract docker/OCI image layers can be tricked into modifying host files by creating a malicious layer that has a symlink with the name \".\" (or \"/\"), when running as root. This vulnerability affects a `singularity build` or `singularity pull` as root, from a docker or OCI source. Fix progress bar display when source image size is unknown. Fix a memory usage / leak issue when building from an existing image file. Fix to allow use of `--library` flag to point push/pull at default cloud library when another remote is in use. Address false positive loop test errors, and an e2e test registry setup issue. Accommodate /sys/fs/selinux mount changes on kernel 5.9+. 
Fix loop devices file descriptor leak when shared loop devices is enabled. Use MaxLoopDevices variable from config file in all appropriate locations. Use -buildmode=default (non pie) on ppc64le to prevent crashes when using plugins. Remove spurious warning in parseTokenSection() e2e test fixes for new kernels, new unsquashfs version. Show correct web URI for detached builds against alternate remotes. The singularity binary is now relocatable when built without setuid support Allow configuration of global custom keyservers, separate from remote endpoints. Add a new global keyring, for public keys only (used for ECL). The `remote login` command now supports authentication to Docker/OCI registries and custom" }, { "data": "New `--exclusive` option for `remote use` allows admin to lock usage to a specific remote. A new `Fingerprints:` header in definition files will check that a SIF source image can be verified, and is signed with keys matching all specified fingerprints. Labels can be set dynamically from a build's `%post` section by setting them in the `SINGULARITY_LABELS` environment variable. New `build-arch` label is automatically set to the architecture of the host during a container build. New `-D/--description` flag for `singularity push` sets description for a library container image. `singularity remote status` shows validity of authentication token if set. `singularity push` reports quota usage and URL on successful push to a library server that supports this. A new `--no-mount` flag for actions allows a user to disable proc/sys/dev/devpts/home/tmp/hostfs/cwd mounts, even if they are enabled in `singularity.conf`. When actions (run/shell/exec...) are used without `--fakeroot` the umask from the calling environment will be propagated into the container, so that files are created with expected permissions. Use the new `--no-umask` flag to return to the previous behaviour of setting a default 0022 umask. Container metadata, environment, scripts are recorded in a descriptor in builds to SIF files, and `inspect` will use this if present. The `--nv` flag for NVIDIA GPU support will not resolve libraries reported by `nvidia-container-cli` via the ld cache. Will instead respect absolute paths to libraries reported by the tool, and bind all versioned symlinks to them. General re-work of the `remote login` flow, adds prompts and token verification before replacing an existing authentication token. The Execution Control List (ECL) now verifies container fingerprints using the new global keyring. Previously all users would need relevant keys in their own keyring. The SIF layer mediatype for ORAS has been changed to `application/vnd.sylabs.sif.layer.v1.sif` reflecting the published value. `SINGULARITY_BIND` has been restored as an environment variable set within a running container. It now reflects all user binds requested by the `-B/--bind` flag, as well as via `SINGULARITY_BIND[PATHS]`. `singularity search` now correctly searches for container images matching the host architecture by default. A new `--arch` flag allows searching for other architectures. A new results format gives more detail about container image results, while users and collections are no longer returned. Support larger definition files, environments etc. by passing engine configuration in the environment vs. via socket buffer. Ensure `docker-daemon:` and other source operations respect `SINGULARITY_TMPDIR` for all temporary files. Support double quoted filenames in the `%files` section of build definitions. 
Correct `cache list` sizes to show KiB with powers of 1024, matching `du` etc. Don't fail on `enable fusemount=no` when no fuse mounts are needed. Pull OCI images to the correct requested location when the cache is disabled. Ensure `Singularity>` prompt is set when container has no environment script, or singularity is called through a wrapper script. Avoid build failures in `yum/dnf` operations against the 'setup' package on `RHEL/CentOS/Fedora` by ensuring staged `/etc/` files do not match distro default content. Failed binds to `/etc/hosts` and `/etc/localtime` in a container run with `--contain` are no longer fatal errors. Don't initialize the cache for actions where it is not required. Increase embedded shell interpreter timeout, to allow slow-running environment scripts to complete. Correct buffer handling for key import to allow import from STDIN. Reset environment to avoid `LDLIBRARYPATH` issues when resolving dependencies for the `unsquashfs` sandbox. Fall back to `/sbin/ldconfig` if `ldconfig` on `PATH` fails while resolving GPU" }, { "data": "Fixes problems on systems using Nix / Guix. Address issues caused by error code changes in `unsquashfs` version 4.4. Ensure `/dev/kfd` is bound into container for ROCm when `--rocm` is used with `--contain`. Tolerate comments on `%files` sections in build definition files. Fix a loop device file descriptor leak. A change in Linux kernel 5.9 causes `--fakeroot` builds to fail with a `/sys/fs/selinux` remount error. This will be addressed in Singularity v3.7.1. Singularity 3.6.4 addresses the following security issue. : Due to insecure handling of path traversal and the lack of path sanitization within unsquashfs (a distribution provided utility used by Singularity), it is possible to overwrite/create files on the host filesystem during the extraction of a crafted squashfs filesystem. Affects unprivileged execution of SIF / SquashFS images, and image builds from SIF / SquashFS images. Update scs-library-client to support `library://` backends using an 3rd party S3 object store that does not strictly conform to v4 signature spec. Singularity 3.6.3 addresses the following security issues. : When a Singularity action command (run, shell, exec) is run with the fakeroot or user namespace option, Singularity will extract a container image to a temporary sandbox directory. Due to insecure permissions on the temporary directory it is possible for any user with access to the system to read the contents of the image. Additionally, if the image contains a world-writable file or directory, it is possible for a user to inject arbitrary content into the running container. : When a Singularity command that results in a container build operation is executed, it is possible for a user with access to the system to read the contents of the image during the build. Additionally, if the image contains a world-writable file or directory, it is possible for a user to inject arbitrary content into the running build, which in certain circumstances may enable arbitrary code execution during the build and/or when the built container is run. The value for maximum number of loop devices in the config file is now used everywhere instead of redefining this value Add CAP_MKNOD in capability bounding set of RPC to fix issue with cryptsetup when decrypting image from within a docker container. Fix decryption issue when using both IPC and PID namespaces. Fix unsupported builtins panic from shell interpreter and add umask support for definition file scripts. 
Do not load keyring in prepare_linux if ECL not enabled. Ensure sandbox option overrides remote build destination. Add --force option to `singularity delete` for non-interactive workflows. Default to current architecture for `singularity delete`. Respect current remote for `singularity delete` command. Allow `rw` as a (noop) bind option. Fix capability handling regression in overlay mount. Fix LDLIBRARYPATH environment override regression with `--nv/--rocm`. Fix environment variable duplication within singularity engine. Use `-user-xattrs` for unsquashfs to avoid error with rootless extraction using unsquashfs 3.4 (Ubuntu 20.04). Correct `--no-home` message for 3.6 CWD behavior. Don't fail if parent of cache dir not accessible. Fix tests for Go 1.15 Ctty handling. Fix additional issues with test images on ARM64. Fix FUSE e2e tests to use container ssh_config. Support compilation with `FORTIFY_SOURCE=2` and build in `pie` mode with `fstack-protector` enabled (#5433). Provide advisory message r.e. need for `upper` and `work` to exist in overlay images. Use squashfs mem and processor limits in squashfs gzip" }, { "data": "Ensure build destination path is not an empty string - do not overwrite CWD. Don't unset PATH when interpreting legacy /environment files. Singularity 3.6.0 introduces a new signature format for SIF images, and changes to the signing / verification code to address: In Singularity 3.x versions below 3.6.0, issues allow the ECL to be bypassed by a malicious user. In Singularity 3.5 the `--all / -a` option to `singularity verify` returns success even when some objects in a SIF container are not signed, or cannot be verified. In Singularity 3.x versions below 3.6.0, Singularity's sign and verify commands do not sign metadata found in the global header or data object descriptors of a SIF file, allowing an attacker to cause unexpected behavior. A signed container may verify successfully, even when it has been modified in ways that could be exploited to cause malicious behavior. Please see the published security advisories at <https://github.com/hpcng/singularity/security/advisories> for full detail of these security issues. Note that the new signature format is necessarily incompatible with Singularity \\< 3.6.0 - e.g. Singularity 3.5.3 cannot verify containers signed by 3.6.0. We thank Tru Huynh for a report that led to the review of, and changes to, the signature implementation. Singularity now supports the execution of minimal Docker/OCI containers that do not contain `/bin/sh`, e.g. `docker://hello-world`. A new cache structure is used that is concurrency safe on a filesystem that supports atomic rename. *If you downgrade to Singularity 3.5 or older after using 3.6 you will need to run `singularity cache clean`.* A plugin system rework adds new hook points that will allow the development of plugins that modify behavior of the runtime. An image driver concept is introduced for plugins to support new ways of handling image and overlay mounts. Plugins built for \\<=3.5 are not compatible with 3.6. The `--bind` flag can now bind directories from a SIF or ext3 image into a container. The `--fusemount` feature to mount filesystems to a container via FUSE drivers is now a supported feature (previously an experimental hidden flag). This permits users to mount e.g. `sshfs` and `cvmfs` filesystems to the container at runtime. A new `-c/--config` flag allows an alternative `singularity.conf` to be specified by the `root` user, or all users in an unprivileged installation. 
A new `--env` flag allows container environment variables to be set via the Singularity command line. A new `--env-file` flag allows container environment variables to be set from a specified file. A new `--days` flag for `cache clean` allows removal of items older than a specified number of days. Replaces the `--name` flag which is not generally useful as the cache entries are stored by hash, not a friendly name. A new '--legacy-insecure' flag to `verify` allows verification of SIF signatures in the old, insecure format. A new '-l / --logs' flag for `instance list` that shows the paths to instance STDERR / STDOUT log files. The `--json` output of `instance list` now include paths to STDERR / STDOUT log files. New signature format (see security fixes above). Environment variables prefixed with `SINGULARITYENV_` always take precedence over variables without `SINGULARITYENV_` prefix. The `%post` build section inherits environment variables from the base image. `%files from ...` will now follow symlinks for sources that are directly specified, or directly resolved from a glob pattern. It will not follow symlinks found through directory" }, { "data": "This mirrors Docker multi-stage COPY behaviour. Restored the CWD mount behaviour of v2, implying that CWD path is not recreated inside container and any symlinks in the CWD path are not resolved anymore to determine the destination path inside container. The `%test` build section is executed the same manner as `singularity test image`. `--fusemount` with the `container:` default directive will foreground the FUSE process. Use `container-daemon:` for previous behavior. Fixed spacing of `singularity instance list` to be dynamically changing based off of input lengths instead of fixed number of spaces to account for long instance names. Removed `--name` flag for `cache clean`; replaced with `--days`. Deprecate `-a / --all` option to `sign/verify` as new signature behavior makes this the default. Don't try to mount `$HOME` when it is `/` (e.g. `nobody` user). Process `%appinstall` sections in order when building from a definition file. Ensure `SINGULARITYCONTAINER`, `SINGULARITYENVIRONMENT` and the custom shell prompt are set inside a container. Honor insecure registry settings from `/etc/containers/registries.conf`. Fix `http_proxy` env var handling in `yum` bootstrap builds. Disable log colorization when output location is not a terminal. Check encryption keys are usable before beginning an encrypted build. Allow app names with non-alphanumeric characters. Use the `base` metapackage for arch bootstrap builds - arch no longer has a `base` group. Ensure library client messages are logged with `--debug`. Do not mount `$HOME` with `--fakeroot --contain`. Fall back to underlay automatically when using a sandbox on GPFS. Fix Ctrl-Z handling - propagation of signal. The following minor behaviour changes have been made in 3.5.3 to allow correct operation on CRAY CLE6, and correct an issue with multi-stage image builds that was blocking use by build systems such as Spack: Container action scripts are no longer bound in from `etc/actions.d` on the host. They are created dynamically and inserted at container startup. `%files from ...` will no longer follow symlinks when copying between stages in a multi stage build, as symlinks should be copied so that they resolve identically in later stages. Copying `%files` from the host will still maintain previous behavior of following links. 
Bind additional CUDA 10.2 libs when using the `--nv` option without `nvidia-container-cli`. Fix an NVIDIA persistenced socket bind error with `--writable`. Add detection of ceph to allow workarounds that avoid issues with sandboxes on ceph filesystems. Ensure setgid is inherited during make install. Ensure the root directory of a build has owner write permissions, regardless of the permissions in the bootstrap source. Fix a regression in `%post` and `%test` to honor the `-c` option. Fix an issue running `%post` when a container doesn't have `/etc/resolv.conf` or `/etc/hosts` files. Fix an issue with UID detection on RHEL6 when running instances. Fix a logic error when a sandbox image is in an overlay incompatible location, and both overlay and underlay are disabled globally. Fix an issue causing user namespace to always be used when `allow-setuid=no` was configured in a setuid installation. Always allow key IDs and fingerprints to be specified with or without a `0x` prefix when using `singularity keys` Fix an issue preventing joining an instance started with `--boot`. Provide a useful error message if an invalid library:// path is provided. Bring in multi-part upload client functionality that will address large image upload / proxied upload issues with a future update to Sylabs" }, { "data": "In addition, numerous improvements have been made to the test suites, allowing them to pass cleanly on a range of kernel versions and distributions that are not covered by the open-source CI runs. 700 permissions are enforced on `$HOME/.singularity` and `SINGULARITY_CACHEDIR` directories (CVE-2019-19724). Many thanks to Stuart Barkley for reporting this issue. Fixes an issue preventing use of `.docker/config` for docker registry authentication. Fixes the `run-help` command in the unprivileged workflow. Fixes a regression in the `inspect` command to support older image formats. Adds a workaround for an EL6 kernel bug regarding shared bind mounts. Fixes caching of http(s) sources with conflicting filenames. Fixes a fakeroot sandbox build error on certain filesystems, e.g. lustre, GPFS. Fixes a fakeroot build failure to a sandbox in $HOME. Fixes a fakeroot build failure from a bad def file section script location. Fixes container execution errors when CWD is a symlink. Provides a useful warning r.e. possible fakeroot build issues when seccomp support is not available. Fixes an issue where the `--disable-cache` option was not being honored. Deprecated `--groupid` flag for `sign` and `verify`; replaced with `--group-id`. Removed useless flag `--url` for `sign`. A single feature has been added in the bugfix release, with specific functionality: A new option `allow container encrypted` can be set to `no` in `singularity.conf` to prevent execution of encrypted containers. This point release addresses the following issues: Fixes a disk space leak when building from docker-archive. Makes container process SIGABRT return the expected code. Fixes the `inspect` command in unprivileged workflow. Sets an appropriate default umask during build stages, to avoid issues with very restrictive user umasks. Fixes an issue with build script content being consumed from STDIN. Corrects the behaviour of underlay with non-empty / symlinked CWD and absolute symlink binds targets. Fixes execution of containers when binding BTRFS filesystems. Fixes build / check failures for MIPS & PPC64. Ensures file ownership maintained when building image from sandbox. Fixes a squashfs mount error on kernel 5.4.0 and above. 
Fixes an underlay fallback problem, which prevented use of sandboxes on lustre filesystems. New support for AMD GPUs via `--rocm` option added to bind ROCm devices and libraries into containers. Plugins can now modify Singularity behaviour with two mutators: CLI and Runtime. Introduced the `config global` command to edit `singularity.conf` settings from the CLI. Introduced the `config fakeroot` command to setup `subuid` and `subgid` mappings for `--fakeroot` from the Singularity CLI. Go 1.13 adopted. Vendored modules removed from the Git tree, will be included in release tarballs. Singularity will now fail with an error if a requested bind mount cannot be made. This is beneficial to fail fast in workflows where a task may fail a long way downstream if a bind mount is unavailable. Any unavailable bind mount sources must be removed from `singularity.conf`. Docker/OCI image extraction now faithfully respects layer permissions. This may lead to sandboxes that cannot be removed without modifying permissions. `--fix-perms` option added to preserve old behaviour when building sandboxes. Discussion issue for this change at: <https://github.com/sylabs/singularity/issues/4671> `Singularity>` prompt is always set when entering shell in a container. The current `umask` will be honored when building a SIF file. `instance exec` processes acquire cgroups set on `instance start` `--fakeroot` supports uid/subgid ranges >65536 `singularity version` now reports semver compliant version information. Deprecated `--id` flag for `sign` and `verify`; replaced with" }, { "data": "" }, { "data": "" }, { "data": "This point release addresses the following issues: Sets workable permissions on OCI -> sandbox rootless builds Fallback correctly to user namespace for non setuid installation Correctly handle the starter-suid binary for non-root installs Creates CACHEDIR if it doesn't exist Set apex loglevel for umoci to match singularity loglevel This point release addresses the following issues: Fixes an issue where a PID namespace was always being used Fixes compilation on non 64-bit architectures Allows fakeroot builds for zypper, pacstrap, and debootstrap Correctly detects seccomp on OpenSUSE Honors GO_MODFLAGS properly in the mconfig generated makefile Passes the Mac hostname to the VM in MacOS Singularity builds Handles temporary EAGAIN failures when setting up loop devices on recent kernels Fixes excessive memory usage in singularity push New support for building and running encrypted containers with RSA keys and passphrases `--pem-path` option added to the `build` and action commands for RSA based encrypted containers `--passphrase` option added to `build` and action commands for passphrase based encrypted containers `SINGULARITYENCRYPTIONPEMPATH` and `SINGULARITYENCRYPTION_PASSPHRASE` environment variables added to serve same functions as above `--encrypt` option added to `build` command to build an encrypted container when environment variables contain a secret New `--disable-cache` flag prevents caching of downloaded containers Added support for multi-line variables in singularity def-files Added support for 'indexed' def-file variables (like arrays) Added support for SUSE SLE Products Added the def-file variables: product, user, regcode, productpgp, registerurl, modules, otherurl (indexed) Support multiple-architecture tags in the SCS library Added a `--dry-run` flag to `cache clean` Added a `SINGULARITY_SYPGPDIR` environment variable to specify the location of PGP key data Added a `--nonet` option to the 
action commands to disable networking when running with the `--vm` option Added a `--long-list` flag to the `key search` command to preserve Added experimental, hidden `--fusemount` flag to pass a command to mount a libfuse3 based file system within the container Runtime now properly honors `SINGULARITYDISABLECACHE` environment variable `remote add` command now automatically attempts to login and a `--no-login` flag is added to disable this behavior Using the `pull` command to download an unsigned container no longer produces an error code `cache clean` command now prompts user before cleaning when run without `--force` option and is more verbose Shortened the default output of the `key search` command The `--allow-unsigned` flag to `pull` has been deprecated and will be removed in the future Remote login and status commands will now use the default remote if a remote name is not supplied Added Singularity hub (`shub`) cache support when using the `pull` command Clean cache in a safer way by only deleting the cache subdirectories Improvements to the `cache clean` command new `oras` URI for pushing and pulling SIF files to and from supported OCI registries added the `--fakeroot` option to `build`, `exec`, `run`, `shell`, `test`, and `instance start` commands to run container in a new user namespace as uid 0 added the `fakeroot` network type for use with the `--network` option `sif` command to allow for the inspection and manipulation of SIF files with the following subcommands `add` Add a data object to a SIF file `del` Delete a specified object descriptor and data from SIF file `dump` Extract and output data objects from SIF files `header` Display SIF global headers `info` Display detailed information of object descriptors `list` List object descriptors from SIF files `new` Create a new empty SIF image file `setprim` Set primary system partition This point release fixes the following bugs: Allows users to join instances with non-suid workflow Removes false warning when seccomp is disabled on the host Fixes an issue in the terminal when piping output to commands Binds NVIDIA persistenced socket when `--nv` is invoked Instance files are now stored in user's home directory for privacy and many checks have been added to ensure that a user can't manipulate files to change `starter-suid` behavior when instances are joined (many thanks to Matthias Gerstner from the SUSE security team for finding and securely reporting this vulnerability) Introduced a new basic framework for creating and managing plugins Added the ability to create containers through multi-stage builds Definitions now require `Bootstrap` be the first parameter of header Created the concept of a Sylabs Cloud \"remote\" endpoint and added the ability for users and admins to set them through CLI and conf files Added caching for images from Singularity Hub Made it possible to compile Singularity outside of `$GOPATH` Added a json partition to SIF files for OCI configuration when building from an OCI source Full integration with Singularity desktop for MacOS code base Introduced the `plugin` command group for creating and managing plugins `compile` Compile a singularity plugin `disable` disable an installed singularity plugin `enable` Enable an installed singularity plugin `inspect` Inspect a singularity plugin (either an installed one or an image) `install` Install a singularity plugin `list` List installed singularity plugins `uninstall` Uninstall removes the named plugin from the system Introduced the `remote` command group to 
support management of Singularity endpoints: `add` Create a new Sylabs Cloud remote endpoint `list` List all remote endpoints that are configured `login` Log into a remote endpoint using an authentication token `remove` Remove an existing Sylabs Cloud remote endpoint `status` Check the status of the services at an endpoint `use` Set a remote endpoint to be used by default Added to the `key` command group to improve PGP key management: `export` Export a public or private key into a specific file `import` Import a local key into the local keyring `remove` Remove a local public key Added the `Stage: <name>` keyword to the definition file header and the `from <stage name>` option/argument pair to the `%files` section to support multistage builds The `--token/-t` option has been deprecated in favor of the `singularity remote` command group Ask to confirm password on a newly generated PGP key Prompt to push a key to the KeyStore when generated Refuse to push an unsigned container unless overridden with `--allow-unauthenticated/-U` option Warn and prompt when pulling an unsigned container without the `--allow-unauthenticated/-U` option `Bootstrap` must now be the first field of every header because of parser requirements for multi-stage builds New hidden `buildcfg` command to display compile-time parameters Added support for `LDFLAGS`, `CFLAGS`, `CGO_` variables in build system Added `--nocolor` flag to Singularity client to disable color in logging `singularity capability <add/drop> --desc` has been removed `singularity capability list <--all/--group/--user>` flags have all been removed The `--builder` flag to the `build` command implicitly sets `--remote` Repeated binds no longer cause Singularity to exit and fail, just warn instead Corrected typos and improved docstrings throughout Removed warning when CWD does not exist on the host system Added support to spec file for RPM building on SLES 11 Introduced the `oci` command group to support a new OCI compliant variant of the Singularity runtime: `attach` Attach console to a running container process `create` Create a container from a bundle directory `delete` Delete container `exec` Execute a command within container `kill` Kill a container `mount` Mount create an OCI bundle from SIF image `pause` Suspends all processes inside the container `resume` Resumes all processes previously paused inside the container `run` Create/start/attach/delete a container from a bundle directory `start` Start container process `state` Query state of a container `umount` Umount delete bundle `update` Update container cgroups resources Added `cache` command group to inspect and manage cached files `clean` Clean your local Singularity cache `list` List your local Singularity cache Can now build CLI on darwin for limited functionality on Mac Added the `scratch` bootstrap agent to build from anything Reintroduced support for zypper bootstrap agent Added the ability to overwrite a new `singularity.conf` when building from RPM if desired Fixed several regressions and omissions in support Added caching for containers pulled/built from the Changed `keys` command group to `key` (retained hidden `keys` command for backward compatibility) Created an `RPMPREFIX` variable to allow RPMs to be installed in custom locations Greatly expanded CI unit and end-to-end testing Bind paths in `singularity.conf` are properly parsed and applied at runtime Singularity runtime will properly fail if `singularity.conf` file is not owned by the root user Several improvements to RPM packaging 
including using golang from epel, improved support for Fedora, and avoiding overwriting conf file on new RPM install Unprivileged `--contain` option now properly mounts `devpts` on older kernels Uppercase proxy environment variables are now rightly respected Add http/https protocols for singularity run/pull commands Update to SIF 1.0.2 Add noPrompt parameter to `pkg/signing/Verify` function to enable silent verification Added the `--docker-login` flag to enable interactive authentication with docker registries Added support for pulling directly from HTTP and HTTPS Made minor improvements to RPM packaging and added basic support for alpine packaging The `$SINGULARITYNOHTTPS`,`$SINGULARITYTMPDIR`, and `$SINGULARITYDOCKERUSERNAME`/`$SINGULARITYDOCKERPASSWORD` environment variables are now correctly respected Pulling from a private shub registry now works as expected Running a container with `--network=\"none\"` no longer incorrectly fails with an error message Commands now correctly return 1 when incorrectly executed without arguments Progress bars no longer incorrectly display when running with `--quiet` or `--silent` Contents of `91-environment.sh` file are now displayed if appropriate when running `inspect --environment` Improved RPM packaging procedure via makeit Enhanced general stability of runtime Singularity is now written primarily in Go to bring better integration with the existing container ecosystem Added support for new URIs (`build` & `run/exec/shell/start`): `library://` - Supports the `docker-daemon:` - Supports images managed by the locally running docker daemon `docker-archive:` - Supports archived docker images `oci:` - Supports oci images `oci-archive:` - Supports archived oci images Handling of `docker` & `oci` URIs/images now utilizes to parse and convert those image types in a supported way Replaced `singularity instance.` command group with `singularity instance ` The command `singularity help` now only provides help regarding the usage of the `singularity` command. To display an image's `help` message, use `singularity run-help <image path>` instead Removed deprecated `singularity" }, { "data": "command group Removed deprecated `singularity create` command Removed deprecated `singularity bootstrap` command Removed deprecated `singularity mount` command Removed deprecated `singularity check` command Added `singularity run-help <image path>` command to output an image's `help` message Added `singularity sign <image path>` command to allow a user to cryptographically sign a SIF image Added `singularity verify <image path>` command to allow a user to verify a SIF image's cryptographic signatures Added `singularity keys` command to allow the management of `OpenPGP` key stores Added `singularity capability` command to allow fine grained control over the capabilities of running containers Added `singularity push` command to push images to the Added flags: `--add-caps <string>`: Run the contained process with the specified capability set (requires root) `--allow-setuid`: Allows setuid binaries to be mounted into the container (requires root) `--apply-cgroups <path>`: Apply cgroups configuration from file to contained processes (requires root) `--dns <string>`: Adds the comma separated list of DNS servers to the containers `resolv.conf` file `--drop-caps <string>`: Drop the specified capabilities from the container (requires root) `--fakeroot`: Run the container in a user namespace as `uid=0`. 
Requires a recent kernel to function properly `--hostname <string>`: Set the hostname of the container `--keep-privs`: Keep root user privilege inside the container (requires root) `--network <string>`: Specify a list of comma separated network types () to be present inside the container, each with its own dedicated interface in the container `--network-args <string>`: Specify arguments to pass to CNI network plugins (set by `--network`) `--no-privs`: Drop all privileges from root user inside the container (requires root) `--security <string>`: Configure security features such as SELinux, Apparmor, Seccomp... `--writable-tmpfs`: Run container with a `tmpfs` overlay The command `singularity instance start` now supports the `--boot` flag to boot the container via `/sbin/init` Changes to image mounting behavior: All image formats are mounted as read only by default `--writable` only works on images which can be mounted in read/write \\[applicable to: `sandbox` and legacy `ext3` images\\] `--writable-tmpfs` runs the container with a writable `tmpfs`-based overlay \\[applicable to: all image formats\\] `--overlay <string>` now specifies a list of `ext3`/`sandbox` images which are set as the containers overlay \\[applicable to: all image formats\\] All images are now built as images by default When building to a path that already exists, `singularity build` will now prompt the user if they wish to overwrite the file existing at the specified location The `-w|--writable` flag has been removed The `-F|--force` flag now overrides the interactive prompt and will always attempt to overwrite the file existing at the specified location The `-u|--update` flag has been added to support the workflow of running a definition file on top of an existing container \\[implies `--sandbox`, only supports `sandbox` image types\\] The `singularity build` command now supports the following flags for integration with the : `-r|--remote`: Build the image remotely on the Sylabs Remote Builder (currently unavailable) `-d|--detached`: Detach from the `stdout` of the remote build \\[requires `--remote`\\] `--builder <string>`: Specifies the URL of the remote builder to access `--library <string>`: Specifies the URL of the to push the built image to when the build command destination is in the form `library://<reference>` The `bootstrap` keyword in the definition file now supports the following values: `library` `docker-daemon` `docker-archive` `oci` `oci-archive` The `from` keyword in the definition file now correctly parses a `docker` URI which includes the `registry` and/or `namespace` components The `registry` and `namespace` keywords in the definition file are no longer supported. Instead, those values may all go into the `from` keyword Building from a tar archive of a `sandbox` no longer works" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Singularity", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> View endpoint health ``` cilium-dbg endpoint health <endpoint id> [flags] ``` ``` cilium endpoint health 5421 ``` ``` -h, --help help for health -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage endpoints" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_endpoint_health.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Welcome contributing to Longhorn! This guideline applies to all the repositories under Longhorn. Contributing to Longhorn is not limited to writing the code or submitting the PR. We will also appreciate if you can file issues, provide feedback and suggest new features. In fact, many of Longhorn's features are driven by the community's need. The community plays a big role in the development of Longhorn. Of course, contributing the code is more than welcome. To make things simpler, if you're fixing a small issue (e.g. typo), go ahead submitting a PR and we will pick it up; but if you're planning to submit a bigger PR to implement a new feature, it's easier to submit a new issue to discuss the design with the maintainers first before implementing it. When you're ready to get involved in contributing the code, should help you to get up to the speed. And remember to ! Feel free to join the discussion on Longhorn development at slack channel. Happy contributing! All authors to the project retain copyright to their work. However, to ensure that they are only submitting work that they have rights to, we are requiring everyone to acknowledge this by signing their work. Any copyright notices in this repo should specify the authors as \"the Longhorn contributors\". To sign your work, just add a line like this at the end of your commit message: ``` Signed-off-by: Sheng Yang <[email protected]> ``` This can easily be done with the `--signoff/-s` option to `git commit`. By doing this you state that you can certify the following (from https://developercertificate.org/): ``` Developer Certificate of Origin Version 1.1 Copyright (C) 2004, 2006 The Linux Foundation and its contributors. 1 Letterman Drive Suite D4700 San Francisco, CA, 94129 Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the open source license indicated in the file; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. ```" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Having a clearly defined scope of a project is important for ensuring consistency and focus. These following criteria will be used when reviewing pull requests, features, and changes for the project before being accepted. Components should not have tight dependencies on each other so that they are able to be used independently. The APIs for images and containers should be designed in a way that when used together the components have a natural flow but still be useful independently. An example for this design can be seen with the overlay filesystems and the container execution layer. The execution layer and overlay filesystems can be used independently but if you were to use both, they share a common `Mount` struct that the filesystems produce and the execution layer consumes. containerd should expose primitives to solve problems instead of building high level abstractions in the API. A common example of this is how build would be implemented. Instead of having a build API in containerd we should expose the lower level primitives that allow things required in build to work. Breaking up the filesystem APIs to allow snapshots, copy functionality, and mounts allow people implementing build at the higher levels with more flexibility. For the various components in containerd there should be defined extension points where implementations can be swapped for alternatives. The best example of this is that containerd will use `runc` from OCI as the default runtime in the execution layer but other runtimes conforming to the OCI Runtime specification can be easily added to containerd. containerd will come with a default implementation for the various components. These defaults will be chosen by the maintainers of the project and should not change unless better tech for that component comes out. Additional implementations will not be accepted into the core repository and should be developed in a separate repository not maintained by the containerd maintainers. The following table specifies the various components of containerd and general features of container runtimes. The table specifies whether the feature/component is in or out of" }, { "data": "| Name | Description | In/Out | Reason | ||--|--|-| | execution | Provide an extensible execution layer for executing a container | in | Create,start, stop pause, resume exec, signal, delete | | cow filesystem | Built in functionality for overlay, aufs, and other copy on write filesystems for containers | in | | | distribution | Having the ability to push and pull images as well as operations on images as a first class API object | in | containerd will fully support the management and retrieval of images | | metrics | container-level metrics, cgroup stats, and OOM events | in | | networking | creation and management of network interfaces | out | Networking will be handled and provided to containerd via higher level systems. | | build | Building images as a first class API | out | Build is a higher level tooling feature and can be implemented in many different ways on top of containerd | | volumes | Volume management for external data | out | The API supports mounts, binds, etc where all volumes type systems can be built on top of containerd. | | logging | Persisting container logs | out | Logging can be build on top of containerd because the containers STDIO will be provided to the clients and they can persist any way they see fit. There is no io copying of container STDIO in containerd. 
| containerd is scoped to a single host and makes assumptions based on that fact. It can be used to build things like a node agent that launches containers but does not have any concepts of a distributed system. containerd is designed to be embedded into a larger system, hence it only includes a barebone CLI (`ctr`) specifically for development and debugging purpose, with no mandate to be human-friendly, and no guarantee of interface stability over time. The scope of this project is an allowed list. If it's not mentioned as being in scope, it is out of scope. For the scope of this project to change it requires a 100% vote from all maintainers of the project." } ]
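To make the `Mount` example mentioned above concrete, here is an illustrative Go sketch of how a shared mount description lets a filesystem component and an execution component stay decoupled. The struct fields and function names are simplified assumptions for illustration only; they are not containerd's actual API.
```go
package main

import "fmt"

// Mount is a simplified description of a filesystem mount, produced by a
// copy-on-write filesystem component and consumed by an execution component.
type Mount struct {
	Type    string   // e.g. "overlay"
	Source  string   // device or directory backing the mount
	Options []string // mount options such as lowerdir/upperdir for overlay
}

// prepareOverlay stands in for the filesystem side: it produces Mounts.
func prepareOverlay(lower, upper, work string) []Mount {
	return []Mount{{
		Type:   "overlay",
		Source: "overlay",
		Options: []string{
			"lowerdir=" + lower,
			"upperdir=" + upper,
			"workdir=" + work,
		},
	}}
}

// startContainer stands in for the execution layer: it only needs the Mount
// values, not any knowledge of how they were produced.
func startContainer(rootfs string, mounts []Mount) {
	for _, m := range mounts {
		fmt.Printf("mount -t %s %s %s -o %v\n", m.Type, m.Source, rootfs, m.Options)
	}
}

func main() {
	mounts := prepareOverlay("/var/lib/lower", "/var/lib/upper", "/var/lib/work")
	startContainer("/run/container/rootfs", mounts)
}
```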
{ "category": "Runtime", "file_name": "SCOPE.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This document has miscellaneous documentation for Manta developers, including how components are put together during the build and example workflows for testing changes. Anything related to the operation of Manta, including how Manta works at runtime, should go in the instead. The high-level basics are documented in the in this repo. That includes information on dependencies, how to build and deploy Manta, the repositories that make up Manta, and more. This document is for more nitty-gritty content than makes sense for the README. <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - - - - <!-- END doctoc generated TOC please keep comment here to allow auto update --> To update the manta-deployment zone (a.k.a. the \"manta0\" zone) on a Triton headnode with changes you've made to your local sdc-manta git clone: sdc-manta.git$ ./tools/rsync-to <headnode-ip> To see which manta zones are deployed, use manta-adm show: headnode$ manta-adm show To tear down an existing manta deployment, use manta-factoryreset: manta$ manta-factoryreset You should look at the instructions in the README for actually building and deploying Manta. This section is a reference for developers to understand how those procedures work under the hood. Most Manta components are deployed as zones, based on images built from a single repo. Examples are above, and include muppet and muskie. For a typical zone (take \"muppet\"), the process from source code to deployment works like this: Build the repository itself. Build an image (a zone filesystem template and some metadata) from the contents of the built repository. Optionally, publish the image to updates.tritondatacenter.com. Import the image into a Triton instance. Provision a new zone from the imported image. During the first boot, the zone executes a one-time setup script. During the first and all subsequent boots, the zone executes another configuration script. There are tools to automate most of this: The build tools contained in the `eng.git` submodule, usually found in `deps/eng` in manta repositories include a tool called `buildimage` which assembles an image containing the built Manta component. The image represents a template filesystem with which instances of this component will be stamped out. After the image is built, it can be uploaded to updates.tritondatacenter.com. Alternatively, the image can be manually imported to a Triton instance by copying the image manifest and image file (a compressed zfs send stream) to the headnode and running \"sdc-imgadm import\". The \"manta-init\" command takes care of step 4. You run this as part of any deployment. See the for details. After the first run, subsequent runs find new images in updates.tritondatacenter.com, import them into the current Triton instance, and mark them for use by \"manta-deploy\". Alternatively, if you have images that were manually imported using \"sdc-imgadm import\", then \"manta-init\" can be run with the \"-n\" flag to use those local images instead. The \"manta-adm\" and \"manta-deploy\" commands (whichever you choose to use) take care of step 5. See the Manta Operator's Guide for details. Steps 6 and 7 happen automatically when the zone boots as a result of the previous steps. For more information on the zone setup and boot process, see the repo. 
There are automated tests in many repos, but it's usually important to test changed components in the context of a full Manta deployment as" }, { "data": "You have a few options, but for all of them you'll need to have a local Manta deployment that you can deploy to. Some repos (including marlin, mola, and mackerel) may have additional suggestions for testing them. You have a few options: Build your own zone image and deploy it. This is the normal upgrade process, it's the most complete test, and you should definitely do this if you're changing configuration or zone setup. It's probably the most annoying, but please help us streamline it by testing it and sending feedback. For details, see below. Assuming you're doing your dev work in a zone on the Manta network, run whatever component you're testing inside that zone. You'll have to write a configuration file for it, but you may be able to copy most of the configuration from another instance. Copy your code changes into a zone already deployed as part of your Manta. This way you don't have to worry about configuring your own instance, but it's annoying because there aren't great ways of synchronizing your changes. As described above, Manta's build and deployment model is exactly like Triton's, which is that most components are delivered as zone images and deployed by provisioning new zones from these images. While the deployment tools are slightly different than Triton's, the build process is nearly identical. The common instructions for building zone images are part of the [Triton documentation](https://github.com/TritonDataCenter/triton/blob/master/docs/developer-guide/building.md). Building a repository checked out to a given git branch will include those changes in the resulting image. One exception, is any `agents` (for example , , , (there are others)) that are bundled within the image. At build-time, the build will attempt to build agents from the same branch name as the checked-out branch of the component being built. If that branch name doesn't exist in the respective agent repository, the build will use the `master` branch of the agent repository. To include agents built from alternate branches at build time, set `$AGENT_BRANCH` in the shell environment. The build will then try to build all required agents from that branch. If no matching branch is found for a given agent, the build then will try to checkout the agent repository at the same branch name as the checked-out branch of the component you're building, before finally falling back to the `master` branch of that agent repository. The mechanism used is described in the , , and files, likely appearing as a git submodule beneath `deps/eng` in the component repository. In some cases, you may be testing a change to a single zone that involves more than one repository. For example, you may need to change not just madtom, but the node-checker module on which it depends. 
One way to test this is to push your dependency changes to a personal github clone (e.g., \"davepacheco/node-checker\" rather than \"joyent/node-checker\") and then commit a change to your local copy of the zone's repo (\"manta-madtom\", in this case) that points the repo at your local dependency: diff --git a/package.json b/package.json index a054b43..8ef5a35 100644 a/package.json +++ b/package.json @@ -8,7 +8,7 @@ \"dependencies\": { \"assert-plus\": \"0.1.1\", \"bunyan\": \"0.16.6\", \"checker\": \"git://github.com/TritonDataCenter/node-checker#master\", \"checker\": \"git://github.com/davepacheco/node-checker#master\", \"moray\": \"git://github.com/TritonDataCenter/node-moray.git#master\", \"posix-getopt\": \"1.0.0\", \"pg\": \"0.11.3\", This approach ensures that the build picks up your private copy of both madtom and the node-checker" }, { "data": "But when you're ready for the final push, be sure to push your changes to the dependency first, and remember to remove (don't just revert) the above change to the zone's package.json! Application and service configs can be found under the `config` directory in sdc-manta.git. For example: config/application.json config/services/webapi/service.json Sometimes it is necessary to have size-specific overrides for these services within these configs that apply during setup. The size-specific override is in the same directory as the \"normal\" file and has `.[size]` as a suffix. For example, this is the service config and the production override for the webapi: config/services/webapi/service.json config/services/webapi/service.json.production The contents of the override are only the differences. Taking the above example: $ cat config/services/webapi/service.json { \"params\": { \"networks\": [ \"manta\", \"admin\" ], \"ram\": 768 } } $ cat config/services/webapi/service.json.production { \"params\": { \"ram\": 32768, \"quota\": 100 } } You can see what the merged config with look like with the `./bin/manta-merge-config` command. For example: $ ./bin/manta-merge-config -s coal webapi { \"params\": { \"networks\": [ \"manta\", \"admin\" ], \"ram\": 768 }, \"metadata\": { \"MUSKIEDEFAULTMAXSTREAMINGSIZE_MB\": 5120 } } $ ./bin/manta-merge-config -s production webapi { \"params\": { \"networks\": [ \"manta\", \"admin\" ], \"ram\": 32768, \"quota\": 100 } } Note that after setup, the configs are stored in SAPI. Any changes to these files will not result in accidental changes in production (or any other stage). Changes must be made via the SAPI api (see the SAPI docs for details). Manta is deployed as a single SAPI application. Each manta service (moray, postgres, storage, etc.) has a corresponding SAPI service. Every zone which implements a manta service had a corresponding SAPI instance. Within the config/ and manifests/ directories, there are several subdirectories which provide the SAPI configuration used for manta. config/application.json Application definition config/services Service definitions manifests Configuration manifests manifests/applications Configuration manifests for manta application There's no static information for certain instances. Instead, manta-deploy will set a handful of instance-specific metadata (e.g. shard membership). Once Manta has been deployed there will be cases where the service manifests must be changed. Only changing the manifest in this repository isn't sufficient. The manifests used to configure running instances (new and old) are the ones stored in SAPI. 
The service templates in the zone are not used after initial setup. To update service templates in a running environment (coal or production, for example): 1) Verify that your changes to configuration are backward compatible or that the updates will have no effect on running services. 2) Get the current configuration for your service: headnode$ sdc-sapi /services?name=[service name] If you can't find your service name, look for what you want with the following command: headnode$ sdc-sapi /services?application_uuid=$(sdc-sapi /applications?name=manta | \\ json -gHa uuid) | json -gHa uuid name Take note of the service uuid and make sure you can fetch it with: headnode$ sdc-sapi /services/[service uuid] 3) Identify the differences between the template in this repository and what is in SAPI. 4) Update the service template in SAPI. If it is a simple, one-parameter change, and the value of the key is a string type, it can be done like this: headnode$ sapiadm update [service uuid] json.path=value headnode$ sapiadm update 8386d8f5-d4ff-4b51-985a-061832b41179 \\ params.tags.mantastorageid=2.stor.us-east.joyent.us headnode$ sapiadm update update 0b48c067-01bd-41ca-9f70-91bda65351b2 \\ metadata.PG_DIR=/manatee/pg/data If you require a complex type (an object or array) or a value that is not a string, you will need to hand-craft the differences and `|` to" }, { "data": "For example: headnode$ echo '{ \"metadata\": { \"PORT\": 5040 } }' | \\ sapiadm update fde6c6ed-eab6-4230-bb39-69c3cba80f15 Or if you want to \"edit\" what comes back from sapi: headnode$ sapiadm get [service uuid] | json params >/tmp/params.json headnode$ cat /tmp/params.json | json -o json-0 | sapiadm update [service uuid] 5) Once the service in SAPI has been modified, make sure to get it to verify what SAPI has is what it should be. A shard is a set of moray buckets, backed by >1 moray instances and >=3 Postgres instances. No data is shared between any two shards. Many other manta services may said to be \"in a shard\", but more accurately, they're using a particular shard. There are two pieces of metadata which define how shards are used: INDEXMORAYSHARDS Shards used for the indexing tier STORAGEMORAYSHARD Shard used for minnow (manta_storage) records Currently, the hash ring topology for electric-moray is created once during Manta setup and stored as an image in a Triton imgapi. The image uuid and imgapi endpoint are stored in the following sapi parameters: HASHRINGIMAGE The hash ring image uuid HASHRINGIMGAPI_SERVICE The imageapi that stores the image. In a cross-datacenter deployment, the HASHRINGIMGAPI_SERVICE may be in another datacenter. This limits your ability to deploy new electric-moray instances in the event of DC failure. This topology is independent of what's set in manta-shardadm. WARNING UNDER NO CIRCUMSTANCES SHOULD THIS TOPOLOGY BE CHANGED ONCE MANTA HAS BEEN DEPLOYED, DOING SO WILL RESULT IN DATA CORRUPTION See manta-deploy-lab for hash-ring generation examples. The manta-shardadm tool lists shards and allows the addition of new ones: manta$ manta-shardadm Manage manta shards Usage: manta-shardadm [OPTIONS] COMMAND [ARGS...] manta-shardadm help COMMAND Options: -h, --help Print help and exit. --version Print version and exit. Commands: help (?) Help on a specific sub-command. list List manta shards. set Set manta shards. In addition, the -z flag to manta-deploy specifies a particular shard for that instance. In the case of moray and postgres, that value defines which shard that instance participates in. 
For all other services, that value defines which shard an instance will consume. Note that deploying a postgres or moray instance into a previously undefined shard will not automatically update the set of shards for the indexing tier. Because of the presence of the electric-moray proxy, adding an additional shard requires coordination with all existing shards, lest objects and requests be routed to an incorrect shard (and thereby inducing data corruption). If you find yourself adding additional capacity, deploy the new shard first, coordinate with all existing shards, then use manta-shardadm to add the shard to list of shards for the indexing tier. Buckets API is an experimental feature that serves similar functions as the Directory API but has a different paradigm for object organization. As opposed to the hierarchical object support provided by Directory API that comes with a limit on the maximum number of objects per directory, Buckets API offers a simpler structure for storing an unlimited number of objects in groups. The two-level object structure incurs a much smaller overhead in request processing and is more cost-efficient from a metadata perspective. It is a better option when the objects in the same bucket are loosely related and do not need a finer categorization. Buckets shards are defined and stored in the same way as directory shards but are separate entities altogether. They are also managed with `manta-shardadm` which generates the `BUCKETSMORAYSHARDS` and `BUCKETSHASHRING_IMAGE` configurations in SAPI metadata. Note: The \"buckets\" that Buckets API manages are not to be confused with moray \"buckets\" which represent the object namespaces for the data stored in" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark restic repo\" layout: docs Work with restic repositories Work with restic repositories ``` -h, --help help for repo ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restic - Get restic repositories" } ]
{ "category": "Runtime", "file_name": "ark_restic_repo.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "is a fast, performant function scheduling library written in Go. Reactr is designed to be flexible, with the ability to run embedded in your Go applications and first-class support for WebAssembly. Taking advantage of Go's superior concurrency capabilities, Reactr can manage and execute hundreds of WebAssembly runtime instances all at once, making a great framework for server-side applications. Reactr allows you to run WebAssembly functions in Go, so does the . The unique feature of Reactr is that it provides a rich set of host functions in Go, which support access to networks and databases etc. Reactr then provides Rust (and Swift / AssemblyScript) APIs to call those host functions from within the WebAssembly function. In this article, we will show you how to use WasmEdge together with Reactr to take advantage of the best of both worlds. WasmEdge is the . It is also the fastest in . We will show you how to run Rust functions compiled to WebAssembly as well as JavaScript programs in WasmEdge and Reactr. WasmEdge provides including for improved performance. You need have , , and installed on your system. The GCC compiler (installed via the `build-essential` package) is also needed for WasmEdge. ```bash sudo apt-get update sudo apt-get -y upgrade sudo apt install build-essential curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh source $HOME/.cargo/env rustup target add wasm32-wasi curl -OL https://golang.org/dl/go1.17.5.linux-amd64.tar.gz sudo tar -C /usr/local -xvf go1.17.5.linux-amd64.tar.gz export PATH=$PATH:/usr/local/go/bin wget -qO- https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash source $HOME/.wasmedge/env ``` A simple `hello world` example for Reactr is . Let's first create to echo hello. The Rust function `HelloEcho::run()` is as follows. It will be exposed to the Go host application through Reactr. ```rust use suborbital::runnable::*; struct HelloEcho{} impl Runnable for HelloEcho { fn run(&self, input: Vec<u8>) -> Result<Vec<u8>, RunErr> { let instring = String::fromutf8(input).unwrap(); Ok(format!(\"hello {}\", instring).asbytes().to_vec()) } } ``` Let's build the Rust function into a WebAssembly bytecode file. ```bash cd hello-echo cargo build --target wasm32-wasi --release cp target/wasm32-wasi/release/hello_echo.wasm .. cd .. ``` Next, lets look into the that executes the WebAssembly functions. The `runBundle()` function executes the `run()` function in the `Runnable` struct once. ```go func runBundle() { r := rt.New() doWasm := r.Register(\"hello-echo\", rwasm.NewRunner(\"./hello_echo.wasm\")) res, err := doWasm([]byte(\"wasmWorker!\")).Then() if err != nil { fmt.Println(err) return } fmt.Println(string(res.([]byte))) } ``` The `runGroup()` function executes the Rust-compiled WebAssembly `run()` function multiple times asynchronously in a group, and receives the results as they come in. ```go func runGroup() { r := rt.New() doWasm := r.Register(\"hello-echo\", rwasm.NewRunner(\"./hello_echo.wasm\")) grp := rt.NewGroup() for i := 0; i < 100000; i++ { grp.Add(doWasm([]byte(fmt.Sprintf(\"world %d\", i)))) } if err := grp.Wait(); err != nil {" }, { "data": "} } ``` Finally, let's run the Go host application and see the results printed to the console. You must use the `-tags wasmedge` flag to take advantage of the performance and extended WebAssembly APIs provided by WasmEdge. 
```bash go mod tidy go run -tags wasmedge main.go ``` In , we will demonstrate how to use Reactr host functions and APIs to query a PostgreSQL database from your WebAssembly function. We will start a PostgreSQL instance through Docker. ```bash docker pull postgres docker run --name reactr-postgres -p 5432:5432 -e POSTGRES_PASSWORD=12345 -d postgres ``` Next, let's create a database and populate it with some sample data. ```bash $ docker run -it --rm --network host postgres psql -h 127.0.0.1 -U postgres postgres=# CREATE DATABASE reactr; postgres=# \\c reactr; postgres=# CREATE TABLE users ( uuid varchar(100) CONSTRAINT firstkey PRIMARY KEY, email varchar(50) NOT NULL, created_at date, state char(1), identifier integer ); ``` Leave this running and start another terminal window to interact with this PostgreSQL server. Let's create to access the PostgreSQL database. The Rust function `RsDbtest::run()` is as follows. It will be exposed to the Go host application through Reactr. It uses named queries such as `PGInsertUser` and `PGSelectUserWithUUID` to operate the database. Those queries are defined in the Go host application, and we will see them later. ```rust use suborbital::runnable::*; use suborbital::db; use suborbital::util; use suborbital::db::query; use suborbital::log; use uuid::Uuid; struct RsDbtest{} impl Runnable for RsDbtest { fn run(&self, _: Vec<u8>) -> Result<Vec<u8>, RunErr> { let uuid = Uuid::newv4().tostring(); let mut args: Vec<query::QueryArg> = Vec::new(); args.push(query::QueryArg::new(\"uuid\", uuid.as_str())); args.push(query::QueryArg::new(\"email\", \"[email protected]\")); match db::insert(\"PGInsertUser\", args) { Ok(_) => log::info(\"insert successful\"), Err(e) => { return Err(RunErr::new(500, e.message.as_str())) } }; let mut args2: Vec<query::QueryArg> = Vec::new(); args2.push(query::QueryArg::new(\"uuid\", uuid.as_str())); match db::update(\"PGUpdateUserWithUUID\", args2.clone()) { Ok(rows) => log::info(format!(\"update: {}\", util::tostring(rows).asstr()).as_str()), Err(e) => { return Err(RunErr::new(500, e.message.as_str())) } } match db::select(\"PGSelectUserWithUUID\", args2.clone()) { Ok(result) => log::info(format!(\"select: {}\", util::tostring(result).asstr()).as_str()), Err(e) => { return Err(RunErr::new(500, e.message.as_str())) } } match db::delete(\"PGDeleteUserWithUUID\", args2.clone()) { Ok(rows) => log::info(format!(\"delete: {}\", util::tostring(rows).asstr()).as_str()), Err(e) => { return Err(RunErr::new(500, e.message.as_str())) } } ... ... } } ``` Let's build the Rust function into a WebAssembly bytecode file. ```bash cd rs-db cargo build --target wasm32-wasi --release cp target/wasm32-wasi/release/rs_db.wasm .. cd .. ``` The first defines the SQL queries and gives each of them a name. We will then pass those queries to the Reactr runtime as a configuration. 
```go func main() { dbConnString, exists := os.LookupEnv(\"REACTRDBCONN_STRING\") if !exists { fmt.Println(\"skipping as conn string env var not set\") return } q1 := rcap.Query{ Type: rcap.QueryTypeInsert, Name: \"PGInsertUser\", VarCount: 2, Query: ` INSERT INTO users (uuid, email, created_at, state, identifier) VALUES ($1, $2, NOW(), 'A', 12345)`, } q2 := rcap.Query{ Type:" }, { "data": "Name: \"PGSelectUserWithUUID\", VarCount: 1, Query: ` SELECT * FROM users WHERE uuid = $1`, } q3 := rcap.Query{ Type: rcap.QueryTypeUpdate, Name: \"PGUpdateUserWithUUID\", VarCount: 1, Query: ` UPDATE users SET state='B' WHERE uuid = $1`, } q4 := rcap.Query{ Type: rcap.QueryTypeDelete, Name: \"PGDeleteUserWithUUID\", VarCount: 1, Query: ` DELETE FROM users WHERE uuid = $1`, } config := rcap.DefaultConfigWithDB(vlog.Default(), rcap.DBTypePostgres, dbConnString, []rcap.Query{q1, q2, q3, q4}) r, err := rt.NewWithConfig(config) if err != nil { fmt.Println(err) return } ... ... } ``` Then, we can run the WebAssembly function from Reactr. ```go func main() { ... ... doWasm := r.Register(\"rs-db\", rwasm.NewRunner(\"./rs_db.wasm\")) res, err := doWasm(nil).Then() if err != nil { fmt.Println(err) return } fmt.Println(string(res.([]byte))) } ``` Finally, let's run the Go host application and see the results printed to the console. You must use the `-tags wasmedge` flag to take advantage of the performance and extended WebAssembly APIs provided by WasmEdge. ```bash export REACTRDBCONN_STRING='postgresql://postgres:[email protected]:5432/reactr' go mod tidy go run -tags wasmedge main.go ``` As we mentioned, a key feature of the WasmEdge Runtime is its advanced , which allows JavaScript programs to run in lightweight, high-performance, safe, multi-language, and . A simple example of embedded JavaScript function in Reactr is . The is very simple. It just returns a string value. ```javascript let h = 'hello'; let w = 'wasmedge'; `${h} ${w}`; ``` The uses the Reactr API to run WasmEdge's standard JavaScript interpreter . You can build your own version of JavaScript interpreter by modifying . Learn more about how to embed , and how to in WasmEdge. The Go host application just need to start the job for `rsembedjs.wasm` and pass the JavaScript content to it. The Go application can then capture and print the return value from JavaScript. ```go func main() { r := rt.New() doWasm := r.Register(\"hello-quickjs\", rwasm.NewRunner(\"./rsembedjs.wasm\")) code, err := ioutil.ReadFile(os.Args[1]) if err != nil { fmt.Print(err) } res, err := doWasm(code).Then() if err != nil { fmt.Println(err) return } fmt.Println(string(res.([]byte))) } ``` Run the Go host application as follows. ```bash $ cd quickjs $ go mod tidy $ go run -tags wasmedge main.go hello.js String(JsString(hello wasmedge)) ``` The printed result shows the type information of the string in Rust and Go APIs. You can strip out this information by changing the Rust or Go applications. WasmEdge supports many advanced JavaScript features. For the next step, you could try our to generate an HTML UI from a Reactr function! You can just build the `dist/main.js` from the React SSR example, and copy it over to this example folder to see it in action! ```bash $ cd quickjs $ go mod tidy $ go run -tags wasmedge main.go main.js <div data-reactroot=\"\"><div>This is home</div><div><div>This is page</div></div></div> UnDefined ```" } ]
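The host-side pattern is identical for every Runnable in this article, so one natural extension is to register several modules in a single Reactr instance. The sketch below combines the `hello-echo` module and the JavaScript interpreter module shown above. The import paths are assumptions based on these snippets (they are not spelled out in the article); everything else reuses only calls that appear above.
```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/suborbital/reactr/rt"    // assumed import path for the scheduler
	"github.com/suborbital/reactr/rwasm" // assumed import path for the Wasm runner
)

func main() {
	r := rt.New()

	// Register the two Wasm modules built earlier in this article.
	doEcho := r.Register("hello-echo", rwasm.NewRunner("./hello_echo.wasm"))
	doJS := r.Register("hello-quickjs", rwasm.NewRunner("./rsembedjs.wasm"))

	// Run a batch of echo jobs as a group, as in the runGroup() example.
	grp := rt.NewGroup()
	for i := 0; i < 10; i++ {
		grp.Add(doEcho([]byte(fmt.Sprintf("world %d", i))))
	}
	if err := grp.Wait(); err != nil {
		fmt.Println(err)
		return
	}

	// Run one JavaScript job and print its result.
	code, err := ioutil.ReadFile("hello.js")
	if err != nil {
		fmt.Println(err)
		return
	}
	res, err := doJS(code).Then()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(res.([]byte)))
}
```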
{ "category": "Runtime", "file_name": "reactr.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "Here is a tutorial guiding users and new contributors to get familiar with by installing a simple local cluster. If you are an end-user who are interested in this project, some links for installation and testing are as follows: (Recommended) If you are a code developer and wish to contribute in this project, here are some links for quickly installing OpenSDS Hotpot project: (Recommended) If you want to deploy and test opensds integrated with Kubernetes scenario, please refer to: (Recommended) (Deprecated) (Alpha)" } ]
{ "category": "Runtime", "file_name": "Local-Cluster-Installation.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "VM HOST: Travis Machine: Ubuntu 16.04.6 LTS x64 Date: May 04th, 2020 Version: Gin v1.6.3 Go Version:" }, { "data": "linux/amd64 Source: Result: or ```sh Gin: 34936 Bytes HttpServeMux: 14512 Bytes Ace: 30680 Bytes Aero: 34536 Bytes Bear: 30456 Bytes Beego: 98456 Bytes Bone: 40224 Bytes Chi: 83608 Bytes Denco: 10216 Bytes Echo: 80328 Bytes GocraftWeb: 55288 Bytes Goji: 29744 Bytes Gojiv2: 105840 Bytes GoJsonRest: 137496 Bytes GoRestful: 816936 Bytes GorillaMux: 585632 Bytes GowwwRouter: 24968 Bytes HttpRouter: 21712 Bytes HttpTreeMux: 73448 Bytes Kocha: 115472 Bytes LARS: 30640 Bytes Macaron: 38592 Bytes Martini: 310864 Bytes Pat: 19696 Bytes Possum: 89920 Bytes R2router: 23712 Bytes Rivet: 24608 Bytes Tango: 28264 Bytes TigerTonic: 78768 Bytes Traffic: 538976 Bytes Vulcan: 369960 Bytes ``` ```sh Gin: 58512 Bytes Ace: 48688 Bytes Aero: 318568 Bytes Bear: 84248 Bytes Beego: 150936 Bytes Bone: 100976 Bytes Chi: 95112 Bytes Denco: 36736 Bytes Echo: 100296 Bytes GocraftWeb: 95432 Bytes Goji: 49680 Bytes Gojiv2: 104704 Bytes GoJsonRest: 141976 Bytes GoRestful: 1241656 Bytes GorillaMux: 1322784 Bytes GowwwRouter: 80008 Bytes HttpRouter: 37144 Bytes HttpTreeMux: 78800 Bytes Kocha: 785120 Bytes LARS: 48600 Bytes Macaron: 92784 Bytes Martini: 485264 Bytes Pat: 21200 Bytes Possum: 85312 Bytes R2router: 47104 Bytes Rivet: 42840 Bytes Tango: 54840 Bytes TigerTonic: 95264 Bytes Traffic: 921744 Bytes Vulcan: 425992 Bytes ``` ```sh Gin: 4384 Bytes Ace: 3712 Bytes Aero: 26056 Bytes Bear: 7112 Bytes Beego: 10272 Bytes Bone: 6688 Bytes Chi: 8024 Bytes Denco: 3264 Bytes Echo: 9688 Bytes GocraftWeb: 7496 Bytes Goji: 3152 Bytes Gojiv2: 7376 Bytes GoJsonRest: 11400 Bytes GoRestful: 74328 Bytes GorillaMux: 66208 Bytes GowwwRouter: 5744 Bytes HttpRouter: 2808 Bytes HttpTreeMux: 7440 Bytes Kocha: 128880 Bytes LARS: 3656 Bytes Macaron: 8656 Bytes Martini: 23920 Bytes Pat: 1856 Bytes Possum: 7248 Bytes R2router: 3928 Bytes Rivet: 3064 Bytes Tango: 5168 Bytes TigerTonic: 9408 Bytes Traffic: 46400 Bytes Vulcan: 25544 Bytes ``` ```sh Gin: 7776 Bytes Ace: 6704 Bytes Aero: 28488 Bytes Bear: 12320 Bytes Beego: 19280 Bytes Bone: 11440 Bytes Chi: 9744 Bytes Denco: 4192 Bytes Echo: 11664 Bytes GocraftWeb: 12800 Bytes Goji: 5680 Bytes Gojiv2: 14464 Bytes GoJsonRest: 14072 Bytes GoRestful: 116264 Bytes GorillaMux: 105880 Bytes GowwwRouter: 9344 Bytes HttpRouter: 5072 Bytes HttpTreeMux: 7848 Bytes Kocha: 181712 Bytes LARS: 6632 Bytes Macaron: 13648 Bytes Martini: 45888 Bytes Pat: 2560 Bytes Possum: 9200 Bytes R2router: 7056 Bytes Rivet: 5680 Bytes Tango: 8920 Bytes TigerTonic: 9840 Bytes Traffic: 79096 Bytes Vulcan: 44504 Bytes ``` ```sh BenchmarkGin_StaticAll 62169 19319 ns/op 0 B/op 0 allocs/op BenchmarkAce_StaticAll 65428 18313 ns/op 0 B/op 0 allocs/op BenchmarkAero_StaticAll 121132 9632 ns/op 0 B/op 0 allocs/op BenchmarkHttpServeMux_StaticAll 52626 22758 ns/op 0 B/op 0 allocs/op BenchmarkBeego_StaticAll 9962 179058 ns/op 55264 B/op 471 allocs/op BenchmarkBear_StaticAll 14894 80966 ns/op 20272 B/op 469 allocs/op BenchmarkBone_StaticAll 18718 64065 ns/op 0 B/op 0 allocs/op BenchmarkChi_StaticAll 10000 149827 ns/op 67824 B/op 471 allocs/op BenchmarkDenco_StaticAll 211393 5680 ns/op 0 B/op 0 allocs/op BenchmarkEcho_StaticAll 49341 24343 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_StaticAll 10000 126209 ns/op 46312 B/op 785 allocs/op BenchmarkGoji_StaticAll 27956 43174 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_StaticAll 3430 370718 ns/op 205984 B/op 1570 allocs/op BenchmarkGoJsonRest_StaticAll 9134 188888 ns/op 51653 
B/op 1727 allocs/op BenchmarkGoRestful_StaticAll 706 1703330 ns/op 613280 B/op 2053 allocs/op BenchmarkGorillaMux_StaticAll 1268 924083 ns/op 153233 B/op 1413 allocs/op BenchmarkGowwwRouter_StaticAll 63374 18935 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_StaticAll 109938 10902 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_StaticAll 109166 10861 ns/op 0 B/op 0 allocs/op BenchmarkKocha_StaticAll 92258 12992 ns/op 0 B/op 0 allocs/op BenchmarkLARS_StaticAll 65200 18387 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_StaticAll 5671 291501 ns/op 115553 B/op 1256 allocs/op BenchmarkMartini_StaticAll 807 1460498 ns/op 125444 B/op 1717 allocs/op BenchmarkPat_StaticAll 513 2342396 ns/op 602832 B/op 12559 allocs/op BenchmarkPossum_StaticAll 10000 128270 ns/op 65312 B/op 471 allocs/op BenchmarkR2router_StaticAll 16726 71760 ns/op 22608 B/op 628 allocs/op BenchmarkRivet_StaticAll 41722 28723 ns/op 0 B/op 0 allocs/op BenchmarkTango_StaticAll 7606 205082 ns/op 39209 B/op 1256 allocs/op BenchmarkTigerTonic_StaticAll 26247 45806 ns/op 7376 B/op 157 allocs/op BenchmarkTraffic_StaticAll 550 2284518 ns/op 754864 B/op 14601 allocs/op BenchmarkVulcan_StaticAll 10000 131343 ns/op 15386 B/op 471 allocs/op ``` ```sh BenchmarkGin_Param 18785022 63.9 ns/op 0 B/op 0 allocs/op BenchmarkAce_Param 14689765 81.5 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param 23094770 51.2 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param 1417045 845 ns/op 456 B/op 5 allocs/op BenchmarkBeego_Param 1000000 1080 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param 1000000 1463 ns/op 816 B/op 6 allocs/op BenchmarkChi_Param 1378756 885 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param 8557899 143 ns/op 32 B/op 1 allocs/op BenchmarkEcho_Param 16433347 75.5 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param 1000000 1218 ns/op 648 B/op 8 allocs/op BenchmarkGoji_Param 1921248 617 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Param 561848 2156 ns/op 1328 B/op 11 allocs/op BenchmarkGoJsonRest_Param 1000000 1358 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_Param 224857 5307 ns/op 4192 B/op 14 allocs/op BenchmarkGorillaMux_Param 498313 2459 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_Param 1864354 654 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param 26269074 47.7 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param 2109829 557 ns/op 352 B/op 3 allocs/op BenchmarkKocha_Param 5050216 243 ns/op 56 B/op 3 allocs/op BenchmarkLARS_Param 19811712 59.9 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param 662746 2329 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Param 279902 4260 ns/op 1072 B/op 10 allocs/op BenchmarkPat_Param 1000000 1382 ns/op 536 B/op 11 allocs/op BenchmarkPossum_Param 1000000 1014 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param 1712559 707 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Param 6648086 182 ns/op 48 B/op 1 allocs/op BenchmarkTango_Param 1221504 994 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_Param 891661 2261 ns/op 776 B/op 16 allocs/op BenchmarkTraffic_Param 350059 3598 ns/op 1856 B/op 21 allocs/op BenchmarkVulcan_Param 2517823 472 ns/op 98 B/op 3 allocs/op BenchmarkAce_Param5 9214365 130 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param5 15369013 77.9 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param5 1000000 1113 ns/op 501 B/op 5 allocs/op BenchmarkBeego_Param5 1000000 1269 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param5 986820 1873 ns/op 864 B/op 6 allocs/op BenchmarkChi_Param5 1000000 1156 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param5 3036331 400 ns/op 160 B/op 1 allocs/op BenchmarkEcho_Param5 6447133 186 ns/op 0 B/op 0 allocs/op 
BenchmarkGin_Param5 10786068 110 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param5 844820 1944 ns/op 920 B/op 11 allocs/op BenchmarkGoji_Param5 1474965 827 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Param5 442820 2516 ns/op 1392 B/op 11 allocs/op BenchmarkGoJsonRest_Param5 507555 2711 ns/op 1097 B/op 16 allocs/op BenchmarkGoRestful_Param5 216481 6093 ns/op 4288 B/op 14 allocs/op BenchmarkGorillaMux_Param5 314402 3628 ns/op 1344 B/op 10 allocs/op BenchmarkGowwwRouter_Param5 1624660 733 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param5 13167324" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param5 1000000 1295 ns/op 576 B/op 6 allocs/op BenchmarkKocha_Param5 1000000 1138 ns/op 440 B/op 10 allocs/op BenchmarkLARS_Param5 11580613 105 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param5 473596 2755 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Param5 230756 5111 ns/op 1232 B/op 11 allocs/op BenchmarkPat_Param5 469190 3370 ns/op 888 B/op 29 allocs/op BenchmarkPossum_Param5 1000000 1002 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param5 1422129 844 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Param5 2263789 539 ns/op 240 B/op 1 allocs/op BenchmarkTango_Param5 1000000 1256 ns/op 360 B/op 8 allocs/op BenchmarkTigerTonic_Param5 175500 7492 ns/op 2279 B/op 39 allocs/op BenchmarkTraffic_Param5 233631 5816 ns/op 2208 B/op 27 allocs/op BenchmarkVulcan_Param5 1923416 629 ns/op 98 B/op 3 allocs/op BenchmarkAce_Param20 4321266 281 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param20 31501641 35.2 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param20 335204 3489 ns/op 1665 B/op 5 allocs/op BenchmarkBeego_Param20 503674 2860 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param20 298922 4741 ns/op 2031 B/op 6 allocs/op BenchmarkChi_Param20 878181 1957 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param20 1000000 1360 ns/op 640 B/op 1 allocs/op BenchmarkEcho_Param20 2104946 580 ns/op 0 B/op 0 allocs/op BenchmarkGin_Param20 4167204 290 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param20 173064 7514 ns/op 3796 B/op 15 allocs/op BenchmarkGoji_Param20 458778 2651 ns/op 1247 B/op 2 allocs/op BenchmarkGojiv2_Param20 364862 3178 ns/op 1632 B/op 11 allocs/op BenchmarkGoJsonRest_Param20 125514 9760 ns/op 4485 B/op 20 allocs/op BenchmarkGoRestful_Param20 101217 11964 ns/op 6715 B/op 18 allocs/op BenchmarkGorillaMux_Param20 147654 8132 ns/op 3452 B/op 12 allocs/op BenchmarkGowwwRouter_Param20 1000000 1225 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param20 4920895 247 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param20 173202 6605 ns/op 3196 B/op 10 allocs/op BenchmarkKocha_Param20 345988 3620 ns/op 1808 B/op 27 allocs/op BenchmarkLARS_Param20 4592326 262 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param20 166492 7286 ns/op 2924 B/op 12 allocs/op BenchmarkMartini_Param20 122162 10653 ns/op 3595 B/op 13 allocs/op BenchmarkPat_Param20 78630 15239 ns/op 4424 B/op 93 allocs/op BenchmarkPossum_Param20 1000000 1008 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param20 294981 4587 ns/op 2284 B/op 7 allocs/op BenchmarkRivet_Param20 691798 2090 ns/op 1024 B/op 1 allocs/op BenchmarkTango_Param20 842440 2505 ns/op 856 B/op 8 allocs/op BenchmarkTigerTonic_Param20 38614 31509 ns/op 9870 B/op 119 allocs/op BenchmarkTraffic_Param20 57633 21107 ns/op 7853 B/op 47 allocs/op BenchmarkVulcan_Param20 1000000 1178 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParamWrite 7330743 180 ns/op 8 B/op 1 allocs/op BenchmarkAero_ParamWrite 13833598 86.7 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParamWrite 1363321 867 ns/op 456 B/op 5 allocs/op 
BenchmarkBeego_ParamWrite 1000000 1104 ns/op 360 B/op 4 allocs/op BenchmarkBone_ParamWrite 1000000 1475 ns/op 816 B/op 6 allocs/op BenchmarkChi_ParamWrite 1320590 892 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParamWrite 7093605 172 ns/op 32 B/op 1 allocs/op BenchmarkEcho_ParamWrite 8434424 161 ns/op 8 B/op 1 allocs/op BenchmarkGin_ParamWrite 10377034 118 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParamWrite 1000000 1266 ns/op 656 B/op 9 allocs/op BenchmarkGoji_ParamWrite 1874168 654 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_ParamWrite 459032 2352 ns/op 1360 B/op 13 allocs/op BenchmarkGoJsonRest_ParamWrite 499434 2145 ns/op 1128 B/op 18 allocs/op BenchmarkGoRestful_ParamWrite 241087 5470 ns/op 4200 B/op 15 allocs/op BenchmarkGorillaMux_ParamWrite 425686 2522 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_ParamWrite 922172 1778 ns/op 976 B/op 8 allocs/op BenchmarkHttpRouter_ParamWrite 15392049" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParamWrite 1973385 597 ns/op 352 B/op 3 allocs/op BenchmarkKocha_ParamWrite 4262500 281 ns/op 56 B/op 3 allocs/op BenchmarkLARS_ParamWrite 10764410 113 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParamWrite 486769 2726 ns/op 1176 B/op 14 allocs/op BenchmarkMartini_ParamWrite 264804 4842 ns/op 1176 B/op 14 allocs/op BenchmarkPat_ParamWrite 735116 2047 ns/op 960 B/op 15 allocs/op BenchmarkPossum_ParamWrite 1000000 1004 ns/op 496 B/op 5 allocs/op BenchmarkR2router_ParamWrite 1592136 768 ns/op 432 B/op 5 allocs/op BenchmarkRivet_ParamWrite 3582051 339 ns/op 112 B/op 2 allocs/op BenchmarkTango_ParamWrite 2237337 534 ns/op 136 B/op 4 allocs/op BenchmarkTigerTonic_ParamWrite 439608 3136 ns/op 1216 B/op 21 allocs/op BenchmarkTraffic_ParamWrite 306979 4328 ns/op 2280 B/op 25 allocs/op BenchmarkVulcan_ParamWrite 2529973 472 ns/op 98 B/op 3 allocs/op ``` ```sh BenchmarkGin_GithubStatic 15629472 76.7 ns/op 0 B/op 0 allocs/op BenchmarkAce_GithubStatic 15542612 75.9 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubStatic 24777151 48.5 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubStatic 2788894 435 ns/op 120 B/op 3 allocs/op BenchmarkBeego_GithubStatic 1000000 1064 ns/op 352 B/op 3 allocs/op BenchmarkBone_GithubStatic 93507 12838 ns/op 2880 B/op 60 allocs/op BenchmarkChi_GithubStatic 1387743 860 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GithubStatic 39384996 30.4 ns/op 0 B/op 0 allocs/op BenchmarkEcho_GithubStatic 12076382 99.1 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubStatic 1596495 756 ns/op 296 B/op 5 allocs/op BenchmarkGoji_GithubStatic 6364876 189 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_GithubStatic 550202 2098 ns/op 1312 B/op 10 allocs/op BenchmarkGoRestful_GithubStatic 102183 12552 ns/op 4256 B/op 13 allocs/op BenchmarkGoJsonRest_GithubStatic 1000000 1029 ns/op 329 B/op 11 allocs/op BenchmarkGorillaMux_GithubStatic 255552 5190 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_GithubStatic 15531916 77.1 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_GithubStatic 27920724 43.1 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GithubStatic 21448953 55.8 ns/op 0 B/op 0 allocs/op BenchmarkKocha_GithubStatic 21405310 56.0 ns/op 0 B/op 0 allocs/op BenchmarkLARS_GithubStatic 13625156" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubStatic 1000000 1747 ns/op 736 B/op 8 allocs/op BenchmarkMartini_GithubStatic 187186 7326 ns/op 768 B/op 9 allocs/op BenchmarkPat_GithubStatic 109143 11563 ns/op 3648 B/op 76 allocs/op BenchmarkPossum_GithubStatic 1575898 770 ns/op 416 B/op 3 allocs/op BenchmarkR2router_GithubStatic 3046231 404 ns/op 144 B/op 4 
allocs/op BenchmarkRivet_GithubStatic 11484826 105 ns/op 0 B/op 0 allocs/op BenchmarkTango_GithubStatic 1000000 1153 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_GithubStatic 4929780 249 ns/op 48 B/op 1 allocs/op BenchmarkTraffic_GithubStatic 106351 11819 ns/op 4664 B/op 90 allocs/op BenchmarkVulcan_GithubStatic 1613271 722 ns/op 98 B/op 3 allocs/op BenchmarkAce_GithubParam 8386032 143 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubParam 11816200 102 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubParam 1000000 1012 ns/op 496 B/op 5 allocs/op BenchmarkBeego_GithubParam 1000000 1157 ns/op 352 B/op 3 allocs/op BenchmarkBone_GithubParam 184653 6912 ns/op 1888 B/op 19 allocs/op BenchmarkChi_GithubParam 1000000 1102 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GithubParam 3484798 352 ns/op 128 B/op 1 allocs/op BenchmarkEcho_GithubParam 6337380 189 ns/op 0 B/op 0 allocs/op BenchmarkGin_GithubParam 9132032 131 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubParam 1000000 1446 ns/op 712 B/op 9 allocs/op BenchmarkGoji_GithubParam 1248640 977 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GithubParam 383233 2784 ns/op 1408 B/op 13 allocs/op BenchmarkGoJsonRest_GithubParam 1000000 1991 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_GithubParam 76414 16015 ns/op 4352 B/op 16 allocs/op BenchmarkGorillaMux_GithubParam 150026 7663 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_GithubParam 1592044 751 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GithubParam 10420628 115 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GithubParam 1403755 835 ns/op 384 B/op 4 allocs/op BenchmarkKocha_GithubParam 2286170 533 ns/op 128 B/op 5 allocs/op BenchmarkLARS_GithubParam 9540374 129 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubParam 533154 2742 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GithubParam 119397 9638 ns/op 1152 B/op 11 allocs/op BenchmarkPat_GithubParam 150675 8858 ns/op 2408 B/op 48 allocs/op BenchmarkPossum_GithubParam 1000000 1001 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GithubParam 1602886 761 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GithubParam 2986579 409 ns/op 96 B/op 1 allocs/op BenchmarkTango_GithubParam 1000000 1356 ns/op 344 B/op 8 allocs/op BenchmarkTigerTonic_GithubParam 388899 3429 ns/op 1176 B/op 22 allocs/op BenchmarkTraffic_GithubParam 123160 9734 ns/op 2816 B/op 40 allocs/op BenchmarkVulcan_GithubParam 1000000 1138 ns/op 98 B/op 3 allocs/op BenchmarkAce_GithubAll 40543 29670 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubAll 57632 20648 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubAll 9234 216179 ns/op 86448 B/op 943 allocs/op BenchmarkBeego_GithubAll 7407 243496 ns/op 71456 B/op 609 allocs/op BenchmarkBone_GithubAll 420 2922835 ns/op 720160 B/op 8620 allocs/op BenchmarkChi_GithubAll 7620 238331 ns/op 87696 B/op 609 allocs/op BenchmarkDenco_GithubAll 18355 64494 ns/op 20224 B/op 167 allocs/op BenchmarkEcho_GithubAll 31251 38479 ns/op 0 B/op 0 allocs/op BenchmarkGin_GithubAll 43550 27364 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubAll 4117 300062 ns/op 131656 B/op 1686 allocs/op BenchmarkGoji_GithubAll 3274 416158 ns/op 56112 B/op 334 allocs/op BenchmarkGojiv2_GithubAll 1402 870518 ns/op 352720 B/op 4321 allocs/op BenchmarkGoJsonRest_GithubAll 2976 401507 ns/op 134371 B/op 2737 allocs/op BenchmarkGoRestful_GithubAll 410 2913158 ns/op 910144 B/op 2938 allocs/op BenchmarkGorillaMux_GithubAll 346 3384987 ns/op 251650 B/op 1994 allocs/op BenchmarkGowwwRouter_GithubAll 10000 143025 ns/op 72144 B/op 501 allocs/op BenchmarkHttpRouter_GithubAll 55938 21360 ns/op 0 B/op 0 allocs/op 
BenchmarkHttpTreeMux_GithubAll 10000 153944 ns/op 65856 B/op 671 allocs/op BenchmarkKocha_GithubAll 10000 106315 ns/op 23304 B/op 843 allocs/op BenchmarkLARS_GithubAll 47779 25084 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubAll 3266 371907 ns/op 149409 B/op 1624 allocs/op BenchmarkMartini_GithubAll 331 3444706 ns/op 226551 B/op 2325 allocs/op BenchmarkPat_GithubAll 273 4381818 ns/op 1483152 B/op 26963 allocs/op BenchmarkPossum_GithubAll 10000 164367 ns/op 84448 B/op 609 allocs/op BenchmarkR2router_GithubAll 10000 160220 ns/op 77328 B/op 979 allocs/op BenchmarkRivet_GithubAll 14625 82453 ns/op 16272 B/op 167 allocs/op BenchmarkTango_GithubAll 6255 279611 ns/op 63826 B/op 1618 allocs/op BenchmarkTigerTonic_GithubAll 2008 687874 ns/op 193856 B/op 4474 allocs/op BenchmarkTraffic_GithubAll 355 3478508 ns/op 820744 B/op 14114 allocs/op BenchmarkVulcan_GithubAll 6885 193333 ns/op 19894 B/op 609 allocs/op ``` ```sh BenchmarkGin_GPlusStatic 19247326 62.2 ns/op 0 B/op 0 allocs/op BenchmarkAce_GPlusStatic 20235060 59.2 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlusStatic 31978935 37.6 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusStatic 3516523 341 ns/op 104 B/op 3 allocs/op BenchmarkBeego_GPlusStatic 1212036 991 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlusStatic 6736242 183 ns/op 32 B/op 1 allocs/op BenchmarkChi_GPlusStatic 1490640 814 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlusStatic 55006856 21.8 ns/op 0 B/op 0 allocs/op BenchmarkEcho_GPlusStatic 17688258 67.9 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlusStatic 1829181 666 ns/op 280 B/op 5 allocs/op BenchmarkGoji_GPlusStatic 9147451 130 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_GPlusStatic 594015 2063 ns/op 1312 B/op 10 allocs/op BenchmarkGoJsonRest_GPlusStatic 1264906 950 ns/op 329 B/op 11 allocs/op BenchmarkGoRestful_GPlusStatic 231558 5341 ns/op 3872 B/op 13 allocs/op BenchmarkGorillaMux_GPlusStatic 908418 1809 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_GPlusStatic 40684604 29.5 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_GPlusStatic 46742804 25.7 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusStatic 32567161 36.9 ns/op 0 B/op 0 allocs/op BenchmarkKocha_GPlusStatic 33800060 35.3 ns/op 0 B/op 0 allocs/op BenchmarkLARS_GPlusStatic 20431858 60.0 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusStatic 1000000 1745 ns/op 736 B/op 8 allocs/op BenchmarkMartini_GPlusStatic 442248 3619 ns/op 768 B/op 9 allocs/op BenchmarkPat_GPlusStatic 4328004 292 ns/op 96 B/op 2 allocs/op BenchmarkPossum_GPlusStatic 1570753 763 ns/op 416 B/op 3 allocs/op BenchmarkR2router_GPlusStatic 3339474 355 ns/op 144 B/op 4 allocs/op BenchmarkRivet_GPlusStatic 18570961 64.7 ns/op 0 B/op 0 allocs/op BenchmarkTango_GPlusStatic 1388702 860 ns/op 200 B/op 8 allocs/op BenchmarkTigerTonic_GPlusStatic 7803543 159 ns/op 32 B/op 1 allocs/op BenchmarkTraffic_GPlusStatic 878605 2171 ns/op 1112 B/op 16 allocs/op BenchmarkVulcan_GPlusStatic 2742446 437 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlusParam 11626975 105 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlusParam 16914322 71.6 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusParam 1405173 832 ns/op 480 B/op 5 allocs/op BenchmarkBeego_GPlusParam 1000000 1075 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlusParam 1000000 1557 ns/op 816 B/op 6 allocs/op BenchmarkChi_GPlusParam 1347926 894 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlusParam 5513000 212 ns/op 64 B/op 1 allocs/op BenchmarkEcho_GPlusParam 11884383 101 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlusParam 12898952" }, { "data": "ns/op 0 B/op 0 allocs/op 
BenchmarkGocraftWeb_GPlusParam 1000000 1194 ns/op 648 B/op 8 allocs/op BenchmarkGoji_GPlusParam 1857229 645 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GPlusParam 520939 2322 ns/op 1328 B/op 11 allocs/op BenchmarkGoJsonRest_GPlusParam 1000000 1536 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_GPlusParam 205449 5800 ns/op 4192 B/op 14 allocs/op BenchmarkGorillaMux_GPlusParam 395310 3188 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_GPlusParam 1851798 667 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GPlusParam 18420789 65.2 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusParam 1878463 629 ns/op 352 B/op 3 allocs/op BenchmarkKocha_GPlusParam 4495610 273 ns/op 56 B/op 3 allocs/op BenchmarkLARS_GPlusParam 14615976 83.2 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusParam 584145 2549 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GPlusParam 250501 4583 ns/op 1072 B/op 10 allocs/op BenchmarkPat_GPlusParam 1000000 1645 ns/op 576 B/op 11 allocs/op BenchmarkPossum_GPlusParam 1000000 1008 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GPlusParam 1708191 688 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GPlusParam 5795014 211 ns/op 48 B/op 1 allocs/op BenchmarkTango_GPlusParam 1000000 1091 ns/op 264 B/op 8 allocs/op BenchmarkTigerTonic_GPlusParam 760221 2489 ns/op 856 B/op 16 allocs/op BenchmarkTraffic_GPlusParam 309774 4039 ns/op 1872 B/op 21 allocs/op BenchmarkVulcan_GPlusParam 1935730 623 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlus2Params 9158314 134 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlus2Params 11300517 107 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlus2Params 1239238 961 ns/op 496 B/op 5 allocs/op BenchmarkBeego_GPlus2Params 1000000 1202 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlus2Params 335576 3725 ns/op 1168 B/op 10 allocs/op BenchmarkChi_GPlus2Params 1000000 1014 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlus2Params 4394598 280 ns/op 64 B/op 1 allocs/op BenchmarkEcho_GPlus2Params 7851861 154 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlus2Params 9958588 120 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlus2Params 1000000 1433 ns/op 712 B/op 9 allocs/op BenchmarkGoji_GPlus2Params 1325134 909 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GPlus2Params 405955 2870 ns/op 1408 B/op 14 allocs/op BenchmarkGoJsonRest_GPlus2Params 977038 1987 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_GPlus2Params 205018 6142 ns/op 4384 B/op 16 allocs/op BenchmarkGorillaMux_GPlus2Params 205641 6015 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_GPlus2Params 1748542 684 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GPlus2Params 14047102" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlus2Params 1418673 828 ns/op 384 B/op 4 allocs/op BenchmarkKocha_GPlus2Params 2334562 520 ns/op 128 B/op 5 allocs/op BenchmarkLARS_GPlus2Params 11954094 101 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlus2Params 491552 2890 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GPlus2Params 120532 9545 ns/op 1200 B/op 13 allocs/op BenchmarkPat_GPlus2Params 194739 6766 ns/op 2168 B/op 33 allocs/op BenchmarkPossum_GPlus2Params 1201224 1009 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GPlus2Params 1575535 756 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GPlus2Params 3698930 325 ns/op 96 B/op 1 allocs/op BenchmarkTango_GPlus2Params 1000000 1212 ns/op 344 B/op 8 allocs/op BenchmarkTigerTonic_GPlus2Params 349350 3660 ns/op 1200 B/op 22 allocs/op BenchmarkTraffic_GPlus2Params 169714 7862 ns/op 2248 B/op 28 allocs/op BenchmarkVulcan_GPlus2Params 1222288 974 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlusAll 845606 1398 ns/op 0 B/op 0 
allocs/op BenchmarkAero_GPlusAll 1000000 1009 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusAll 103830 11386 ns/op 5488 B/op 61 allocs/op BenchmarkBeego_GPlusAll 82653 14784 ns/op 4576 B/op 39 allocs/op BenchmarkBone_GPlusAll 36601 33123 ns/op 11744 B/op 109 allocs/op BenchmarkChi_GPlusAll 95264 12831 ns/op 5616 B/op 39 allocs/op BenchmarkDenco_GPlusAll 567681 2950 ns/op 672 B/op 11 allocs/op BenchmarkEcho_GPlusAll 720366 1665 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlusAll 1000000 1185 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlusAll 71575 16365 ns/op 8040 B/op 103 allocs/op BenchmarkGoji_GPlusAll 136352 9191 ns/op 3696 B/op 22 allocs/op BenchmarkGojiv2_GPlusAll 38006 31802 ns/op 17616 B/op 154 allocs/op BenchmarkGoJsonRest_GPlusAll 57238 21561 ns/op 8117 B/op 170 allocs/op BenchmarkGoRestful_GPlusAll 15147 79276 ns/op 55520 B/op 192 allocs/op BenchmarkGorillaMux_GPlusAll 24446 48410 ns/op 16112 B/op 128 allocs/op BenchmarkGowwwRouter_GPlusAll 150112 7770 ns/op 4752 B/op 33 allocs/op BenchmarkHttpRouter_GPlusAll 1367820 878 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusAll 166628 8004 ns/op 4032 B/op 38 allocs/op BenchmarkKocha_GPlusAll 265694 4570 ns/op 976 B/op 43 allocs/op BenchmarkLARS_GPlusAll 1000000 1068 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusAll 54564 23305 ns/op 9568 B/op 104 allocs/op BenchmarkMartini_GPlusAll 16274 73845 ns/op 14016 B/op 145 allocs/op BenchmarkPat_GPlusAll 27181 44478 ns/op 15264 B/op 271 allocs/op BenchmarkPossum_GPlusAll 122587 10277 ns/op 5408 B/op 39 allocs/op BenchmarkR2router_GPlusAll 130137 9297 ns/op 5040 B/op 63 allocs/op BenchmarkRivet_GPlusAll 532438 3323 ns/op 768 B/op 11 allocs/op BenchmarkTango_GPlusAll 86054 14531 ns/op 3656 B/op 104 allocs/op BenchmarkTigerTonic_GPlusAll 33936 35356 ns/op 11600 B/op 242 allocs/op BenchmarkTraffic_GPlusAll 17833 68181 ns/op 26248 B/op 341 allocs/op BenchmarkVulcan_GPlusAll 120109 9861 ns/op 1274 B/op 39 allocs/op ``` ```sh BenchmarkGin_ParseStatic 18877833 63.5 ns/op 0 B/op 0 allocs/op BenchmarkAce_ParseStatic 19663731 60.8 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseStatic 28967341 41.5 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseStatic 3006984 402 ns/op 120 B/op 3 allocs/op BenchmarkBeego_ParseStatic 1000000 1031 ns/op 352 B/op 3 allocs/op BenchmarkBone_ParseStatic 1782482 675 ns/op 144 B/op 3 allocs/op BenchmarkChi_ParseStatic 1453261 819 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParseStatic 45023595 26.5 ns/op 0 B/op 0 allocs/op BenchmarkEcho_ParseStatic 17330470 69.3 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseStatic 1644006 731 ns/op 296 B/op 5 allocs/op BenchmarkGoji_ParseStatic 7026930 170 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_ParseStatic 517618 2037 ns/op 1312 B/op 10 allocs/op BenchmarkGoJsonRest_ParseStatic 1227080 975 ns/op 329 B/op 11 allocs/op BenchmarkGoRestful_ParseStatic 192458 6659 ns/op 4256 B/op 13 allocs/op BenchmarkGorillaMux_ParseStatic 744062 2109 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_ParseStatic 37781062 31.8 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_ParseStatic 45311223 26.5 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseStatic 21383475 56.1 ns/op 0 B/op 0 allocs/op BenchmarkKocha_ParseStatic 29953290 40.1 ns/op 0 B/op 0 allocs/op BenchmarkLARS_ParseStatic 20036196 62.7 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseStatic 1000000 1740 ns/op 736 B/op 8 allocs/op BenchmarkMartini_ParseStatic 404156 3801 ns/op 768 B/op 9 allocs/op BenchmarkPat_ParseStatic 1547180 772 ns/op 240 B/op 5 allocs/op BenchmarkPossum_ParseStatic 1608991 757 ns/op 416 B/op 3 
allocs/op BenchmarkR2router_ParseStatic 3177936 385 ns/op 144 B/op 4 allocs/op BenchmarkRivet_ParseStatic 17783205 67.4 ns/op 0 B/op 0 allocs/op BenchmarkTango_ParseStatic 1210777 990 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_ParseStatic 5316440 231 ns/op 48 B/op 1 allocs/op BenchmarkTraffic_ParseStatic 496050 2539 ns/op 1256 B/op 19 allocs/op BenchmarkVulcan_ParseStatic 2462798 488 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParseParam 13393669 89.6 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseParam 19836619 60.4 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseParam 1405954 864 ns/op 467 B/op 5 allocs/op BenchmarkBeego_ParseParam 1000000 1065 ns/op 352 B/op 3 allocs/op BenchmarkBone_ParseParam 1000000 1698 ns/op 896 B/op 7 allocs/op BenchmarkChi_ParseParam 1356037 873 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParseParam 6241392 204 ns/op 64 B/op 1 allocs/op BenchmarkEcho_ParseParam 14088100 85.1 ns/op 0 B/op 0 allocs/op BenchmarkGin_ParseParam 17426064 68.9 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseParam 1000000 1254 ns/op 664 B/op 8 allocs/op BenchmarkGoji_ParseParam 1682574 713 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_ParseParam 502224 2333 ns/op 1360 B/op 12 allocs/op BenchmarkGoJsonRest_ParseParam 1000000 1401 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_ParseParam 182623 7097 ns/op 4576 B/op 14 allocs/op BenchmarkGorillaMux_ParseParam 482332 2477 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_ParseParam 1834873 657 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_ParseParam 23593393 51.0 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseParam 2100160 574 ns/op 352 B/op 3 allocs/op BenchmarkKocha_ParseParam 4837220 252 ns/op 56 B/op 3 allocs/op BenchmarkLARS_ParseParam 18411192" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseParam 571870 2398 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_ParseParam 286262 4268 ns/op 1072 B/op 10 allocs/op BenchmarkPat_ParseParam 692906 2157 ns/op 992 B/op 15 allocs/op BenchmarkPossum_ParseParam 1000000 1011 ns/op 496 B/op 5 allocs/op BenchmarkR2router_ParseParam 1722735 697 ns/op 432 B/op 5 allocs/op BenchmarkRivet_ParseParam 6058054 203 ns/op 48 B/op 1 allocs/op BenchmarkTango_ParseParam 1000000 1061 ns/op 280 B/op 8 allocs/op BenchmarkTigerTonic_ParseParam 890275 2277 ns/op 784 B/op 15 allocs/op BenchmarkTraffic_ParseParam 351322 3543 ns/op 1896 B/op 21 allocs/op BenchmarkVulcan_ParseParam 2076544 572 ns/op 98 B/op 3 allocs/op BenchmarkAce_Parse2Params 11718074 101 ns/op 0 B/op 0 allocs/op BenchmarkAero_Parse2Params 16264988 73.4 ns/op 0 B/op 0 allocs/op BenchmarkBear_Parse2Params 1238322 973 ns/op 496 B/op 5 allocs/op BenchmarkBeego_Parse2Params 1000000 1120 ns/op 352 B/op 3 allocs/op BenchmarkBone_Parse2Params 1000000 1632 ns/op 848 B/op 6 allocs/op BenchmarkChi_Parse2Params 1239477 955 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Parse2Params 4944133 245 ns/op 64 B/op 1 allocs/op BenchmarkEcho_Parse2Params 10518286 114 ns/op 0 B/op 0 allocs/op BenchmarkGin_Parse2Params 14505195 82.7 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Parse2Params 1000000 1437 ns/op 712 B/op 9 allocs/op BenchmarkGoji_Parse2Params 1689883 707 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Parse2Params 502334 2308 ns/op 1344 B/op 11 allocs/op BenchmarkGoJsonRest_Parse2Params 1000000 1771 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_Parse2Params 159092 7583 ns/op 4928 B/op 14 allocs/op BenchmarkGorillaMux_Parse2Params 417548 2980 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_Parse2Params 1751737 686 ns/op 432 B/op 3 allocs/op 
BenchmarkHttpRouter_Parse2Params 18089204 66.3 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Parse2Params 1556986 777 ns/op 384 B/op 4 allocs/op BenchmarkKocha_Parse2Params 2493082 485 ns/op 128 B/op 5 allocs/op BenchmarkLARS_Parse2Params 15350108 78.5 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Parse2Params 530974 2605 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Parse2Params 247069 4673 ns/op 1152 B/op 11 allocs/op BenchmarkPat_Parse2Params 816295 2126 ns/op 752 B/op 16 allocs/op BenchmarkPossum_Parse2Params 1000000 1002 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Parse2Params 1569771 733 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Parse2Params 4080546 295 ns/op 96 B/op 1 allocs/op BenchmarkTango_Parse2Params 1000000 1121 ns/op 312 B/op 8 allocs/op BenchmarkTigerTonic_Parse2Params 399556 3470 ns/op 1168 B/op 22 allocs/op BenchmarkTraffic_Parse2Params 314194 4159 ns/op 1944 B/op 22 allocs/op BenchmarkVulcan_Parse2Params 1827559 664 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParseAll 478395 2503 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseAll 715392 1658 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseAll 59191 20124 ns/op 8928 B/op 110 allocs/op BenchmarkBeego_ParseAll 45507 27266 ns/op 9152 B/op 78 allocs/op BenchmarkBone_ParseAll 29328 41459 ns/op 16208 B/op 147 allocs/op BenchmarkChi_ParseAll 48531 25053 ns/op 11232 B/op 78 allocs/op BenchmarkDenco_ParseAll 325532 4284 ns/op 928 B/op 16 allocs/op BenchmarkEcho_ParseAll 433771 2759 ns/op 0 B/op 0 allocs/op BenchmarkGin_ParseAll 576316 2082 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseAll 41500 29692 ns/op 13728 B/op 181 allocs/op BenchmarkGoji_ParseAll 80833 15563 ns/op 5376 B/op 32 allocs/op BenchmarkGojiv2_ParseAll 19836 60335 ns/op 34448 B/op 277 allocs/op BenchmarkGoJsonRest_ParseAll 32210 38027 ns/op 13866 B/op 321 allocs/op BenchmarkGoRestful_ParseAll 6644 190842 ns/op 117600 B/op 354 allocs/op BenchmarkGorillaMux_ParseAll 12634 95894 ns/op 30288 B/op 250 allocs/op BenchmarkGowwwRouter_ParseAll 98152 12159 ns/op 6912 B/op 48 allocs/op BenchmarkHttpRouter_ParseAll 933208 1273 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseAll 107191 11554 ns/op 5728 B/op 51 allocs/op BenchmarkKocha_ParseAll 184862 6225 ns/op 1112 B/op 54 allocs/op BenchmarkLARS_ParseAll 644546 1858 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseAll 26145 46484 ns/op 19136 B/op 208 allocs/op BenchmarkMartini_ParseAll 10000 121838 ns/op 25072 B/op 253 allocs/op BenchmarkPat_ParseAll 25417 47196 ns/op 15216 B/op 308 allocs/op BenchmarkPossum_ParseAll 58550 20735 ns/op 10816 B/op 78 allocs/op BenchmarkR2router_ParseAll 72732 16584 ns/op 8352 B/op 120 allocs/op BenchmarkRivet_ParseAll 281365 4968 ns/op 912 B/op 16 allocs/op BenchmarkTango_ParseAll 42831 28668 ns/op 7168 B/op 208 allocs/op BenchmarkTigerTonic_ParseAll 23774 49972 ns/op 16048 B/op 332 allocs/op BenchmarkTraffic_ParseAll 10000 104679 ns/op 45520 B/op 605 allocs/op BenchmarkVulcan_ParseAll 64810 18108 ns/op 2548 B/op 78 allocs/op ```" } ]
{ "category": "Runtime", "file_name": "BENCHMARKS.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "% runc-pause \"8\" runc-pause - suspend all processes inside the container runc pause container-id The pause command suspends all processes in the instance of the container identified by container-id. Use runc list to identify instances of containers and their current status. runc-list(8), runc-resume(8), runc(8)." } ]
{ "category": "Runtime", "file_name": "runc-pause.8.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "name: Report a bug about: Create a bug report to help us improve kube-router title: '' labels: bug assignees: '' What happened? A clear and concise description of what the bug is. What did you expect to happen? A clear and concise description of what you expected to happen. How can we reproduce the behavior you experienced? Steps to reproduce the behavior: Step 1 Step 2 Step 3 Step 4 Screenshots / Architecture Diagrams / Network Topologies If applicable, add those here to help explain your problem. System Information (please complete the following information): Kube-Router Version (`kube-router --version`): [e.g. 1.0.1] Kube-Router Parameters: [e.g. --run-router --run-service-proxy --enable-overlay --overlay-type=full etc.] Kubernetes Version (`kubectl version`) : [e.g. 1.18.3] Cloud Type: [e.g. AWS, GCP, Azure, on premise] Kubernetes Deployment Type: [e.g. EKS, GKE, Kops, Kubeadm, etc.] Kube-Router Deployment Type: [e.g. DaemonSet, System Service] Cluster Size: [e.g. 200 Nodes] Logs, other output, metrics Please provide logs, other kind of output or observed metrics here. Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "The Incus client and daemon respect some environment variables to adapt to the user's environment and to turn some advanced features on and off. Name | Description : | :- `INCUS_DIR` | The Incus data directory `INCUSINSECURETLS` | If set to true, allows all default Go ciphers both for client <-> server communication and server <-> image servers (server <-> server and clustering are not affected) `PATH` | List of paths to look into when resolving binaries `http_proxy` | Proxy server URL for HTTP `https_proxy` | Proxy server URL for HTTPS `no_proxy` | List of domains, IP addresses or CIDR ranges that don't require the use of a proxy Name | Description : | :- `EDITOR` | What text editor to use `VISUAL` | What text editor to use (if `EDITOR` isn't set) `INCUS_CONF` | Path to the client configuration directory `INCUSGLOBALCONF` | Path to the global client configuration directory `INCUS_REMOTE` | Name of the remote to use (overrides configured default remote) `INCUS_PROJECT` | Name of the project to use (overrides configured default project) Name | Description : | :- `INCUSAGENTPATH` | Path to the directory including the `incus-agent` builds `INCUSCLUSTERUPDATE` | Script to call on a cluster update `INCUSDEVMONITORDIR` | Path to be monitored by the device monitor. This is primarily for testing `INCUS_DOCUMENTATION` | Path to the documentation to serve through the web server `INCUSEXECPATH` | Full path to the Incus binary (used when forking subcommands) `INCUSIDMAPPEDMOUNTS_DISABLE` | Disable idmapped mounts support (useful when testing traditional UID shifting) `INCUSLXCTEMPLATE_CONFIG` | Path to the LXC template configuration directory `INCUSOVMFPATH` | Path to an OVMF build including `OVMFCODE.fd` and `OVMFVARS.ms.fd` `INCUSSECURITYAPPARMOR` | If set to `false`, forces AppArmor off `INCUS_UI` | Path to the web UI to serve through the web server `INCUSUSBIDSPATH` | Path to the hwdata `usb.ids` file" } ]
{ "category": "Runtime", "file_name": "environment.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "provides debugging and validation tools for Kubelet Container Runtime Interface (CRI). `cri-tools` includes two tools: `crictl` and `critest`. `crictl` is the CLI for Kubelet CRI, in this document, we will show how to use `crictl` to run Pods in Kata containers. Note: `cri-tools` is only used for debugging and validation purpose, and don't use it to run production workloads. Note: For how to install and configure `cri-tools` with CRI runtimes like `containerd` or CRI-O, please also refer to other . Sample config files in this document can be found . ```bash $ sudo crictl runp -r kata sandbox_config.json 16a62b035940f9c7d79fd53e93902d15ad21f7f9b3735f1ac9f51d16539b836b $ sudo crictl pods POD ID CREATED STATE NAME NAMESPACE ATTEMPT 16a62b035940f 21 seconds ago Ready busybox-pod 0 ``` ```bash $ sudo crictl create 16a62b035940f containerconfig.json sandboxconfig.json e6ca0e0f7f532686236b8b1f549e4878e4fe32ea6b599a5d684faf168b429202 ``` List containers and check the container is in `Created` state: ```bash $ sudo crictl ps -a CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e6ca0e0f7f532 docker.io/library/busybox:latest 19 seconds ago Created busybox-container 0 16a62b035940f ``` ```bash $ sudo crictl start e6ca0e0f7f532 e6ca0e0f7f532 ``` List containers and we can see that the container state has changed from `Created` to `Running`: ```bash $ sudo crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e6ca0e0f7f532 docker.io/library/busybox:latest About a minute ago Running busybox-container 0 16a62b035940f ``` And last we can `exec` into `busybox` container: ```bash $ sudo crictl exec -it e6ca0e0f7f532 sh ``` And run commands in it: ``` / # hostname busybox_host / # id uid=0(root) gid=0(root) ``` In this example, we will create two Pods: one is for `redis` server, and another one is `redis` client. It's also possible to start a container within a single command: ```bash $ sudo crictl run -r kata redisservercontainerconfig.json redisserversandboxconfig.json bb36e05c599125842c5193909c4de186b1cee3818f5d17b951b6a0422681ce4b ``` ```bash $ sudo crictl run -r kata redisclientcontainerconfig.json redisclientsandboxconfig.json e344346c5414e3f51f97f20b2262e0b7afe457750e94dc0edb109b94622fc693 ``` After the new container started, we can check the running Pods and containers. ```bash $ sudo crictl pods POD ID CREATED STATE NAME NAMESPACE ATTEMPT 469d08a7950e3 30 seconds ago Ready redis-client-pod 0 02c12fdb08219 About a minute ago Ready redis-server-pod 0 $ sudo crictl ps CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e344346c5414e docker.io/library/redis:6.0.8-alpine 35 seconds ago Running redis-client 0 469d08a7950e3 bb36e05c59912 docker.io/library/redis:6.0.8-alpine About a minute ago Running redis-server 0 02c12fdb08219 ``` To connect to the `redis-server`. First we need to get the `redis-server`'s IP address. ```bash $ server=$(sudo crictl inspectp 02c12fdb08219 | jq .status.network.ip | tr -d '\"' ) $ echo $server 172.19.0.118 ``` Launch `redis-cli` in the new Pod and connect server running at `172.19.0.118`. ```bash $ sudo crictl exec -it e344346c5414e redis-cli -h $server 172.19.0.118:6379> get test-key (nil) 172.19.0.118:6379> set test-key test-value OK 172.19.0.118:6379> get test-key \"test-value\" ``` Then back to `redis-server`, check if the `test-key` is set in server. ```bash $ sudo crictl exec -it bb36e05c59912 redis-cli get test-key \"test-val\" ``` Returned `test-val` is just set by `redis-cli` in `redis-client` Pod." } ]
{ "category": "Runtime", "file_name": "run-kata-with-crictl.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<img src=\"https://raw.githubusercontent.com/cncf/artwork/main/projects/kube-ovn/horizontal/color/kube-ovn-horizontal-color.svg\" alt=\"kubeovnlogo\" width=\"500\"/> ](https://github.com/kubeovn/kube-ovn/blob/master/LICENSE) ](https://github.com/kubeovn/kube-ovn/releases) ](https://img.shields.io/docker/pulls/kubeovn/kube-ovn) ](https://goreportcard.com/report/github.com/kubeovn/kube-ovn) If you miss the good old days of SDN, then Kube-OVN is your choice in Cloud Native era. Kube-OVN, a , integrates the OVN-based Network Virtualization with Kubernetes. It offers an advanced Container Network Fabric for Enterprises with the most functions, extreme performance and the easiest operation. The Kube-OVN community is waiting for your participation! Join the Follow us at Chat with us at Namespaced Subnets: Each Namespace can have a unique Subnet (backed by a Logical Switch). Pods within the Namespace will have IP addresses allocated from the Subnet. It's also possible for multiple Namespaces to share a Subnet. Vlan/Underlay Support: In addition to overlay network, Kube-OVN also supports underlay and vlan mode network for better performance and direct connectivity with physical network. VPC Support: Multi-tenant network with independent address spaces, where each tenant has its own network infrastructure such as eips, nat gateways, security groups and loadbalancers. Static IP Addresses for Workloads: Allocate random or static IP addresses to workloads. Multi-Cluster Network: Connect different Kubernetes/Openstack clusters into one L3 network. TroubleShooting Tools: Handy tools to diagnose, trace, monitor and dump container network traffic to help troubleshoot complicate network issues. Prometheus & Grafana Integration: Exposing network quality metrics like pod/node/service/dns connectivity/latency in Prometheus format. ARM Support: Kube-OVN can run on x86_64 and arm64 platforms. Windows Support: Kube-OVN can run on Windows worker nodes. Subnet Isolation: Can configure a Subnet to deny any traffic from source IP addresses not within the same Subnet. Can whitelist specific IP addresses and IP ranges. Network Policy: Implementing networking.k8s.io/NetworkPolicy API by high performance ovn ACL. DualStack IP Support: Pod can run in IPv4-Only/IPv6-Only/DualStack mode. Pod NAT and EIP: Manage the pod external traffic and external ip like tradition VM. IPAM for Multi NIC: A cluster-wide IPAM for CNI plugins other than Kube-OVN, such as macvlan/vlan/host-device to take advantage of subnet and static ip allocation functions in Kube-OVN. Dynamic QoS: Configure Pod/Gateway Ingress/Egress traffic rate/priority/loss/latency on the fly. Embedded Load Balancers: Replace kube-proxy with the OVN embedded high performance distributed L2 Load Balancer. Distributed Gateways: Every Node can act as a Gateway to provide external network" }, { "data": "Namespaced Gateways: Every Namespace can have a dedicated Gateway for Egress traffic. Direct External ConnectivityPod IP can be exposed to external network directly. BGP Support: Pod/Subnet IP can be exposed to external by BGP router protocol. Traffic Mirror: Duplicated container network traffic for monitoring, diagnosing and replay. Hardware Offload: Boost network performance and save CPU resource by offloading OVS flow table to hardware. DPDK Support: DPDK application now can run in Pod with OVS-DPDK. Cilium Integration: Cilium can take over the work of kube-proxy. F5 CES Integration: F5 can help better manage the outgoing traffic of k8s pod/container. 
The Switch, Router and Firewall shown in the diagram below are all distributed on all Nodes. There is no single point of failure for the in-cluster network. Kube-OVN offers Prometheus integration with Grafana dashboards to visualize network quality. Kube-OVN is easy to install with all necessary components/dependencies included. If you already have a Kubernetes cluster without any CNI plugin, please refer to the . If you want to install Kubernetes from scratch, you can try or, for Chinese users, try to deploy a production-ready Kubernetes cluster with Kube-OVN embedded. We are looking forward to your PR! Q: What's the difference from other CNIs? A: Different CNI implementations have different scopes; there is no single implementation that can resolve all network problems. Kube-OVN aims to bring SDN to Cloud Native. If you miss old-school network concepts like VPC, Subnet, customized routes, security groups etc., you cannot find corresponding functions in any other CNI; Kube-OVN is then your only choice when you need these functions to build a datacenter or enterprise network fabric. Q: How about the scalability of Kube-OVN? A: We have simulated 200 Nodes with 10k Pods by kubemark, and it works fine. Some community users have deployed a single cluster with 500 Nodes and 10k+ Pods in production. That still does not hit the limit, but we don't have enough resources to find the limit. Q: What's the Addressing/IPAM? Node-specific or cluster-wide? A: Kube-OVN uses a cluster-wide IPAM; a Pod address can float to any node in the cluster. Q: What's the encapsulation? A: For overlay mode, Kube-OVN uses Geneve/Vxlan/STT to encapsulate packets between nodes. For Vlan/Underlay mode there is no encapsulation." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Kube-OVN", "subcategory": "Cloud Native Network" }
[ { "data": "The Vineyard community takes all security bugs seriously. Thank you for improving the security quality of vineyard. We adopt a private disclosure process for security issues. If you find a bug, a security vulnerability or any security related issues, please DO NOT file a public issue. Do not create a Github issue. Instead, send your report privately to . Security reports are greatly appreciated and we will publicly thank you for it. Please provide as much information as possible, so we can react quickly. For instance, that could include: Description of the location and potential impact of the vulnerability; A detailed description of the steps required to reproduce the vulnerability (POC scripts, screenshots, and compressed packet captures are all helpful to us) Whatever else you think we might need to identify the source of this vulnerability One of our maintainers will acknowledge your email within 48 hours, and will send a more detailed response within 48 hours indicating the next steps in handling your report. After the initial reply to your report, the maintainers will endeavor to keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "- - - - - Please make sure to read and observe our . Carina is a community project driven by its community which strives to promote a healthy, friendly and productive environment. Carina is a standard kubernetes CSI plugin. Users can use standard kubernetes storage resources like storageclass/PVC/PV to request storage media. Fork the repository on GitHub. Make your changes on your fork repository. Submit a PR. We will help you to contribute in different areas like filing issues, developing features, fixing critical bugs and getting your work reviewed and merged. If you have questions about the development process, feel free to . We are always in need of help, be it fixing documentation, reporting bugs or writing some code. Look at places where you feel best coding practices aren't followed, code refactoring is needed or tests are missing. Here is how you get started. There are within the Carina organization. Each repository has beginner-friendly issues that provide a good first issue. For example, has and labels for issues that should not need deep knowledge of the system. We can help new contributors who wish to work on such issues. Another good way to contribute is to find a documentation improvement, such as a missing/broken link. Please see below for the workflow. When you are willing to take on an issue, just reply on the issue. The maintainer will assign it to you. While we encourage everyone to contribute code, it is also appreciated when someone reports an issue. Issues should be filed under the appropriate Carina sub-repository. Example: a Carina issue should be opened to . Please follow the prompted submission guidelines while opening an issue. Please do not ever hesitate to ask a question or send a pull request. This is a rough outline of what a contributor's workflow looks like: Create a topic branch from where to base the contribution. This is usually main. Make commits of logical units. Push changes in a topic branch to a personal fork of the repository. Submit a pull request to . Pull requests are often called simply" }, { "data": "Carina generally follows the standard process. To submit a proposed change, please develop the code/fix and add new test cases. After that, run these local verifications before submitting pull request to predict the pass or fail of continuous integration. Run and pass `make vet` Run and pass `make test` To make it easier for your PR to receive reviews, consider the reviewers will need you to: follow . write . break large changes into a logical series of smaller patches which individually make easily understandable changes, and in aggregate solve a broader issue. We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ``` carina-node: add test codes for carina-node this add some unit test codes to improve code coverage for carina-node Fixes #666 ``` The format can be described more formally as follows: ``` <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools. Note: if your pull request isn't getting enough attention, you can use the reach out on WechatGroup to get help finding reviewers. 
There are multiple types of tests. The location of the test code varies with type, as do the specifics of the environment needed to successfully run the test: Unit: These confirm that a particular function behaves as intended. Unit test source code can be found adjacent to the corresponding source code within a given package. These are easily run locally by any developer. Integration: These tests cover interactions of package components or interactions between components and Kubernetes control plane components like API server. End-to-end (\"e2e\"): These are broad tests of overall system behavior and coherence. Continuous integration will run these tests on PRs." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "In the previous two posts we talked about gVisor's as well as how those are applied in the . Recently, a new container escape vulnerability () was announced that ties these topics well together. gVisor is to this specific issue, but it provides an interesting case study to continue our exploration of gVisor's security. While gVisor is not immune to vulnerabilities, to minimize the impact and remediate if a vulnerability is found. First, lets describe how the discovered vulnerability works. There are numerous ways one can send and receive bytes over the network with Linux. One of the most performant ways is to use a ring buffer, which is a memory region shared by the application and the kernel. These rings are created by calling with for receiving and for sending packets. The vulnerability is in the code that reads packets when `PACKETRXRING` is enabled. There is another option () that asks the kernel to leave some space in the ring buffer before each packet for anything the application needs, e.g. control structures. When a packet is received, the kernel calculates where to copy the packet to, taking the amount reserved before each packet into consideration. If the amount reserved is large, the kernel performed an incorrect calculation which could cause an overflow leading to an out-of-bounds write of up to 10 bytes, controlled by the attacker. The data in the write is easily controlled using the loopback to send a crafted packet and receiving it using a `PACKETRXRING` with a carefully selected `PACKET_RESERVE` size. ```c static int tpacketrcv(struct skbuff skb, struct net_device dev, struct packettype *pt, struct netdevice *orig_dev) { // ... if (sk->sktype == SOCKDGRAM) { macoff = netoff = TPACKETALIGN(po->tphdrlen) + 16 + po->tp_reserve; } else { unsigned int maclen = skbnetworkoffset(skb); // tp_reserve is unsigned int, netoff is unsigned short. // Addition can overflow netoff netoff = TPACKETALIGN(po->tphdrlen + (maclen < 16 ? 16 : maclen)) + po->tp_reserve; if (po->hasvnethdr) { netoff += sizeof(struct virtionethdr); do_vnet = true; } // Attacker controls netoff and can make macoff be smaller // than sizeof(struct virtionethdr) macoff = netoff - maclen; } // ... // \"macoff - sizeof(struct virtionethdr)\" can be negative, // resulting in a pointer before h.raw if (do_vnet && virtionethdrfromskb(skb, h.raw + macoff - sizeof(struct virtionethdr), vio_le(), true, 0)) { // ... ``` The capability is required to create the socket above. However, in order to support common debugging tools like `ping` and `tcpdump`, Docker containers, including those created for Kubernetes, are given `CAPNETRAW` by default and thus may be able to trigger this vulnerability to elevate privileges and escape the container. Next, we are going to explore why this vulnerability doesnt work in gVisor, and how gVisor could prevent the escape even if a similar vulnerability existed inside gVisors kernel. gVisor does not implement `PACKETRXRING`, but does support raw sockets which are required for `PACKETRXRING`. Raw sockets are a controversial feature to support in a sandbox environment. While it allows great customizations for essential tools like `ping`, it may allow packets to be written to the network without any" }, { "data": "In general, allowing an untrusted application to write crafted packets to the network is a questionable idea and a historical source of vulnerabilities. With that in mind, if `CAPNETRAW` is enabled by default, it would not be secure by default to run untrusted applications. 
After multiple discussions when raw sockets were first implemented, we decided to disable raw sockets by default, even if `CAPNETRAW` is given to the application. Instead, enabling raw sockets in gVisor requires the admin to set `--net-raw` flag to runsc when configuring the runtime, in addition to requiring the `CAPNETRAW` capability in the application. It comes at the expense that some tools may not work out of the box, but as part of our principle, we felt that it was important for the less secure configuration to be explicit. Since this bug was due to an overflow in the specific Linux implementation of the packet ring, gVisor's raw socket implementation is not affected. However, if there were a vulnerability in gVisor, containers would not be allowed to exploit it by default. As an alternative way to implement this same constraint, Kubernetes allows to be configured to customize requests. Cloud providers can use this to implement more stringent policies. For example, GKE implements an admission controller for gVisor that unless it has been explicitly set in the pod spec. gVisor has its own application kernel, called the Sentry, that is distinct from the host kernel. Just like what you would expect from a kernel, gVisor has a memory management subsystem, virtual file system, and a full network stack. The host network is only used as a transport to carry packets in and out the sandbox[^1]. The loopback interface which is used in the exploit stays completely inside the sandbox, never reaching the host. Therefore, even if the Sentry was vulnerable to the attack, there would be two factors that would prevent a container escape from happening. First, the vulnerability would be limited to the Sentry, and the attacker would compromise only the application kernel, bound by a restricted set of filters, discussed more in depth below. Second, the Sentry is a distinct implementation of the API, written in Go, which provides bounds checking that would have likely prevented access past the bounds of the shared region (e.g. see or , which have similar shared regions). Here, Kubernetes warrants slightly more explanation. gVisor makes pods the unit of isolation and a pod can run multiple containers. In other words, each pod is a gVisor instance, and each container is a set of processes running inside gVisor, isolated via Sentry-internal namespaces like regular containers inside a pod. If there were a vulnerability in gVisor, the privilege escalation would allow a container inside the pod to break out to other containers inside the same pod, but the container still cannot break out of the pod. gVisor follows a that the system should have two layers of protection, and those layers should require different compromises to be broken. We apply this principle by assuming that the Sentry (first layer of defense) . In order to protect the host kernel from a compromised Sentry, we wrap it around many security and isolations features to ensure only the minimal set of functionality from the host kernel is" }, { "data": "First, the sandbox runs inside a cgroup that can limit and throttle host resources being used. Second, the sandbox joins empty namespaces, including user and mount, to further isolate from the host. Next, it changes the process root to a read-only directory that contains only `/proc` and nothing else. Then, it executes with the unprivileged user/group ) with all capabilities stripped. 
Last and most importantly, a seccomp filter is added to tightly restrict which parts of the Linux syscall surface gVisor is allowed to access. The allowed host surface is a far smaller set of syscalls than the Sentry implements for applications to use. The filter not only restricts which syscalls can be called, but also checks that the arguments to these syscalls are within the expected set. Dangerous syscalls like <code>execve(2)</code>, <code>open(2)</code>, and <code>socket(2)</code> are prohibited, thus an attacker isn't able to execute binaries or acquire new resources on the host. Even if there were a vulnerability in gVisor that allowed an attacker to execute code inside the Sentry, the attacker would still have extremely limited privileges on the host. In fact, a compromised Sentry is much more restricted than a non-compromised regular container. For CVE-2020-14386 in particular, the attack would be blocked by more than one security layer: non-privileged user, no capabilities, and seccomp filters. Although the surface is drastically reduced, there is still a chance that there is a vulnerability in one of the allowed syscalls. That's why it's important to keep the surface small and carefully consider what syscalls are allowed. You can find the full set of allowed syscalls in the gVisor source. Another possible attack vector is resources that are present in the Sentry, like open file descriptors. The Sentry has file descriptors that an attacker could potentially use, such as log files, platform files (e.g. `/dev/kvm`), an RPC endpoint that allows external communication with the Sentry, and a Netstack endpoint that connects the sandbox to the network. The Netstack endpoint in particular is a concern because it gives direct access to the network. It's an `AF_PACKET` socket that allows arbitrary L2 packets to be written to the network. In the normal case, Netstack assembles the packets that go out to the network, giving the container control over only the payload. But if the Sentry is compromised, an attacker can craft packets and send them to the network. In many ways this is similar to anyone sending random packets over the internet, but still this is a place where the host kernel surface exposed is larger than we would like it to be. Security comes with many tradeoffs that are often hard to make, such as the decision to disable raw sockets by default. However, these tradeoffs have served us well, and we've found them to have paid off over time. CVE-2020-14386 offers great insight into how multiple layers of protection can be effective against such an attack. We cannot guarantee that a container escape will never happen in gVisor, but we do our best to make it as hard as we possibly can. If you have not tried gVisor yet, it's easier than you think: just follow the getting-started steps. <br> <br> -- [^1]: The host network may deliver them to local containers or send them out the NIC. The packet will be handled by many switches, routers, proxies, servers, etc. along the way, which may be subject to their own vulnerabilities." } ]
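To illustrate the allowlist-style seccomp filtering discussed above, here is a hedged sketch of a filter whose default action rejects everything and which then permits a tiny, deliberately incomplete set of syscalls. gVisor generates its own BPF programs rather than using libseccomp, so this is only the general idea; it assumes the third-party `github.com/seccomp/libseccomp-golang` package is available, and the syscall list is illustrative rather than sufficient for a real workload.
```go
// Sketch: an allowlist-style seccomp filter. Not gVisor's actual filter;
// assumes github.com/seccomp/libseccomp-golang. The allowed list is tiny and
// purely illustrative.
package main

import (
	seccomp "github.com/seccomp/libseccomp-golang"
)

func main() {
	// Default action: refuse everything not explicitly allowed (EPERM = 1).
	filter, err := seccomp.NewFilter(seccomp.ActErrno.SetReturnCode(1))
	if err != nil {
		panic(err)
	}
	// Allow only a minimal set of syscalls -- everything else, including
	// execve(2), open(2), and socket(2), is rejected by the default action.
	for _, name := range []string{"read", "write", "futex", "exit_group"} {
		sc, err := seccomp.GetSyscallFromName(name)
		if err != nil {
			panic(err)
		}
		if err := filter.AddRule(sc, seccomp.ActAllow); err != nil {
			panic(err)
		}
	}
	// Install the filter for this process and its future children.
	if err := filter.Load(); err != nil {
		panic(err)
	}
	// From here on, only the allowed syscalls will succeed.
}
```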
{ "category": "Runtime", "file_name": "2020-09-18-containing-a-real-vulnerability.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Copyright (C) 2014 by Oleku Konko Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." } ]
{ "category": "Runtime", "file_name": "LICENSE.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "layout: global title: Building Alluxio From Source This guide describes how to clone the Alluxio repository, compile the source code, and run tests in your environment. Alternatively, we have published a docker image with Java, Maven, Golang, and Git pre-installed to help build Alluxio source code. Checkout the Alluxio main branch from Github: ```shell $ git clone https://github.com/Alluxio/alluxio.git $ cd alluxio ``` This section guides you to setup pre-configured compilation environment based on our published docker image. You can skip this section and build Alluxio source code if JDK and Maven are already installed locally. Start a container named `alluxio-build` based on this image and get into this container to proceed: ```shell $ docker run -itd \\ --network=host \\ -v ${ALLUXIO_HOME}:/alluxio \\ -v ${HOME}/.m2:/root/.m2 \\ --name alluxio-build \\ alluxio/alluxio-maven bash $ docker exec -it -w /alluxio alluxio-build bash ``` Note that, Container path `/alluxio` is mapped to host path `${ALLUXIO_HOME}`, so the binary built will still be accessible outside the container afterwards. Container path `/root/.m2` is mapped to host path `${HOME}/.m2` to leverage your local copy of the maven cache. This is optional. When done using the container, destroy it by running ```shell $ docker rm -f alluxio-build ``` Build the source code using Maven: ```shell $ mvn clean install -DskipTests ``` To speed up the compilation, you can run the following instruction to skip different checks: ```shell $ mvn clean install \\ -DskipTests \\ -Dmaven.javadoc.skip=true \\ -Dfindbugs.skip=true \\ -Dcheckstyle.skip=true \\ -Dlicense.skip=true ``` The Maven build system fetches its dependencies, compiles source code, runs unit tests, and packages the system. If this is the first time you are building the project, it can take a while to download all the dependencies. Subsequent builds, however, will be much faster. Once Alluxio is built, you can start it with: ```shell $ ./bin/alluxio process start local ``` To verify that Alluxio is running, you can visit or check the log in the `alluxio/logs` directory. The `worker.log` and `master.log` files will typically be the most useful. It may take a few seconds for the web server to start. You can run a test command to verify that data can be read and written to Alluxio: ```shell $ ./bin/alluxio exec basicIOTest ``` You should be able to see the result `Passed the test!` Stop the local Alluxio system by using: ```shell $ ./bin/alluxio process stop local ``` If you are seeing `java.lang.OutOfMemoryError: Java heap space`, please set the following variable to increase the memory heap size for maven: ```shell $ export MAVEN_OPTS=\"-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m\" ``` If you see following error message by maven like below: \"`Failed to execute goal org.codehaus.mojo:buildnumber-maven-plugin:1.4:create-metadata (default) on project alluxio-core-common: Execution default of goal org.codehaus.mojo:buildnumber-maven-plugin:1.4:create-metadata failed.: NullPointerException`\" Because the build number is based on the revision number retrieved from SCM, it will check build number from git hash code. If check failed, SCM will throw a NPE. To avoid the exception, please set the Alluxio version with parameter \"`-Dmaven.buildNumber.revisionOnScmFailure`\". For example, if the alluxio version is 2.7.3 then set \"`-Dmaven.buildNumber.revisionOnScmFailure=2.7.3`\". See for more information." } ]
{ "category": "Runtime", "file_name": "Building-Alluxio-From-Source.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Container Storage Interface Snapshot Support in Velero\" layout: docs Integrating Container Storage Interface (CSI) snapshot support into Velero enables Velero to backup and restore CSI-backed volumes using the . By supporting CSI snapshot APIs, Velero can support any volume provider that has a CSI driver, without requiring a Velero-specific plugin to be available. This page gives an overview of how to add support for CSI snapshots to Velero through CSI plugins. For more information about specific components, see the . Your cluster is Kubernetes version 1.20 or greater. Your cluster is running a CSI driver capable of support volume snapshots at the . When restoring CSI VolumeSnapshots across clusters, the name of the CSI driver in the destination cluster is the same as that on the source cluster to ensure cross cluster portability of CSI VolumeSnapshots NOTE: Not all cloud provider's CSI drivers guarantee snapshot durability, meaning that the VolumeSnapshot and VolumeSnapshotContent objects may be stored in the same object storage system location as the original PersistentVolume and may be vulnerable to data loss. You should refer to your cloud provider's documentation for more information on configuring snapshot durability. Since v0.3.0 the velero team will provide official support for CSI plugin when they are used with AWS and Azure drivers. To integrate Velero with the CSI volume snapshot APIs, you must enable the `EnableCSI` feature flag and install the Velero on the Velero server. Both of these can be added with the `velero install` command. ```bash velero install \\ --features=EnableCSI \\ --plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.3.0 \\ ... ``` To include the status of CSI objects associated with a Velero backup in `velero backup describe` output, run `velero client config set features=EnableCSI`. See for more information about managing client-side feature flags. You can also view the image on . This section documents some of the choices made during implementation of the Velero : VolumeSnapshots created by the Velero CSI plugins are retained only for the lifetime of the backup even if the `DeletionPolicy` on the VolumeSnapshotClass is set to `Retain`. To accomplish this, during deletion of the backup the prior to deleting the VolumeSnapshot, VolumeSnapshotContent object is patched to set its `DeletionPolicy` to `Delete`. Deleting the VolumeSnapshot object will result in cascade delete of the VolumeSnapshotContent and the snapshot in the storage provider. VolumeSnapshotContent objects created during a `velero backup` that are dangling, unbound to a VolumeSnapshot object, will be discovered, using labels, and deleted on backup deletion. The Velero CSI plugins, to backup CSI backed PVCs, will choose the VolumeSnapshotClass in the cluster that has the same driver name and also has the `velero.io/csi-volumesnapshot-class` label set on it, like ```yaml" }, { "data": "\"true\" ``` The VolumeSnapshot objects will be removed from the cluster after the backup is uploaded to the object storage, so that the namespace that is backed up can be deleted without removing the snapshot in the storage provider if the `DeletionPolicy` is `Delete`. Velero's CSI support does not rely on the Velero VolumeSnapshotter plugin interface. Instead, Velero uses a collection of BackupItemAction plugins that act first against PersistentVolumeClaims. 
When this BackupItemAction sees PersistentVolumeClaims pointing to a PersistentVolume backed by a CSI driver, it will choose the VolumeSnapshotClass with the same driver name that has the `velero.io/csi-volumesnapshot-class` label to create a CSI VolumeSnapshot object with the PersistentVolumeClaim as a source. This VolumeSnapshot object resides in the same namespace as the PersistentVolumeClaim that was used as a source. From there, the CSI external-snapshotter controller will see the VolumeSnapshot and create a VolumeSnapshotContent object, a cluster-scoped resource that will point to the actual, disk-based snapshot in the storage system. The external-snapshotter plugin will call the CSI driver's snapshot method, and the driver will call the storage system's APIs to generate the snapshot. Once an ID is generated and the storage system marks the snapshot as usable for restore, the VolumeSnapshotContent object will be updated with a `status.snapshotHandle` and the `status.readyToUse` field will be set. Velero will include the generated VolumeSnapshot and VolumeSnapshotContent objects in the backup tarball, as well as upload all VolumeSnapshots and VolumeSnapshotContents objects in a JSON file to the object storage system. When Velero synchronizes backups into a new cluster, VolumeSnapshotContent objects and the VolumeSnapshotClass that is chosen to take snapshot will be synced into the cluster as well, so that Velero can manage backup expiration appropriately. The `DeletionPolicy` on the VolumeSnapshotContent will be the same as the `DeletionPolicy` on the VolumeSnapshotClass that was used to create the VolumeSnapshot. Setting a `DeletionPolicy` of `Retain` on the VolumeSnapshotClass will preserve the volume snapshot in the storage system for the lifetime of the Velero backup and will prevent the deletion of the volume snapshot, in the storage system, in the event of a disaster where the namespace with the VolumeSnapshot object may be lost. When the Velero backup expires, the VolumeSnapshot objects will be deleted and the VolumeSnapshotContent objects will be updated to have a `DeletionPolicy` of `Delete`, to free space on the storage system. For more details on how each plugin works, see the 's documentation. Note: The AWS, Microsoft Azure, and Google Cloud Platform (GCP) Velero plugins version 1.4 and later are able to snapshot and restore persistent volumes provisioned by a CSI driver via the APIs of the cloud provider, without having to install Velero CSI plugins. See the , , and Velero plugin repo for more information on supported CSI drivers." } ]
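As described above, the plugin picks the VolumeSnapshotClass whose driver matches the PersistentVolume's CSI driver and which carries the `velero.io/csi-volumesnapshot-class` label. The sketch below shows that selection rule in Go; the struct is a simplified stand-in for the real `snapshot.storage.k8s.io/v1` type, and this is not the Velero plugin's actual code.
```go
// Sketch: the VolumeSnapshotClass selection rule described above -- match the
// PV's CSI driver name and require the velero.io/csi-volumesnapshot-class
// label. Simplified stand-in types; not Velero's implementation.
package main

import (
	"errors"
	"fmt"
)

type VolumeSnapshotClass struct {
	Name   string
	Driver string
	Labels map[string]string
}

const veleroSnapshotClassLabel = "velero.io/csi-volumesnapshot-class"

// pickSnapshotClass returns the class whose Driver matches the PV's CSI
// driver and which carries the Velero selection label.
func pickSnapshotClass(driver string, classes []VolumeSnapshotClass) (VolumeSnapshotClass, error) {
	for _, c := range classes {
		if c.Driver == driver && c.Labels[veleroSnapshotClassLabel] == "true" {
			return c, nil
		}
	}
	return VolumeSnapshotClass{}, errors.New("no labeled VolumeSnapshotClass matches driver " + driver)
}

func main() {
	classes := []VolumeSnapshotClass{
		{Name: "fast", Driver: "ebs.csi.aws.com"},
		{Name: "velero-default", Driver: "ebs.csi.aws.com",
			Labels: map[string]string{veleroSnapshotClassLabel: "true"}},
	}
	c, err := pickSnapshotClass("ebs.csi.aws.com", classes)
	if err != nil {
		panic(err)
	}
	fmt.Println("would snapshot with class:", c.Name)
}
```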
{ "category": "Runtime", "file_name": "csi.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "*Note that the default configuration of `runc` (foreground, new terminal) is generally the best option for most users. This document exists to help explain what the purpose of the different modes is, and to try to steer users away from common mistakes and misunderstandings.* In general, most processes on Unix (and Unix-like) operating systems have 3 standard file descriptors provided at the start, collectively referred to as \"standard IO\" (`stdio`): `0`: standard-in (`stdin`), the input stream into the process `1`: standard-out (`stdout`), the output stream from the process `2`: standard-error (`stderr`), the error stream from the process When creating and running a container via `runc`, it is important to take care to structure the `stdio` the new container's process receives. In some ways containers are just regular processes, while in other ways they're an isolated sub-partition of your machine (in a similar sense to a VM). This means that the structure of IO is not as simple as with ordinary programs (which generally just use the file descriptors you give them). Before we continue, it is important to note that processes can have more file descriptors than just `stdio`. By default in `runc` no other file descriptors will be passed to the spawned container process. If you wish to explicitly pass file descriptors to the container you have to use the `--preserve-fds` option. These ancillary file descriptors don't have any of the strange semantics discussed further in this document (those only apply to `stdio`) -- they are passed untouched by `runc`. It should be noted that `--preserve-fds` does not take individual file descriptors to preserve. Instead, it takes how many file descriptors (not including `stdio` or `LISTEN_FDS`) should be passed to the container. In the following example: ``` % runc run --preserve-fds 5 <container> ``` `runc` will pass the first `5` file descriptors (`3`, `4`, `5`, `6`, and `7` -- assuming that `LISTEN_FDS` has not been configured) to the container. In addition to `--preserve-fds`, `LISTEN_FDS` file descriptors are passed automatically to allow for `systemd`-style socket activation. To extend the above example: ``` % LISTENPID=$pidofrunc LISTENFDS=3 runc run --preserve-fds 5 <container> ``` `runc` will now pass the first `8` file descriptors (and it will also pass `LISTENFDS=3` and `LISTENPID=1` to the container). The first `3` (`3`, `4`, and `5`) were passed due to `LISTEN_FDS` and the other `5` (`6`, `7`, `8`, `9`, and `10`) were passed due to `--preserve-fds`. You should keep this in mind if you use `runc` directly in something like a `systemd` unit file. To disable this `LISTENFDS`-style passing just unset `LISTENFDS`. Be very careful when passing file descriptors to a container process. Due to some Linux kernel (mis)features, a container with access to certain types of file descriptors (such as `O_PATH` descriptors) outside of the container's root file system can use these to break out of the container's pivoted mount namespace. `runc` supports two distinct methods for passing `stdio` to the container's primary process: (`terminal: true`) (`terminal: false`) When first using `runc` these two modes will look incredibly similar, but this can be quite deceptive as these different modes have quite different" }, { "data": "By default, `runc spec` will create a configuration that will create a new terminal (`terminal: true`). However, if the `terminal: ...` line is not present in `config.json` then pass-through is the default. 
*In general we recommend using new terminal, because it means that tools like `sudo` will work inside your container. But pass-through can be useful if you know what you're doing, or if you're using `runc` as part of a non-interactive pipeline.* In new terminal mode, `runc` will create a brand-new \"console\" (or more precisely, a new pseudo-terminal using the container's namespaced `/dev/pts/ptmx`) for your contained process to use as its `stdio`. When you start a process in new terminal mode, `runc` will do the following: Create a new pseudo-terminal. Pass the slave end to the container's primary process as its `stdio`. Send the master end to a process to interact with the `stdio` for the container's primary process (). It should be noted that since a new pseudo-terminal is being used for communication with the container, some strange properties of pseudo-terminals might surprise you. For instance, by default, all new pseudo-terminals translate the byte `'\\n'` to the sequence `'\\r\\n'` on both `stdout` and `stderr`. In addition there are [a whole range of `ioctls(2)` that can only interact with pseudo-terminal `stdio`][tty_ioctl(4)]. NOTE: In new terminal mode, all three `stdio` file descriptors are the same underlying file. The reason for this is to match how a shell's `stdio` looks to a process (as well as remove race condition issues with having to deal with multiple master pseudo-terminal file descriptors). However this means that it is not really possible to uniquely distinguish between `stdout` and `stderr` from the caller's perspective. If you see an error like ``` open /dev/tty: no such device or address ``` from runc, it means it can't open a terminal (because there isn't one). This can happen when stdin (and possibly also stdout and stderr) are redirected, or in some environments that lack a tty (such as GitHub Actions runners). The solution to this is to not use a terminal for the container, i.e. have `terminal: false` in `config.json`. If the container really needs a terminal (some programs require one), you can provide one, using one of the following methods. One way is to use `ssh` with the `-tt` flag. The second `t` forces a terminal allocation even if there's no local one -- and so it is required when stdin is not a terminal (some `ssh` implementations only look for a terminal on stdin). Another way is to run runc under the `script` utility, like this ```console $ script -e -c 'runc run <container>' ``` If you have already set up some file handles that you wish your contained process to use as its `stdio`, then you can ask `runc` to pass them through to the contained process (this is not necessarily the same as `--preserve-fds`'s passing of file descriptors -- ). As an example (assuming that `terminal: false` is set in `config.json`): ``` % echo input | runc run some_container > /tmp/log.out 2>" }, { "data": "``` Here the container's various `stdio` file descriptors will be substituted with the following: `stdin` will be sourced from the `echo input` pipeline. `stdout` will be output into `/tmp/log.out` on the host. `stderr` will be output into `/tmp/log.err` on the host. It should be noted that the actual file handles seen inside the container may be different (for instance, the file referenced by `1` could be `/tmp/log.out` directly or a pipe which `runc` is using to buffer output, based on the mode). However the net result will be the same in either case. 
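To make the new-terminal plumbing more concrete, here is a minimal sketch of the same shape of handling: allocate a pseudo-terminal pair, hand the slave end to a child as all three of its `stdio` streams, and relay the master end to the caller's own terminal. This is not runc's implementation; it assumes the third-party `github.com/creack/pty` package, and `/bin/sh` stands in for the container process.
```go
// Sketch: "new terminal" style stdio -- slave end to the child, master end
// relayed to our own terminal. Not runc's code; assumes github.com/creack/pty.
package main

import (
	"io"
	"os"
	"os/exec"
	"syscall"

	"github.com/creack/pty"
)

func main() {
	master, slave, err := pty.Open()
	if err != nil {
		panic(err)
	}
	defer master.Close()

	cmd := exec.Command("/bin/sh")
	// The child sees the slave as all three stdio streams -- the same
	// "one underlying file" property described above.
	cmd.Stdin, cmd.Stdout, cmd.Stderr = slave, slave, slave
	// Make the pty the child's controlling terminal (fd 0 is the slave).
	cmd.SysProcAttr = &syscall.SysProcAttr{Setsid: true, Setctty: true}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	slave.Close() // the parent no longer needs the slave end

	// Relay bytes between our stdio and the pty master, which is roughly
	// what a foreground manager does for the lifetime of the container.
	go io.Copy(master, os.Stdin)
	go io.Copy(os.Stdout, master)

	cmd.Wait()
}
```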
In principle you could use the [new terminal mode](#new-terminal) in a pipeline, but the difference will become more clear when you are introduced to . `runc` itself runs in two modes: You can use either with either `runc` mode. However, there are considerations that may indicate preference for one mode over another. It should be noted that while two types of modes (terminal and `runc`) are conceptually independent from each other, you should be aware of the intricacies of which combination you are using. *In general we recommend using foreground because it's the most straight-forward to use, with the only downside being that you will have a long-running `runc` process. Detached mode is difficult to get right and generally requires having your own `stdio` management.* The default (and most straight-forward) mode of `runc`. In this mode, your `runc` command remains in the foreground with the container process as a child. All `stdio` is buffered through the foreground `runc` process (irrespective of which terminal mode you are using). This is conceptually quite similar to running a normal process interactively in a shell (and if you are using `runc` in a shell interactively, this is what you should use). Because the `stdio` will be buffered in this mode, some very important peculiarities of this mode should be kept in mind: With , the container will see a pseudo-terminal as its `stdio` (as you might expect). However, the `stdio` of the foreground `runc` process will remain the `stdio` that the process was started with -- and `runc` will copy all `stdio` between its `stdio` and the container's `stdio`. This means that while a new pseudo-terminal has been created, the foreground `runc` process manages it over the lifetime of the container. With , the foreground `runc`'s `stdio` is not passed to the container. Instead, the container's `stdio` is a set of pipes which are used to copy data between `runc`'s `stdio` and the container's `stdio`. This means that the container never has direct access to host file descriptors (aside from the pipes created by the container runtime, but that shouldn't be an issue). The main drawback of the foreground mode of operation is that it requires a long-running foreground `runc` process. If you kill the foreground `runc` process then you will no longer have access to the `stdio` of the container (and in most cases this will result in the container dying abnormally due to `SIGPIPE` or some other error). By extension this means that any bug in the long-running foreground `runc` process (such as a memory leak) or a stray OOM-kill sweep could result in your container being killed through no fault of the" }, { "data": "In addition, there is no way in foreground mode of passing a file descriptor directly to the container process as its `stdio` (like `--preserve-fds` does). These shortcomings are obviously sub-optimal and are the reason that `runc` has an additional mode called \"detached mode\". In contrast to foreground mode, in detached mode there is no long-running foreground `runc` process once the container has started. In fact, there is no long-running `runc` process at all. However, this means that it is up to the caller to handle the `stdio` after `runc` has set it up for you. In a shell this means that the `runc` command will exit and control will return to the shell, after the container has been set up. You can run `runc` in detached mode in one of the following ways: `runc run -d ...` which operates similar to `runc run` but is detached. 
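Before moving on to detached mode, the sketch below shows the pipe-based buffering that foreground pass-through implies: the manager holds the read ends of the child's output pipes and must keep draining them for the child's whole lifetime, which is why killing the foreground process breaks the container's `stdio`. The shell command is a placeholder for `runc run <container>`.
```go
// Sketch: foreground-style buffering of a child's stdio through a manager
// process via pipes. If the manager dies, the pipes close and the child's
// next write fails with EPIPE -- the failure mode described above.
package main

import (
	"io"
	"os"
	"os/exec"
	"sync"
)

func main() {
	// Placeholder for "runc run <container>" running in the foreground.
	cmd := exec.Command("sh", "-c", "echo hello from the container; echo oops >&2")
	cmd.Stdin = os.Stdin

	stdout, err := cmd.StdoutPipe() // our read end; the child gets the write end
	if err != nil {
		panic(err)
	}
	stderr, err := cmd.StderrPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Keep draining both pipes until the child closes them.
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); io.Copy(os.Stdout, stdout) }()
	go func() { defer wg.Done(); io.Copy(os.Stderr, stderr) }()
	wg.Wait()

	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}
```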
`runc create` followed by `runc start` which is the standard container lifecycle defined by the OCI runtime specification (`runc create` sets up the container completely, waiting for `runc start` to begin execution of user code). The main use-case of detached mode is for higher-level tools that want to be wrappers around `runc`. By running `runc` in detached mode, those tools have far more control over the container's `stdio` without `runc` getting in the way (most wrappers around `runc` like `cri-o` or `containerd` use detached mode for this reason). Unfortunately using detached mode is a bit more complicated and requires more care than the foreground mode -- mainly because it is now up to the caller to handle the `stdio` of the container. Another complication is that the parent process is responsible for acting as the subreaper for the container. In short, you need to call `prctl(PRSETCHILD_SUBREAPER, 1, ...)` in the parent process and correctly handle the implications of being a subreaper. Failing to do so may result in zombie processes being accumulated on your host. These tasks are usually performed by a dedicated (and minimal) monitor process per-container. For the sake of comparison, other runtimes such as LXC do not have an equivalent detached mode and instead integrate this monitor process into the container runtime itself -- this has several tradeoffs, and runc has opted to support delegating the monitoring responsibility to the parent process through this detached mode. In detached mode, pass-through actually does what it says on the tin -- the `stdio` file descriptors of the `runc` process are passed through (untouched) to the container's `stdio`. The purpose of this option is to allow a user to set up `stdio` for a container themselves and then force `runc` to just use their pre-prepared `stdio` (without any pseudo-terminal funny business). *If you don't see why this would be useful, don't use this option.* You must be incredibly careful when using detached pass-through (especially in a shell). The reason for this is that by using detached pass-through you are passing host file descriptors to the container. In the case of a shell, usually your `stdio` is going to be a pseudo-terminal (on your" }, { "data": "A malicious container could take advantage of TTY-specific `ioctls` like `TIOCSTI` to fake input into the host shell (remember that in detached mode, control is returned to your shell and so the terminal you've given the container is being read by a shell prompt). There are also several other issues with running non-malicious containers in a shell with detached pass-through (where you pass your shell's `stdio` to the container): Output from the container will be interleaved with output from your shell (in a non-deterministic way), without any real way of distinguishing from where a particular piece of output came from. Any input to `stdin` will be non-deterministically split and given to either the container or the shell (because both are blocked on a `read(2)` of the same FIFO-style file descriptor). They are all related to the fact that there is going to be a race when either your host or the container tries to read from (or write to) `stdio`. This problem is especially obvious when in a shell, where usually the terminal has been put into raw mode (where each individual key-press should cause `read(2)` to return). 
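Returning to the subreaper requirement mentioned above, here is a minimal sketch of what a detached-mode monitor has to do: mark itself as a child subreaper with `prctl(2)` and then reap whatever gets reparented to it so zombies do not accumulate. It assumes `golang.org/x/sys/unix` and is not the implementation of any particular container monitor; the point where `runc create`/`runc start` would be invoked is only a comment.
```go
// Sketch: the subreaper duty of a detached-mode monitor process.
// Assumes golang.org/x/sys/unix; not any real monitor's code.
package main

import (
	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent to prctl(PR_SET_CHILD_SUBREAPER, 1, ...).
	if err := unix.Prctl(unix.PR_SET_CHILD_SUBREAPER, 1, 0, 0, 0); err != nil {
		panic(err)
	}

	// ... start `runc create` / `runc start` for the container here ...

	// Reap everything that gets reparented to us.
	for {
		var status unix.WaitStatus
		pid, err := unix.Wait4(-1, &status, 0, nil)
		if err == unix.ECHILD {
			return // no children left
		}
		if err != nil {
			panic(err)
		}
		_ = pid // a real monitor would record the container's exit status here
	}
}
```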
NOTE: There is also currently a where using detached pass-through will result in the container hanging if the `stdout` or `stderr` is a pipe (though this should be a temporary issue). When creating a new pseudo-terminal in detached mode, and fairly obvious problem appears -- how do we use the new terminal that `runc` created? Unlike in pass-through, `runc` has created a new set of file descriptors that need to be used by something in order for container communication to work. The way this problem is resolved is through the use of Unix domain sockets. There is a feature of Unix sockets called `SCM_RIGHTS` which allows a file descriptor to be sent through a Unix socket to a completely separate process (which can then use that file descriptor as though they opened it). When using `runc` in detached new terminal mode, this is how a user gets access to the pseudo-terminal's master file descriptor. To this end, there is a new option (which is required if you want to use `runc` in detached new terminal mode): `--console-socket`. This option takes the path to a Unix domain socket which `runc` will connect to and send the pseudo-terminal master file descriptor down. The general process for getting the pseudo-terminal master is as follows: Create a Unix domain socket at some path, `$socket_path`. Call `runc run` or `runc create` with the argument `--console-socket $socket_path`. Using `recvmsg(2)` retrieve the file descriptor sent using `SCM_RIGHTS` by `runc`. Now the manager can interact with the `stdio` of the container, using the retrieved pseudo-terminal master. After `runc` exits, the only process with a copy of the pseudo-terminal master file descriptor is whoever read the file descriptor from the socket. NOTE: Currently `runc` doesn't support abstract socket addresses (due to it not being possible to pass an `argv` with a null-byte as the first character). In the future this may change, but currently you must use a valid path name. In order to help users make use of detached new terminal mode, we have provided a , as well as ." } ]
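As a companion to the `--console-socket` flow described above, the sketch below shows the receiving side: listen on a Unix domain socket, let `runc` connect and send the pseudo-terminal master via `SCM_RIGHTS`, then turn the received descriptor into an `*os.File`. It assumes `golang.org/x/sys/unix`, the socket path is an illustrative placeholder, and it is only similar in spirit to the helper tooling mentioned above.
```go
// Sketch: receiving the pty master over --console-socket via SCM_RIGHTS.
// Assumes golang.org/x/sys/unix; /tmp/console.sock is a placeholder path.
package main

import (
	"fmt"
	"net"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	const socketPath = "/tmp/console.sock"
	ln, err := net.Listen("unix", socketPath)
	if err != nil {
		panic(err)
	}
	defer os.Remove(socketPath)

	// Now run: runc run -d --console-socket /tmp/console.sock <container>

	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	uc := conn.(*net.UnixConn)

	buf := make([]byte, 256)
	oob := make([]byte, unix.CmsgSpace(4)) // room for one int32 file descriptor
	_, oobn, _, _, err := uc.ReadMsgUnix(buf, oob)
	if err != nil {
		panic(err)
	}
	msgs, err := unix.ParseSocketControlMessage(oob[:oobn])
	if err != nil {
		panic(err)
	}
	fds, err := unix.ParseUnixRights(&msgs[0])
	if err != nil {
		panic(err)
	}
	master := os.NewFile(uintptr(fds[0]), "pty-master")
	fmt.Println("received pty master:", master.Name())
	// From here the caller owns the console and can relay I/O however it likes.
}
```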
{ "category": "Runtime", "file_name": "terminals.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "We follow the . Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at ." } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "This directory contains an example `elasticsearch_adapter.lua` on how to use to push fields of the RGW requests to . Install and run Elasticsearch using docker: ```bash docker network create elastic docker pull elasticsearch:2.4.6 docker run --net elastic -p 9200:9200 -p 9300:9300 -e \"discovery.type=single-node\" elasticsearch:2.4.6 ``` Upload the script: ```bash radosgw-admin script put --infile=elasticsearch_adapter.lua --context=postRequest ``` Add the packages used in the script: ```bash radosgw-admin script-package add --package='elasticsearch 1.0.0-1' --allow-compilation radosgw-admin script-package add --package='lunajson' --allow-compilation radosgw-admin script-package add --package='lua-cjson 2.1.0-1' --allow-compilation ``` Restart radosgw. Send a request: ```bash s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" --accesskey=0555b35654ad1656d804 --secretkey=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== mb s3://mybucket s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" --accesskey=0555b35654ad1656d804 --secretkey=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== put -P /etc/hosts s3://mybucket curl http://localhost:8000/mybucket/hosts ``` Search by bucket id from Elasticsearch: ```bash curl -X GET \"localhost:9200/rgw/_search?pretty\" -H 'Content-Type: application/json' -d' { \"query\": { \"match\": { \"Bucket.Id\": \"05382336-b2db-409f-82dc-f28ab5fef978.4471.4471\" } } } ' ``` Lua 5.3" } ]
{ "category": "Runtime", "file_name": "elasticsearch_adapter.md", "project_name": "Ceph", "subcategory": "Cloud Native Storage" }
[ { "data": "Longhorn v1.4.3 is the latest stable version of Longhorn 1.4. It introduces improvements and bug fixes in the areas of stability, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions! For the definition of stable or latest release, please check . Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.3. Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . Please read the first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.3 from v1.3.x/v1.4.x, which are only supported source versions. Follow the upgrade instructions . N/A Please follow up on about any outstanding issues found after this release. ) - @c3y1huang @chriscchien ) - @yangchiu @ChanYiLin @mantissahz ) - @weizhe0422 @PhanLe1010 @smallteeths ) - @yangchiu @mantissahz ) - @ChanYiLin @mantissahz ) - @c3y1huang @roger-ryao ) - @yangchiu @derekbit ) - @c3y1huang @roger-ryao ) - @weizhe0422 @ejweber ) - @yangchiu @mantissahz ) - @c3y1huang @chriscchien ) - @WebberHuang1118 @chriscchien ) - @ejweber @chriscchien ) - @mantissahz @roger-ryao ) - @c3y1huang @khushboo-rancher ) - @yangchiu @ChanYiLin @mantissahz ) - @yangchiu @ejweber ) - @c3y1huang @chriscchien ) - @yangchiu @PhanLe1010 @khushboo-rancher ) - @derekbit @roger-ryao @ChanYiLin @PhanLe1010 @WebberHuang1118 @c3y1huang @chriscchien @derekbit @ejweber @innobead @khushboo-rancher @mantissahz @roger-ryao @smallteeths @weizhe0422 @yangchiu" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.4.3.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Releases are performed by maintainers and should usually be discussed and planned at a maintainer meeting. Choose the version number. It should be prefixed with `v`, e.g. `v1.2.3` Take a quick scan through the PRs and issues to make sure there isn't anything crucial that must be in the next release. Create a draft of the release note Discuss the level of testing that's needed and create a test plan if sensible Check what version of `go` is used in the build container, updating it if there's a new stable release. Make sure you are on the master branch and don't have any local uncommitted changes. Create a signed tag for the release `git tag -s $VERSION` (Ensure that GPG keys are created and added to GitHub) Push the tag to git `git push origin <TAG>` Create a release on Github, using the tag which was just pushed. Add the release note to the release. Announce the release on at least the CNI mailing, IRC and Slack." } ]
{ "category": "Runtime", "file_name": "RELEASING.md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "title: Back up Clusters icon: case-study-1.svg Backup your Kubernetes resources and volumes for an entire cluster, or part of a cluster by using namespaces or label selectors." } ]
{ "category": "Runtime", "file_name": "sample1.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }