[
{
"data": "This page contains an overview of the various features an administrator can turn on or off for Antrea components. We follow the same convention as the . In particular: a feature in the Alpha stage will be disabled by default but can be enabled by editing the appropriate `.conf` entry in the Antrea manifest. a feature in the Beta stage will be enabled by default but can be disabled by editing the appropriate `.conf` entry in the Antrea manifest. a feature in the GA stage will be enabled by default and cannot be disabled. Some features are specific to the Agent, others are specific to the Controller, and some apply to both and should be enabled / disabled consistently in both `.conf` entries. To enable / disable a feature, edit the Antrea manifest appropriately. For example, to enable `FeatureGateFoo` on Linux, edit the Agent configuration in the `antrea` ConfigMap as follows: ```yaml antrea-agent.conf: | featureGates: FeatureGateFoo: true ``` | Feature Name | Component | Default | Stage | Alpha Release | Beta Release | GA Release | Extra Requirements | Notes | | -- | | - | -- | - | | - | | | | `AntreaProxy` | Agent | `true` | GA | v0.8 | v0.11 | v1.14 | Yes | Must be enabled for Windows. | | `EndpointSlice` | Agent | `true` | GA | v0.13.0 | v1.11 | v1.14 | Yes | | | `TopologyAwareHints` | Agent | `true` | Beta | v1.8 | v1.12 | N/A | Yes | | | `CleanupStaleUDPSvcConntrack` | Agent | `false` | Alpha | v1.13 | N/A | N/A | Yes | | | `LoadBalancerModeDSR` | Agent | `false` | Alpha | v1.13 | N/A | N/A | Yes | | | `AntreaPolicy` | Agent + Controller | `true` | Beta | v0.8 | v1.0 | N/A | No | Agent side config required from v0.9.0+. | | `Traceflow` | Agent + Controller | `true` | Beta | v0.8 | v0.11 | N/A | Yes | | | `FlowExporter` | Agent | `false` | Alpha | v0.9 | N/A | N/A | Yes | | | `NetworkPolicyStats` | Agent + Controller | `true` | Beta | v0.10 | v1.2 | N/A | No | | | `NodePortLocal` | Agent | `true` | GA | v0.13 | v1.4 | v1.14 | Yes | Important user-facing change in v1.2.0 | | `Egress` | Agent + Controller | `true` | Beta | v1.0 | v1.6 | N/A | Yes | | | `NodeIPAM` | Controller | `true` | Beta | v1.4 | v1.12 | N/A | Yes | | | `AntreaIPAM` | Agent + Controller | `false` | Alpha | v1.4 | N/A | N/A | Yes | | | `Multicast` | Agent + Controller | `true` | Beta | v1.5 | v1.12 | N/A | Yes | | | `SecondaryNetwork` | Agent | `false` | Alpha | v1.5 | N/A | N/A | Yes | | | `ServiceExternalIP` | Agent + Controller | `false` | Alpha | v1.5 | N/A | N/A | Yes | | | `TrafficControl` | Agent | `false` | Alpha | v1.7 | N/A | N/A | No | | | `Multicluster` | Agent + Controller | `false` | Alpha | v1.7 | N/A | N/A | Yes | Controller side feature gate added in v1.10.0 | | `IPsecCertAuth` | Agent + Controller | `false` | Alpha |"
},
{
"data": "| N/A | N/A | No | | | `ExternalNode` | Agent | `false` | Alpha | v1.8 | N/A | N/A | Yes | | | `SupportBundleCollection` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | | | `L7NetworkPolicy` | Agent + Controller | `false` | Alpha | v1.10 | N/A | N/A | Yes | | | `AdminNetworkPolicy` | Controller | `false` | Alpha | v1.13 | N/A | N/A | Yes | | | `EgressTrafficShaping` | Agent | `false` | Alpha | v1.14 | N/A | N/A | Yes | OVS meters should be supported | | `EgressSeparateSubnet` | Agent | `false` | Alpha | v1.15 | N/A | N/A | No | | | `NodeNetworkPolicy` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | | | `L7FlowExporter` | Agent | `false` | Alpha | v1.15 | N/A | N/A | Yes | | `AntreaProxy` implements Service load-balancing for ClusterIP Services as part of the OVS pipeline, as opposed to relying on kube-proxy. This only applies to traffic originating from Pods, and destined to ClusterIP Services. In particular, it does not apply to NodePort Services. Please note that due to some restrictions on the implementation of Services in Antrea, the maximum number of Endpoints that Antrea can support at the moment is 800. If the number of Endpoints for a given Service exceeds 800, extra Endpoints will be dropped. Note that this feature must be enabled for Windows. The Antrea Windows YAML manifest provided as part of releases enables this feature by default. If you edit the manifest, make sure you do not disable it, as it is needed for correct NetworkPolicy implementation for Pod-to-Service traffic. Please refer to this for extra information on AntreaProxy and how it can be configured. When using the OVS built-in kernel module (which is the most common case), your kernel version must be >= 4.6 (as opposed to >= 4.4 without this feature). `EndpointSlice` enables Service EndpointSlice support in AntreaProxy. The EndpointSlice API was introduced in Kubernetes 1.16 (alpha) and it is enabled by default in Kubernetes 1.17 (beta), promoted to GA in Kubernetes 1.21. The EndpointSlice feature will take no effect if AntreaProxy is not enabled. Refer to this for more information about EndpointSlice. If this feature is enabled but the EndpointSlice v1 API is not available (Kubernetes version is lower than 1.21), Antrea Agent will log a message and fallback to the Endpoints API. EndpointSlice v1 API is available (Kubernetes version >=1.21). Option `antreaProxy.enable` is set to true. `TopologyAwareHints` enables TopologyAwareHints support in AntreaProxy. For AntreaProxy, traffic can be routed to the Endpoint which is closer to where it originated when this feature is enabled. Refer to this for more information about TopologyAwareHints. Option `antreaProxy.enable` is set to true. EndpointSlice API version v1 is available in Kubernetes. `LoadBalancerModeDSR` allows users to specify the load balancer mode as DSR (Direct Server Return). The load balancer mode determines how external traffic destined to LoadBalancerIPs and ExternalIPs of Services is processed when it's load balanced across Nodes. In DSR mode, external traffic is never SNAT'd and backend Pods running on Nodes that are not the ingress Node can reply to clients directly, bypassing the ingress Node. Therefore, DSR mode can preserve client IP of requests, and usually has lower latency and higher throughput. It's only meaningful to use this feature when AntreaProxy is enabled and configured to proxy external traffic (proxyAll=true). Refer to this"
},
{
"data": "antrea-proxy.md#configuring-load-balancer-mode-for-external-traffic) for more information about load balancer mode. Options `antreaProxy.enable` and `antreaProxy.proxyAll` are set to true. IPv4 only. Linux Nodes only. Encap mode only. `CleanupStaleUDPSvcConntrack` enables support for cleaning up stale UDP Service conntrack connections in AntreaProxy. Option `antreaProxy.enable` is set to true. `AntreaPolicy` enables Antrea ClusterNetworkPolicy and Antrea NetworkPolicy CRDs to be handled by Antrea controller. `ClusterNetworkPolicy` is an Antrea-specific extension to K8s NetworkPolicies, which enables cluster admins to define security policies which apply to the entire cluster. `Antrea NetworkPolicy` also complements K8s NetworkPolicies by supporting policy priorities and rule actions. Refer to this for more information. None `Traceflow` enables a CRD API for Antrea that supports generating tracing requests for traffic going through the Antrea-managed Pod network. This is useful for troubleshooting connectivity issues, e.g. determining if a NetworkPolicy is responsible for traffic drops between two Pods. Refer to this for more information. Until Antrea v0.11, this feature could only be used in \"encap\" mode, with the Geneve tunnel type (default configuration for both Linux and Windows). In v0.11, this feature was graduated to Beta (enabled by default) and this requirement was lifted. In order to support cluster Services as the destination for tracing requests, option `antreaProxy.enable` should be set to true to enable AntreaProxy. `Flow Exporter` is a feature that runs as part of the Antrea Agent, and enables network flow visibility into a Kubernetes cluster. Flow exporter sends IPFIX flow records that are built from observed connections in Conntrack module to a flow collector. Refer to this for more information. This feature is currently only supported for Nodes running Linux. Windows support will be added in the future. `NetworkPolicyStats` enables collecting NetworkPolicy statistics from antrea-agents and exposing them through Antrea Stats API, which can be accessed by kubectl get commands, e.g. `kubectl get networkpolicystats`. The statistical data includes total number of sessions, packets, and bytes allowed or denied by a NetworkPolicy. It is collected asynchronously so there may be a delay of up to 1 minute for changes to be reflected in API responses. The feature supports K8s NetworkPolicies and Antrea-native policies, the latter of which requires `AntreaPolicy` to be enabled. 
Usage examples:

```bash
kubectl get networkpolicystats -A
NAMESPACE     NAME           SESSIONS   PACKETS   BYTES   CREATED AT
default       access-nginx   3          36        5199    2020-09-07T13:19:38Z
kube-system   access-dns     1          12        1221    2020-09-07T13:22:42Z

kubectl get antreaclusternetworkpolicystats
NAME                  SESSIONS   PACKETS   BYTES   CREATED AT
cluster-deny-egress   3          36        5199    2020-09-07T13:19:38Z
cluster-access-dns    10         120       12210   2020-09-07T13:22:42Z

kubectl get antreanetworkpolicystats -A
NAMESPACE   NAME          SESSIONS   PACKETS   BYTES   CREATED AT
default     access-http   3          36        5199    2020-09-07T13:19:38Z
foo         bar           1          12        1221    2020-09-07T13:22:42Z

kubectl get antreaclusternetworkpolicystats cluster-access-dns -o json
{
  \"apiVersion\": \"stats.antrea.io/v1alpha1\",
  \"kind\": \"AntreaClusterNetworkPolicyStats\",
  \"metadata\": {
    \"creationTimestamp\": \"2022-02-24T09:04:53Z\",
    \"name\": \"cluster-access-dns\",
    \"uid\": \"940cf76a-d836-4e76-b773-d275370b9328\"
  },
  \"ruleTrafficStats\": [
    {
      \"name\": \"rule1\",
      \"trafficStats\": {
        \"bytes\": 392,
        \"packets\": 4,
        \"sessions\": 1
      }
    },
    {
      \"name\": \"rule2\",
      \"trafficStats\": {
        \"bytes\": 111,
        \"packets\": 2,
        \"sessions\": 1
      }
    }
  ],
  \"trafficStats\": {
    \"bytes\": 503,
    \"packets\": 6,
    \"sessions\": 2
  }
}
```

Requirements: none.

`NodePortLocal` (NPL) is a feature that runs as part of the Antrea Agent, through which each port of a Service backend Pod can be reached from the external network using a port of the Node on which the Pod is running. NPL enables better integration with external Load Balancers which can take advantage of the feature: instead of relying on NodePort Services implemented by kube-proxy, external Load Balancers can consume NPL port mappings published by the Antrea Agent (as K8s Pod annotations) and load-balance Service traffic directly to backend Pods. Refer to this for more information."
},
{
"data": "This feature is currently only supported for Nodes running Linux with IPv4 addresses. Only TCP & UDP Service ports are supported (not SCTP). `Egress` enables a CRD API for Antrea that supports specifying which egress (SNAT) IP the traffic from the selected Pods to the external network should use. When a selected Pod accesses the external network, the egress traffic will be tunneled to the Node that hosts the egress IP if it's different from the Node that the Pod runs on and will be SNATed to the egress IP when leaving that Node. Refer to this for more information. This feature is currently only supported for Nodes running Linux and \"encap\" mode. The support for Windows and other traffic modes will be added in the future. `NodeIPAM` runs a Node IPAM Controller similar to the one in Kubernetes that allocates Pod CIDRs for Nodes. Running Node IPAM Controller with Antrea is useful in environments where Kubernetes Controller Manager does not run the Node IPAM Controller, and Antrea has to handle the CIDR allocation. This feature requires the Node IPAM Controller to be disabled in Kubernetes Controller Manager. When Antrea and Kubernetes both run Node IPAM Controller there is a risk of conflicts in CIDR allocation between the two. `AntreaIPAM` feature allocates IP addresses from IPPools. It is required by bridging mode Pods. The bridging mode allows flexible control over Pod IP addressing. The desired set of IP ranges, optionally with VLANs, are defined with `IPPool` CRD. An IPPool can be annotated to Namespace, Pod and PodTemplate of StatefulSet/Deployment. Then, Antrea will manage IP address assignment for corresponding Pods according to `IPPool` spec. On a Node, cross-Node/VLAN traffic of AntreaIPAM Pods is sent to the underlay network, and forwarded/routed by the underlay network. For more information, please refer to the . This feature gate also needs to be enabled to use Antrea for IPAM when configuring secondary network interfaces with Multus, in which case Antrea works as an IPAM plugin and allocates IP addresses for Pods' secondary networks, again from the configured IPPools of a secondary network. Refer to the to learn more information. Both bridging mode and secondary network IPAM are supported only on Linux Nodes. The bridging mode works only with `system` OVS datapath type; and `noEncap`, `noSNAT` traffic mode. At the moment, it supports only IPv4. The IPs in an IP range without a VLAN must be in the same underlay subnet as the Node IPs, because inter-Node traffic of AntreaIPAM Pods is forwarded by the Node network. IP ranges with a VLAN must not overlap with other network subnets, and the underlay network router should provide the network connectivity for these VLANs. The `Multicast` feature enables forwarding multicast traffic within the cluster network (i.e., between Pods) and between the external network and the cluster network. Refer to this for more information. This feature is only supported: on Linux Nodes for IPv4 traffic in `noEncap` and `encap` traffic modes The `SecondaryNetwork` feature enables support for provisioning secondary network interfaces for Pods, by annotating them appropriately. More documentation will be coming in the future. At the moment, Antrea can only create secondary network interfaces using SR-IOV VFs on baremetal Linux Nodes. The `ServiceExternalIP` feature enables a controller which can allocate external IPs for Services with type `LoadBalancer`. 
External IPs are allocated from an `ExternalIPPool` resource and each IP gets assigned to a Node selected by the `nodeSelector` of the pool"
},
{
"data": "That Node will receive Service traffic destined to that IP and distribute it among the backend Endpoints for the Service (through kube-proxy). To enable external IP allocation for a `LoadBalancer` Service, you need to annotate the Service with `\"service.antrea.io/external-ip-pool\": \"<externalIPPool name>\"` and define the appropriate `ExternalIPPool` resource. Refer to this for more information. This feature is currently only supported for Nodes running Linux. `TrafficControl` enables a CRD API for Antrea that controls and manipulates the transmission of Pod traffic. It allows users to mirror or redirect traffic originating from specific Pods or destined for specific Pods to a local network device or a remote destination via a tunnel of various types. It enables a monitoring solution to get full visibility into network traffic, including both north-south and east-west traffic. Refer to this for more information. The `Multicluster` feature gate of Antrea Agent enables which route Multi-cluster Service and Pod traffic through tunnels across clusters, and support for . The `Multicluster` feature gate of Antrea Controller enables support for . Antrea Multi-cluster Controller must be deployed and the cluster must join a Multi-cluster ClusterSet to configure Antrea Multi-cluster features. Refer to for more information about Multi-cluster configuration. At the moment, Antrea Multi-cluster supports only IPv4. This feature enables certificate-based authentication for IPSec tunnel. The `ExternalNode` feature enables Antrea Agent runs on a virtual machine or a bare-metal server which is not a Kubernetes Node, and enforces Antrea NetworkPolicy for the VM/BM. Antrea Agent supports the `ExternalNode` feature on both Linux and Windows. Refer to this for more information. Since Antrea Agent is running on an unmanaged VM/BM when this feature is enabled, features designed for K8s Pods are disabled. As of now, this feature requires that `AntreaPolicy` and `NetworkPolicyStats` are also enabled. OVS is required to be installed on the virtual machine or the bare-metal server before running Antrea Agent, and the OVS version must be >= 2.13.0. `SupportBundleCollection` feature enables a CRD API for Antrea to collect support bundle files on any Node or ExternalNode, and upload to a user defined file server. More documentation will be coming in the future. User should provide a file server with this feature, and store its authentication credential in a Secret. Antrea Controller is required to be configured with the permission to read the Secret. `L7NetworkPolicy` enables users to protect their applications by specifying how they are allowed to communicate with others, taking into account application context, providing fine-grained control over the network traffic beyond IP, transport protocol, and port. Refer to this for more information. This feature is currently only supported for Nodes running Linux, and TX checksum offloading must be disabled. Refer to this for more information and how it can be configured. The `AdminNetworkPolicy` API (which currently includes the AdminNetworkPolicy and BaselineAdminNetworkPolicy objects) complements the Antrea-native policies and help cluster administrators to set security postures in a portable manner. `NodeNetworkPolicy` allows users to apply ClusterNetworkPolicy to Kubernetes Nodes. This feature is only supported for Linux Nodes at the moment. 
The `EgressTrafficShaping` feature gate of Antrea Agent enables traffic shaping of Egress, which can limit the bandwidth for all egress traffic belonging to an Egress. Refer to this for more information. This feature leverages OVS meters to do the actual rate-limiting, and therefore requires OVS meters to be supported in the datapath.

`EgressSeparateSubnet` allows users to allocate Egress IPs from a different subnet from the default Node subnet. Refer to this for more information.

`L7FlowExporter` enables users to export application-layer flow data using Pod or Namespace annotations. Refer to this for more information. Linux Nodes only."
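Since several of the gates above span both the Agent and the Controller, remember the note at the top of this page about keeping both `.conf` entries consistent. As a minimal sketch, building on the `antrea` ConfigMap example shown earlier and assuming the Controller entry is named `antrea-controller.conf`, enabling an Agent + Controller gate such as `L7NetworkPolicy` would look like:

```yaml
antrea-agent.conf: |
  featureGates:
    L7NetworkPolicy: true      # Agent side
antrea-controller.conf: |
  featureGates:
    L7NetworkPolicy: true      # Controller side, kept consistent with the Agent
```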
}
] |
{
"category": "Runtime",
"file_name": "feature-gates.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Architecture of the library === ELF -> Specifications -> Objects -> Links ELF BPF is usually produced by using Clang to compile a subset of C. Clang outputs an ELF file which contains program byte code (aka BPF), but also metadata for maps used by the program. The metadata follows the conventions set by libbpf shipped with the kernel. Certain ELF sections have special meaning and contain structures defined by libbpf. Newer versions of clang emit additional metadata in BPF Type Format (aka BTF). The library aims to be compatible with libbpf so that moving from a C toolchain to a Go one creates little friction. To that end, the is tested against the Linux selftests and avoids introducing custom behaviour if possible. The output of the ELF reader is a `CollectionSpec` which encodes all of the information contained in the ELF in a form that is easy to work with in Go. The BPF Type Format describes more than just the types used by a BPF program. It includes debug aids like which source line corresponds to which instructions and what global variables are used. lives in a separate internal package since exposing it would mean an additional maintenance burden, and because the API still has sharp corners. The most important concept is the `btf.Type` interface, which also describes things that aren't really types like `.rodata` or `.bss` sections. `btf.Type`s can form cyclical graphs, which can easily lead to infinite loops if one is not careful. Hopefully a safe pattern to work with `btf.Type` emerges as we write more code that deals with it. Specifications `CollectionSpec`, `ProgramSpec` and `MapSpec` are blueprints for in-kernel objects and contain everything necessary to execute the relevant `bpf(2)` syscalls. Since the ELF reader outputs a `CollectionSpec` it's possible to modify clang-compiled BPF code, for example to rewrite"
},
{
"data": "At the same time the package provides an assembler that can be used to generate `ProgramSpec` on the fly. Creating a spec should never require any privileges or be restricted in any way, for example by only allowing programs in native endianness. This ensures that the library stays flexible. Objects `Program` and `Map` are the result of loading specs into the kernel. Sometimes loading a spec will fail because the kernel is too old, or a feature is not enabled. There are multiple ways the library deals with that: Fallback: older kernels don't allow naming programs and maps. The library automatically detects support for names, and omits them during load if necessary. This works since name is primarily a debug aid. Sentinel error: sometimes it's possible to detect that a feature isn't available. In that case the library will return an error wrapping `ErrNotSupported`. This is also useful to skip tests that can't run on the current kernel. Once program and map objects are loaded they expose the kernel's low-level API, e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer wrappers on top of the low-level API, like `MapIterator`. The low-level API is useful when our higher-level API doesn't support a particular use case. Links BPF can be attached to many different points in the kernel and newer BPF hooks tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope. Each bpflinktype has one corresponding Go type, e.g. `link.tracing` corresponds to BPFLINKTRACING. In general, these types should be unexported as long as they don't export methods outside of the Link interface. Each Go type may have multiple exported constructors. For example `AttachTracing` and `AttachLSM` create a tracing link, but are distinct functions since they may require different arguments."
}
] |
{
"category": "Runtime",
"file_name": "ARCHITECTURE.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Directfs is now the default in runsc. This feature gives gVisors application kernel (the Sentry) secure direct access to the container filesystem, avoiding expensive round trips to the filesystem gofer. Learn more about this feature in the following blog that was on . -- gVisor is used internally at Google to run a variety of services and workloads. One of the challenges we faced while building gVisor was providing remote filesystem access securely to the sandbox. gVisors strict and defense in depth approach assumes that the sandbox may get compromised because it shares the same execution context as the untrusted application. Hence the sandbox cannot be given sensitive keys and credentials to access Google-internal remote filesystems. To address this challenge, we added a trusted filesystem proxy called a \"gofer\". The gofer runs outside the sandbox, and provides a secure interface for untrusted containers to access such remote filesystems. For architectural simplicity, gofers were also used to serve local filesystems as well as remote. {:width=\"100%\"} When gVisor was as , the same gofer model was copied over to maintain the same security guarantees. runsc was configured to start one gofer process per container which serves the container filesystem to the sandbox over a predetermined protocol (now ). However, a gofer adds a layer of indirection with significant overhead. This gofer model (built for remote filesystems) brings very few advantages for the runsc use-case, where all the filesystems served by the gofer (like rootfs and ) are mounted locally on the host. The gofer directly accesses them using filesystem syscalls. Linux provides some security primitives to effectively isolate local filesystems. These include, , and detached bind mounts[^1]. Directfs is a new filesystem access mode that uses these primitives to expose the container filesystem to the sandbox in a secure manner. The sandboxs view of the filesystem tree is limited to just the container filesystem. The sandbox process is not given access to anything mounted on the broader host filesystem. Even if the sandbox gets compromised, these mechanisms provide additional barriers to prevent broader system compromise. In directfs mode, the gofer still exists as a cooperative process outside the sandbox. As usual, the gofer enters a new mount namespace, sets up appropriate bind mounts to create the container filesystem in a new directory and then s into that directory. Similarly, the sandbox process enters new user and mount namespaces and then s into an empty directory to ensure it cannot access anything via path"
},
{
"data": "But instead of making RPCs to the gofer to access the container filesystem, the sandbox requests the gofer to provide file descriptors to all the mount points via . The sandbox then directly makes file-descriptor-relative syscalls (e.g. , , , etc) to perform filesystem operations. {:width=\"100%\"} Earlier when the gofer performed all filesystem operations, we could deny all these syscalls in the sandbox process using seccomp. But with directfs enabled, the sandbox process's seccomp filters need to allow the usage of these syscalls. Most notably, the sandbox can now make syscalls (which allow path traversal), but with certain restrictions: , and . We also had to give the sandbox the same privileges as the gofer (for example `CAPDACOVERRIDE` and `CAPDACREAD_SEARCH`), so it can perform the same filesystem operations. It is noteworthy that only the trusted gofer provides FDs (of the container filesystem) to the sandbox. The sandbox cannot walk backwards (using '..') or follow a malicious symlink to escape out of the container filesystem. In effect, we've decreased our dependence on the syscall filters to catch bad behavior, but correspondingly increased our dependence on Linux's filesystem isolation protections. Making RPCs to the gofer for every filesystem operation adds a lot of overhead to runsc. Hence, avoiding gofer round trips significantly improves performance. Let's find out what this means for some of our benchmarks. We will run the benchmarks using our newly released on bind mounts (as opposed to rootfs). This would simulate more realistic use cases because bind mounts are extensively used while configuring filesystems in containers. Bind mounts also do not have an overlay (), so all operations go through goferfs / directfs mount. Let's first look at our , which repeatedly calls on a file. {:width=\"100%\"} The `stat(2)` syscall is more than 2x faster! However, since this is not representative of real-world applications, we should not extrapolate these results. So let's look at some . {:width=\"100%\"} We see a 12% reduction in the absolute time to run these workloads and 17% reduction in Ruby load time! The gofer model in runsc was overly restrictive for accessing host files. We were able to leverage existing filesystem isolation mechanisms in Linux to bypass the gofer without compromising security. Directfs significantly improves performance for certain workloads. This is part of our ongoing efforts to improve gVisor performance. You can learn more about gVisor at . You can also use gVisor in with . Happy sandboxing! -- mount(MS_BIND) and then detaching it from the filesystem tree using umount(MNT_DETACH)."
}
] |
{
"category": "Runtime",
"file_name": "2023-06-27-directfs.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "and are used in Datenlord to monitor the whole system's status, as well as to send alerts when the alert rules are triggered. To quickly start all the things just do this: ``` kubectl apply -f scripts/setup/datenlord-monitor.yaml ``` This will create the namespace `datenlord-monitoring` and bring up all components in there. To shut down all components again you can just delete that namespace: ``` kubectl delete namespace datenlord-monitoring ``` All configurations for Prometheus are part of `datenlord-monitor.yaml` file and are saved as a Kubernetes config map. By externalizing Prometheus configs to a Kubernetes config map, you dont have to build the Prometheus image whenever you need to add or remove a configuration. You need to update the config map and restart the Prometheus pods to apply the new configuration. The `prometheus-server-conf` config map contains all the configurations to discover pods and services running in the Kubernetes cluster dynamically. The following scrape jobs are configured in our Prometheus monitor configuration: kubernetes-apiservers: It gets all the metrics from the API servers. kubernetes-nodes: It collects all the kubernetes node metrics. kubernetes-pods: All pods metrics. kubernetes-cadvisor: Collects all cAdvisor metrics. kubernetes-service-endpoints: All the Service endpoints are scrapped if the service metadata is annotated with prometheus.io/scrape and prometheus.io/port annotations. node-exporter: Collects all the Linux system-level metrics of all Kubernetes nodes. Prometheus main server is deployed as a `Deployment` with one replica. To access the Prometheus dashboard over a IP or a DNS name, we also expose it as Kubernetes service. Once created, you can access the Prometheus dashboard using any of the Kubernetes nodes IP on port 30000. If you use Minikube, you can also run `minikube service prometheus-service -n datenlord-monitoring`, which will open the Prometheus dashboard page automatically. Now if you browse to `status -> Targets`, you will see all the Kubernetes endpoints connected to Prometheus as shown below. `Alert Manager` is also setup in Datenlord Prometheus, which has the following key configurations: A config map for AlertManager configuration. Alert Manager Kubernetes Deployment. Alert Manager service to access the web UI. Alert receiving configuration is saved in the `alertmanager-config` config map, you can change it to set up your custom alert receiver. The alerting rules are configured in `prometheus-server-conf` config map. Grafana server is also deployed as a `Deployment` with one replica, and is configured with prometheus as default data"
},
{
"data": "If you have more data sources, you can add more data sources with different YAMLs under the `grafana-datasources` config map. The Grafana dashboard is exposed as a Kubernetes service. Once created, you can access the Grafana dashboard using any of the Kubernetes nodes IP on port 32000. If you use Minikube, you can also run `minikube service grafana -n datenlord-monitoring`, which will open the Grafana dashboard page automatically. Use the following default username and password to log in. Once you log in with default credentials, it will prompt you to change the default password. ``` User: admin Pass: admin ``` Datenlord has already built a dashboard templates. To use Datenlord dashboard, just import file in Grafana. Then you will see all the prebuilt dashboards as shown below. The detail configurations can be found in this doc. There are also many prebuilt Grafana templates available for Kubernetes. To know more, see Datenlord uses `Elasticsearch`, `Fluentd` and `Kibana` to collect and display all logs from Kubernetes system. Fluentd is able to collect logs both from user applications and cluster components such as kube-apiserver and kube-scheduler, two special nodes in the k8s cluster. As with fluentd, ElasticSearch can perform many tasks, all of them centered around storing and searching. Kibana is a useful visualization software that helps with data demonstration. To quickly start all the things just do this: ``` kubectl apply -f scripts/setup/datenlord-logging.yaml ``` This will create the namespace `datenlord-logging` and bring up all components in there. To shut down all components again you can just delete that namespace: ``` kubectl delete namespace datenlord-logging ``` Elasticsearch is deployed as a `Deployment` with `emptyDir` storage volume. You can also change it to your own PVC for persistent storage. As the fluentd needs to keep all the logs from the cluster, it has to be installed in all nodes. So it is deployed as a `DaemonSet`. All the configurations are stored in the `fluentd-es-config` config map, which has the following key setup: Filters to discard useless logs. Input sources to control where to collect logs. Output configuration to set Elasticsearch as the destination. Kibana is already configured with Elasticsearch as its default data source, and is exposed as a Kubernetes service. Once created, you can access the Kibana dashboard using any of the Kubernetes nodes IP on port 32001. If you use Minikube, you can also run `minikube service kibana -n datenlord-logging`, which will open the Kibana dashboard automatically. Then you can create any index pattern under the `Index Patterns` page."
}
] |
{
"category": "Runtime",
"file_name": "datenlord_monitoring.md",
"project_name": "DatenLord",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "sidebar_position: 15 sidebar_label: \"Community\" If you have any questions about the HwameiStor cloud-native local storage system, welcome to join the community to explore this metaverse world dedicated to developers and grow together! Submit your feedback and issue via . Join a user group #user Join a developer group #developer The blog posts updates on a regular basis. Join regular discussions with community developers. Tel.: (+86) 400 002 6898 Email: info@ Join our Wechat group by scanning the QR code:"
}
] |
{
"category": "Runtime",
"file_name": "community.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Pass the ttRPC receiving context from the Stub to each NRI request handler of the plugin. Fix Stub/Plugin UpdateContainer interface to pass the resource update to the UpdateContainer NRI request handler of the plugin as the last argument. All plugins need to be updated to reflect the above changes in any NRI request handler they implement. NRI plugins can now add rlimits Eliminate the global NRI configuration file, replacing any remaining configuration options with corresponding programmatic options for runtimes. Change default socket path from /var/run/nri.sock to /var/run/nri/nri.sock. Make plugin timeouts configurable on the runtime side. Plugins should be API-compatible between 0.2.0 and 0.3.0, but either the runtime needs to be configured to use the old NRI socket path, or 0.2.0 plugins need to be configured to use the new default NRI socket path. Replace the v0.1.0 CNI like plugin interface with JSON message exchange on stdin and stdout with external daemon-like plugins and a protobuf-defined protocol with ttRPC bindings for communicating with the runtime. Allow plugins to track the state of (CRI) pods and containers. Allow plugins to make changes to a selected subset of container parameters during container creation, update, and stopping of (other) containers. All 0.1.0 plugins are incompatible with 0.2.0, although is provided to bridge between any existing 0.1.0 plugins and the current NRI APIs."
}
] |
{
"category": "Runtime",
"file_name": "RELEASES.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "gVisor serve as the source-of-truth for most work in flight. Specific performance and compatibility issues are generally tracked there. may be used to track larger features that span many issues. However, labels are also used to aggregate cross-cutting feature work. Most gVisor work is focused on four areas. : overall sandbox performance, including platform performance, is a critical area for investment. This includes: network performance (throughput and latency), file system performance (metadata and data I/O), application switch and fault costs, etc. The goal of gVisor is to provide sandboxing without a material performance or efficiency impact on all but the most performance-sensitive applications. : supporting a wide range of applications requires supporting a large system API, including special system files (e.g. proc, sys, dev, etc.). The goal of gVisor is to support the broad set of applications that depend on a generic Linux API, rather than a specific kernel version. : the above goals require aggressive testing and coverage, and well-established processes. This includes adding appropriate system call coverage, end-to-end suites and runtime tests. : Container infrastructure is evolving rapidly and becoming more complex, and gVisor must continuously implement relevant and popular features to ensure that integration points remain robust and feature-complete while preserving security guarantees. Releases are available on . As a convenience, binary packages are also published. Instructions for their use are available via the ."
}
] |
{
"category": "Runtime",
"file_name": "roadmap.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Kubelet configured to use CNI Kubernetes version with CRD support (generally ) Your Kubelet(s) must be configured to run with the CNI network plugin. Please see for more details. Generally we recommend two options: Manually place a Multus binary in your `/opt/cni/bin`, or use our -- which creates a daemonset that has an opinionated way of how to install & configure Multus CNI (recommended). Copy Multus Binary into place You may acquire the Multus binary via compilation (see the ) or download the a binary from the page. Copy multus binary into CNI binary directory, usually `/opt/cni/bin`. Perform this on all nodes in your cluster (master and nodes). cp multus /opt/cni/bin Via Daemonset method As a , you may apply these YAML files (included in the clone of this repository). Run this command (typically you would run this on the master, or wherever you have access to the `kubectl` command to manage your cluster). cat ./deployments/multus-daemonset.yml | kubectl apply -f - # thin deployment or cat ./deployments/multus-daemonset-thick.yml | kubectl apply -f - # thick (client/server) deployment If you need more comprehensive detail, continue along with this guide, otherwise, you may wish to either or skip to the section. If you use daemonset to install multus, skip this section and go to \"Create network attachment\" You put CNI config file in `/etc/cni/net.d`. Kubernetes CNI runtime uses the alphabetically first file in the directory. (`\"NOTE1\"`, `\"NOTE2\"` are just comments, you can remove them at your configuration) Execute following commands at all Kubernetes nodes (i.e. master and minions) ``` mkdir -p /etc/cni/net.d cat >/etc/cni/net.d/00-multus.conf <<EOF { \"name\": \"multus-cni-network\", \"type\": \"multus\", \"readinessindicatorfile\": \"/run/flannel/subnet.env\", \"delegates\": [ { \"NOTE1\": \"This is example, wrote your CNI config in delegates\", \"NOTE2\": \"If you use flannel, you also need to run flannel daemonset before!\", \"type\": \"flannel\", \"name\": \"flannel.1\", \"delegate\": { \"isDefaultGateway\": true } } ], \"kubeconfig\": \"/etc/cni/net.d/multus.d/multus.kubeconfig\" } EOF ``` For the detail, please take a look into NOTE: You can use \"clusterNetwork\"/\"defaultNetworks\" instead of \"delegates\", see for the detail As above config, you need to set `\"kubeconfig\"` in the config file for NetworkAttachmentDefinition(CRD). In case of \"delegates\", the first delegates network will be used for \"Pod IP\". Otherwise, \"clusterNetwork\" will be used for \"Pod IP\". 
Create resources for multus to access CRD objects as following command: ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: ServiceAccount metadata: name: multus namespace: kube-system kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: multus rules: apiGroups: [\"k8s.cni.cncf.io\"] resources: '*' verbs: '*' apiGroups: \"\" resources: pods pods/status verbs: get update kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: multus roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: multus subjects: kind: ServiceAccount name: multus namespace: kube-system EOF ``` Create kubeconfig at master node as following commands: ``` mkdir -p /etc/cni/net.d/multus.d SERVICEACCOUNT_CA=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations.\"kubernetes.io/service-account.name\"==\"multus\")| .data.\"ca.crt\"') SERVICEACCOUNT_TOKEN=$(kubectl get secrets -n=kube-system -o json | jq -r '.items[]|select(.metadata.annotations.\"kubernetes.io/service-account.name\"==\"multus\")| .data.token' | base64 -d ) KUBERNETESSERVICEPROTO=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].name) KUBERNETESSERVICEHOST=$(kubectl get all -o json | jq -r .items[0].spec.clusterIP) KUBERNETESSERVICEPORT=$(kubectl get all -o json | jq -r .items[0].spec.ports[0].port) cat > /etc/cni/net.d/multus.d/multus.kubeconfig <<EOF apiVersion: v1 kind: Config clusters: name: local cluster: server: ${KUBERNETESSERVICEPROTOCOL:-https}://${KUBERNETESSERVICEHOST}:${KUBERNETESSERVICEPORT} certificate-authority-data: ${SERVICEACCOUNT_CA} users: name: multus user: token: \"${SERVICEACCOUNT_TOKEN}\" contexts: name: multus-context context: cluster: local user: multus current-context: multus-context EOF ``` Copy `/etc/cni/net.d/multus.d/multus.kubeconfig` into other Kubernetes nodes NOTE: Recommend to exec 'chmod 600 /etc/cni/net.d/multus.d/multus.kubeconfig' to keep secure ``` scp /etc/cni/net.d/multus.d/multus.kubeconfig"
},
{
"data": "``` If you use daemonset to install multus, skip this section and go to \"Create network attachment\" Create CRD definition in Kubernetes as following command at master node: ``` cat <<EOF | kubectl create -f - apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: network-attachment-definitions.k8s.cni.cncf.io spec: group: k8s.cni.cncf.io version: v1 scope: Namespaced names: plural: network-attachment-definitions singular: network-attachment-definition kind: NetworkAttachmentDefinition shortNames: net-attach-def validation: openAPIV3Schema: properties: spec: properties: config: type: string EOF ``` The 'NetworkAttachmentDefinition' is used to setup the network attachment, i.e. secondary interface for the pod, There are two ways to configure the 'NetworkAttachmentDefinition' as following: NetworkAttachmentDefinition with json CNI config NetworkAttachmentDefinition with CNI config file Following command creates NetworkAttachmentDefinition. CNI config is in `config:` field. ``` cat <<EOF | kubectl create -f - apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf-1 spec: config: '{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"10.10.0.0/16\", \"rangeStart\": \"10.10.1.20\", \"rangeEnd\": \"10.10.3.50\", \"gateway\": \"10.10.0.254\" } ] ] } }' EOF ``` If NetworkAttachmentDefinition has no spec, multus find a file in defaultConfDir ('/etc/cni/multus/net.d', with same name in the 'name' field of CNI config. ``` cat <<EOF | kubectl create -f - apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf-2 EOF ``` ``` cat <<EOF > /etc/cni/multus/net.d/macvlan2.conf { \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"name\": \"macvlan-conf-2\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"11.10.0.0/16\", \"rangeStart\": \"11.10.1.20\", \"rangeEnd\": \"11.10.3.50\" } ] ] } } EOF ``` ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-01 annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf-1, macvlan-conf-2 spec: containers: name: pod-case-01 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` You can also specify NetworkAttachmentDefinition with its namespace as adding `<namespace>/` ``` cat <<EOF | kubectl create -f - apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf-3 namespace: testns1 spec: config: '{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"ranges\": [ [ { \"subnet\": \"12.10.0.0/16\", \"rangeStart\": \"12.10.1.20\", \"rangeEnd\": \"12.10.3.50\" } ] ] } }' EOF cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-02 annotations: k8s.v1.cni.cncf.io/networks: testns1/macvlan-conf-3 spec: containers: name: pod-case-02 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` You can also specify interface name as adding `@<ifname>`. 
``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-03 annotations: k8s.v1.cni.cncf.io/networks: macvlan-conf-1@macvlan1 spec: containers: name: pod-case-03 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-04 annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\" : \"macvlan-conf-1\" }, { \"name\" : \"macvlan-conf-2\" } ]' spec: containers: name: pod-case-04 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` You can also specify NetworkAttachmentDefinition with its namespace as adding `\"namespace\": \"<namespace>\"`. ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-05 annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\" : \"macvlan-conf-1\", \"namespace\": \"testns1\" } ]' spec: containers: name: pod-case-05 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` You can also specify interface name as adding `\"interface\": \"<ifname>\"`. ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: pod-case-06 annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\" : \"macvlan-conf-1\", \"interface\": \"macvlan1\" }, { \"name\" : \"macvlan-conf-2\" } ]' spec: containers: name: pod-case-06 image: docker.io/centos/tools:latest command: /sbin/init EOF ``` Following the example of `ip -d address` output of above pod, \"pod-case-06\": ``` kubectl exec -it pod-case-06 -- ip -d address 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 numtxqueues 1 numrxqueues 1 gsomaxsize 65536 gsomaxsegs 65535 inet"
},
{
"data": "scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever 3: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default link/ether 0a:58:0a:f4:02:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0 veth numtxqueues 1 numrxqueues 1 gsomaxsize 65536 gsomaxsegs 65535 inet 10.244.2.6/24 scope global eth0 validlft forever preferredlft forever inet6 fe80::ac66:45ff:fe7c:3a19/64 scope link validlft forever preferredlft forever 4: macvlan1@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 4e:6d:7a:4e:14:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0 macvlan mode bridge numtxqueues 1 numrxqueues 1 gsomaxsize 65536 gsomaxsegs 65535 inet 10.10.1.22/16 scope global macvlan1 validlft forever preferredlft forever inet6 fe80::4c6d:7aff:fe4e:1487/64 scope link validlft forever preferredlft forever 5: net2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether 6e:e3:71:7f:86:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 0 promiscuity 0 macvlan mode bridge numtxqueues 1 numrxqueues 1 gsomaxsize 65536 gsomaxsegs 65535 inet 11.10.1.22/16 scope global net2 validlft forever preferredlft forever inet6 fe80::6ce3:71ff:fe7f:86f7/64 scope link validlft forever preferredlft forever ``` | Interface name | Description | | | | | lo | loopback | | eth0 | Default network interface (flannel) | | macvlan1 | macvlan interface (macvlan-conf-1) | | net2 | macvlan interface (macvlan-conf-2) | Typically, the default route for a pod will route traffic over the `eth0` and therefore over the cluster-wide default network. You may wish to specify that a different network attachment will have the default route. You can achieve this by using the JSON formatted annotation and specifying a `default-route` key. NOTE: It's important that you consider that this may impact some functionality of getting traffic to route over the cluster-wide default network. For example, we have a this configuration for macvlan: ``` cat <<EOF | kubectl create -f - apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: macvlan-conf spec: config: '{ \"cniVersion\": \"0.3.0\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"host-local\", \"subnet\": \"192.168.2.0/24\", \"rangeStart\": \"192.168.2.200\", \"rangeEnd\": \"192.168.2.216\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"gateway\": \"192.168.2.1\" } }' EOF ``` We can then create a pod which uses the `default-route` key in the JSON formatted `k8s.v1.cni.cncf.io/networks` annotation. ``` cat <<EOF | kubectl create -f - apiVersion: v1 kind: Pod metadata: name: samplepod annotations: k8s.v1.cni.cncf.io/networks: '[{ \"name\": \"macvlan-conf\", \"default-route\": [\"192.168.2.1\"] }]' spec: containers: name: samplepod command: [\"/bin/bash\", \"-c\", \"trap : TERM INT; sleep infinity & wait\"] image: dougbtv/centos-network EOF ``` This will set `192.168.2.1` as the default route over the `net1` interface, such as: ``` kubectl exec -it samplepod -- ip route default via 192.168.2.1 dev net1 10.244.0.0/24 dev eth0 proto kernel scope link src 10.244.0.169 10.244.0.0/16 via 10.244.0.1 dev eth0 ``` Multus CNI, when installed using the daemonset-style installation uses an entrypoint script which copies the Multus binary into place, places CNI configurations. This entrypoint takes a variety of parameters for customization. 
Typically, you'd modify the daemonset YAML itself to specify these parameters. For example, the `command` and `args` parameters in the `containers` section of the DaemonSet may look something like:

```
command: [\"/entrypoint.sh\"]
args:
- \"--multus-conf-file=auto\"
- \"--namespace-isolation=true\"
- \"--multus-log-level=verbose\"
```

Note that some of the defaults have directories inside the root directory named `/host/`; this is because it is deployed as a container and we have host file system locations mapped into this directory inside the container. If you use other directories, you may have to change the mounted volumes. Each parameter is shown with the default as the value.

--cni-conf-dir=/host/etc/cni/net.d

This is the configuration directory where Multus will write its configuration file.

--cni-bin-dir=/host/opt/cni/bin

This is the directory in which the Multus binary will be"
},
{
"data": "--namespace-isolation=false Setting this option to true enables the Namespace isolation feature, which insists that custom resources must be created in the same namespace as the pods, otherwise it will refuse to attach those definitions as additional interfaces. See (the configuration guide for more information)[configuration.md]. --global-namespaces=default,foo,bar The `--global-namespaces` works only when `--namespace-isolation=true`. This takes a comma-separated list of namespaces which can be referred to globally when namespace isolation is enabled. See (the configuration guide for more information)[configuration.md]. --multus-bin-file=/usr/src/multus-cni/bin/multus This option lets you set which binary executable to copy from the container onto the host (into the directory specified by `--cni-bin-dir`), allowing one to copy an alternate version or build of Multus CNI. --multus-conf-file=/usr/src/multus-cni/images/70-multus.conf The `--multus-conf-file` is one of two options; it can be set to a source file to be copied into the location specified by `--cni-conf-dir`. Or, to a value of `auto`, that is: `--multus-conf-file=auto`. The automatic configuration option is used to automatically generate Multus configurations given existing on-disk CNI configurations for your default network. In the case that `--multus-conf-file=auto` -- The entrypoint script will look at the `--multus-autoconfig-dir` (by default, the same as the `--cni-conf-dir`). Multus will wait (600 seconds) until there's a CNI configuration file there, and it will take the alphabetically first configuration there, and it will wrap that configuration into a Multus configuration. --multus-autoconfig-dir=/host/etc/cni/net.d Used only with `--multus-conf-file=auto`. This option allows one to set which directory will be used to generate configuration files. This can be used if you have your CNI configuration stored in an alternate location, or, you have constraints on race conditions where you'd like to generate your default network configuration first, and then only have Multus write its configuration when it finds that configuration -- allowing only Multus to write the CNI configuration in the `--cni-conf-dir`, therefore notifying the Kubelet that the node is in a ready state. --multus-kubeconfig-file-host=/etc/cni/net.d/multus.d/multus.kubeconfig Used only with `--multus-conf-file=auto`. Allows you to specify an alternate path to the Kubeconfig. --multus-master-cni-file-name= The `--multus-master-cni-file-name` can be used to select the cni file as the master cni, rather than the first file in cni-conf-dir. For example, `--multus-master-cni-file-name=10-calico.conflist`. --multus-log-level= --multus-log-file= Used only with `--multus-conf-file=auto`. See the for which values are permitted. Used only with `--multus-conf-file=auto`. Allows you to specify CNI spec version. Please set if you need to specify CNI spec version. --cni-version= In some cases, the original CNI configuration that the Multus configuration was generated from (using `--multus-conf-file=auto`) may be used as a sort of semaphor for network readiness -- as this model is used by the Kubelet itself. If you need to disable Multus' availability, you may wish to clean out the generated configuration file when the source file for autogeneration of the config file is no longer present. 
You can use this functionality by setting: --cleanup-config-on-exit=true Additionally when using CRIO, you may wish to have the CNI config file that's used as the source for `--multus-conf-file=auto` renamed. This boolean option when set to true automatically renames the file with a `.old` suffix to the original filename. --rename-conf-file=true When using `--multus-conf-file=auto` you may also care to specify a `binDir` in the configuration, this can be accomplished using the `--additional-bin-dir` option. --additional-bin-dir=/opt/multus/bin Sometimes, you may wish to not have the entrypoint copy the binary file onto the host. Potentially, you have another way to copy in a specific version of Multus, for example. By default, it's always copied, but you may disable the copy with: --skip-multus-binary-copy=true If you wish to have auto configuration use the `readinessindicatorfile` in the configuration, you can use the `--readiness-indicator-file` to express which file should be used as the readiness indicator. --readiness-indicator-file=/path/to/file"
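Putting a few of these entrypoint flags together, the DaemonSet `args` block shown earlier in this guide might be extended roughly as follows. This is only a sketch: the flag names come from the descriptions above, while the particular combination and the `/run/flannel/subnet.env` value (reused from the sample 00-multus.conf) are illustrative.

```
command: [\"/entrypoint.sh\"]
args:
- \"--multus-conf-file=auto\"
- \"--cleanup-config-on-exit=true\"
- \"--rename-conf-file=true\"
- \"--readiness-indicator-file=/run/flannel/subnet.env\"
```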
}
] |
{
"category": "Runtime",
"file_name": "how-to-use.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline"
},
{
"data": "Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. Community Impact: A serious violation of community standards, including sustained inappropriate behavior. Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. This Code of Conduct is adapted from the , version 2.0, available at https://www.contributor-covenant.org/version/2/0/codeofconduct.html. Community Impact Guidelines were inspired by [Mozilla's code of conduct enforcement ladder](https://github.com/mozilla/diversity). For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "kube-vip",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "As the main users of Tencent Cloud are from China, so the tutorial is . We also provide a code template for deploying serverless WebAssembly functions on Tencent Cloud, please check out . Fork the repo and start writing your own rust functions."
}
] |
{
"category": "Runtime",
"file_name": "tencent.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "https://github.com/heptio/velero/releases/tag/v0.11.1 Added the `velero migrate-backups` command to migrate legacy Ark backup metadata to the current Velero format in object storage. This command needs to be run in preparation for upgrading to v1.0, if* you have backups that were originally created prior to v0.11 (i.e. when the project was named Ark). https://github.com/heptio/velero/releases/tag/v0.11.0 Heptio Ark is now Velero! This release is the first one to use the new name. For details on the changes and how to migrate to v0.11, see the . Please follow the instructions to ensure a successful upgrade to v0.11.* Restic has been upgraded to v0.9.4, which brings significantly faster restores thanks to a new multi-threaded restorer. Velero now waits for terminating namespaces and persistent volumes to delete before attempting to restore them, rather than trying and failing to restore them while they're being deleted. Fix concurrency bug in code ensuring restic repository exists (#1235, @skriss) Wait for PVs and namespaces to delete before attempting to restore them. (#826, @nrb) Set the zones for GCP regional disks on restore. This requires the `compute.zones.get` permission on the GCP serviceaccount in order to work correctly. (#1200, @nrb) Renamed Heptio Ark to Velero. Changed internal imports, environment variables, and binary name. (#1184, @nrb) use 'restic stats' instead of 'restic check' to determine if repo exists (#1171, @skriss) upgrade restic to v0.9.4 & replace --hostname flag with --host (#1156, @skriss) Clarify restore log when object unchanged (#1153, @daved) Add backup-version file in backup tarball. (#1117, @wwitzel3) add ServerStatusRequest CRD and show server version in `ark version` output (#1116, @skriss)"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-0.11.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Vendoring dependencies\" layout: docs We are using to manage dependencies. You can install it by following [these instructions][1]. Run `dep ensure`. If you want to see verbose output, you can append `-v` as in `dep ensure -v`. Run `dep ensure -update <pkg> [<pkg> ...]` to update one or more dependencies."
}
] |
{
"category": "Runtime",
"file_name": "vendoring-dependencies.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete a policy entry ``` cilium-dbg bpf policy delete <endpoint id> <identity> [port/proto] [flags] ``` ``` --deny Sets deny mode -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage policy related BPF maps"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_policy_delete.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The release procedure below can be performed by a project member with \"maintainer\" or higher privileges on the GitHub repository. It assumes that you will be working in an up-to-date local clone of the GitHub repository, where the `upstream` remote points to `github.com/hpcng/singularity`. Set a target date for the release candidate (if required) and release. Generally 2 weeks from RC -> release is appropriate for new 3.X.0 minor versions. Aim to specifically discuss the release timeline and progress in community meetings at least 2 months prior to the scheduled date. Use a GitHub milestone to track issues and PRs that will form part of the next release. Ensure that the `CHANGELOG.md` is kept up-to-date on the `master` branch, with all relevant changes listed under a \"Changes Since Last Release\" section. Monitor and merge dependabot updates, such that a release is made with as up-to-date versions of dependencies as possible. This lessens the burden in addressing patch release fixes that require dependency updates, as we use several dependencies that move quickly. When a new 3.Y.0 minor version of Singularity is issued the release process begins by branching, and then issuing a release candidate for broader testing. When a new 3.Y.Z patch release is issued, the branch will already be present, and steps 1-2 should be skipped. From a repository that is up-to-date with master, create a release branch e.g. `git checkout upstream/master -b release-3.8`. Push the release branch to GitHub via `git push upstream release-3.8`. Examine the GitHub branch protection rules, to extend them to the new release branch if needed. Modify the `README.md`, `INSTALL.md`, `CHANGELOG.md` via PR against the release branch, so that they reflect the version to be released. Apply an annotated tag via `git tag -a -m \"Singularity v3.8.0 Release Candidate 1\""
},
{
"data": "Push the tag via `git push upstream v3.8.0-rc.1`. Create a tarball via `./mconfig --only-rpm -v && make dist`. Test intallation from the tarball. Compute the sha256sum of the tarball e.g. `sha256sum *.tar.gz > sha256sums`. Create a GitHub release, marked as a 'pre-release', incorporating `CHANGELOG.md` information, and attaching the tarball and `sha256sums`. Notify the community about the RC via the Google Group and Slack. There will often be multiple release candidates issued prior to the final release of a new 3.Y.0 minor version. A small 3.Y.Z patch release may not require release candidates where the code changes are contained, confirmed by the person reporting the bug(s), and well covered by tests. Ensure the user and admin documentation is up-to-date for the new version, branched, and tagged. can be edited can be edited Ensure the user and admin documentation has been deployed to the singularity.hpcng.org website. Modify the `README.md`, `INSTALL.md`, `CHANGELOG.md` via PR against the release branch, so that they reflect the version to be released. Apply an annotated tag via `git tag -a -m \"Singularity v3.8.0\" v3.8.0`. Push the tag via `git push upstream v3.8.0-rc.1`. Create a tarball via `./mconfig -v && make dist`. Test intallation from the tarball. Compute the sha256sum of the tarball e.g. `sha256sum *.tar.gz > sha256sums`. Create a GitHub release, incorporating `CHANGELOG.md` information, and attaching the tarball and `sha256sums`. Notify the community about the RC via the Google Group and Slack. Create and merge a PR from the `release-3.x` branch into `master`, so that history from the RC process etc. is captured on `master`. If the release is a new major/minor version, move the prior `release-3.x` branch to `vault/release-3.x`. If the release is a new major/minor version, update the `.github/dependabot.yml` configuration so that dependabot is tracking the new stable release branch. Start scheduling / setting up milestones etc. to track the next release!"
}
] |
{
"category": "Runtime",
"file_name": "RELEASE_PROCEDURE.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(authorization)= When interacting with Incus over the Unix socket, members of the `incus-admin` group will have full access to the Incus API. Those who are only members of the `incus` group will instead be restricted to a single project tied to their user. When interacting with Incus over the network (see {ref}`server-expose` for instructions), it is possible to further authenticate and restrict user access. There are two supported authorization methods: {ref}`authorization-tls` {ref}`authorization-openfga` (authorization-tls)= Incus natively supports restricting {ref}`authentication-trusted-clients` to one or more projects. When a client certificate is restricted, the client will also be prevented from performing global configuration changes or altering the configuration (limits, restrictions) of the projects it's allowed access to. To restrict access, use . Set the `restricted` key to `true` and specify a list of projects to restrict the client to. If the list of projects is empty, the client will not be allowed access to any of them. This authorization method is always used if a client authenticates with TLS, regardless of whether another authorization method is configured. (authorization-openfga)= Incus supports integrating with . This authorization method is highly granular. For example, it can be used to restrict user access to a single instance. To use OpenFGA for authorization, you must configure and run an OpenFGA server yourself. To enable this authorization method in Incus, set the server configuration options. Incus will connect to the OpenFGA server, write the {ref}`openfga-model`, and query this server for authorization for all subsequent requests. (openfga-model)= With OpenFGA, access to a particular API resource is determined by the user's relationship to it. These relationships are determined by an . The Incus OpenFGA authorization model describes API resources in terms of their relationship to other resources, and a relationship a user or group might have with that resource. Some convenient relations have also been built into the model: `server -> admin`: Full access to Incus. `server -> operator`: Full access to Incus, without edit access on server configuration, certificates, or storage pools. `server -> viewer`: Can view all server level configuration but cannot edit. Cannot view projects or their contents. `project -> manager`: Full access to a single project, including edit access. `project -> operator`: Full access to a single project, without edit access. `project -> viewer`: View access for a single project. `instance -> manager`: Full access to a single instance, including edit access. `instance -> operator`: Full access to a single instance, without edit access. `instance -> user`: View access to a single instance, plus permissions for `exec`, `console`, and `file` APIs. `instance -> viewer`: View access to a single instance. ```{important} Users that you do not trust with root access to the host should not be granted the following relations: `server -> admin` `server -> operator` `server -> can_edit` `server -> cancreatestorage_pools` `server -> cancreateprojects` `server -> cancreatecertificates` `certificate -> can_edit` `storagepool -> canedit` `project -> manager` The remaining relations may be granted. However, you must apply appropriate {ref}`project-restrictions`. 
``` The full Incus OpenFGA authorization model is defined in `internal/server/auth/driveropenfgamodel.openfga`: ```{literalinclude} ../internal/server/auth/driveropenfgamodel.openfga language: none ```"
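As a sketch of the TLS restriction described above, the trust-store entry for a client certificate might carry the following keys. The project names are placeholders, and `incus config trust edit <fingerprint>` is assumed to be the command used to reach this configuration.

```yaml
restricted: true
projects:
  - webapp
  - staging
```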
}
] |
{
"category": "Runtime",
"file_name": "authorization.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The default setting for CubeFS volumes does not enable the trash feature. However, you can enable the trash feature for a volume through the master service interface. namevolume name. authKeycalculate the 32-bit MD5 value of the owner field of vol as authentication information. trashIntervaltrash cleanup period for deleted files is specified in minutes. The default value of 0 means that the trash feature is not enabled. ``` bash curl -v \"http://127.0.0.1:17010/vol/setTrashInterval?name=test&authKey=&trashInterval=7200\" | jq . ``` After enabling the trash feature, the client will create a hidden folder named \".Trash\" in the root directory of the mount point. The \"Current\" folder within \".Trash\" retains the currently mistakenly deleted files or folders along with their complete file paths. ::: tip Note In order to improve the efficiency of the trash, deleted files are displayed with their parent directories flattened in the file names. The background coroutine constructs the parent directory paths for the files. Therefore, when there are a large number of files in the deleted, it is possible to briefly encounter situations where deleted files do not have their parent directories. This is considered a normal occurrence. ::: The \"Current\" directory is periodically renamed to Expired_ExpirationTimestamp. When the expiration timestamp of the \"Expired\" folder is reached, all contents within that folder are deleted. As mentioned earlier, to recover a mistakenly deleted file, you simply need to locate the file in either the \"Current\" or \"Expired\" directory within the \".Trash\" folder. Using the complete parent directory path, you can restore the deleted file/folder to its original location using the \"mv\" operation. It is important to note that the contents of the trash rely on the client's background coroutine for periodic deletion. Therefore, if there is no online client for the respective volume, the contents of the trash will be retained until a client for the respective volume. To free up space in the trash as quickly as possible, you can also directly perform the \"rm\" operation on the \".Trash\" folder through the client."
}
] |
{
"category": "Runtime",
"file_name": "trash.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Rook Test Framework The integration tests run end-to-end tests on Rook in a running instance of Kubernetes. The framework includes scripts for starting Kubernetes so users can quickly spin up a Kubernetes cluster. The tests are generally designed to install Rook, run tests, and uninstall Rook. The CI runs the integration tests with each PR and each master or release branch build. If the tests fail in a PR, the tmate for debugging. This document will outline the steps to run the integration tests locally in a minikube environment, should the CI not be sufficient to troubleshoot. !!! hint The CI is generally much simpler to troubleshoot than running these tests locally. Running the tests locally is rarely necessary. !!! warning A risk of running the tests locally is that a local disk is required during the tests. If not running in a VM, your laptop or other test machine could be destroyed. Follow Rook's to install Minikube. Now that the Kubernetes cluster is running we need to populate the Docker registry to allow local image builds to be easily used inside Minikube. ```console eval $(minikube docker-env -p minikube) ``` `make build` will now build and push the images to the Docker registry inside the Minikube virtual machine. ```console make build ``` Tag the newly built images to `rook/ceph:local-build` for running tests, or `rook/ceph:master` if creating example manifests:: ```console docker tag $(docker images|awk '/build-/ {print $1}') rook/ceph:local-build docker tag rook/ceph:local-build rook/ceph:master ``` Some settings are available to run the tests under different environments. The settings are all configured with environment variables. See for the available environment variables. Set the following variables: ```console export TESTHELMPATH=/tmp/rook-tests-scripts-helm/linux-amd64/helm export TESTBASEDIR=WORKING_DIR export TESTSCRATCHDEVICE=/dev/vdb ``` Set `TESTSCRATCHDEVICE` to the correct block device name based on the driver that's being used. !!! hint If using the `virtualbox` minikube driver, the device should be `/dev/sdb` !!! warning The integration tests erase the contents of `TESTSCRATCHDEVICE` when the test is completed To run a specific suite, specify the suite name: ```console go test -v -timeout 1800s -run CephSmokeSuite github.com/rook/rook/tests/integration ``` After running tests, see test logs under `tests/integration/_output`. To run specific tests inside a suite: ```console go test -v -timeout 1800s -run CephSmokeSuite github.com/rook/rook/tests/integration -testify.m TestARookClusterInstallation_SmokeTest ``` !!! info Only the golang test suites are documented to run locally. Canary and other tests have only ever been supported in the CI. Setup OpenShift environment and export KUBECONFIG Make sure `oc` executable file is in the PATH. Only the `CephSmokeSuite` is currently supported on OpenShift. Set the following environment variables depending on the environment: ```console export TESTENVNAME=openshift export TESTSTORAGECLASS=gp2 export TESTBASEDIR=/tmp ``` Run the integration tests"
}
] |
{
"category": "Runtime",
"file_name": "rook-test-framework.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show contents of table \"health\" ``` cilium-dbg statedb health [flags] ``` ``` -h, --help help for health -w, --watch duration Watch for new changes with the given interval (e.g. --watch=100ms) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_statedb_health.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "First install the rpmbuild tool and initialize the rpmbuild working directory: ```shell yum install -y rpm-build rpmbuild -ba isulad.spec ``` The second command will exit with an error, but this command will still create the rpmbuild directory in the user's default working directory. In the rpmbuild working directory, you will find the following subdirectories: ```shell $ ls ~/rpmbuild BUILD BUILDROOT RPMS SOURCES SPECS SRPMS ``` tips: `SOURCES` stores the source code `SPECS` stores the SPEC files used for the build `RPMS` stores rpm packages. ```shell dnf install -y patch automake autoconf libtool cmake make libcap libcap-devel libselinux libselinux-devel libseccomp libseccomp-devel git libcgroup tar python3 python3-pip libcurl-devel zlib-devel glibc-headers openssl-devel gcc gcc-c++ systemd-devel systemd-libs golang libtar && \\ dnf --enablerepo=powertools install -y yajl-devel device-mapper-devel && \\ dnf install -y epel-release && \\ dnf --enablerepo=powertools install libuv-devel &&\\ dnf install libwebsockets-devel ``` ```shell dnf --enablerepo=powertools install -y docbook2X doxygen && \\ dnf install -y bash-completion chrpath rsync ``` First download the lxc source code: ```shell git clone https://gitee.com/src-openeuler/lxc.git ``` Put the source code, patch, and spec into the rpmbuild working directory: ```shell export RPM=~/rpmbuild cd lxc cp .patch .tar.gz $RPM/SOURCES/ && \\ cp *.spec $RPM/SPECS/ ``` ```shell cd ~/rpmbuild/SPECS rpmbuild -ba lxc.spec ``` After the build is successful, the rpm package will be placed in the RPM directory. You can find the corresponding rpm package and install it with the `rpm -Uvh`. ```shell cd ~/rpmbuild/RPMS/x86_64 rpm -Uvh lxc-libs-4.0.3-2022080901.x86_64.rpm rpm -Uvh lxc-4.0.3-2022080901.x86_64.rpm rpm -Uvh lxc-devel-4.0.3-2022080901.x86_64.rpm ``` ```shell dnf --enablerepo=powertools install -y gtest-devel ``` Note lxc must be installed before installing lcr. First download the lcr source code: ```shell git clone https://gitee.com/openeuler/lcr ``` Then package the source code: ```shell export RPM=~/rpmbuild cd lcr tar -zcvf lcr-2.0.tar.gz * ``` Finally put the required source code, patch, and spec into the rpmbuild working directory: ```shell cp lcr-2.0.tar.gz $RPM/SOURCES/ cp *.spec $RPM/SPECS/ ``` ```shell cd ~/rpmbuild/SPECS rpmbuild -ba lcr.spec ``` ```sh rpm -Uvh lcr-2.1.0-2.x86_64.rpm rpm -Uvh lcr-devel-2.1.0-2.x86_64.rpm ``` ```shell dnf --enablerepo=powertools install -y gmock-devel ``` First download the clibcni source code: ```shell git clone"
},
{
"data": "``` Then package the source code: ```shell cd clicni tar -zcvf clibcni-2.0.tar.gz * ``` Finally put the required source code, patch, and spec into the rpmbuild working directory: ```shell cp clibcni-2.0.tar.gz $RPM/SOURCES/ cp *.spec $RPM/SPECS/ ``` ```shell cd ~/rpmbuild/SPECS rpmbuild -ba clibcni.spec ``` After the build is successful, the rpm package will be placed in the RPM directory . You can find the corresponding rpm package and install it with the `rpm -Uvh`. ```shell yum install -y emacs.x86_64 ``` First download the protobuf source code, Then package the source code, Finally put the required source code, patch, and spec into the rpmbuild working directory: ```shell git clone https://gitee.com/src-openeuler/protobuf cd protobuf git checkout openEuler-20.03-LTS cp .tar.gz .el .patch $RPM/SOURCES/ && cp .spec $RPM/SPECS/ ``` Since isulad does not need to build protobuf for java and python, you can modify the first 5 lines of the spec file to avoid installing related dependencies: ```shell cd ~/rpmbuild/SPECS vim protobuf.spec %bcond_with python %bcond_with java ``` ```shell rpmbuild -ba protobuf.spec ``` ```sh rpm -Uvh protobuf-3.14.0-4.x86_64.rpm dnf install -y emacs-26.1-7.el8.x86_64 rpm -Uvh protobuf-compiler-3.14.0-4.x86_64.rpm rpm -Uvh protobuf-devel-3.14.0-4.x86_64.rpm ``` ```shell yum install -y emacs.x8664 openssl-devel.x8664 dnf --enablerepo=powertools install gflags-devel python3-Cython python3-devel dnf install -y abseil-cpp-devel gperftools-devel re2-devel ``` ```shell git clone https://gitee.com/src-openeuler/grpc cd grpc git checkout openEuler-20.03-LTS cp .tar.gz $RPM/SOURCES/ && cp .spec $RPM/SPECS/ ``` ```shell rpmbuild -ba grpc.spec ``` ```sh dnf install -y epel-release.noarch c-ares-1.13.0-5.el8.x8664 gperftools-libs-2.7-9.el8.x8664 dnf --enablerepo=powertools install gflags-devel rpm -Uvh grpc-1.31.0-1.x86_64.rpm dnf install -y openssl-devel.x86_64 rpm -Uvh grpc-devel-1.31.0-1.x86_64.rpm ``` ```shell dnf install -y bzip2-devel e2fsprogs-devel libattr-devel libxml2-devel lz4-devel lzo-devel sharutils libacl-devel dnf --enablerepo=powertools install sharutils ``` ```shell git clone https://gitee.com/src-openeuler/libarchive cd libarchive git checkout openEuler-20.03-LTS cp .tar.gz .patch $RPM/SOURCES/ && cp *.spec $RPM/SPECS/ ``` ```shell rpmbuild -ba libarchive.spec ``` ```shell rpm -Uvh libarchive-3.4.3-4.x86_64.rpm rpm -Uvh libarchive-devel-3.4.3-4.x86_64.rpm ``` ```shell dnf install -y sqlite-devel ``` ```shell git clone https://gitee.com/openeuler/iSulad cd iSulad/ tar -zcvf iSulad-2.1.tar.gz * ``` ```shell cp iSulad-2.1.tar.gz $RPM/SOURCES/ cp *.spec $RPM/SPECS/ ``` ```shell rpmbuild -ba iSulad.spec ``` First, you should install libwebsockets: ```shell dnf install -y epel-release dnf --enablerepo=powertools install libuv-devel dnf install libwebsockets-devel ``` then, you can install iSulad ```shell dnf install -y sqlite-devel.x86_64 rpm -Uvh iSulad-2.1.0-1.x86_64.rpm ```"
}
] |
{
"category": "Runtime",
"file_name": "build_guide_with_rpm.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Tap | Pointer to string | | [optional] Ip | Pointer to string | | [optional] [default to \"192.168.249.1\"] Mask | Pointer to string | | [optional] [default to \"255.255.255.0\"] Mac | Pointer to string | | [optional] HostMac | Pointer to string | | [optional] Mtu | Pointer to int32 | | [optional] Iommu | Pointer to bool | | [optional] [default to false] NumQueues | Pointer to int32 | | [optional] [default to 2] QueueSize | Pointer to int32 | | [optional] [default to 256] VhostUser | Pointer to bool | | [optional] [default to false] VhostSocket | Pointer to string | | [optional] VhostMode | Pointer to string | | [optional] [default to \"Client\"] Id | Pointer to string | | [optional] PciSegment | Pointer to int32 | | [optional] RateLimiterConfig | Pointer to | | [optional] `func NewNetConfig() *NetConfig` NewNetConfig instantiates a new NetConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewNetConfigWithDefaults() *NetConfig` NewNetConfigWithDefaults instantiates a new NetConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *NetConfig) GetTap() string` GetTap returns the Tap field if non-nil, zero value otherwise. `func (o NetConfig) GetTapOk() (string, bool)` GetTapOk returns a tuple with the Tap field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetTap(v string)` SetTap sets Tap field to given value. `func (o *NetConfig) HasTap() bool` HasTap returns a boolean if a field has been set. `func (o *NetConfig) GetIp() string` GetIp returns the Ip field if non-nil, zero value otherwise. `func (o NetConfig) GetIpOk() (string, bool)` GetIpOk returns a tuple with the Ip field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetIp(v string)` SetIp sets Ip field to given value. `func (o *NetConfig) HasIp() bool` HasIp returns a boolean if a field has been set. `func (o *NetConfig) GetMask() string` GetMask returns the Mask field if non-nil, zero value otherwise. `func (o NetConfig) GetMaskOk() (string, bool)` GetMaskOk returns a tuple with the Mask field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetMask(v string)` SetMask sets Mask field to given"
},
{
"data": "`func (o *NetConfig) HasMask() bool` HasMask returns a boolean if a field has been set. `func (o *NetConfig) GetMac() string` GetMac returns the Mac field if non-nil, zero value otherwise. `func (o NetConfig) GetMacOk() (string, bool)` GetMacOk returns a tuple with the Mac field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetMac(v string)` SetMac sets Mac field to given value. `func (o *NetConfig) HasMac() bool` HasMac returns a boolean if a field has been set. `func (o *NetConfig) GetHostMac() string` GetHostMac returns the HostMac field if non-nil, zero value otherwise. `func (o NetConfig) GetHostMacOk() (string, bool)` GetHostMacOk returns a tuple with the HostMac field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetHostMac(v string)` SetHostMac sets HostMac field to given value. `func (o *NetConfig) HasHostMac() bool` HasHostMac returns a boolean if a field has been set. `func (o *NetConfig) GetMtu() int32` GetMtu returns the Mtu field if non-nil, zero value otherwise. `func (o NetConfig) GetMtuOk() (int32, bool)` GetMtuOk returns a tuple with the Mtu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetMtu(v int32)` SetMtu sets Mtu field to given value. `func (o *NetConfig) HasMtu() bool` HasMtu returns a boolean if a field has been set. `func (o *NetConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o NetConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *NetConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *NetConfig) GetNumQueues() int32` GetNumQueues returns the NumQueues field if non-nil, zero value otherwise. `func (o NetConfig) GetNumQueuesOk() (int32, bool)` GetNumQueuesOk returns a tuple with the NumQueues field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetNumQueues(v int32)` SetNumQueues sets NumQueues field to given value. `func (o *NetConfig) HasNumQueues() bool` HasNumQueues returns a boolean if a field has been set. `func (o *NetConfig) GetQueueSize() int32` GetQueueSize returns the QueueSize field if non-nil, zero value otherwise. `func (o NetConfig) GetQueueSizeOk() (int32, bool)` GetQueueSizeOk returns a tuple with the QueueSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetQueueSize(v int32)` SetQueueSize sets QueueSize field to given"
},
{
"data": "`func (o *NetConfig) HasQueueSize() bool` HasQueueSize returns a boolean if a field has been set. `func (o *NetConfig) GetVhostUser() bool` GetVhostUser returns the VhostUser field if non-nil, zero value otherwise. `func (o NetConfig) GetVhostUserOk() (bool, bool)` GetVhostUserOk returns a tuple with the VhostUser field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetVhostUser(v bool)` SetVhostUser sets VhostUser field to given value. `func (o *NetConfig) HasVhostUser() bool` HasVhostUser returns a boolean if a field has been set. `func (o *NetConfig) GetVhostSocket() string` GetVhostSocket returns the VhostSocket field if non-nil, zero value otherwise. `func (o NetConfig) GetVhostSocketOk() (string, bool)` GetVhostSocketOk returns a tuple with the VhostSocket field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetVhostSocket(v string)` SetVhostSocket sets VhostSocket field to given value. `func (o *NetConfig) HasVhostSocket() bool` HasVhostSocket returns a boolean if a field has been set. `func (o *NetConfig) GetVhostMode() string` GetVhostMode returns the VhostMode field if non-nil, zero value otherwise. `func (o NetConfig) GetVhostModeOk() (string, bool)` GetVhostModeOk returns a tuple with the VhostMode field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetVhostMode(v string)` SetVhostMode sets VhostMode field to given value. `func (o *NetConfig) HasVhostMode() bool` HasVhostMode returns a boolean if a field has been set. `func (o *NetConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o NetConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetId(v string)` SetId sets Id field to given value. `func (o *NetConfig) HasId() bool` HasId returns a boolean if a field has been set. `func (o *NetConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o NetConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *NetConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *NetConfig) GetRateLimiterConfig() RateLimiterConfig` GetRateLimiterConfig returns the RateLimiterConfig field if non-nil, zero value otherwise. `func (o NetConfig) GetRateLimiterConfigOk() (RateLimiterConfig, bool)` GetRateLimiterConfigOk returns a tuple with the RateLimiterConfig field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NetConfig) SetRateLimiterConfig(v RateLimiterConfig)` SetRateLimiterConfig sets RateLimiterConfig field to given value. `func (o *NetConfig) HasRateLimiterConfig() bool` HasRateLimiterConfig returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "NetConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "If an ActionSet fails to perform an action, then the failure events can be seen in the respective ActionSet as well as its associated Blueprint by using the following commands: ``` bash $ kubectl --namespace kanister describe actionset <ActionSet Name> Events: Type Reason Age From Message - - - Normal Started Action 14s Kanister Controller Executing action delete Normal Started Phase 14s Kanister Controller Executing phase deleteFromS3 Warning ActionSetFailed Action: delete 13s Kanister Controller Failed to run phase 0 of action delete: command terminated with exit code 1 $ kubectl --namespace kanister describe blueprint <Blueprint Name> Events: Type Reason Age From Message - - - Normal Added 4m Kanister Controller Added blueprint 'Blueprint Name' Warning ActionSetFailed Action: delete 1m Kanister Controller Failed to run phase 0 of action delete: command terminated with exit code 1 ``` If you ever need to debug a live Kanister system and the information available in ActionSets you might have created is not enough, looking at the Kanister controller logs might help. Assuming you have deployed the controller in the `kanister` namespace, you can use the following commands to get controller logs. ``` bash $ kubectl get pods --namespace kanister NAME READY STATUS RESTARTS AGE release-kanister-operator-1484730505-l443d 1/1 Running 0 1m $ kubectl logs -f <operator-pod-name-from-above> --namespace kanister ``` If you are not successful in verifying the reason behind the failure, please reach out to us on or file an issue on . A [mailing list](https://groups.google.com/forum/#!forum/kanisterio) is also available if needed. For the validating webhook to work, the Kubernetes API Server needs to connect to port `9443` of the Kanister operator. If your cluster has a firewall setup, it has to be configured to allow that communication. If you get an error while applying a blueprint, that the webhook can\\'t be reached, check if your firewall misses a rule for port `9443`: ``` console $ kubectl apply -f blueprint.yaml Error from server (InternalError): error when creating \"blueprint.yaml\": Internal error occurred: failed calling webhook \"blueprints.cr.kanister.io\": failed to call webhook: Post \"https://kanister-kanister-operator.kanister.svc:443/validate/v1alpha1/blueprint?timeout=5s\": context deadline exceeded ``` See [GKE: Adding firewall rules for specific use cases](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#addfirewallrules) and [kubernetes/kubernetes: Using non-443 ports for admission webhooks requires firewall rule in GKE](https://github.com/kubernetes/kubernetes/issues/79739) for more details."
}
] |
{
"category": "Runtime",
"file_name": "troubleshooting.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "A volume is a logical concept composed of multiple metadata and data shards. From the client's perspective, a volume can be seen as a file system instance that can be accessed by containers. A volume can be mounted in multiple containers, allowing files to be accessed by different clients simultaneously. From the perspective of object storage, a volume corresponds to a bucket. The following will introduce how to create volumes in different modes. For more volume operations, please refer to . Request the master service interface to create a volume. name: volume name capacity: volume quota. If the quota is used up, it needs to be expanded. owner: the owner of the volume. If there is no user with the same name as the owner in the cluster, a user with the user ID of Owner will be created automatically. ``` bash curl -v \"http://127.0.0.1:17010/admin/createVol?name=test&capacity=100&owner=cfs\" ``` ::: tip Tip Example of expanding the volume (where authKey is the MD5 value of the volume owner string). ```shell curl -v \"http://127.0.0.1:17010/vol/expand?name=test&authKey=57f0162b2303be3449c7008484b0d306&capacity=200\" ``` ::: If the Blobstore erasure-coded subsystem is deployed, you can create an erasure-coded volume to store cold data. ```bash curl -v 'http://127.0.0.1:17010/admin/createVol?name=test-cold&capacity=100&owner=cfs&volType=1' ``` name: volume name capacity: volume quota. If the quota is used up, it needs to be expanded. owner: the owner of the volume. volType: volume type. 0 for replicated volume, 1 for erasure-coded volume. The default is 0. You can create an erasure-coded volume and set multiple replicas as read-write cache. Cache read data ```bash curl -v 'http://127.0.0.1:17010/admin/createVol?name=test-cold&capacity=100&owner=cfs&volType=1&cacheCap=10&cacheAction=1' ``` name: volume name capacity: volume quota. If the quota is used up, it needs to be expanded. owner: the owner of the volume. volType: volume type. 0 for replicated volume, 1 for erasure-coded volume. The default is 0. cacheCap: cache size in GB. cacheAction: cache type. 0 for no cache, 1 for cache read, 2 for cache read-write. The default is 0. Cache read and write data ```bash curl -v 'http://127.0.0.1:17010/admin/createVol?name=test-cold&capacity=100&owner=cfs&volType=1&cacheCap=10&cacheAction=2' ``` name: volume name capacity: volume quota. If the quota is used up, it needs to be expanded. owner: the owner of the volume. volType: volume type. 0 for replicated volume, 1 for erasure-coded volume. The default is 0. cacheCap: cache size in GB. cacheAction: cache type. 0 for no cache, 1 for cache read, 2 for cache read-write. The default is 0."
}
] |
{
"category": "Runtime",
"file_name": "volume.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The transfer service is a simple flexible service which can be used to transfer artifact objects between a source and destination. The flexible API allows each implementation of the transfer interface to determines whether the transfer between the source and destination is possible. This allows new functionality to be added directly by implementations without versioning the API or requiring other implementations to handle an interface change. The transfer service is built upon the core ideas put forth by the libchan project, that an API with binary streams and data channels as first class objects is more flexible and opens a wider variety of use cases without requiring constant protocol and API updates. To accomplish this, the transfer service makes use of the streaming service to allow binary and object streams to be accessible by transfer objects even when using grpc and ttrpc. The transfer API consists of a single operation which can be called with various different objects depending on the intended operation. In Go the API looks like, ```go type Transferrer interface { Transfer(ctx context.Context, source interface{}, destination interface{}, opts ...Opt) error } ``` The proto API looks like, ```proto service Transfer { rpc Transfer(TransferRequest) returns (google.protobuf.Empty); } message TransferRequest { google.protobuf.Any source = 1; google.protobuf.Any destination = 2; TransferOptions options = 3; } message TransferOptions { string progress_stream = 1; // Progress min interval } ``` | Source | Destination | Description | Local Implementation Version | |-|-|-|--| | Registry | Image Store | \"pull\" | 1.7 | | Image Store | Registry | \"push\" | 1.7 | | Object stream (Archive) | Image Store | \"import\" | 1.7 | | Image Store | Object stream (Archive) | \"export\" | 1.7 | | Object stream (Layer) | Mount/Snapshot | \"unpack\" | Not implemented | | Mount/Snapshot | Object stream (Layer) | \"diff\" | Not implemented | | Image Store | Image Store | \"tag\" | 1.7 | | Registry | Registry | mirror registry image | Not implemented | containerd has a single built in transfer plugin which implements most basic transfer operations. The local plugin can be configured the same way as other containerd plugins ``` [plugins] [plugins.\"io.containerd.transfer.v1\"] ``` Pull Components ```mermaid flowchart TD subgraph containerd Client Client(Client) end subgraph containerd subgraph Service Streaming(Streaming Service) Transfer(Transfer Service) end subgraph Transfer objects RS(Registry Source) ISD(Image Store Destination) end subgraph Backend R(Resolver) CS(ContentStore) IS(Image Store) S(Snapshotter) end end Reg(((Remote Registry))) Client-- Create Stream --> Streaming Client-- Pull via Transfer --> Transfer Transfer-- Get Stream --> Streaming Transfer-- Progress via Stream--> Client Transfer-->RS Transfer-->ISD Transfer-->CS RS-->R ISD-->IS R-->Reg ISD-->CS ISD-->S ``` Streaming is used by the transfer service to send or receive data streams as part of an operation as well as to handle callbacks (synchronous or asynchronous). The streaming protocol should be invisible to the client Go interface. Object types such as funcs, readers, and writers can be transparently converted to the streaming protocol when going over"
},
{
"data": "The client and server interface can remain unchanged while the proto marshaling and unmarshaling need to be aware of the streaming protocol and have access to the stream manager. Streams are created by clients using the client side stream manager and sent via proto RPC as string stream identifiers. Server implementations of services can lookup the streams by the stream identifier using the server side stream manager. Progress is an asynchronous callback sent from the server to the client. It is normally representing in the Go interface as a simple callback function, which the the client implements and the server calls. From Go types progress uses these types ```go type ProgressFunc func(Progress) type Progress struct { Event string Name string Parents []string Progress int64 Total int64 } ``` The proto message type sent over the stream is ```proto message Progress { string event = 1; string name = 2; repeated string parents = 3; int64 progress = 4; int64 total = 5; } ``` Progress can be passed along as a transfer option to get progress on any transfer operation. The progress events may differ based on the transfer operation. Transfer objects may also use `io.Reader` and `io.WriteCloser` directly. The bytes are transferred over the stream using two simple proto message types ```proto message Data { bytes data = 1; } message WindowUpdate { int32 update = 1; } ``` The sender sends the `Data` message and the receiver sends the `WindowUpdate` message. When the client is sending an `io.Reader`, the client is the sender and server is the receiver. When a client sends an `io.WriteCloser`, the server is the sender and the client is the receiver. Binary streams are used for import (sending an `io.Reader`) and export (sending an `io.WriteCloser`). Credentials are handled as a synchronous callback from the server to the client. The callback is made when the server encounters an authorization request from a registry. The Go interface to use a credential helper in a transfer object looks like ```go type CredentialHelper interface { GetCredentials(ctx context.Context, ref, host string) (Credentials, error) } type Credentials struct { Host string Username string Secret string Header string } ``` It is send over a stream using the proto messages ```proto // AuthRequest is sent as a callback on a stream message AuthRequest { // host is the registry host string host = 1; // reference is the namespace and repository name requested from the registry string reference = 2; // wwwauthenticate is the HTTP WWW-Authenticate header values returned from the registry repeated string wwwauthenticate = 3; } enum AuthType { NONE = 0; // CREDENTIALS is used to exchange username/password for access token // using an oauth or \"Docker Registry Token\" server CREDENTIALS = 1; // REFRESH is used to exchange secret for access token using an oauth // or \"Docker Registry Token\" server REFRESH = 2; // HEADER is used to set the HTTP Authorization header to secret // directly for the registry. // Value should be `<auth-scheme> <authorization-parameters>` HEADER = 3; } message AuthResponse { AuthType authType = 1; string secret = 2; string username = 3; google.protobuf.Timestamp expire_at = 4; } ```"
}
] |
{
"category": "Runtime",
"file_name": "transfer.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The current Longhorn upgrade process lacks enforcement of the engine version, potentially leading to compatibility issues. To address this concern, we propose the implementation of an Engine Upgrade Enforcement feature. https://github.com/longhorn/longhorn/issues/5842 Longhorn needs to be able to upgrade safely without risking compatibility issue with older version engine images. Without this enhancement, we are facing various challenges like dealing with potential operation failures and increased maintenance overhead. The primary goal of this proposal is to enhance Longhorn's upgrade mechanism by introducing logic that prevents upgrading to Longhorn versions while there are incompatible engine images in use. `None` This proposal focuses on preventing users from upgrading to unsupported or incompatible engine versions. This enhancement will build upon the existing pre-upgrade checks to include validation of engine version compatibility. Previously, users had the freedom to continue using an older engine version after a Longhorn upgrade. With the proposed enhancement, the Longhorn upgrade process will be blocked if it includes an incompatible engine version. This will enforce users to manually upgrade the engine to a compatible version before proceeding with the Longhorn upgrade. User will perform upgrade a usual. Longhorn will examine the compatibility of the current engine version. If the current engine version is incompatible with the target engine version for the upgrade, Longhorn will halt the upgrade process and prompt the user to address the engine version mismatch before proceeding. `None` The implementation approach for this feature will be similar to the . Key implementation steps include: Enhance the function to include the new checks. ``` func CheckUpgradePathSupported(namespace string, lhClient lhclientset.Interface) error { if err := CheckLHUpgradePathSupported(namespace, lhClient); err != nil { return err } return CheckEngineUpgradePathSupported(namespace, lhClient, emeta.GetVersion()) } ``` Retrieve the current engine images being used and record the versions. Prevent upgrades if the targeting engine version is detact to be downgrading. Prevent upgrades if the engine image version is lower than . Create unit test for the new logic. Run manual test to verify the handling of incompatible engine image versions (e.g., Longhorn v1.4.x -> v1.5.x -> v1.6.x.) `None` `None`"
}
] |
{
"category": "Runtime",
"file_name": "20230815-engine-upgrade-enforcement.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Create Samba Shares sidebar_position: 8 description: Learn how to share directories in the JuiceFS file system through Samba. import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; Samba is an open-source software suite that implements the SMB/CIFS (Server Message Block / Common Internet File System) protocol, which is a commonly used file-sharing protocol in Windows systems. With Samba, you can create shared directories on Linux/Unix servers, allowing Windows computers to access and use these shared resources over the network. To create a shared folder on a Linux system with Samba installed, you can edit the `smb.conf` configuration file. Once configured, Windows and macOS systems can access and read/write the shared folder using their file managers. Linux needs to install the Samba client for access. When you need to share directories from the JuiceFS file system through Samba, you can simply use the `juicefs mount` command to mount the file system. Then, you can create Samba shares with the JuiceFS mount point or subdirectories. :::note `juicefs mount` mounts the file system as a local user-space file system through the FUSE interface, making it identical to the local file system in terms of appearance and usage. Hence, it can be directly used to create Samba shares. ::: Most Linux distributions provide Samba through their package managers. <Tabs> <TabItem value=\"debian\" label=\"Debian and derivatives\"> ```shell sudo apt install samba ``` </TabItem> <TabItem value=\"redhat\" label=\"RHEL and derivatives\"> ```shell sudo dnf install samba ``` </TabItem> </Tabs> If you need to configure AD/DC (Active Directory / Domain Controller), additional software packages need to be installed. For more details, refer to the . According to the , it is recommended to use file systems that support extended attributes (xattr). To enable extended attribute support for JuiceFS during the mount process, use the `--enable-xattr` option. For example: ```shell sudo juicefs mount -d --enable-xattr sqlite3://myjfs.db /mnt/myjfs ``` For cases where you configure automatic mounting through `/etc/fstab`, you can add the `enable-xattr` option to the mount options section. For example: ```ini redis://127.0.0.1:6379/0 /mnt/myjfs juicefs _netdev,max-uploads=50,writeback,cache-size=1024000,enable-xattr 0 0 ``` Samba is software designed for Linux/Unix systems, serving file sharing to Windows systems. In Windows systems, many files and directories have additional metadata, for example, file authors, keywords, and icon"
},
{
"data": "This information is typically stored outside the POSIX file system and requires xattr format for storage in Windows. To ensure that these files can be correctly stored in Linux systems, Samba recommends using file systems that support extended attributes when creating shares. Assuming the JuiceFS mount point is `/mnt/myjfs`, if you want to create a Samba share for the `media` directory within it, you can configure it as follows: ```ini [Media] path = /mnt/myjfs/media guest ok = no read only = no browseable = yes ``` Apple macOS systems support direct access to Samba shares. Similar to Windows, macOS also has additional metadata (e.g., icon positions, Spotlight search) that needs to be saved using xattr. Samba version 4.9 and above have the support for macOS extended attributes enabled by default. If your Samba version is lower than 4.9, you need to add the `ea support = yes` option to the [global] section of the Samba configuration to enable extended attribute support for macOS. Edit the configuration file `/etc/samba/smb.conf`, for example: ```ini [global] workgroup = SAMBA security = user passdb backend = tdbsam ea support = yes ``` Samba has its own user database, independent of the operating system users. However, since Samba shares directories from the system, appropriate user permissions are required to read and write files. When creating users for Samba, it is required that the user already exists in the system, as Samba will automatically map the Samba user to the same-named system user with corresponding permissions. If the user already exists in the system, assuming the system account is \"herald,\" you can create a Samba account for it as follows: ```shell sudo smbpasswd -a herald ``` Follow the on-screen prompts to set the password. The Samba account can have a different password than the system user. If you need to create a new user, taking the example of creating a user named \"abc\": Create a user: ```shell sudo adduser abc ``` Create a corresponding Samba user with the same name: ```shell sudo smbpasswd -a abc ``` `pdbedit` is a built-in tool in Samba used to manage the Samba user database. You can use this tool to list all the created Samba users: ```shell sudo pdbedit -L ``` It will display a list of all created Samba users, including their usernames, security identifiers (SIDs), group membership, and other related information."
}
] |
{
"category": "Runtime",
"file_name": "samba.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Ceph Dashboard The dashboard is a very helpful tool to give you an overview of the status of your Ceph cluster, including overall health, status of the mon quorum, status of the mgr, osd, and other Ceph daemons, view pools and PG status, show logs for the daemons, and more. Rook makes it simple to enable the dashboard. The can be enabled with settings in the CephCluster CRD. The CephCluster CRD must have the dashboard `enabled` setting set to `true`. This is the default setting in the example manifests. ```yaml [...] spec: dashboard: enabled: true ``` The Rook operator will enable the ceph-mgr dashboard module. A service object will be created to expose that port inside the Kubernetes cluster. Rook will enable port 8443 for https access. This example shows that port 8443 was configured. ```console $ kubectl -n rook-ceph get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-mgr ClusterIP 10.108.111.192 <none> 9283/TCP 3h rook-ceph-mgr-dashboard ClusterIP 10.110.113.240 <none> 8443/TCP 3h ``` The first service is for reporting the , while the latter service is for the dashboard. If you are on a node in the cluster, you will be able to connect to the dashboard by using either the DNS name of the service at `https://rook-ceph-mgr-dashboard-https:8443` or by connecting to the cluster IP, in this example at `https://10.110.113.240:8443`. After you connect to the dashboard you will need to login for secure access. Rook creates a default user named `admin` and generates a secret called `rook-ceph-dashboard-password` in the namespace where the Rook Ceph cluster is running. To retrieve the generated password, you can run the following: ```console kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath=\"{}\" | base64 --decode && echo ``` The following dashboard configuration settings are supported: ```yaml spec: dashboard: urlPrefix: /ceph-dashboard port: 8443 ssl: true ``` `urlPrefix` If you are accessing the dashboard via a reverse proxy, you may wish to serve it under a URL prefix. To get the dashboard to use hyperlinks that include your prefix, you can set the `urlPrefix` setting. `port` The port that the dashboard is served on may be changed from the default using the `port` setting. The corresponding K8s service exposing the port will automatically be updated. `ssl` The dashboard may be served without SSL (useful for when you deploy the dashboard behind a proxy already served using SSL) by setting the `ssl` option to be false. Information about physical disks is available only in . The Rook manager module is required by the dashboard to obtain the information about physical disks, but it is disabled by default. Before it is enabled, the dashboard 'Physical Disks' section will show an error message. To prepare the Rook manager module to be used in the dashboard, modify your Ceph Cluster CRD: ```yaml mgr: modules: name: rook enabled: true ``` And apply the changes: ```console $ kubectl apply -f cluster.yaml ``` Once the Rook manager module is enabled as the orchestrator backend, there are two settings required for showing disk information: `ROOKENABLEDISCOVERY_DAEMON`: Set to `true` to provide the dashboard the information about physical disks. The default is `false`. `ROOKDISCOVERDEVICES_INTERVAL`: The interval for changes to be refreshed in the set of physical disks in the cluster. The default is `60` minutes. 
Modify the operator.yaml, and apply the changes: ```console $ kubectl apply -f operator.yaml ``` Commonly you will want to view the dashboard from outside the cluster."
},
{
"data": "For example, on a development machine with the cluster running inside minikube you will want to access the dashboard from the host. There are several ways to expose a service that will depend on the environment you are running in. You can use an or for exposing services such as NodePort, LoadBalancer, or ExternalIPs. The simplest way to expose the service in minikube or similar environment is using the NodePort to open a port on the VM that can be accessed by the host. To create a service with the NodePort, save this yaml as `dashboard-external-https.yaml`. ```yaml apiVersion: v1 kind: Service metadata: name: rook-ceph-mgr-dashboard-external-https namespace: rook-ceph labels: app: rook-ceph-mgr rook_cluster: rook-ceph spec: ports: name: dashboard port: 8443 protocol: TCP targetPort: 8443 selector: app: rook-ceph-mgr rook_cluster: rook-ceph mgr_role: active sessionAffinity: None type: NodePort ``` Now create the service: ```console kubectl create -f dashboard-external-https.yaml ``` You will see the new service `rook-ceph-mgr-dashboard-external-https` created: ```console $ kubectl -n rook-ceph get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-mgr ClusterIP 10.108.111.192 <none> 9283/TCP 4h rook-ceph-mgr-dashboard ClusterIP 10.110.113.240 <none> 8443/TCP 4h rook-ceph-mgr-dashboard-external-https NodePort 10.101.209.6 <none> 8443:31176/TCP 4h ``` In this example, port `31176` will be opened to expose port `8443` from the ceph-mgr pod. Find the ip address of the VM. If using minikube, you can run `minikube ip` to find the ip address. Now you can enter the URL in your browser such as `https://192.168.99.110:31176` and the dashboard will appear. If you have a cluster on a cloud provider that supports load balancers, you can create a service that is provisioned with a public hostname. The yaml is the same as `dashboard-external-https.yaml` except for the following property: ```yaml spec: [...] type: LoadBalancer ``` Now create the service: ```console kubectl create -f dashboard-loadbalancer.yaml ``` You will see the new service `rook-ceph-mgr-dashboard-loadbalancer` created: ```console $ kubectl -n rook-ceph get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE rook-ceph-mgr ClusterIP 172.30.11.40 <none> 9283/TCP 4h rook-ceph-mgr-dashboard ClusterIP 172.30.203.185 <none> 8443/TCP 4h rook-ceph-mgr-dashboard-loadbalancer LoadBalancer 172.30.27.242 a7f23e8e2839511e9b7a5122b08f2038-1251669398.us-east-1.elb.amazonaws.com 8443:32747/TCP 4h ``` Now you can enter the URL in your browser such as `https://a7f23e8e2839511e9b7a5122b08f2038-1251669398.us-east-1.elb.amazonaws.com:8443` and the dashboard will appear. If you have a cluster with an and a Certificate Manager (e.g. ) then you can create an Ingress like the one below. 
This example achieves four things: Exposes the dashboard on the Internet (using a reverse proxy) Issues a valid TLS Certificate for the specified domain name (using ) Tells the reverse proxy that the dashboard itself uses HTTPS Tells the reverse proxy that the dashboard itself does not have a valid certificate (it is self-signed) ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rook-ceph-mgr-dashboard namespace: rook-ceph annotations: kubernetes.io/tls-acme: \"true\" nginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\" nginx.ingress.kubernetes.io/server-snippet: | proxysslverify off; spec: ingressClassName: \"nginx\" tls: hosts: rook-ceph.example.com secretName: rook-ceph.example.com rules: host: rook-ceph.example.com http: paths: path: / pathType: Prefix backend: service: name: rook-ceph-mgr-dashboard port: name: https-dashboard ``` Customise the Ingress resource to match your cluster. Replace the example domain name `rook-ceph.example.com` with a domain name that will resolve to your Ingress Controller (creating the DNS entry if required). Now create the Ingress: ```console kubectl create -f dashboard-ingress-https.yaml ``` You will see the new Ingress `rook-ceph-mgr-dashboard` created: ```console $ kubectl -n rook-ceph get ingress NAME HOSTS ADDRESS PORTS AGE rook-ceph-mgr-dashboard rook-ceph.example.com 80, 443 5m ``` And the new Secret for the TLS certificate: ```console kubectl -n rook-ceph get secret rook-ceph.example.com NAME TYPE DATA AGE rook-ceph.example.com kubernetes.io/tls 2 4m ``` You can now browse to `https://rook-ceph.example.com/` to log into the dashboard."
}
] |
{
"category": "Runtime",
"file_name": "ceph-dashboard.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Architecture sidebar_position: 2 slug: /architecture description: This article introduces the technical architecture of JuiceFS and its technical advantages. The JuiceFS file system consists of three parts: JuiceFS Client: The JuiceFS client handles all file I/O operations, including background tasks like data compaction and trash file expiration. It communicates with both the object storage and metadata engine. The client supports multiple access methods: FUSE: JuiceFS file system can be mounted on a host in a POSIX-compatible manner, allowing the massive cloud storage to be used as local storage. Hadoop Java SDK: JuiceFS can replace HDFS, providing Hadoop with cost-effective and abundant storage capacity. Kubernetes CSI Driver: JuiceFS provides shared storage for containers in Kubernetes through its CSI Driver. S3 Gateway: Applications using S3 as the storage layer can directly access the JuiceFS file system, and tools such as AWS CLI, s3cmd, and MinIO client can be used to access the JuiceFS file system at the same time. WebDAV Server: Files in JuiceFS can be operated directly using the HTTP protocol. Data Storage: File data is split and stored in object storage. JuiceFS supports virtually all types of object storage, including typical self-hosted solutions like OpenStack Swift, Ceph, and MinIO. Metadata Engine: The Metadata Engine stores file metadata, which contains: Common file system metadata: file name, size, permission information, creation and modification time, directory structure, file attribute, symbolic link, file lock. JuiceFS-specific metadata: file data mapping, reference counting, client session, etc. JuiceFS supports a variety of common databases as the metadata engine, like Redis, TiKV, MySQL/MariaDB, PostgreSQL, and SQLite, and the list is still expanding. if your favorite database is not supported. Traditional file systems use local disks to store both file data and metadata. However, JuiceFS formats data first and then stores it in the object storage, with the corresponding metadata being stored in the metadata engine. In JuiceFS, each file is composed of one or more chunks. Each chunk has a maximum size of 64 MB. Regardless of the file's size, all reads and writes are located based on their offsets (the position in the file where the read or write operation occurs) to the corresponding chunk. This design enables JuiceFS to achieve excellent performance even with large files. As long as the total length of the file remains unchanged, the chunk division of the file remains fixed, regardless of how many modifications or writes the file undergoes. Chunks exist to optimize lookup and positioning, while the actual file writing is performed on slices. In JuiceFS, each slice represents a single continuous write, belongs to a specific chunk, and cannot overlap between adjacent chunks. This ensures that the slice length never exceeds 64 MB. For example, if a file is generated through a continuous sequential write, each chunk contains only one slice. The figure above illustrates this scenario: a 160 MB file is sequentially written, resulting in three chunks, each containing only one slice. File writing generates slices, and invoking `flush` persists these"
},
{
"data": "`flush` can be explicitly called by the user, and even if not invoked, the JuiceFS client automatically performs `flush` at the appropriate time to prevent buffer overflow (refer to ). When persisting to the object storage, slices are further split into individual blocks (default maximum size of 4 MB) to enable multi-threaded concurrent writes, thereby enhancing write performance. The previously mentioned chunks and slices are logical data structures, while blocks represent the final physical storage form and serve as the smallest storage unit for the object storage and disk cache. After writing a file to JuiceFS, you cannot find the original file directly in the object storage. Instead, the storage bucket contains a `chunks` folder and a series of numbered directories and files. These numerically named object storage files are the blocks split and stored by JuiceFS. The mapping between these blocks, chunks, slices, and other metadata information (such as file names and sizes) is stored in the metadata engine. This decoupled design makes JuiceFS a high-performance file system. Regarding logical data structures, if a file is not generated through continuous sequential writes but through multiple append writes, each append write triggers a `flush` to initiate the upload, resulting in multiple slices. If the data size for each append write is less than 4 MB, the data blocks eventually stored in the object storage are smaller than 4 MB blocks. Depending on the writing pattern, the arrangement of slices can be diverse: If a file is repeatedly modified in the same part, it results in multiple overlapping slices. If writes occur in non-overlapping parts, there will be gaps between slices. However complex the arrangement of slices may be, when reading a file, the most recent written slice is read for each file position. The figure below illustrates this concept: while slices may overlap, reading the file always occurs \"from top to bottom.\" This ensures that you see the latest state of the file. Due to the potential overlapping of slices, JuiceFS in the reference relationship between chunks and slices. This approach informs the file system of the valid data in each slice. However, it is not difficult to imagine that looking up the \"most recently written slice within the current read range\" during file reading, especially with a large number of overlapping slices as shown in the figure, can significantly impact read performance. This leads to what we call \"file fragmentation.\" File fragmentation not only affects read performance but also increases space usage at various levels (object storage, metadata). Hence, whenever a write occurs, the client evaluates the file's fragmentation and runs the fragmentation compaction asynchronously, merging all slices within the same chunk into one. Additional technical aspects of JuiceFS storage design: Irrespective of the file size, JuiceFS avoids storage merging to prevent read amplification and ensure optimal performance. JuiceFS provides strong consistency guarantees while allowing tuning options with caching mechanisms tailored to specific use cases. For example, by configuring more aggressive metadata caching, a certain level of consistency can be traded for enhanced performance. For more details, see . JuiceFS supports the functionality and enables it by default. After a file is deleted, it is retained for a certain period before being permanently cleared. This helps you avoid data loss caused by accidental deletion."
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Get datapath SHA header ``` cilium-dbg bpf sha get <sha> [flags] ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage compiled BPF template objects"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_sha_get.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "is the open source, in-memory data store used by millions of developers as a database, cache, streaming engine and message broker. We will be using chart that bootstraps a Redis deployment on a cluster using the package manager. Kubernetes 1.20+ PV provisioner support in the underlying infrastructure Kanister controller version 0.108.0 installed in your cluster, let's assume in Namespace `kanister` Kanctl CLI installed (https://docs.kanister.io/tooling.html#install-the-tools) Docker CLI installed A docker image containing the required tools to back up Redis. The Dockerfile for the image can be found . To build and push the docker image to your docker registry, execute steps. Execute below commands to build and push `redis-tools` docker image to a registry. ```bash $ cd ~/kanister/docker/redis-tools $ docker build -t <registry>/<accountname>/redis-tools:<tagname> . $ docker push <registry>/<accountname>/redis-tools:<tagname> ``` Execute the below commands to install the Redis database using the `bitnami` chart with the release name `redis`: ```bash $ helm repo add bitnami https://charts.bitnami.com/bitnami $ helm repo update $ helm install redis bitnami/redis --namespace redis-test --create-namespace \\ --set auth.password='<redis-password>' --set volumePermissions.enabled=true ``` The command deploys a Redis instance in the `redis-test` namespace. By default a random password will be generated for the user. For setting your own password, use the `auth.password` param as shown above. You can retrieve your root password by running the following command. Make sure to replace [YOURRELEASENAME] and [YOUR_NAMESPACE]: `kubectl get secret [YOURRELEASENAME] --namespace [YOUR_NAMESPACE] -o jsonpath=\"{.data.redis-password}\" | base64 -d` Tip: List all releases using `helm list --all-namespaces`, using Helm Version 3. If you have deployed Redis application with name other than `redis` and namespace other than `redis-test`, you need to modify the commands (backup, restore and delete) used below to use the correct release name and namespace. Create Profile CR if not created already ```bash $ kanctl create profile s3compliant --access-key <aws-access-key-id> \\ --secret-key <aws-secret-key> \\ --bucket <s3-bucket-name> --region <region-name> \\ --namespace redis-test ``` You can read more about the Profile custom Kanister resource . NOTE: The above command will configure a location where artifacts resulting from Kanister data operations such as backup should go. This is stored as a `profiles.cr.kanister.io` CustomResource (CR) which is then referenced in Kanister ActionSets. Every ActionSet requires a Profile reference to complete the action. This CR (`profiles.cr.kanister.io`) can be shared between Kanister-enabled application instances. Create Blueprint in the same namespace as the Kanister controller NOTE: Replace `<registry>`, `<accountname>` and `<tagname>` for the image value in `./redis-blueprint.yaml` before running following command. ```bash $ kubectl create -f ./redis-blueprint.yaml -n kanister ``` Once Redis is running, you can populate it with some"
},
{
"data": "Let's add a key called \"name\": ```bash $ kubectl -n redis-test exec -it redis-master-0 -- bash $ redis-cli -a <redis-password> 127.0.0.1:6379> set name test-redis OK 127.0.0.1:6379> get name \"test-redis\" ``` You can now take a backup of the Redis data using an ActionSet defining backup for this application. Create an ActionSet in the same namespace as the controller. ```bash $ kubectl get profile -n redis-test NAME AGE s3-profile-75ql6 2m $ kanctl create actionset --action backup --namespace kanister --blueprint redis-blueprint --statefulset redis-test/redis-master --profile redis-test/s3-profile-75ql6 --secrets redis=redis-test/redis actionset backup-ms8wg created $ kubectl --namespace kanister get actionsets.cr.kanister.io backup-ms8wg NAME PROGRESS LAST TRANSITION TIME STATE backup-ms8wg 100.00 2022-12-30T08:26:36Z complete ``` Let's say someone accidentally deleted the key using the following command: ```bash $ kubectl -n redis-test exec -it redis-master-0 -- bash $ redis-cli -a <redis-password> 127.0.0.1:6379> get name \"test-redis\" 127.0.0.1:6379> del name (integer) 1 127.0.0.1:6379> get name (nil) ``` To restore the missing data, you should use the backup that you created before. An easy way to do this is to leverage `kanctl`, a command-line tool that helps create ActionSets that depend on other ActionSets: ```bash $ kanctl --namespace kanister create actionset --action restore --from backup-ms8wg actionset restore-backup-ms8wg-2c4c7 created $ kubectl --namespace kanister get actionsets.cr.kanister.io restore-backup-ms8wg-2c4c7 NAME PROGRESS LAST TRANSITION TIME STATE restore-backup-ms8wg-2c4c7 100.00 2022-12-30T08:42:21Z complete ``` Once the ActionSet status is set to \"complete\", you can verify that the data has been successfully restored to Redis. ```bash $ kubectl -n redis-test exec -it redis-master-0 -- bash $ redis-cli -a <redis-password> 127.0.0.1:6379> get name \"test-redis\" ``` The artifacts created by the backup action can be cleaned up using the following command: ```bash $ kanctl --namespace kanister create actionset --action delete --from backup-ms8wg --namespacetargets kanister actionset delete-backup-ms8wg-b6lz4 created $ kubectl --namespace kanister get actionsets.cr.kanister.io delete-backup-ms8wg-b6lz4 NAME PROGRESS LAST TRANSITION TIME STATE delete-backup-ms8wg-b6lz4 100.00 2022-12-30T08:44:40Z complete ``` If you run into any issues with the above commands, you can check the logs of the controller using: ```bash $ kubectl --namespace kanister logs -l app=kanister-operator ``` you can also check events of the actionset ```bash $ kubectl describe actionset restore-backup-ms8wg-2c4c7 -n kanister ``` To uninstall/delete the `redis` deployment: ```bash $ helm delete redis -n redis-test release \"redis\" uninstalled ``` The command removes all the Kubernetes components associated with the chart and deletes the release. 
Remove Blueprint, Profile CR and ActionSets ```bash $ kubectl delete blueprints.cr.kanister.io redis-blueprint -n kanister blueprint.cr.kanister.io \"redis-blueprint\" deleted $ kubectl get profiles.cr.kanister.io -n redis-test NAME AGE s3-profile-75ql6 23m $ kubectl delete profiles.cr.kanister.io s3-profile-75ql6 -n redis-test profile.cr.kanister.io \"s3-profile-75ql6\" deleted $ kubectl --namespace kanister delete actionsets.cr.kanister.io backup-ms8wg delete-backup-ms8wg-b6lz4 restore-backup-ms8wg-2c4c7 actionset.cr.kanister.io \"backup-ms8wg\" deleted actionset.cr.kanister.io \"delete-backup-ms8wg-b6lz4\" deleted actionset.cr.kanister.io \"restore-backup-ms8wg-2c4c7\" deleted ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide shows you how to use the DRBD Module Loader on machines with SecureBoot enabled. When SecureBoot is enabled the Linux Kernel refuses to load unsigned kernel modules. Since the DRBD modules are built from source, they do not have a valid signature by default. To load them, they need to be signed, and the signing key needs to be inserted into the machine's trust store. To complete this guide, you should be familiar with: editing `LinstorSatelliteConfiguration` resources. generating key material with `openssl`. managing machine owner keys with `mokutil`. creating and protecting Kubernetes Secret resources. A Kubernetes node running with SecureBoot enabled. This can be checked by running the following command on each node: ``` SecureBoot enabled ``` Console access to the machine during early boot, or permissions to update the EFI platform keys another way. Create a private key and self-signed certificate using the `openssl` command line utility. The following command will create a new private key named `signingkey.pem` and a certificate named `signingkey.x509`, valid for 10 years, in the appropriate format: ``` $ openssl req -x509 -new -nodes -utf8 -sha256 -days 36500 -batch -outform DER \\ -out signingkey.x509 -keyout signingkey.pem -config - <<EOF [ req ] default_bits = 4096 distinguishedname = reqdistinguished_name prompt = no string_mask = utf8only x509_extensions = myexts [ reqdistinguishedname ] O = Piraeus Datastore CN = Piraeus Datastore kernel module signing key emailAddress = [email protected] [ myexts ] basicConstraints=critical,CA:FALSE keyUsage=digitalSignature subjectKeyIdentifier=hash authorityKeyIdentifier=keyid EOF ``` The generated certificate needs to be added to the machine's trust store. This step depends on the machine platform you are using. The instructions here apply to any system where you have console access during early boot, such as bare-metal or most virtualization platforms. First, distribute the generated certificate `signing_key.x509` to all nodes. Then, use the following command to add the certificate to the machine owner keys (MOK) using a password of your choice: ``` input password: input password again: ``` To enable the keys, start a console session on the machine, either by directly attching a keyboard and monitor, attaching a virtual console using the machines BMC, or using tools of your virtualization platform such as virt-viewer or"
},
{
"data": "Then, reboot the machine, chosing to \"Perform MOK management\" when promted. Now: Select \"Enroll MOK\": Continue enrollment of the key: Enter the password chosen when running `mokutil --import`: Reboot the machine: Create a Kubernetes Secret resource containing the generated key material. The following command creates a secret named `drbd-signing-keys`: ``` $ kubectl create secret generic drbd-signing-keys --type piraeus.io/signing-key \\ --from-file=signingkey.pem --from-file=signingkey.x509 secret/drbd-signing-keys created ``` Now, configure the `drbd-module-loader` to use the key material to sign the kernel modules. The following `LinstorSatelliteConfiguration` resource makes the key available in the `drbd-module-loader` container available and sets the `LB_SIGN` environment variable to start signing the modules: ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: drbd-module-signing spec: podTemplate: spec: initContainers: name: drbd-module-loader env: name: LB_SIGN value: /signing-key volumeMounts: name: signing-key mountPath: /signing-key readOnly: true volumes: name: signing-key secret: secretName: drbd-signing-keys ``` When everything is configured correctly, the `drbd-module-loader` init container will be able to insert the DRBD modules. ``` $ kubectl logs ds/linstor-satellite.node1.example.com drbd-module-loader ... DRBD version loaded: version: 9.2.8 (api:2/proto:86-122) GIT-hash: e163b05a76254c0f51f999970e861d72bb16409a build by @linstor-satellite.k8s-21.test-zskzl, 2024-04-03 09:04:15 Transports (api:20): tcp (9.2.8) lb-tcp (9.2.8) rdma (9.2.8) ``` If the modules are not signed, or the signature is not trusted, the `drbd-module-loader` container will crash with one of the following error messages: ``` insmod: ERROR: could not insert module ./drbd.ko: Operation not permitted insmod: ERROR: could not insert module ./drbdtransporttcp.ko: Operation not permitted insmod: ERROR: could not insert module ./drbdtransportlb-tcp.ko: Operation not permitted Could not load DRBD kernel modules ``` ``` insmod: ERROR: could not insert module ./drbd.ko: Key was rejected by service insmod: ERROR: could not insert module ./drbdtransporttcp.ko: Key was rejected by service insmod: ERROR: could not insert module ./drbdtransportlb-tcp.ko: Key was rejected by service Could not load DRBD kernel modules ``` ``` insmod: ERROR: could not insert module ./drbd.ko: Required key not available insmod: ERROR: could not insert module ./drbdtransporttcp.ko: Required key not available insmod: ERROR: could not insert module ./drbdtransportlb-tcp.ko: Required key not available Could not load DRBD kernel modules ``` Additional information is available in the kernel logs: ``` $ dmesg ... Lockdown: insmod: unsigned module loading is restricted; see man kernel_lockdown.7 PKCS#7 signature not signed with a trusted key ```"
}
] |
{
"category": "Runtime",
"file_name": "secure-boot.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "() This section discusses the basics of the Manta architecture. <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - - - - - <!-- END doctoc generated TOC please keep comment here to allow auto update --> Horizontal scalability. It must be possible to add more hardware to scale any component within Manta without downtime. As a result of this constraint, there are multiple instances of each service. Strong consistency. In the face of network partitions where it's not possible to remain both consistent and available, Manta chooses consistency. So if all three datacenters in a three-DC deployment become partitioned from one another, requests may fail rather than serve potentially incorrect data. High availability. Manta must survive failure of any service, physical server, rack, or even an entire datacenter, assuming it's been deployed appropriately. Development installs of Manta can fit on a single system, and obviously those don't survive server failure, but several production deployments span three datacenters and survive partitioning or failure of an entire datacenter without downtime for the other two. We use nodes to refer to physical servers. Compute nodes mean the same thing they mean in Triton, which is any physical server that's not a head node. Storage nodes are compute nodes that are designated to store actual Manta objects. A Manta install uses: a headnode (see \"Manta and Triton\" below) one or more storage nodes to store user objects one or more non-storage compute nodes for the other Manta services. We use the term datacenter (or DC) to refer to an availability zone (or AZ). Each datacenter represents a single Triton deployment (see below). Manta supports being deployed in either 1 or 3 datacenters within a single region, which is a group of datacenters having a high-bandwidth, low-latency network connection. Manta is built atop Triton (formerly known as SmartDataCenter). A three-datacenter deployment of Manta is built atop three separate Triton deployments. The presence of Manta does not change the way Triton is deployed or operated. Administrators still have AdminUI, APIs, and they're still responsible for managing the Triton services, platform versions, and the like through the normal Triton mechanisms. All user-facing Manta functionality can be divided into a few major subsystems: The storage tier* is responsible for storing the physical copies of user objects on disk. Storage nodes store objects as files with random uuids. So within each storage node, the objects themselves are effectively just large, opaque blobs of data. The metadata tier* is responsible for storing metadata about each object that's visible from the public Manta API. This metadata includes the set of storage nodes on which the physical copy is stored. In order to make all this work, there are several other pieces. For example: The front door* is made up of the SSL terminators, load balancers, and API servers that actually handle user HTTP requests. All user interaction with Manta happens over HTTP, so the front door handles all user-facing operations. An authentication cache* maintains a read-only copy of the Joyent account database. All front door requests are authenticated against this cache. A garbage collection* system removes objects marked for deletion. A consensus layer* is used to keep track of primary-secondary relationships in the metadata tier. 
DNS-based nameservices* are used to keep track of all instances of all services in the system."
},
{
"data": "Just like with Triton, components are divided into services, instances, and agents. Services and instances are SAPI concepts. A service is a group of instances of the same kind. For example, \"webapi\" is a service, and there may be multiple webapi zones. Each zone is an instance of the \"webapi\" service. The vast majority of Manta components are service instances, and there are several different services involved. Note: Do not confuse SAPI services with SMF services. We're talking about SAPI services here. A given SAPI instance (which is a zone) may have many SMF services. | Kind | Major subsystem | Service | Purpose | Components | | - | | | -- | | | Service | Consensus | nameservice | Service discovery | ZooKeeper, (DNS) | | Service | Front door | loadbalancer | SSL termination and load balancing | haproxy, | | Service | Front door | webapi | Manta HTTP Directory API server | | | Service | Front door | buckets-api\\* | Manta HTTP Buckets API server | | | Service | Front door | authcache | Authentication cache | (redis) | | Service | Garbage Collection | garbage-buckets-consumer | Manta Buckets API garbage collection | , | | Service | Garbage Collection | garbage-deleter | Deleting storage for objects | , | | Service | Garbage Collection | garbage-dir-consumer | Manta Directory API garbage collection | , | | Service | Garbage Collection | garbage-mpu-cleaner | MPU garbage collection | , | | Service | Garbage Collection | garbage-uploader | Send GC instructions to storage zones | , | | Service | Metadata | postgres | Directory metadata storage | postgres, | | Service | Metadata | buckets-postgres\\* | Buckets metadata storage | postgres, | | Service | Metadata | moray | Directory key-value store | | | Service | Metadata | buckets-mdapi\\* | Buckets key-value store | | | Service | Metadata | electric-moray | Directory consistent hashing (sharding)| | | Service | Metadata | buckets-mdplacement\\* | Buckets consistent hashing (sharding) | | | Service | Metadata | reshard\\* | Metadata reshard tool | | | Service | Metadata | storinfo | Storage metadata cache and picker | | | Service | Storage | storage | Object storage and capacity reporting | (nginx), , | | Service | Storage | rebalancer | Storage zone evacuation/rebalancing | | | Service | Operations | madtom | Web-based Manta monitoring | | | Service | Operations | ops | Operator workspace | | \\* experimental features In some sense, the heart of Manta (and Triton) is a service discovery mechanism (based on ZooKeeper) for keeping track of which service instances are running. In a nutshell, this works like this: Setup: There are 3-5 \"nameservice\" zones deployed that form a ZooKeeper cluster. There's a \"binder\" DNS server in each of these zones that serves DNS requests based on the contents of the ZooKeeper data store. Setup: When other zones are deployed, part of their configuration includes the IP addresses of the nameservice zones. These DNS servers are the only components that each zone knows about directly. When an instance starts up (e.g., a \"moray\" zone), an SMF service called the registrar connects to the ZooKeeper cluster (using the IP addresses configured with the zone) and publishes its own IP to ZooKeeper. A moray zone for shard 1 in region \"us-east\" publishes its own IP under"
},
{
"data": "When a client wants to contact the shard 1 moray, it makes a DNS request for 1.moray.us-east.joyent.us using the DNS servers in the ZooKeeper cluster. Those DNS servers returns all IPs that have been published for 1.moray.us-east.joyent.us. If the registrar in the 1.moray zone dies, the corresponding entry in the ZooKeeper data store is automatically removed, causing that zone to fall out of DNS. Due to DNS TTLs of 60s, it may take up to a minute for clients to notice that the zone is gone. Internally, most services work this way. Since we don't control Manta clients, the external service discovery system is simpler and more static. We manually configure the public `us-central.manta.mnx.io` DNS name to resolve to each of the loadbalancer public IP addresses. After a request reaches the loadbalancers, everything uses the internal service discovery mechanism described above to contact whatever other services they need. The storage tier is made up of Mantis Shrimp nodes that have a great deal of of physical storage in order to store users' objects. Each storage node has an instance of the storage service, also called a \"mako\" or \"shark\" (as in: a shard of the storage tier). Inside this zone runs: mako*: an nginx instance that supports simple PUT/GET for objects. This is not the front door; this is used internally to store each copy of a user object. Objects are stored in a ZFS delegated dataset inside the storage zone, under `/manta/$accountuuid/$objectuuid`. minnow*: a small Node service that periodically reports storage capacity data into the metadata tier so that the front door knows how much capacity each storage node has. rebalancer-agent*: a small Rust service that processes object copy requests coming from the rebalancer. The rebalancer or rebalancer manager is a separate service running outside of the storage zones which orchestrates the migration of objects between storage zones. The service allows an operator to evacuate an entire storage zone in the event of hardware outage or planned replacement. The metadata tier is itself made up of three levels: \"postgres\" and \"buckets-postgres\" zones, which run instances of the postgresql database \"moray\" and \"buckets-mdapi\" zones, which run key-value stores on top of postgres \"electric-moray\" and \"buckets-mdplacement\" zones, which handle sharding of metadata requests All object metadata is stored in PostgreSQL databases. Metadata is keyed on the object's name, and the value is a JSON document describing properties of the object including what storage nodes it's stored on. This part is particularly complicated, so pay attention! The metadata tier is replicated for availability and sharded for scalability. It's easiest to think of sharding first. Sharding means dividing the entire namespace into one or more shards in order to scale horizontally. So instead of storing objects A-Z in a single postgres database, we might choose two shards (A-M in shard 1, N-Z in shard 2), or three shards (A-I in shard 1, J-R in shard 2, S-Z in shard 3), and so on. Each shard is completely separate from the others. They don't overlap at all in the data that they store. The shard responsible for a given object is determined by consistent hashing on the directory name of the object. So the shard for \"/mark/stor/foo\" is determined by hashing \"/mark/stor\". Within each shard, we use multiple postgres instances for high availability. 
At any given time, there's a primary peer (also called the \"master\"), a secondary peer (also called the \"synchronous slave\"), and an async peer (sometimes called the \"asynchronous slave\")."
},
{
"data": "As the names suggest, we configure synchronous replication between the primary and secondary, and asynchronous replication between the secondary and the async peer. Synchronous replication means that transactions must be committed on both the primary and the secondary before they can be committed to the client. Asynchronous replication means that the asynchronous peer may be slightly behind the other two. The idea with configuration replication in this way is that if the primary crashes, we take several steps to recover: The shard is immediately marked read-only. The secondary is promoted to the primary. The async peer is promoted to the secondary. With the shard being read-only, it should quickly catch up. Once the async peer catches up, the shard is marked read-write again. When the former primary comes back online, it becomes the asynchronous peer. This allows us to quickly restore read-write service in the event of a postgres crash or an OS crash on the system hosting the primary. This process is managed by the \"manatee\" component, which uses ZooKeeper for leader election to determine which postgres will be the primary at any given time. It's really important to keep straight the difference between sharding and replication. Even though replication means that we have multiple postgres instances in each shard, only the primary can be used for read/write operations, so we're still limited by the capacity of a single postgres instance. That's why we have multiple shards. <!-- XXX graphic --> There are actually two kinds of metadata in Manta: Object metadata, which is sharded as described above. This may be medium to high volume, depending on load. Storage node capacity metadata, which is reported by \"minnow\" instances (see above) and all lives on one shard. This is extremely low-volume: a couple of writes per storage node per minute. Manta supports the resharding of directory object metadata, a process which would typically be used to add additional shards (for horizontal scaling of metadata capacity). This operation is handled by the reshard service which currently supports only the doubling of shards and incurs object write downtime during the hash ring update step in the process. For each metadata shard (which we said above consists of three PostgreSQL databases), there's two or more \"moray\" instances. Moray is a key-value store built on top of postgres. Clients never talk to postgres directly; they always talk to Moray. (Actually, they generally talk to electric-moray, which proxies requests to Moray. See below.) Moray keeps track of the replication topology (which Postgres instances is the primary, which is the secondary, and which is the async) and directs all read/write requests to the primary Postgres instance. This way, clients don't need to know about the replication topology. Like Postgres, each Moray instance is tied to a particular shard. These are typically referred to as \"1.moray\", \"2.moray\", and so on. Whereas moray handles the metadata requests for directory objects, buckets-mdapi performs the same function for buckets objects. The electric-moray service sits in front of the sharded Moray instances and directs requests to the appropriate shard. So if you try to update or fetch the metadata for `/mark/stor/foo`, electric-moray will hash `/mark/stor` to find the right shard and then proxy the request to one of the Moray instances operating that shard. Buckets-mdplacement performs the same sharding function as electric-moray for buckets objects. 
But unlike electric-moray, it is not on the data path of service requests. The hash ring information is cached in buckets-api upon service startup."
},
{
"data": "The front door consists of \"loadbalancer\", \"webapi\" and \"buckets-api\" zones. \"loadbalancer\" zones run haproxy for both SSL termination and load balancing across the available \"webapi\" and \"buckets-api\" instances. \"haproxy\" is managed by a component called \"muppet\" that uses the DNS-based service discovery mechanism to keep haproxy's list of backends up-to-date. \"webapi\" zones run the Manta-specific API server, called muskie. Muskie handles PUT/GET/DELETE requests to the front door, including requests to: create and delete objects create, list, and delete directories create multipart uploads, upload parts, fetch multipart upload state, commit multipart uploads, and abort multipart uploads \"buckets-api\" zones run an alternative API server which handles S3-like PUT/GET/DELETE requests for objects with a different organization paradigm. The feature is still considered experimental, with limited support and unpublished API clients at this time. Requests for objects and directories involve: validating the request authenticating the user (via mahi, the auth cache) looking up the requested object's metadata (via electric-moray or buckets- mdplacement) authorizing the user for access to the specified resource For requests on directories and zero-byte objects, the last step is to update or return the right metadata. For write requests on objects, muskie or buckets-api then: Constructs a set of candidate storage nodes that will be used to store the object's data, where each storage node is located in a different datacenter (in a multi-DC configuration). By default, there are two copies of the data, but users can configure this by setting the durability level with the request. Tries to issue a PUT with 100-continue to each of the storage nodes in the candidate set. If that fails, try another set. If all sets are exhausted, fail with 503. Once the 100-continue is received from all storage nodes, the user's data is streamed to all nodes. Upon completion, there should be a 204 response from each storage node. Once the data is safely written to all nodes, the metadata tier is updated (using a PUT to electric-moray or to buckets-mdapi), and a 204 is returned to the client. At this point, the object's data is recorded persistently on the requested number of storage nodes, and the metadata is replicated on at least two index nodes. For read requests on objects, muskie or buckets-api instead contacts each of the storage nodes hosting the data and streams data from whichever one responds first to the client. Multipart uploads, a.k.a. MPU, provide an alternate way for users to upload Manta objects (only for the Directory API). The user creates the multipart upload, uploads the object in parts, and exposes the object in Manta by committing the multipart upload. Generally, these operations are implemented using existing Manta constructs: Parts are normal Manta objects, with a few key differences. Users cannot use the GET, POST or DELETE HTTP methods on parts. Additionally, all parts are co-located on the same set of storage nodes, which are selected when the multipart upload is created. All parts for a given multipart upload are stored in a parts directory, which is a normal Manta directory. Part directories are stored in the top-level `/$MANTA_USER/uploads` directory tree. 
Most of the logic for multipart uploads is performed by Muskie, but there are some additional features of the system only used for multipart uploads: the manta_uploads bucket in Moray stores finalizing records* for a given shard. A finalizing record is inserted atomically with the target object record when a multipart upload is committed."
},
{
"data": "the mako zones have a custom mako-finalize* operation invoked by muskie when a multipart upload is committed. This operation creates the target object from the parts and subsequently deletes the parts from disk. This operation is invoked on all storage nodes that will contain the target object when the multipart upload is committed. Garbage collection consists of several components in the `garbage-collector` and `storage` zones and is responsible for removing the storage used by objects which have been removed from the metadata tier. It is also responsible for removing metadata for finalized MPU uploads. When an object is deleted from the metadata tier in either the Manta Directory API or Manta Buckets API, the objects on disk are not immediately removed, nor are all references in the metadata tier itself. The original record is moved into a new deletion record which includes the information required to delete the storage backing the now-deleted object. The garbage collection system is responsible for actually performing the cleanup. Processes in the `garbage-collector` zone include: `garbage-buckets-consumer` -- consumes deletion records from `buckets-mdapi` (created when an object is deleted by Manta Buckets API). The records found are written to local `instructions` files in the `garbage-collector` zone. `garbage-dir-consumer` -- consumes deletion records from the `mantafastdeletequeue` bucket (created when an object is deleted through the Manta Directory API). The records found are written to local `instructions` files in the `garbage-collector` zone. `garbage-uploader` -- consumes the locally queued `instructions` and uploads them to the appropriate `storage` zone for processing. `garbage-mpu-cleaner` -- consumes \"finalized\" MPU uploads (Manta Directory API only) and deletes the upload parts, upload directory and the finalize record itself after the upload has been finalized for some period of time (default: 5 minutes). On the `storage` zones, there's an additional component of garbage collection: `garbage-deleter` -- consumes `instructions` that were uploaded by `garbage-uploader` and actually deletes the no-longer-needed object files in `/manta` of the storage zone. Once the storage is deleted, the completed instructions files are also deleted. Each of these services in both zones, run as their own SMF service and has their own log file in `/var/svc/log`. Metering is the process of measuring how much resource each user used. It is not a full-fledged usage reporting feature at this time but the operator can still obtain the total object counts and bytes used per user by aggregating the metrics from individual storage zones. In each storage zone, the usage metrics are reported by a daily cron job that generates a `mako_rollup.out` text file under the `/var/tmp/mako_rollup` directory. There are many dimensions to scalability. 
In the metadata tier: number of objects (scalable with additional shards) number of objects in a directory (fixed, currently at a million objects) In the storage tier: total size of data (scalable with additional storage servers) size of data per object (limited to the amount of storage on any single system, typically in the tens of terabytes, which is far larger than is typically practical) In terms of performance: total bytes in or out per second (depends on network configuration) count of concurrent requests (scalable with additional metadata shards or API servers) As described above, for most of these dimensions, Manta can be scaled horizontally by deploying more software instances (often on more hardware). For a few of these, the limits are fixed, but we expect them to be high enough for most purposes. For a few others, the limits are not known, and we've never (or rarely) run into them, but we may need to do additional work when we discover where these limits are."
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "Triton Object Storage",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- toc --> - - - - - - - - - - - - - - - - <!-- /toc --> is a Kubernetes network plugin that provides network connectivity and security features for Pod workloads. Considering the scale and dynamism of Kubernetes workloads in a cluster, Network Flow Visibility helps in the management and configuration of Kubernetes resources such as Network Policy, Services, Pods etc., and thereby provides opportunities to enhance the performance and security aspects of Pod workloads. For visualizing the network flows, Antrea monitors the flows in Linux conntrack module. These flows are converted to flow records, and then flow records are post-processed before they are sent to the configured external flow collector. High-level design is given below: In Antrea, the basic building block for the Network Flow Visibility is the Flow Exporter. Flow Exporter operates within Antrea Agent; it builds and maintains a connection store by polling and dumping flows from conntrack module periodically. Connections from the connection store are exported to the [Flow Aggregator Service](#flow-aggregator) using the IPFIX protocol, and for this purpose we use the IPFIX exporter process from the library. In addition to enabling the Flow Exporter feature gate (if needed), you need to ensure that the `flowExporter.enable` flag is set to true in the Antrea Agent configuration. your `antrea-agent` ConfigMap should look like this: ```yaml antrea-agent.conf: | featureGates: FlowExporter: true flowExporter: enable: true flowCollectorAddr: \"flow-aggregator/flow-aggregator:4739:tls\" flowPollInterval: \"5s\" activeFlowExportTimeout: \"5s\" idleFlowExportTimeout: \"15s\" ``` Please note that the default value for `flowExporter.flowCollectorAddr` is `\"flow-aggregator/flow-aggregator:4739:tls\"`, which enables the Flow Exporter to connect the Flow Aggregator Service, assuming it is running in the same K8 cluster with the Name and Namespace set to `flow-aggregator`. If you deploy the Flow Aggregator Service with a different Name and Namespace, then set `flowExporter.flowCollectorAddr` appropriately. Please note that the default values for `flowExporter.flowPollInterval`, `flowExporter.activeFlowExportTimeout`, and `flowExporter.idleFlowExportTimeout` parameters are set to 5s, 5s, and 15s, respectively. TLS communication between the Flow Exporter and the Flow Aggregator is enabled by default. Please modify them as per your requirements. Prior to the Antrea v1.13 release, the `flowExporter` option group in the Antrea Agent configuration did not exist. To enable the Flow Exporter feature, one simply needed to enable the feature gate, and the Flow Exporter related configuration could be configured using the (now deprecated) `flowCollectorAddr`, `flowPollInterval`, `activeFlowExportTimeout`, `idleFlowExportTimeout` parameters. There are 34 IPFIX IEs in each exported flow record, which are defined in the IANA-assigned IE registry, the Reverse IANA-assigned IE registry and the Antrea IE registry. The reverse IEs are used to provide bi-directional information about the flow. The Enterprise ID is 0 for IANA-assigned IE registry, 29305 for reverse IANA IE registry, 56505 for Antrea IE"
},
{
"data": "All the IEs used by the Antrea Flow Exporter are listed below: | IPFIX Information Element| Field ID | Type | |--|-|-| | flowStartSeconds | 150 | dateTimeSeconds| | flowEndSeconds | 151 | dateTimeSeconds| | flowEndReason | 136 | unsigned8 | | sourceIPv4Address | 8 | ipv4Address | | destinationIPv4Address | 12 | ipv4Address | | sourceIPv6Address | 27 | ipv6Address | | destinationIPv6Address | 28 | ipv6Address | | sourceTransportPort | 7 | unsigned16 | | destinationTransportPort | 11 | unsigned16 | | protocolIdentifier | 4 | unsigned8 | | packetTotalCount | 86 | unsigned64 | | octetTotalCount | 85 | unsigned64 | | packetDeltaCount | 2 | unsigned64 | | octetDeltaCount | 1 | unsigned64 | | IPFIX Information Element| Field ID | Type | |--|-|-| | reversePacketTotalCount | 86 | unsigned64 | | reverseOctetTotalCount | 85 | unsigned64 | | reversePacketDeltaCount | 2 | unsigned64 | | reverseOctetDeltaCount | 1 | unsigned64 | | IPFIX Information Element | Field ID | Type | Description | |-|-|-|-| | sourcePodNamespace | 100 | string | | | sourcePodName | 101 | string | | | destinationPodNamespace | 102 | string | | | destinationPodName | 103 | string | | | sourceNodeName | 104 | string | | | destinationNodeName | 105 | string | | | destinationClusterIPv4 | 106 | ipv4Address | | | destinationClusterIPv6 | 107 | ipv6Address | | | destinationServicePort | 108 | unsigned16 | | | destinationServicePortName | 109 | string | | | ingressNetworkPolicyName | 110 | string | Name of the ingress network policy applied to the destination Pod for this flow. | | ingressNetworkPolicyNamespace | 111 | string | Namespace of the ingress network policy applied to the destination Pod for this flow. | | ingressNetworkPolicyType | 115 | unsigned8 | 1 stands for Kubernetes Network Policy. 2 stands for Antrea Network Policy. 3 stands for Antrea Cluster Network Policy. | | ingressNetworkPolicyRuleName | 141 | string | Name of the ingress network policy Rule applied to the destination Pod for this flow. | | egressNetworkPolicyName | 112 | string | Name of the egress network policy applied to the source Pod for this flow. | | egressNetworkPolicyNamespace | 113 | string | Namespace of the egress network policy applied to the source Pod for this flow. | | egressNetworkPolicyType | 118 | unsigned8 | | | egressNetworkPolicyRuleName | 142 | string | Name of the egress network policy rule applied to the source Pod for this flow. | | ingressNetworkPolicyRuleAction | 139 | unsigned8 | 1 stands for Allow. 2 stands for Drop. 3 stands for Reject. | | egressNetworkPolicyRuleAction | 140 | unsigned8 | | | tcpState | 136 | string | The state of the TCP connection. The states are: LISTEN, SYN-SENT, SYN-RECEIVED, ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT, and CLOSED. | | flowType | 137 | unsigned8 | 1 stands for Intra-Node. 2 stands for Inter-Node. 3 stands for To External. 4 stands for From External. | Currently, the Flow Exporter feature provides visibility for Pod-to-Pod, Pod-to-Service and Pod-to-External network flows along with the associated statistics such as data throughput (bits per second), packet throughput (packets per second), cumulative byte count and cumulative packet count. Pod-To-Service flow visibility is supported only , which is the case by default starting with Antrea v0.11. In the future, we will enable the support for External-To-Service flows. 
Kubernetes information such as Node name, Pod name, Pod Namespace, Service name, NetworkPolicy name and NetworkPolicy Namespace, is added to the flow records. Network Policy Rule Action (Allow, Reject, Drop) is also supported for both Antrea-native NetworkPolicies and K8s NetworkPolicies. For K8s NetworkPolicies, connections dropped due to will be assigned the Drop action. For flow records that are exported from any given Antrea Agent, the Flow Exporter only provides the information of Kubernetes entities that are local to the Antrea Agent. In other words, flow records are only complete for intra-Node flows, but incomplete for inter-Node flows. It is the responsibility of the to correlate flows from the source and destination Nodes and produce complete flow records. Both Flow Exporter and Flow Aggregator are supported in IPv4 clusters, IPv6 clusters and dual-stack clusters. We support the following connection metrics as Prometheus metrics that are exposed through : `antrea_agent_conntrack_total_connection_count`, `antrea_agent_conntrack_antrea_connection_count`, `antrea_agent_denied_connection_count`, `antrea_agent_conntrack_max_connection_count`, and `antrea_agent_flow_collector_reconnection_count`. Flow Aggregator is deployed as a Kubernetes Service. The main functionality of Flow Aggregator is to store, correlate and aggregate the flow records received from the Flow Exporter of Antrea Agents."
},
{
"data": "More details on the functionality are provided in the section. Flow Aggregator is implemented as IPFIX mediator, which consists of IPFIX Collector Process, IPFIX Intermediate Process and IPFIX Exporter Process. We use the library to implement the Flow Aggregator. To deploy a released version of Flow Aggregator Service, pick a deployment manifest from the . For any given release `<TAG>` (e.g. `v0.12.0`), you can deploy Flow Aggregator as follows: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/flow-aggregator.yml ``` To deploy the latest version of Flow Aggregator Service (built from the main branch), use the checked-in : ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/flow-aggregator.yml ``` The following configuration parameters have to be provided through the Flow Aggregator ConfigMap. Flow aggregator needs to be configured with at least one of the supported . `flowCollector` is mandatory for , and `clickHouse` is mandatory for . We provide an example value for this parameter in the following snippet. If you have deployed the , then please set `flowCollector.enable` to `true` and use the address for `flowCollector.address`: `<Ipfix-Collector Cluster IP>:<port>:<tcp|udp>` If you have deployed the , then please enable the collector by setting `clickHouse.enable` to `true`. If it is deployed following the , the ClickHouse server is already exposed via a K8s Service, and no further configuration is required. If a different FQDN or IP is desired, please use the URL for `clickHouse.databaseURL` in the following format: `<protocol>://<ClickHouse server FQDN or IP>:<ClickHouse port>`. Starting with Antrea v1.13, you can enable TLS when connecting to the ClickHouse Server by setting `clickHouse.databaseURL` with protocol `tls` or `https`. You can also change the value of `clickHouse.tls.insecureSkipVerify` to determine whether to skip the verification of the server's certificate. If you want to provide a custom CA certificate, you can set `clickHouse.tls.caCert` to `true` and the flow Aggregator will read the certificate key pair from the`clickhouse-ca` Secret. Make sure to follow the following form when creating the `clickhouse-ca` Secret with the custom CA certificate: ```yaml apiVersion: v1 kind: Secret metadata: name: clickhouse-ca namespace: flow-aggregator data: ca.crt: <BASE64 ENCODED CA CERTIFICATE> ``` You can use `kubectl apply -f <PATH TO SECRET YAML>` to create the above secret , or use `kubectl create secret`: ```bash kubectl create secret generic clickhouse-ca -n flow-aggregator --from-file=ca.crt=<PATH TO CA CERTIFICATE> ``` Prior to Antrea v1.13, secure connections to ClickHouse are not supported, and TCP is the only supported protocol when connecting to the ClickHouse server from the Flow Aggregator. 
```yaml flow-aggregator.conf: | activeFlowRecordTimeout: 60s inactiveFlowRecordTimeout: 90s aggregatorTransportProtocol: \"tls\" flowAggregatorAddress: \"\" recordContents: podLabels: false apiServer: apiPort: 10348 tlsCipherSuites: \"\" tlsMinVersion: \"\" flowCollector: enable: false address: \"\" recordFormat: \"IPFIX\" clickHouse: enable: false database: \"default\" databaseURL: \"tcp://clickhouse-clickhouse.flow-visibility.svc:9000\" tls: insecureSkipVerify: false caCert: false debug: false compress: true commitInterval: \"8s\" ``` Please note that the default values for the `activeFlowRecordTimeout`, `inactiveFlowRecordTimeout` and `aggregatorTransportProtocol` parameters are `60s`, `90s` and `tls` respectively. Please make sure that `aggregatorTransportProtocol` and the protocol of `flowCollectorAddr` in `antrea-agent.conf` are both set to `tls` to guarantee that secure communication works properly. The protocol of `flowCollectorAddr` and `aggregatorTransportProtocol` must always match, so TLS must either be enabled on both sides or disabled on both sides. Please modify the parameters as per your requirements. Please note that the default value for `recordContents.podLabels` is `false`, which indicates that source and destination Pod labels will not be included in the flow records exported to `flowCollector` and `clickHouse`. If you would like to include them, you can modify the value to `true`. Please note that the default value for `apiServer.apiPort` is `10348`, which is the port used to expose the Flow Aggregator's APIServer. Please modify the parameters as per your requirements. Please note that the default value for
},
{
"data": "is `8s`, which is based on experiment results to achieve best ClickHouse write performance and data retention. Based on ClickHouse recommendation for best performance, this interval is required be no shorter than `1s`. Also note that flow aggregator has a cache limit of ~500k records for ClickHouse-Grafana collector. If `clickHouse.commitInterval` is set to a value too large, there's a risk of losing records. In addition to IPFIX information elements provided in the , the Flow Aggregator adds the following fields to the flow records. | IPFIX Information Element | Field ID | Type | Description | |-|-|-|-| | packetTotalCountFromSourceNode | 120 | unsigned64 | The cumulative number of packets for this flow as reported by the source Node, since the flow started. | | octetTotalCountFromSourceNode | 121 | unsigned64 | The cumulative number of octets for this flow as reported by the source Node, since the flow started. | | packetDeltaCountFromSourceNode | 122 | unsigned64 | The number of packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. | | octetDeltaCountFromSourceNode | 123 | unsigned64 | The number of octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. | | reversePacketTotalCountFromSourceNode | 124 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the source Node, since the flow started. | | reverseOctetTotalCountFromSourceNode | 125 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the source Node, since the flow started. | | reversePacketDeltaCountFromSourceNode | 126 | unsigned64 | The number of reverse packets for this flow as reported by the source Node, since the previous report for this flow at the observation point. | | reverseOctetDeltaCountFromSourceNode | 127 | unsigned64 | The number of reverse octets for this flow as reported by the source Node, since the previous report for this flow at the observation point. | | packetTotalCountFromDestinationNode | 128 | unsigned64 | The cumulative number of packets for this flow as reported by the destination Node, since the flow started. | | octetTotalCountFromDestinationNode | 129 | unsigned64 | The cumulative number of octets for this flow as reported by the destination Node, since the flow started. | | packetDeltaCountFromDestinationNode | 130 | unsigned64 | The number of packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. | | octetDeltaCountFromDestinationNode | 131 | unsigned64 | The number of octets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. | | reversePacketTotalCountFromDestinationNode| 132 | unsigned64 | The cumulative number of reverse packets for this flow as reported by the destination Node, since the flow started. | | reverseOctetTotalCountFromDestinationNode | 133 | unsigned64 | The cumulative number of reverse octets for this flow as reported by the destination Node, since the flow started. | | reversePacketDeltaCountFromDestinationNode| 134 | unsigned64 | The number of reverse packets for this flow as reported by the destination Node, since the previous report for this flow at the observation point. 
| | reverseOctetDeltaCountFromDestinationNode | 135 | unsigned64 | The number of reverse octets for this flow as reported by the destination Node, since the previous report for this flow at the observation"
},
{
"data": "| | sourcePodLabels | 143 | string | | | destinationPodLabels | 144 | string | | | throughput | 145 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point. The unit is bits per second. | | reverseThroughput | 146 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point. The unit is bits per second. | | throughputFromSourceNode | 147 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. | | throughputFromDestinationNode | 148 | unsigned64 | The average amount of traffic flowing from source to destination, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. | | reverseThroughputFromSourceNode | 149 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the source Node. The unit is bits per second. | | reverseThroughputFromDestinationNode | 150 | unsigned64 | The average amount of reverse traffic flowing from destination to source, since the previous report for this flow at the observation point, based on the records sent from the destination Node. The unit is bits per second. | | flowEndSecondsFromSourceNode | 151 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the source Node. The unit is seconds. | | flowEndSecondsFromDestinationNode | 152 | unsigned32 | The absolute timestamp of the last packet of this flow, based on the records sent from the destination Node. The unit is seconds. | Flow Aggregator stores the received flow records from Antrea Agents in a hash map, where the flow key is 5-tuple of a network connection. 5-tuple consists of Source IP, Destination IP, Source Port, Destination Port and Transport protocol. Therefore, Flow Aggregator maintains one flow record for any given connection, and this flow record gets updated till the connection in the Kubernetes cluster becomes invalid. In the case of inter-Node flows, there are two flow records, one from the source Node, where the flow originates from, and another one from the destination Node, where the destination Pod resides. Both the flow records contain incomplete information as mentioned . Flow Aggregator provides support for the correlation of the flow records from the source Node and the destination Node, and it exports a single flow record with complete information for both inter-Node and intra-Node flows. Flow Aggregator aggregates the flow records that belong to a single connection. As part of aggregation, fields such as flow timestamps, flow statistics etc. are updated. For the purpose of updating flow statistics fields, Flow Aggregator introduces the in Antrea Enterprise IPFIX registry corresponding to the Source Node and Destination Node, so that flow statistics from different Nodes can be preserved. antctl can access the Flow Aggregator API to dump flow records and print metrics about flow record processing. Refer to the for more information. 
If you would like to quickly try the Network Flow Visibility feature, you can deploy Antrea, the Flow Aggregator Service and the Grafana Flow Collector on a Vagrant setup. Build the required images under the antrea repository by using the make commands: ```shell make make flow-aggregator-image ``` Given any external IPFIX flow collector, you can deploy Antrea and the Flow Aggregator Service on a default Vagrant setup by running the following commands: ```shell ./infra/vagrant/provision.sh"
},
{
"data": "--flow-collector <externalFlowCollectorAddress> ``` If you would like to deploy the Grafana Flow Collector, you can run the following command: ```shell ./infra/vagrant/provision.sh ./infra/vagrant/push_antrea.sh --flow-collector Grafana ``` Here we list two choices the external configured flow collector: go-ipfix collector and Grafana flow collector. For each collector, we introduce how to deploy it and how to output or visualize the collected flow records information. The go-ipfix collector can be built from . It is used to collect, decode and log the IPFIX records. To deploy a released version of the go-ipfix collector, please choose one deployment manifest from the list of releases (supported after v0.5.2). For any given release <TAG> (e.g. v0.5.2), you can deploy the collector as follows: ```shell kubectl apply -f https://github.com/vmware/go-ipfix/releases/download/<TAG>/ipfix-collector.yaml ``` To deploy the latest version of the go-ipfix collector (built from the main branch), use the checked-in : ```shell kubectl apply -f https://raw.githubusercontent.com/vmware/go-ipfix/main/build/yamls/ipfix-collector.yaml ``` Go-ipfix collector also supports customization on its parameters: port and protocol. Please follow the to configure those parameters if needed. To output the flow records collected by the go-ipfix collector, use the command below: ```shell kubectl logs <ipfix-collector-pod-name> -n ipfix ``` Starting with Antrea v1.8, support for the Grafana Flow Collector has been migrated to Theia. The Grafana Flow Collector was added in Antrea v1.6.0. In Antrea v1.7.0, we start to move the network observability and analytics functionalities of Antrea to , including the Grafana Flow Collector. Going forward, further development of the Grafana Flow Collector will be in the Theia repo. For the up-to-date version of Grafana Flow Collector and other Theia features, please refer to the . Starting with Antrea v1.7, support for the ELK Flow Collector has been removed. Please consider using the instead, which is actively maintained. In addition to layer 4 network visibility, Antrea adds layer 7 network flow export. To achieve L7 (Layer 7) network flow export, the `L7FlowExporter` feature gate must be enabled. Note: L7 flow-visibility support for Theia is not yet implemented. To export layer 7 flows of a Pod or a Namespace, user can annotate Pods or Namespaces with the annotation key `visibility.antrea.io/l7-export` and set the value to indicate the traffic flow direction, which can be `ingress`, `egress` or `both`. For example, to enable L7 flow export in the ingress direction on Pod test-pod in the default Namespace, you can use: ```bash kubectl annotate pod test-pod visibility.antrea.io/l7-export=ingress ``` Based on the annotation, Flow Exporter will export the L7 flow data to the Flow Aggregator or configured IPFix collector using the fields `appProtocolName` and `httpVals`. `appProtocolName` field is used to indicate the application layer protocol name (e.g. http) and it will be empty if application layer data is not exported. `httpVals` stores a serialized JSON dictionary with every HTTP request for a connection mapped to a unique transaction ID. This format lets us group all the HTTP transactions pertaining to the same connection, into the same exported record. 
An example of `httpVals` is: `\"{\\\"0\\\":{\\\"hostname\\\":\\\"10.10.0.1\\\",\\\"url\\\":\\\"/public/\\\",\\\"http_user_agent\\\":\\\"curl/7.74.0\\\",\\\"http_content_type\\\":\\\"text/html\\\",\\\"http_method\\\":\\\"GET\\\",\\\"protocol\\\":\\\"HTTP/1.1\\\",\\\"status\\\":200,\\\"length\\\":153}}\"` The HTTP fields in `httpVals` are: | HTTP field | Description | |-|--| | hostname | IP address of the sender | | url | URL requested on the server | | http_user_agent | application used for HTTP | | http_content_type | type of content being returned by the server | | http_method | HTTP method used for the request | | protocol | HTTP protocol version used for the request or response | | status | HTTP status code | | length | size of the response body | As of now, the only supported layer 7 protocol is `HTTP1.1`. Support for more protocols may be added in the future."
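Since the annotation is also accepted on Namespaces, enabling and later disabling L7 export for every Pod in a Namespace looks like the following; the `web` Namespace is a hypothetical example.

```bash
# Enable L7 flow export in both directions for all Pods in a hypothetical "web" Namespace.
kubectl annotate namespace web visibility.antrea.io/l7-export=both
# Remove the annotation when L7 export is no longer needed.
kubectl annotate namespace web visibility.antrea.io/l7-export-
```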
}
] |
{
"category": "Runtime",
"file_name": "network-flow-visibility.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Data Processing Workflow sidebar_position: 3 slug: /internals/io_processing description: This article introduces read and write implementation of JuiceFS, including how it splits files into chunks. JuiceFS splits large files at multiple levels to improve I/O performance. See . Files are initially divided into logical chunks (64 MiB each), which are isolated from each other and further broken down into slices. Slices are the data units for persistence. During a write request, data is stored in the client buffer as chunks/slices. A new slice is created if it does not overlap or adjoin any existing slices; otherwise, the affected existing slices are updated. On a flush operation, a slice is divided into blocks (4 MiB by default) and uploaded to the object storage. Metadata is updated upon successful upload. Sequential writes are optimized, requiring only one continuously growing slice and one final flush. This maximizes object storage write performance. A simple below shows sequentially writing a 1 GiB file with a 1 MiB I/O size at its first stage. The following figure shows the data flow in each component of the system. Use to obtain real-time performance monitoring metrics. The first highlighted section in the above figure shows: The average I/O size for writing to the object storage is `object.put / object.put_c = 4 MiB`. It is the same as the default block size. The ratio of metadata transactions to object storage transactions is `meta.txn : object.put_c -= 1 : 16`. It means that a single slice flush requires 1 metadata update and 16 uploads to the object storage. Each flush operation transmits 64 MiB of data (4 MiB * 16), equivalent to the default chunk size. The average request size in the FUSE layer approximately equals to `fuse.write / fuse.ops ~= 128 KiB`, matching the default request size limitation. Generally, when JuiceFS writes a small file, the file is uploaded to the object storage upon file closure, and the I/O size is equal to the file size. In the third stage of the figure above, where 128 KiB small files are created, we can see that: The size of data written to the object storage during PUT operations is 128 KiB, calculated by `object.put / object.put_c`. The number of metadata transactions is approximately twice the number of PUT operations, since each file requires one create and one write. When JuiceFS uploads objects smaller than the block size, it simultaneously writes them into the to improve future performance. As shown in the third stage of the figure above, the write bandwidth of the `blockcache` is the same as that of the object storage. Since small files are cached, reading these files is extremely fast, as demonstrated in the fourth stage. Write operations are immediately committed to the client buffer, resulting in very low write latency (typically just a few microseconds). The actual upload to the object storage is automatically triggered internally when certain conditions are met, such as when the size or number of slices exceeds their limit, or data stays in the buffer for too long. Explicit calls, such as closing a file or invoking `fsync`, can also trigger uploading. The client buffer is only released after the data stored inside is uploaded. In scenarios with high write concurrency, if the buffer size (configured using ) is not big enough, or the object storage's performance insufficient, write blocking may occur, because the buffer cannot be released timely. The real-time buffer usage is shown in the"
},
{
"data": "field in the metrics figure. To slow things down, The JuiceFS client introduces a 10 ms delay to every write when the buffer usage exceeds the threshold. If the buffer usage is over twice the threshold, new writes are completely suspended until the buffer is released. Therefore, if the write latency keeps increasing or the buffer usage has exceeded the threshold for a long while, you should increase `--buffer-size`. Also consider increasing the maximum number of upload concurrency (, defaults to 20), which improves the upload bandwidth, thus boosting buffer release. JuiceFS supports random writes, including mmap-based random writes. Note that a block is an immutable object, because most object storage services don't support edit in blocks; they can only be re-uploaded and overwritten. Thus, when overwrites or random writes occur, JuiceFS avoids downloading the block for editing and re-uploading, which could cause serious I/O amplifications. Instead, writes are performed on new or existing slices. Relevant new blocks are uploaded to the object storage, and the new slice is appended to the slice list under the chunk. When a file is read, what the client sees is actually a consolidated view of all the slices. Compared to sequential writes, random writes in large files are more complicated. There could be a number of intermittent slices in a chunk, possibly all smaller than 4 MiB. Frequent random writes require frequent metadata updates, which in turn further impact performance. To improve read performance, JuiceFS schedules compaction tasks when the number of slices under a chunk exceeds the limit. You can also manually trigger compaction by running . Client write cache is also referred to as \"Writeback mode\" throughout the docs. For scenarios that does not deem consistency and data security as top priorities, enabling client write cache is also an option to further improve performance. When client write cache is enabled, flush operations return immediately after writing data to the local cache directory. Then, local data is uploaded asynchronously to the object storage. In other words, the local cache directory is a cache layer for the object storage. Learn more in . JuiceFS supports sequential reads and random reads (including mmap-based random reads). During read requests, the object corresponding to the block is completely read through the `GetObject` API of the object storage, or only a certain range of data in the object may be read (e.g., the read range is limited by the `Range` parameter of ). Meanwhile, prefetching is performed (controlled by the option) to download the complete data block into the local cache directory, as shown in the `blockcache` write speed in the second stage of the above metrics figure. This is very good for sequential reads as all cached data is utilized, maximizing the object storage access efficiency. The dataflow is illustrated in the figure below: Although prefetching works well for sequential reads, it might not be so effective for random reads on large files. It can cause read amplification and frequent cache eviction. Consider disabling prefetching using `--prefetch=0`. It is always hard to design cache strategy for random read scenarios. Two possible solutions are increasing the cache size to store all data locally or completely disabling the cache (`--cache-size=0`) and relying on a high-performance object storage service. 
Reading small files (smaller than the block size) is much easier because the entire file can be read in a single request. Since small files are cached locally during the write process, future reads are fast."
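Circling back to the write-buffer tuning discussed earlier, here is a hedged sketch of remounting with a larger client buffer and higher upload concurrency, then watching the live metrics; the metadata URL, mount point and values are placeholders, not recommendations.

```bash
# Remount with a larger client buffer and more concurrent uploads for a
# write-heavy workload, then watch the fuse / meta / blockcache / object columns.
juicefs mount --buffer-size 1024 --max-uploads 50 \
  redis://192.168.1.6:6379/1 /mnt/jfs
juicefs stats /mnt/jfs
```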
}
] |
{
"category": "Runtime",
"file_name": "io_processing.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Local endpoint map ``` -h, --help help for endpoint ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Delete local endpoint entries - List local endpoint entries"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_endpoint.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document tracks projects that integrate with flannel. and help us keep the list current. : Container orchestration platform with options for . : Kubernetes CNI plugin that uses Calico for network policies and intra-node communications and Flannel for inter-node communications. : Kubernetes distribution with flannel embedded as CNI. : Kubernetes distribution packed with Canal as default CNI."
}
] |
{
"category": "Runtime",
"file_name": "integrations.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage fqdn proxy ``` cilium-dbg fqdn [flags] ``` ``` -h, --help help for fqdn ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Manage fqdn proxy cache - Show internal state Cilium has for DNS names / regexes"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_fqdn.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Collects agent & system information useful for bug reporting ``` cilium-bugtool [OPTIONS] [flags] ``` ``` $ cilium-bugtool [...] $ kubectl get pods --namespace kube-system NAME READY STATUS RESTARTS AGE cilium-kg8lv 1/1 Running 0 13m [...] $ kubectl -n kube-system exec cilium-kg8lv -- cilium-bugtool $ kubectl cp kube-system/cilium-kg8lv:/tmp/cilium-bugtool-243785589.tar /tmp/cilium-bugtool-243785589.tar ``` ``` --archive Create archive when false skips deletion of the output directory (default true) --archive-prefix string String to prefix to name of archive if created (e.g., with cilium pod-name) -o, --archiveType string Archive type: tar | gz (default \"tar\") --cilium-agent-container-name string Name of the Cilium Agent main container (when k8s-mode is true) (default \"cilium-agent\") --config string Configuration to decide what should be run (default \"./.cilium-bugtool.config\") --dry-run Create configuration file of all commands that would have been executed --enable-markdown Dump output of commands in markdown format --envoy-dump When set, dump envoy configuration from unix socket (default true) --envoy-metrics When set, dump envoy prometheus metrics from unix socket (default true) --exclude-object-files Exclude per-endpoint object files. Template object files will be kept --exec-timeout duration The default timeout for any cmd execution in seconds (default 30s) --get-pprof When set, only gets the pprof traces from the cilium-agent binary -h, --help help for cilium-bugtool -H, --host string URI to server-side API --hubble-metrics When set, hubble prometheus metrics (default true) --hubble-metrics-port int Port to query for hubble metrics (default 9965) --k8s-label string Kubernetes label for Cilium pod (default \"k8s-app=cilium\") --k8s-mode Require Kubernetes pods to be found or fail --k8s-namespace string Kubernetes namespace for Cilium pod (default \"kube-system\") --parallel-workers int Maximum number of parallel worker tasks, use 0 for number of CPUs --pprof-debug int Debug pprof args (default 1) --pprof-port int Pprof port to connect to. Known Cilium component ports are agent:6060, operator:6061, apiserver:6063 (default 6060) --pprof-trace-seconds int Amount of seconds used for pprof CPU traces (default 180) -t, --tmp string Path to store extracted files. Use '-' to send to stdout. (default \"/tmp\") ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-bugtool.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "``` bash curl -v http://10.196.59.202:17210/getPartitions ``` ``` bash curl -v http://10.196.59.202:17210/getPartitionById?pid=100 ``` Obtains current status information for a specified shard ID, including the raft leader address for the current shard group, the raft group members, and the inode allocation cursor. Request Parameters: | Parameter | Type | Description | |--||-| | pid | Integer | Metadata shard ID |"
}
] |
{
"category": "Runtime",
"file_name": "partition.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - GuestNumaId | int32 | | Cpus | Pointer to []int32 | | [optional] Distances | Pointer to | | [optional] MemoryZones | Pointer to []string | | [optional] SgxEpcSections | Pointer to []string | | [optional] PciSegments | Pointer to []int32 | | [optional] `func NewNumaConfig(guestNumaId int32, ) *NumaConfig` NewNumaConfig instantiates a new NumaConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewNumaConfigWithDefaults() *NumaConfig` NewNumaConfigWithDefaults instantiates a new NumaConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *NumaConfig) GetGuestNumaId() int32` GetGuestNumaId returns the GuestNumaId field if non-nil, zero value otherwise. `func (o NumaConfig) GetGuestNumaIdOk() (int32, bool)` GetGuestNumaIdOk returns a tuple with the GuestNumaId field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetGuestNumaId(v int32)` SetGuestNumaId sets GuestNumaId field to given value. `func (o *NumaConfig) GetCpus() []int32` GetCpus returns the Cpus field if non-nil, zero value otherwise. `func (o NumaConfig) GetCpusOk() ([]int32, bool)` GetCpusOk returns a tuple with the Cpus field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetCpus(v []int32)` SetCpus sets Cpus field to given value. `func (o *NumaConfig) HasCpus() bool` HasCpus returns a boolean if a field has been set. `func (o *NumaConfig) GetDistances() []NumaDistance` GetDistances returns the Distances field if non-nil, zero value otherwise. `func (o NumaConfig) GetDistancesOk() ([]NumaDistance, bool)` GetDistancesOk returns a tuple with the Distances field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetDistances(v []NumaDistance)` SetDistances sets Distances field to given value. `func (o *NumaConfig) HasDistances() bool` HasDistances returns a boolean if a field has been set. `func (o *NumaConfig) GetMemoryZones() []string` GetMemoryZones returns the MemoryZones field if non-nil, zero value otherwise. `func (o NumaConfig) GetMemoryZonesOk() ([]string, bool)` GetMemoryZonesOk returns a tuple with the MemoryZones field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetMemoryZones(v []string)` SetMemoryZones sets MemoryZones field to given value. `func (o *NumaConfig) HasMemoryZones() bool` HasMemoryZones returns a boolean if a field has been set. `func (o *NumaConfig) GetSgxEpcSections() []string` GetSgxEpcSections returns the SgxEpcSections field if non-nil, zero value otherwise. `func (o NumaConfig) GetSgxEpcSectionsOk() ([]string, bool)` GetSgxEpcSectionsOk returns a tuple with the SgxEpcSections field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetSgxEpcSections(v []string)` SetSgxEpcSections sets SgxEpcSections field to given value. `func (o *NumaConfig) HasSgxEpcSections() bool` HasSgxEpcSections returns a boolean if a field has been set. 
`func (o *NumaConfig) GetPciSegments() []int32` GetPciSegments returns the PciSegments field if non-nil, zero value otherwise. `func (o NumaConfig) GetPciSegmentsOk() ([]int32, bool)` GetPciSegmentsOk returns a tuple with the PciSegments field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *NumaConfig) SetPciSegments(v []int32)` SetPciSegments sets PciSegments field to given value. `func (o *NumaConfig) HasPciSegments() bool` HasPciSegments returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "NumaConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Fix race conditions in NetworkPolicyController. (, [@tnqn]) Ensure NO_FLOOD is always set for IPsec tunnel ports and TrafficControl ports. ( , [@xliuxu] [@tnqn]) Fix Service routes being deleted on Agent startup on Windows. (, [@hongliangl]) Fix Agent crash in dual-stack clusters when any Node is not configured with an IP address for each address family. (, [@hongliangl]) Fix route deletion for Service ClusterIP and LoadBalancerIP when AntreaProxy is enabled. (, [@tnqn]) Upgrade Antrea base image to ubuntu 22.04. (, [@antoninbas]) Add OFSwitch connection check to Agent's liveness probes. (, [@tnqn]) Improve installcnichaining to support updates to CNI config file. (, [@antoninbas]) Add a periodic job to rejoin dead Nodes to fix Egress not working properly after long network downtime. (, [@tnqn]) Fix connectivity issues caused by MAC address changes with systemd v242 and later. (, [@wenyingd]) Fix potential deadlocks and memory leaks of memberlist maintenance in large-scale clusters. (, [@wenyingd]) Fix Windows AddNodePort parameter error. (, [@XinShuYang]) Set no-flood config with ports for TrafficControl after Agent restarting. (, [@hongliangl]) Fix multicast group not removed from cache when it is uninstalled. (, [@wenyingd]) Remove redundant Openflow messages when syncing an updated group to OVS. (, [@hongliangl]) Fix Antrea Octant plugin build. (, [@antoninbas]) Fix FlowExporter memory bloat when export process is dead. (, [@wsquan171]) Fix Pod-to-external traffic on EKS in policyOnly mode. (, [@antoninbas]) Use uplink interface name for host interface internal port to support DHCP client. (, [@gran-vmv]) Add TrafficControl feature to control the transmission of Pod traffic; it allows users to mirror or redirect traffic originating from specific Pods or destined for specific Pods to a local network device or a remote destination via a tunnel of various types. ( , [@tnqn] [@hongliangl] [@wenqiq]) Refer to for more information about this feature. Refer to for more information about using this feature to provide network-based intrusion detection service to your Pods. Add support for the IPsec Certificate-based Authentication. (, [@xliuxu]) Add an Antrea Agent configuration option `ipsec.authenticationMode` to specify authentication mode. Supported options are \"psk\" (default) and \"cert\". Add an Antrea Controller configuration option `ipsecCSRSigner.autoApprove` to specify the auto-approve policy of Antrea CSR signer for IPsec certificates management. By default, Antrea will auto-approve the CertificateSingingRequest (CSR) if it is verified. Add an Antrea Controller configuration option `ipsecCSRSigner.selfSignedCA` to specify whether to use auto-generated self-signed CA certificate. By default, Antrea will auto-generate a self-signed CA certificate. Add the following capabilities to Antrea-native policies: Add support for matching ICMP traffic. (, [@GraysonWu]) Add support for matching multicast and IGMP traffic. (, [@liu4480]) Add support for rule-level statistics for multicast and IGMP traffic. (, [@ceclinux]) Add the following capabilities to the Multicast feature: Add `antctl get podmulticaststats` command to query Pod-level multicast traffic statistics in Agent mode. (, [@ceclinux]) Add \"MulticastGroup\" API to query Pods that have joined multicast groups; `kubectl get multicastgroups` can generate requests and output responses of the API. 
( , [@ceclinux]) Add an Antrea Agent configuration option `multicast.igmpQueryInterval` to specify the interval at which the antrea-agent sends IGMP queries to Pods. (, [@liu4480]) Add the following capabilities to the Multi-cluster feature: Add the Multi-cluster Gateway functionality which supports routing Multi-cluster Service traffic across clusters through tunnels between the Gateway Nodes. It enables Multi-cluster Service access across clusters, without requiring direct reachability of Pod IPs between"
},
{
"data": "( , [@luolanzone]) Add a number of `antctl mc` subcommands for bootstrapping Multi-cluster; refer to the for more information. (, [@hjiajing]) Add the following capabilities to secondary network IPAM: Add support for IPAM for Pod secondary networks managed by Multus. (, [@jianjuns]) Add support for multiple IPPools. (, [@jianjuns]) Add support for static addresses. (, [@jianjuns]) Add support for NodePortLocal on Windows. (, [@XinShuYang]) Add support for Traceflow on Windows. (, [@gran-vmv]) Add support for containerd to antrea-eks-node-init.yml. (, [@antoninbas]) Add an Antrea Agent configuration option `disableTXChecksumOffload` to support cases in which the datapath's TX checksum offloading does not work properly. (, [@tnqn]) Add support for InternalTrafficPolicy in AntreaProxy. (, [@hongliangl]) Add the following documentations: Add for the Antrea Agent RBAC permissions and how to restrict them using Gatekeeper/OPA. (, [@antoninbas]) Add for Antrea Multi-cluster. (, [@luolanzone] [@jianjuns]) Add for the AntreaProxy feature. (, [@antoninbas]) Add for secondary network IPAM. (, [@jianjuns]) Optimize generic traffic performance by reducing OVS packet recirculation. (, [@tnqn]) Optimize NodePort traffic performance by reducing OVS packet recirculation. (, [@hongliangl]) Improve validation for IPPool CRD. (, [@jianjuns]) Improve validation for `egress.to.namespaces.match` of AntreaClusterNetworkPolicy rules. (, [@qiyueyao]) Deprecate the Antrea Agent configuration option `multicastInterfaces` in favor of `multicast.multicastInterfaces`. (, [@tnqn]) Reduce permissions of Antrea Agent ServiceAccount. (, [@xliuxu]) Create a Secret in the Antrea manifest for the antctl and antrea-agent ServiceAccount as K8s v1.24 no longer creates a token for each ServiceAccount automatically. (, [@antoninbas]) Implement garbage collector for IP Pools to clean up allocations and reservations for which owner no longer exists. (, [@annakhm]) Preserve client IP if the selected Endpoint is local regardless of ExternalTrafficPolicy. (, [@hongliangl]) Add a Helm chart for Antrea and use the Helm templates to generate the standard Antrea YAML manifests. (, [@antoninbas]) Make \"Agent mode\" antctl work out-of-the-box on Windows. (, [@antoninbas]) Truncate SessionAffinity timeout values of Services instead of wrapping around. (, [@antoninbas]) Move Antrea Windows log dir from `C:\\k\\antrea\\logs\\` to `C:\\var\\log\\antrea\\`. (, [@GraysonWu]) Limit max number of data values displayed on Grafana panels. (, [@heanlan]) Support deploying ClickHouse with Persistent Volume. (, [@yanjunz97]) Remove support for ELK Flow Collector. (, [@heanlan]) Improve documentation for Antrea-native policies. (, [@Dyanngg]) Update OVS version to 2.17.0. (, [@antoninbas]) Fix Egress not working with kube-proxy IPVS strictARP mode. (, [@xliuxu]) Fix intra-Node Pod traffic bypassing Ingress NetworkPolicies in some scenarios. (, [@hongliangl]) Fix FQDN policy support for IPv6. (, [@tnqn]) Fix multicast not working if the AntreaPolicy feature is disabled. (, [@liu4480]) Fix tolerations for Pods running on control-plane for Kubernetes >= 1.24. (, [@xliuxu]) Fix DNS resolution error of antrea-agent on AKS by using `ClusterFirst` dnsPolicy. (, [@tnqn]) Clean up stale routes installed by AntreaProxy when ProxyAll is disabled. (, [@hongliangl]) Ensure that Service traffic does not bypass NetworkPolicies when ProxyAll is enabled on Windows. 
(, [@hongliangl]) Use IP and MAC to find virtual management adapter to fix Agent crash in some scenarios on Windows. (, [@wenyingd]) Fix handling of the \"reject\" packets generated by the Antrea Agent to avoid infinite looping. (, [@GraysonWu]) Fix export/import of Services with named ports when using the Antrea Multi-cluster feature. (, [@luolanzone]) Fix Multi-cluster importer not working after leader controller restarts. (, [@luolanzone]) Fix Endpoint ResourceExports not cleaned up after corresponding Service is deleted. (, [@luolanzone]) Fix pool CRD format in egress.md and service-loadbalancer.md. (, [@jianjuns]) Fix infinite looping when Agent tries to delete a non-existing route. (, [@hongliangl]) Fix race condition in ConntrackConnectionStore and FlowExporter. (, [@heanlan])"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.7.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "name: Feature request about: Suggest an idea for this project title: '' labels: type-feature assignees: '' Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Urgency Explain why the feature is important Additional context Add any other context or screenshots about the feature request here."
}
] |
{
"category": "Runtime",
"file_name": "feature_request.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Copyright (c) 2014-2015, Philip Hofer Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document describes how rkt compares to various other projects in the container ecosystem. * The Docker Engine is an application container runtime implemented as a central API daemon. Docker can resolve a \"\" name, such as `quay.io/coreos/etcd`, and download, execute, and monitor the application container. Functionally, this is all similar to rkt; however, along with \"Docker Images\", rkt can also download and run \"App Container Images\" (ACIs) specified by the (appc). Besides also supporting ACIs, rkt has a substantially different architecture that is designed with composability and security in mind. Prior to Docker version 1.11, the Docker Engine daemon downloaded container images, launched container processes, exposed a remote API, and acted as a log collection daemon, all in a centralized process running as root. While such a centralized architecture is convenient for deployment, it does not follow best practices for Unix process and privilege separation; further, it makes Docker difficult to properly integrate with Linux init systems such as upstart and systemd. Since version 1.11, the Docker daemon no longer handles the execution of containers itself. Instead, this is now handled by . More precisely, the Docker daemon prepares the image as an (OCI) bundle and makes an API call to containerd to start the OCI bundle. containerd then starts the container using . Since running a Docker container from the command line (i.e. using `docker run`) just talks to the Docker daemon API, which is in turn directly or indirectly via containerd responsible for creating the container, init systems are unable to directly track the life of the actual container process. rkt has no centralized \"init\" daemon, instead launching containers directly from client commands, making it compatible with init systems such as systemd, upstart, and others. A more detailed view to rkt's process model is shown in the . rkt uses standard Unix group permissions to allow privilege separation between different operations. Once the rkt data directory is correctly set up, container image downloads and signature verification can run as a non-privileged user. is a low-level container runtime and an implementation of the . runC exposes and expects a user to understand low-level details of the host operating system and configuration. It requires the user to separately download or cryptographically verify container images, and for \"higher level tools\" to prepare the container filesystem. runC does not have a centralized daemon, and, given a properly configured \"OCI bundle\", can be integrated with init systems such as upstart and systemd. rkt includes the same functionality as runC but does not expect a user to understand low-level details of the operating system to use, and can be invoked as simply as `rkt run coreos.com/etcd,version=v2.2.0`. It can download both \"\" and \"\". As rkt does not have a centralized daemon it can also be easily integrated with init systems such as upstart and systemd. is a daemon to control"
},
{
"data": "It has a command-line tool called `ctr` which is used to interact with the containerd daemon. This makes the containerd process model similar to that of the Docker process model, illustrated above. Unlike the Docker daemon it has a reduced feature set; not supporting image download, for example. rkt has no centralized daemon to manage containers, instead launching containers directly from client commands, making it compatible with init systems such as systemd, upstart, and others. LXC is a system container runtime designed to execute \"full system containers\", which generally consist of a full operating system image. An LXC process, in most common use cases, will boot a full Linux distribution such as Debian, Fedora, Arch, etc, and a user will interact with it similarly to how they would with a Virtual Machine image. LXC may also be used to run (but not download) application containers, but this use requires more understanding of low-level operating system details and is a less common practice. LXC can download \"full system container\" images from various public mirrors and cryptographically verify them. LXC does not have a central daemon and can integrate with init systems such as upstart and systemd. LXD is similar to LXC but is a REST API on top of liblxc which forks a monitor and container process. This ensures the LXD daemon is not a central point of failure and containers continue running in case of LXD daemon failure. All other details are nearly identical to LXC. rkt can download, cryptographically verify, and run application container images. It is not designed to run \"full system containers\" but instead individual applications such as web apps, databases, or caches. As rkt does not have a centralized daemon it can be integrated with init systems such as upstart and systemd. OpenVZ is a system container runtime designed to execute \"full system containers\" which are generally a full system image. An OpenVZ process, in most common use cases, will boot a full Linux Distro such as Debian, Fedora, Arch, etc and a user will interact with it similarly to a Virtual Machine image. OpenVZ can download \"full system container\" images from various public mirrors and cryptographically verify them. OpenVZ does not have a central daemon and can integrate with init systems such as upstart and systemd. rkt can download, cryptographically verify, and run application container images. It is not designed to run \"full system containers\" but instead individual applications such as web apps, databases, or caches. As rkt does not have a centralized daemon it can be integrated with init systems such as upstart and systemd. systemd-nspawn is a container runtime designed to execute a process inside of a Linux container. systemd-nspawn gets its name from \"namespace spawn\", which means it only handles process isolation and does not do resource isolation like memory, CPU,"
},
{
"data": "systemd-nspawn can run an application container or system container but does not, by itself, download or verify images. systemd-nspawn does not have a centralized daemon and can be integrated with init systems such as upstart and systemd. rkt can download, cryptographically verify, and run application container images. It is not designed to run \"full system containers\", but instead individual applications such as web apps, databases, or caches. As rkt does not have a centralized daemon it can be integrated with init systems such as upstart and systemd. By default rkt uses systemd-nspawn to configure the namespaces for an application container. machinectl is a system manager that can be used to query and control the state of registered systems on a systemd host. These systems may be registered Virtual Machines, systemd-nspawn containers, or other runtimes that register with the systemd registration manager, systemd-machined. Among many other things, machinectl can download, cryptographically verify, extract and trigger to run a systemd-nspawn container off the extracted image content. By default these images are expected to be \"full system containers\", as systemd-nspawn is passed the --boot argument. On systemd hosts, rkt will integrate with systemd-machined in much the same way that machinectl containers will: any pods created by rkt will be registered as machines on the host and can be interacted with using machinectl commands. However, in being more oriented towards applications, rkt abstracts the pod lifecycle away from the user. rkt also provides a more configurable and advanced workflow around discovering, downloading and verifying images, as well as supporting more image types. Unlike machinectl, rkt execs systemd-nspawn directly instead of creating a systemd service, allowing it to integrate cleanly with any supervisor system. Furthermore, in addition to namespace isolation, rkt can set up various other kinds of isolation (e.g. resources) defined in the appc specification. qemu-kvm and lkvm are userspace tools that execute a full system image inside of a Virtual Machine using the . A system image will commonly include a boot loader, kernel, root filesystem and be pre-installed with applications to run on boot. Most commonly qemu-kvm is used for IaaS systems such as OpenStack, Eucalyptus, etc. The Linux KVM infrastructure is trusted for running multi-tenanted virtual machine infrastructures and is generally accepted as being secure enough to run untrusted system images. qemu-kvm and lkvm do not have a centralized daemon and can be integrated with init systems such as upstart and systemd. rkt can download, cryptographically verify, and run application container images. It is not designed to run \"full system images\" but instead individual applications such as web apps, databases, or caches. As rkt does not have a centralized daemon it can be integrated with init systems such as upstart and systemd. rkt can optionally use lkvm or qemu-kvm as an additional security measure over a Linux container, at a slight cost to performance and flexibility; this feature can be configured using the ."
}
] |
{
"category": "Runtime",
"file_name": "rkt-vs-other-projects.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "If you want to write an option parser, and have it be good, there are two ways to do it. The Right Way, and the Wrong Way. The Wrong Way is to sit down and write an option parser. We've all done that. The Right Way is to write some complex configurable program with so many options that you go half-insane just trying to manage them all, and put it off with duct-tape solutions until you see exactly to the core of the problem, and finally snap and write an awesome option parser. If you want to write an option parser, don't write an option parser. Write a package manager, or a source control system, or a service restarter, or an operating system. You probably won't end up with a good one of those, but if you don't give up, and you are relentless and diligent enough in your procrastination, you may just end up with a very nice option parser. // my-program.js var nopt = require(\"nopt\") , Stream = require(\"stream\").Stream , path = require(\"path\") , knownOpts = { \"foo\" : [String, null] , \"bar\" : [Stream, Number] , \"baz\" : path , \"bloo\" : [ \"big\", \"medium\", \"small\" ] , \"flag\" : Boolean , \"pick\" : Boolean , \"many\" : [String, Array] } , shortHands = { \"foofoo\" : [\"--foo\", \"Mr. Foo\"] , \"b7\" : [\"--bar\", \"7\"] , \"m\" : [\"--bloo\", \"medium\"] , \"p\" : [\"--pick\"] , \"f\" : [\"--flag\"] } // everything is optional. // knownOpts and shorthands default to {} // arg list defaults to process.argv // slice defaults to 2 , parsed = nopt(knownOpts, shortHands, process.argv, 2) console.log(parsed) This would give you support for any of the following: ```bash $ node my-program.js --foo \"blerp\" --no-flag { \"foo\" : \"blerp\", \"flag\" : false } $ node my-program.js bar 7 --foo \"Mr. Hand\" --flag { bar: 7, foo: \"Mr. Hand\", flag: true } $ node my-program.js --foo \"blerp\" -f --p { foo: \"blerp\", flag: true, pick: true } $ node my-program.js -fp --foofoo { foo: \"Mr. Foo\", flag: true, pick: true } $ node my-program.js --foofoo -- -fp # -- stops the flag parsing. { foo: \"Mr. Foo\", argv: { remain: [\"-fp\"] } } $ node my-program.js --blatzk 1000 -fp # unknown opts are ok. { blatzk: 1000, flag: true, pick: true } $ node my-program.js --blatzk true -fp # but they need a value { blatzk: true, flag: true, pick: true } $ node my-program.js --no-blatzk -fp # unless they start with \"no-\" { blatzk: false, flag: true, pick: true } $ node my-program.js --baz b/a/z # known paths are resolved. { baz: \"/Users/isaacs/b/a/z\" } $ node my-program.js --many 1 --many null --many foo { many: [\"1\", \"null\", \"foo\"] } $ node my-program.js --many foo { many: [\"foo\"] } ``` Read the tests at the bottom of `lib/nopt.js` for more examples of what this puppy can do. The following types are supported, and defined on `nopt.typeDefs` String: A normal string. No parsing is done. path: A file system path. Gets resolved against cwd if not absolute. url: A url. If it doesn't parse, it isn't accepted. Number: Must be numeric. Date: Must parse as a date. If it does, and `Date` is one of the options, then it will return a Date object, not a string. Boolean: Must be either `true` or `false`. If an option is a boolean, then it does not need a value, and its presence will imply `true` as the value. To negate boolean flags, do `--no-whatever` or `--whatever false` NaN: Means that the option is strictly not"
},
{
"data": "Any value will fail. Stream: An object matching the \"Stream\" class in node. Valuable for use when validating programmatically. (npm uses this to let you supply any WriteStream on the `outfd` and `logfd` config options.) Array: If `Array` is specified as one of the types, then the value will be parsed as a list of options. This means that multiple values can be specified, and that the value will always be an array. If a type is an array of values not on this list, then those are considered valid values. For instance, in the example above, the `--bloo` option can only be one of `\"big\"`, `\"medium\"`, or `\"small\"`, and any other value will be rejected. When parsing unknown fields, `\"true\"`, `\"false\"`, and `\"null\"` will be interpreted as their JavaScript equivalents, and numeric values will be interpreted as a number. You can also mix types and values, or multiple types, in a list. For instance `{ blah: [Number, null] }` would allow a value to be set to either a Number or null. When types are ordered, this implies a preference, and the first type that can be used to properly interpret the value will be used. To define a new type, add it to `nopt.typeDefs`. Each item in that hash is an object with a `type` member and a `validate` method. The `type` member is an object that matches what goes in the type list. The `validate` method is a function that gets called with `validate(data, key, val)`. Validate methods should assign `data[key]` to the valid value of `val` if it can be handled properly, or return boolean `false` if it cannot. You can also call `nopt.clean(data, types, typeDefs)` to clean up a config object and remove its invalid properties. By default, nopt outputs a warning to standard error when invalid options are found. You can change this behavior by assigning a method to `nopt.invalidHandler`. This method will be called with the offending `nopt.invalidHandler(key, val, types)`. If no `nopt.invalidHandler` is assigned, then it will console.error its whining. If it is assigned to boolean `false` then the warning is suppressed. Yes, they are supported. If you define options like this: ```javascript { \"foolhardyelephants\" : Boolean , \"pileofmonkeys\" : Boolean } ``` Then this will work: ```bash node program.js --foolhar --pil node program.js --no-f --pileofmon ``` Shorthands are a hash of shorter option names to a snippet of args that they expand to. If multiple one-character shorthands are all combined, and the combination does not unambiguously match any other option or shorthand, then they will be broken up into their constituent parts. For example: ```json { \"s\" : [\"--loglevel\", \"silent\"] , \"g\" : \"--global\" , \"f\" : \"--force\" , \"p\" : \"--parseable\" , \"l\" : \"--long\" } ``` ```bash npm ls -sgflp npm ls --loglevel silent --global --force --long --parseable ``` The config object returned by nopt is given a special member called `argv`, which is an object with the following fields: `remain`: The remaining args after all the parsing has occurred. `original`: The args as they originally appeared. `cooked`: The args after flags and shorthands are expanded. Node programs are called with more or less the exact argv as it appears in C land, after the v8 and node-specific options have been plucked off. As such, `argv[0]` is always `node` and `argv[1]` is always the JavaScript program being run. That's usually not very useful to you. So they're sliced off by default. 
If you want them, then you can pass in `0` as the last argument, or any other number that you'd like to slice off the start of the list."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Ceph OSD Management Ceph Object Storage Daemons (OSDs) are the heart and soul of the Ceph storage platform. Each OSD manages a local device and together they provide the distributed storage. Rook will automate creation and management of OSDs to hide the complexity based on the desired state in the CephCluster CR as much as possible. This guide will walk through some of the scenarios to configure OSDs where more configuration may be required. The provides a simple environment to run Ceph tools. The `ceph` commands mentioned in this document should be run from the toolbox. Once the is created, connect to the pod to execute the `ceph` commands to analyze the health of the cluster, in particular the OSDs and placement groups (PGs). Some common commands to analyze OSDs include: ```console ceph status ceph osd tree ceph osd status ceph osd df ceph osd utilization ``` ```console kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l \"app=rook-ceph-tools\" -o jsonpath='{.items[0].metadata.name}') bash ``` The will provide the basic steps to create a cluster and start some OSDs. For more details on the OSD settings also see the documentation. If you are not seeing OSDs created, see the . To add more OSDs, Rook will automatically watch for new nodes and devices being added to your cluster. If they match the filters or other settings in the `storage` section of the cluster CR, the operator will create new OSDs. In more dynamic environments where storage can be dynamically provisioned with a raw block storage provider, the OSDs can be backed by PVCs. See the `storageClassDeviceSets` documentation in the topic. To add more OSDs, you can either increase the `count` of the OSDs in an existing device set or you can add more device sets to the cluster CR. The operator will then automatically create new OSDs according to the updated cluster CR. To remove an OSD due to a failed disk or other re-configuration, consider the following to ensure the health of the data through the removal process: Confirm you will have enough space on your cluster after removing your OSDs to properly handle the deletion Confirm the remaining OSDs and their placement groups (PGs) are healthy in order to handle the rebalancing of the data Do not remove too many OSDs at once Wait for rebalancing between removing multiple OSDs If all the PGs are `active+clean` and there are no warnings about being low on space, this means the data is fully replicated and it is safe to proceed. If an OSD is failing, the PGs will not be perfectly clean and you will need to proceed anyway. Update your CephCluster"
},
{
"data": "Depending on your CR settings, you may need to remove the device from the list or update the device filter. If you are using `useAllDevices: true`, no change to the CR is necessary. !!! important On host-based clusters, you may need to stop the Rook Operator while performing OSD removal steps in order to prevent Rook from detecting the old OSD and trying to re-create it before the disk is wiped or removed. To stop the Rook Operator, run: ```console kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0 ``` You must perform steps below to (1) purge the OSD and either (2.a) delete the underlying data or (2.b)replace the disk before starting the Rook Operator again. Once you have done that, you can start the Rook operator again with: ```console kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1 ``` To reduce the storage in your cluster or remove a failed OSD on a PVC: Shrink the number of OSDs in the `storageClassDeviceSets` in the CephCluster CR. If you have multiple device sets, you may need to change the index of `0` in this example path. `kubectl -n rook-ceph patch CephCluster rook-ceph --type=json -p '[{\"op\": \"replace\", \"path\": \"/spec/storage/storageClassDeviceSets/0/count\", \"value\":<desired number>}]'` Reduce the `count` of the OSDs to the desired number. Rook will not take any action to automatically remove the extra OSD(s). Identify the PVC that belongs to the OSD that is failed or otherwise being removed. `kubectl -n rook-ceph get pvc -l ceph.rook.io/DeviceSet=<deviceSet>` Identify the OSD you desire to remove. The OSD assigned to the PVC can be found in the labels on the PVC `kubectl -n rook-ceph get pod -l ceph.rook.io/pvc=<orphaned-pvc> -o yaml | grep ceph-osd-id` For example, this might return: `ceph-osd-id: \"0\"` Remember the OSD ID for purging the OSD below If you later increase the count in the device set, note that the operator will create PVCs with the highest index that is not currently in use by existing OSD PVCs. If you want to remove an unhealthy OSD, the osd pod may be in an error state such as `CrashLoopBackoff` or the `ceph` commands in the toolbox may show which OSD is `down`. If you want to remove a healthy OSD, you should run the following commands: ```console $ kubectl -n rook-ceph scale deployment rook-ceph-osd-<ID> --replicas=0 $ ceph osd down osd.<ID> ``` !!! note The `rook-ceph` kubectl plugin must be ```bash kubectl rook-ceph rook purge-osd 0 --force ``` OSD removal can be automated with the example found in the . In the osd-purge.yaml, change the `<OSD-IDs>` to the ID(s) of the OSDs you want to remove. Run the job: `kubectl create -f"
},
{
"data": "When the job is completed, review the logs to ensure success: `kubectl -n rook-ceph logs -l app=rook-ceph-purge-osd` When finished, you can delete the job: `kubectl delete -f osd-purge.yaml` If you want to remove OSDs by hand, continue with the following sections. However, we recommend you use the above-mentioned steps to avoid operation errors. If the OSD purge job fails or you need fine-grained control of the removal, here are the individual commands that can be run from the toolbox. Detach the OSD PVC from Rook `kubectl -n rook-ceph label pvc <orphaned-pvc> ceph.rook.io/DeviceSetPVCId-` Mark the OSD as `out` if not already marked as such by Ceph. This signals Ceph to start moving (backfilling) the data that was on that OSD to another OSD. `ceph osd out osd.<ID>` (for example if the OSD ID is 23 this would be `ceph osd out osd.23`) Wait for the data to finish backfilling to other OSDs. `ceph status` will indicate the backfilling is done when all of the PGs are `active+clean`. If desired, it's safe to remove the disk after that. Remove the OSD from the Ceph cluster `ceph osd purge <ID> --yes-i-really-mean-it` Verify the OSD is removed from the node in the CRUSH map `ceph osd tree` The operator can automatically remove OSD deployments that are considered \"safe-to-destroy\" by Ceph. After the steps above, the OSD will be considered safe to remove since the data has all been moved to other OSDs. But this will only be done automatically by the operator if you have this setting in the cluster CR: ```yaml removeOSDsIfOutAndSafeToRemove: true ``` Otherwise, you will need to delete the deployment directly: ```console kubectl delete deployment -n rook-ceph rook-ceph-osd-<ID> ``` In PVC-based cluster, remove the orphaned PVC, if necessary. If you want to clean the device where the OSD was running, see in the instructions to wipe a disk on the topic. To replace a disk that has failed: Run the steps in the previous section to . Replace the physical device and verify the new device is attached. Check if your cluster CR will find the new device. If you are using `useAllDevices: true` you can skip this step. If your cluster CR lists individual devices or uses a device filter you may need to update the CR. The operator ideally will automatically create the new OSD within a few minutes of adding the new device or updating the CR. If you don't see a new OSD automatically created, restart the operator (by deleting the operator pod) to trigger the OSD creation. Verify if the OSD is created on the node by running `ceph osd tree` from the toolbox. !!! note The OSD might have a different ID than the previous OSD that was replaced."
}
] |
{
"category": "Runtime",
"file_name": "ceph-osd-mgmt.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
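Before removing an OSD as described above, it can help to script the health checks from the toolbox. This is only a sketch: the OSD ID is a placeholder, and `ceph osd safe-to-destroy` is used as an extra guard where your Ceph version supports it.

```bash
# Run from the rook-ceph toolbox pod before removing an OSD.
OSD_ID=0   # placeholder: the OSD ID identified earlier
ceph status
ceph osd tree
ceph pg stat                                   # all PGs should be active+clean
ceph osd safe-to-destroy "osd.${OSD_ID}" && echo "osd.${OSD_ID} reports safe-to-destroy"
```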
[
{
"data": "There are two different ways processes in the virtual machine can communicate with processes in the host. The first one is by using serial ports, where the processes in the virtual machine can read/write data from/to a serial port device and the processes in the host can read/write data from/to a Unix socket. Most GNU/Linux distributions have support for serial ports, making it the most portable solution. However, the serial link limits read/write access to one process at a time. A newer, simpler method is , which can accept connections from multiple clients. The following diagram shows how it's implemented in Kata Containers. ``` .-. | .. | | | .--. .--. | | | | |cont1| |cont2| | | | | `--' `--' | | | | | | | | | | .. | | | | | agent | | | | | `' | | | | | | | | | | POD .-. | | | `--| vsock |-' | | `-' | | | | | | .. .. | | | shim | | shim | | | `' `' | | Host | `-' ``` The host Linux kernel version must be greater than or equal to v4.8, and the `vhostvsock` module must be loaded or built-in (`CONFIGVHOST_VSOCK=y`). To load the module run the following command: ``` $ sudo modprobe -i vhost_vsock ``` The Kata Containers version must be greater than or equal to 1.2.0 and `use_vsock` must be set to `true` in the runtime . To use Kata Containers with VSOCKs in a VMWare guest environment, first stop the `vmware-tools` service and unload the VMWare Linux kernel module. ``` sudo systemctl stop vmware-tools sudo modprobe -r vmwvsockvmci_transport sudo modprobe -i vhost_vsock ``` Using a proxy for multiplexing the connections between the VM and the host uses 4.5MB per . In a high density deployment this could add up to GBs of memory that could have been used to host more PODs. When we talk about density each kilobyte matters and it might be the decisive factor between run another POD or not. Before making the decision not to use VSOCKs, you should ask yourself, how many more containers can run with the memory RAM consumed by the Kata proxies? Since communication via VSOCKs is direct, the only way to lose communication with the containers is if the VM itself or the `containerd-shim-kata-v2` dies, if this happens the containers are removed automatically."
}
] |
{
"category": "Runtime",
"file_name": "VSocks.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
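A quick host-side check of the VSOCK prerequisites listed above (host kernel >= 4.8 and the `vhost_vsock` module). The configuration file path is an assumption and varies by distribution and Kata version.

```bash
# Verify the host kernel is new enough and vhost_vsock is available.
uname -r                                    # expect >= 4.8
lsmod | grep vhost_vsock || sudo modprobe -i vhost_vsock
ls -l /dev/vhost-vsock                      # device node exposed by the module
# Then set use_vsock = true in the Kata runtime configuration
# (e.g. /etc/kata-containers/configuration.toml; path and section may differ per version).
grep -n "use_vsock" /etc/kata-containers/configuration.toml
```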
[
{
"data": "For debugging or inspection you may want to extract the PodManifest to stdout. ``` { \"acVersion\":\"0.8.11\", \"acKind\":\"PodManifest\" ... ``` | Flag | Default | Options | Description | | | | | | | `--pretty-print` | `true` | `true` or `false` | Apply indent to format the output | See the table with ."
}
] |
{
"category": "Runtime",
"file_name": "cat-manifest.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
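A minimal sketch of extracting a pod's manifest, assuming a pod UUID obtained from `rkt list`; the flag is the one documented above.

```bash
rkt list                                   # find the pod UUID
UUID=0123abcd                              # placeholder: substitute a real UUID from `rkt list`
rkt cat-manifest --pretty-print=true "$UUID"
```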
[
{
"data": "If you are using the generator you can create a completion command by running ```bash cobra add completion ``` Update the help text show how to install the bash_completion Linux show here Writing the shell script to stdout allows the most flexible use. ```go // completionCmd represents the completion command var completionCmd = &cobra.Command{ Use: \"completion\", Short: \"Generates bash completion scripts\", Long: `To load completion run . <(bitbucket completion) To configure your bash shell to load completions for each session add to your bashrc . <(bitbucket completion) `, Run: func(cmd *cobra.Command, args []string) { rootCmd.GenBashCompletion(os.Stdout); }, } ``` Note: The cobra generator may include messages printed to stdout for example if the config file is loaded, this will break the auto complete script Generating bash completions from a cobra command is incredibly easy. An actual program which does so for the kubernetes kubectl binary is as follows: ```go package main import ( \"io/ioutil\" \"os\" \"k8s.io/kubernetes/pkg/kubectl/cmd\" \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" ) func main() { kubectl := cmd.NewKubectlCommand(util.NewFactory(nil), os.Stdin, ioutil.Discard, ioutil.Discard) kubectl.GenBashCompletionFile(\"out.sh\") } ``` `out.sh` will get you completions of subcommands and flags. Copy it to `/etc/bash_completion.d/` as described and reset your terminal to use autocompletion. If you make additional annotations to your code, you can get even more intelligent and flexible behavior. This method allows you to provide a pre-defined list of completion choices for your nouns using the `validArgs` field. For example, if you want `kubectl get ` to show a list of valid \"nouns\" you have to set them. Simplified code from `kubectl get` looks like: ```go validArgs []string = { \"pod\", \"node\", \"service\", \"replicationcontroller\" } cmd := &cobra.Command{ Use: \"get [(-o|--output=)json|yaml|template|...] (RESOURCE [NAME] | RESOURCE/NAME ...)\", Short: \"Display one or many resources\", Long: get_long, Example: get_example, Run: func(cmd *cobra.Command, args []string) { err := RunGet(f, out, cmd, args) util.CheckErr(err) }, ValidArgs: validArgs, } ``` Notice we put the \"ValidArgs\" on the \"get\" subcommand. Doing so will give results like ```bash node pod replicationcontroller service ``` If your nouns have a number of aliases, you can define them alongside `ValidArgs` using `ArgAliases`: ```go argAliases []string = { \"pods\", \"nodes\", \"services\", \"svc\", \"replicationcontrollers\", \"rc\" } cmd := &cobra.Command{ ... ValidArgs: validArgs, ArgAliases: argAliases } ``` The aliases are not shown to the user on tab completion, but they are accepted as valid nouns by the completion algorithm if entered manually, e.g. in: ```bash backend frontend database ``` Note that without declaring `rc` as an alias, the completion algorithm would show the list of nouns in this example again instead of the replication controllers. In some cases it is not possible to provide a list of possible completions in advance. Instead, the list of completions must be determined at execution-time. Cobra provides two ways of defining such dynamic completion of"
},
{
"data": "Note that both these methods can be used along-side each other as long as they are not both used for the same command. Note: Custom Completions written in Go will automatically work for other shell-completion scripts (e.g., Fish shell), while Custom Completions written in Bash will only work for Bash shell-completion. It is therefore recommended to use Custom Completions written in Go. In a similar fashion as for static completions, you can use the `ValidArgsFunction` field to provide a Go function that Cobra will execute when it needs the list of completion choices for the nouns of a command. Note that either `ValidArgs` or `ValidArgsFunction` can be used for a single cobra command, but not both. Simplified code from `helm status` looks like: ```go cmd := &cobra.Command{ Use: \"status RELEASE_NAME\", Short: \"Display the status of the named release\", Long: status_long, RunE: func(cmd *cobra.Command, args []string) { RunGet(args[0]) }, ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { if len(args) != 0 { return nil, cobra.ShellCompDirectiveNoFileComp } return getReleasesFromCluster(toComplete), cobra.ShellCompDirectiveNoFileComp }, } ``` Where `getReleasesFromCluster()` is a Go function that obtains the list of current Helm releases running on the Kubernetes cluster. Notice we put the `ValidArgsFunction` on the `status` subcommand. Let's assume the Helm releases on the cluster are: `harbor`, `notary`, `rook` and `thanos` then this dynamic completion will give results like ```bash harbor notary rook thanos ``` You may have noticed the use of `cobra.ShellCompDirective`. These directives are bit fields allowing to control some shell completion behaviors for your particular completion. You can combine them with the bit-or operator such as `cobra.ShellCompDirectiveNoSpace | cobra.ShellCompDirectiveNoFileComp` ```go // Indicates an error occurred and completions should be ignored. ShellCompDirectiveError // Indicates that the shell should not add a space after the completion, // even if there is a single completion provided. ShellCompDirectiveNoSpace // Indicates that the shell should not provide file completion even when // no completion is provided. // This currently does not work for zsh or bash < 4 ShellCompDirectiveNoFileComp // Indicates that the shell will perform its default behavior after completions // have been provided (this implies !ShellCompDirectiveNoSpace && !ShellCompDirectiveNoFileComp). ShellCompDirectiveDefault ``` When using the `ValidArgsFunction`, Cobra will call your registered function after having parsed all flags and arguments provided in the command-line. You therefore don't need to do this parsing yourself. For example, when a user calls `helm status --namespace my-rook-ns `, Cobra will call your registered `ValidArgsFunction` after having parsed the `--namespace` flag, as it would have done when calling the `RunE` function. Cobra achieves dynamic completions written in Go through the use of a hidden command called by the completion script. To debug your Go completion code, you can call this hidden command directly: ```bash harbor :4 Completion ended with directive:"
},
{
"data": "# This is on stderr ``` *Important:* If the noun to complete is empty, you must pass an empty parameter to the `complete` command: ```bash harbor notary rook thanos :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` Calling the `complete` command directly allows you to run the Go debugger to troubleshoot your code. You can also add printouts to your code; Cobra provides the following functions to use for printouts in Go completion code: ```go // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and optionally prints to stderr. cobra.CompDebug(msg string, printToStdErr bool) { cobra.CompDebugln(msg string, printToStdErr bool) // Prints to the completion script debug file (if BASHCOMPDEBUG_FILE // is set to a file path) and to stderr. cobra.CompError(msg string) cobra.CompErrorln(msg string) ``` *Important: You should not* leave traces that print to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned above. This method allows you to inject bash functions into the completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some more actual code that works in kubernetes: ```bash const ( bashcompletionfunc = `kubectlparseget() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } kubectlgetresource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi kubectlparseget ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } kubectlcustomfunc() { case ${last_command} in kubectlget | kubectldescribe | kubectldelete | kubectlstop) kubectlgetresource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bashcompletionfunc, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `kubectl_custom_func()` (`<command-use>customfunc()`) to be called when the built in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod ` the `kubectl_customc_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `kubectlcustomfunc()` will see that the cobra.Command is \"kubectlget\" and will thus call another helper `kubectlgetresource()`. `kubectlgetresource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `kubectlparseget pod`. `kubectlparse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! Most of the time completions will only show subcommands. But if a flag is required to make a subcommand work, you probably want it to show up when the user types . Marking a flag as 'Required' is incredibly"
},
{
"data": "```go cmd.MarkFlagRequired(\"pod\") cmd.MarkFlagRequired(\"container\") ``` and you'll get something like ```bash -c --container= -p --pod= ``` In this example we use --filename= and expect to get a json or yaml file as the argument. To make this easier we annotate the --filename flag with valid filename extensions. ```go annotations := []string{\"json\", \"yaml\", \"yml\"} annotation := make(mapstring) annotation[cobra.BashCompFilenameExt] = annotations flag := &pflag.Flag{ Name: \"filename\", Shorthand: \"f\", Usage: usage, Value: value, DefValue: value.String(), Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` Now when you run a command with this filename flag you'll get something like ```bash test/ example/ rpmbuild/ hello.yml test.json ``` So while there are many other files in the CWD it only shows me subdirs and those with valid extensions. As for nouns, Cobra provides two ways of defining dynamic completion of flags. Note that both these methods can be used along-side each other as long as they are not both used for the same flag. Note: Custom Completions written in Go will automatically work for other shell-completion scripts (e.g., Fish shell), while Custom Completions written in Bash will only work for Bash shell-completion. It is therefore recommended to use Custom Completions written in Go. To provide a Go function that Cobra will execute when it needs the list of completion choices for a flag, you must register the function in the following manner: ```go flagName := \"output\" cmd.RegisterFlagCompletionFunc(flagName, func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { return []string{\"json\", \"table\", \"yaml\"}, cobra.ShellCompDirectiveDefault }) ``` Notice that calling `RegisterFlagCompletionFunc()` is done through the `command` with which the flag is associated. In our example this dynamic completion will give results like so: ```bash json table yaml ``` You can also easily debug your Go completion code for flags: ```bash json table yaml :4 Completion ended with directive: ShellCompDirectiveNoFileComp # This is on stderr ``` *Important: You should not* leave traces that print to stdout in your completion code as they will be interpreted as completion choices by the completion script. Instead, use the cobra-provided debugging traces functions mentioned in the above section. Alternatively, you can use bash code for flag custom completion. Similar to the filename completion and filtering using `cobra.BashCompFilenameExt`, you can specify a custom flag completion bash function with `cobra.BashCompCustom`: ```go annotation := make(mapstring) annotation[cobra.BashCompCustom] = []string{\"kubectlgetnamespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition add the `kubectlgetnamespaces` implementation in the `BashCompletionFunction` value, e.g.: ```bash kubectlgetnamespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out}[*]\" -- \"$cur\" ) ) fi } ``` You can also configure the `bash aliases` for the commands and they will also support completions. ```bash alias aliasname=origcommand complete -o default -F start_origcommand aliasname $) aliasname <tab><tab> completion firstcommand secondcommand ```"
}
] |
{
"category": "Runtime",
"file_name": "bash_completions.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
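To see the dynamic-completion machinery described above in action, you can call Cobra's hidden `__complete` command directly (this is the command the generated completion scripts invoke). The binary name `helm` simply mirrors the examples in the text, and the debug file path is arbitrary.

```bash
# Debug Go-based completions by invoking the hidden __complete command directly.
# An empty "" argument must be passed when the noun to complete is empty.
helm __complete status ""
helm __complete status har
# Send completion-script debug traces to a file (read by cobra.CompDebug*).
export BASH_COMP_DEBUG_FILE=/tmp/cobra-comp-debug.log
```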
[
{
"data": "Firecracker is tightly coupled with the guest and host kernels on which it is run. This document presents our kernel support policy which aims to help our customers choose host and guest OS configuration, and predict future kernel related changes. We are continuously validating the currently supported Firecracker releases (as per ) using a combination of: host linux kernel versions 4.14, 5.10, and 6.1; guest linux kernel versions 4.14, and 5.10. While other versions and other kernel configs might work, they are not periodically validated in our test suite, and using them might result in unexpected behaviour. Starting with release `v1.0` each major and minor release will specify the supported kernel versions. Once a kernel version is officially enabled, it is supported for a minimum of 2 years. Adding support for a new kernel version will result in a Firecracker release only if compatibility changes are required. | Host kernel | Guest kernel v4.14 | Guest kernel v5.10 | Min. end of support | | -: | :-: | :-: | : | | v4.14 | Y | Y | 2021-01-22 | | v5.10 | Y | Y | 2024-01-31 | | v6.1 | Y | Y | 2025-10-12 | The guest kernel configs used in our validation pipelines can be found while a breakdown of the relevant guest kernel modules can be found in the next section. The configuration items that may be relevant for Firecracker are: serial console - `CONFIGSERIAL8250CONSOLE`, `CONFIGPRINTK` initrd support - `CONFIGBLKDEV_INITRD` virtio devices - `CONFIGVIRTIOMMIO` balloon - `CONFIGMEMORYBALLOON`, `CONFIGVIRTIOBALLOON` block - `CONFIGVIRTIOBLK` partuuid support - `CONFIGMSDOSPARTITION` network - `CONFIGVIRTIONET` vsock - `CONFIGVIRTIOVSOCKETS` entropy - `CONFIGHWRANDOM_VIRTIO` guest RNG - `CONFIGRANDOMTRUST_CPU` use CPU RNG instructions (if present) to initialize RNG. Available for >="
},
{
"data": "ACPI support - `CONFIGACPI` and `CONFIGPCI` There are also guest config options which are dependant on the platform on which Firecracker is run: timekeeping - `CONFIGARMAMBA`, `CONFIGRTCDRV_PL031` serial console - `CONFIGSERIALOF_PLATFORM` timekeeping - `CONFIGKVMGUEST` (which enables CONFIGKVMCLOCK) high precision timekeeping - `CONFIGPTP1588_CLOCK`, `CONFIGPTP1588CLOCKKVM` external clean shutdown - `CONFIGSERIOI8042`, `CONFIGKEYBOARDATKBD` virtio devices - `CONFIGVIRTIOMMIOCMDLINEDEVICES` Depending on the source of boot (either from a block device or from an initrd), the minimal configuration for a guest kernel for a successful microVM boot is: Booting with initrd: `CONFIGBLKDEV_INITRD=y` aarch64 `CONFIGVIRTIOMMIO=y` (for the serial device). x8664 `CONFIGKVM_GUEST=y`. Booting with root block device: aarch64 `CONFIGVIRTIOBLK=y` x86_64 `CONFIGVIRTIOBLK=y` `CONFIG_ACPI=y` `CONFIG_PCI=y` `CONFIGKVMGUEST=y`. Optional: To enable boot logs set `CONFIGSERIAL8250_CONSOLE=y` and `CONFIG_PRINTK=y` in the guest kernel config. Firecracker supports booting kernels with ACPI support. The relevant configurations for the guest kernel are: `CONFIG_ACPI=y` `CONFIG_PCI=y` Please note that Firecracker does not support PCI devices. The `CONFIG_PCI` option is needed for ACPI initialization inside the guest. ACPI supersedes the legacy way of booting a microVM, i.e. via MPTable and command line parameters for VirtIO devices. We suggest that users disable MPTable and passing VirtIO devices via kernel command line parameters. These boot mechanisms are now deprecated. Users can disable these features by disabling the corresponding guest kernel configuration parameters: `CONFIGX86MPPARSE=n` `CONFIGVIRTIOMMIOCMDLINEDEVICES=n` During the deprecation period Firecracker will continue to support the legacy way of booting a microVM. Firecracker will be able to boot kernels with the following configurations: Only ACPI Only legacy mechanisms Both ACPI and legacy mechanisms When using a 4.14 host and a 5.10 guest, we disable the SVE extension in the guest. This is due to the introduction of the SVE extension in Graviton3, which causes the default 5.10 guest (with SVE support enabled), to crash if run with a 4.14 host which does not support SVE. When booting with kernels that support both ACPI and legacy boot mechanisms Firecracker passes VirtIO devices to the guest twice, once through ACPI and a second time via kernel command line parameters. In these cases, the guest tries to initialize devices twice. The second time, initialization fails and the guest will emit warning messages in `dmesg`, however the devices will work correctly."
}
] |
{
"category": "Runtime",
"file_name": "kernel-policy.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
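A quick way to sanity-check a guest kernel config against the options listed above. The config path is an assumption; use your kernel build's `.config` (or `/boot/config-$(uname -r)` inside the guest).

```bash
# Sketch: verify a guest kernel config enables options Firecracker relies on.
CONFIG=./guest-kernel/.config        # assumption: path to the guest kernel config
for opt in CONFIG_VIRTIO_MMIO CONFIG_VIRTIO_BLK CONFIG_VIRTIO_NET \
           CONFIG_VIRTIO_VSOCKETS CONFIG_SERIAL_8250_CONSOLE CONFIG_BLK_DEV_INITRD; do
  grep -E "^${opt}=y" "$CONFIG" || echo "missing or disabled: ${opt}"
done
```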
[
{
"data": "title: Allocating IPs in a Specific Range menu_order: 10 search_type: Documentation The default configurations for both Weave Net and Docker use [Private Networks](https://en.wikipedia.org/wiki/Private_network), whose addresses are never found on the public Internet, and subsequently reduces the chance of IP overlap. However, it could be that you or your hosting provider are using some of these private addresses in the same range, which will cause a clash. If after `weave launch`, the following error message appears: Network 10.32.0.0/12 overlaps with existing route 10.0.0.0/8 on host. ERROR: Default --ipalloc-range 10.32.0.0/12 overlaps with existing route on host. You must pick another range and set it on all hosts. As the message indicates, the default range that Weave Net would like to use is `10.32.0.0/12` - a 12-bit prefix, where all addresses start with the bit pattern 000010100010, or in decimal everything from 10.32.0.0 through 10.47.255.255. However, your host is using a route for `10.0.0.0/8`, which overlaps, since the first 8 bits are the same. In this case, if you used the default network for an address like `10.32.5.6` the kernel would never be sure if this meant the Weave Net network of `10.32.0.0/12` or the hosting network of `10.0.0.0/8`. If you are sure the addresses you want are not in use, then explicitly setting the range with `--ipalloc-range` in the command-line arguments to `weave launch` on all hosts forces Weave Net to use that range, even though it overlaps. Otherwise, you can pick a different range, preferably another subset of the [Private Networks](https://en.wikipedia.org/wiki/Private_network). For example 172.30.0.0/16. See Also"
}
] |
{
"category": "Runtime",
"file_name": "configuring-weave.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
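A sketch of picking a non-overlapping range and passing it to `weave launch`, per the guidance above. The `172.30.0.0/16` range is just the example from the text, and the same value must be set on every host.

```bash
# Check which routes already exist on the host so you can pick a non-overlapping range.
ip route show
# Launch Weave Net with an explicit allocation range (repeat identically on all hosts).
weave launch --ipalloc-range 172.30.0.0/16
```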
[
{
"data": "title: \"ark restore create\" layout: docs Create a restore Create a restore ``` ark restore create [RESTORENAME] --from-backup BACKUPNAME [flags] ``` ``` ark restore create restore-1 --from-backup backup-1 ark restore create --from-backup backup-1 ``` ``` --exclude-namespaces stringArray namespaces to exclude from the restore --exclude-resources stringArray resources to exclude from the restore, formatted as resource.group, such as storageclasses.storage.k8s.io --from-backup string backup to restore from -h, --help help for create --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the restore --include-namespaces stringArray namespaces to include in the restore (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the restore, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the restore --namespace-mappings mapStringString namespace mappings from name in the backup to desired restored name in the form src1:dst1,src2:dst2,... -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. --restore-volumes optionalBool[=true] whether to restore volumes from snapshots -l, --selector labelSelector only restore resources matching this label selector (default <none>) --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restores"
}
] |
{
"category": "Runtime",
"file_name": "ark_restore_create.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
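A hedged example combining several of the flags from the reference above; the backup, namespace, and label names are placeholders.

```bash
# Restore only the "app1" namespace from backup-1, remapping it to "app1-restored"
# and restoring volume snapshots. All names are placeholders.
ark restore create restore-app1 \
  --from-backup backup-1 \
  --include-namespaces app1 \
  --namespace-mappings app1:app1-restored \
  --restore-volumes=true \
  --labels owner=platform-team
```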
[
{
"data": "Multisite is a feature of Ceph that allows object stores to replicate its data over multiple Ceph clusters. Multisite also allows object stores to be independent and isolated from other object stores in a cluster. For reference, here is a description of the underlying Ceph Multisite data model. ``` A cluster has one or more realms A realm spans one or more clusters A realm has one or more zone groups A realm has one master zone group A realm defined in another cluster is replicated with the pull command The objects in a realm are independent and isolated from objects in other realms A zone group has one or more zones A zone group has one master zone A zone group spans one or more clusters A zone group defined in another cluster is replicated with the pull command A zone group defines a namespace for object IDs unique across its zones Zone group metadata is replicated to other zone groups in the realm A zone belongs to one cluster A zone has a set of pools that store the user and object metadata and object data Zone data and metadata is replicated to other zones in the zone group A master zone needs to be created for secondary zones to pull from to replicate across zones ``` When a ceph-object-store is created without the `zone` section; a realm, zone group, and zone is created with the same name as the ceph-object-store. Since it is the only ceph-object-store in the realm, the data in the ceph-object-store remain independent and isolated from others on the same cluster. When a ceph-object-store is created with the `zone` section, the Ceph Multisite will be configured. The ceph-object-store will join a zone, zone group, and realm with a different than it's own. This allows the ceph-object-store to replace it's data over multiple Ceph clusters. To enable Ceph's multisite, the following steps need to"
},
{
"data": "A realm needs to be created A master zone group in the realm needs to be created A master zone in the master zone group needs to be created An object store needs to be added to the master zone The master zone of the master zonegroup is designated as the 'metadata master zone', and all changes to user and bucket metadata are written through that zone first before replicating to other zones via metadata sync. This is different from data sync, where objects can be written to any zone and replicated to its peers in the zonegroup. If an admin is creating a new realm on a Rook Ceph cluster, the admin should create: A with the name of the realm the admin wishes to create. A referring to the ceph-object-realm resource. A referring to the ceph-object-zone-group resource. A referring to the ceph-object-zone resource. If an admins pulls a realm on a Rook Ceph cluster from another Ceph cluster, the admin should create: A referring to the realm on the other Ceph cluster, and an endpoint in a master zone in that realm. A referring to the realm that was pulled or matching the ceph-object-zone-group resource from the cluster the realm was pulled from. A referring to the zone group that the new zone will be in. A referring to the ceph-object-zone resource. At the moment the multisite resources only handles Day 1 initial configuration. Changes made to the resource's configuration or deletion of the resource are not reflected on the Ceph cluster. To be clear, when the ceph-object-{realm, zone group, zone} resource is deleted or modified, the realm/zone group/zone is not deleted or modified in the Ceph cluster. Deletion or modification must be done the toolbox. Future iterations of this design will address these Day 2 operations and other such as: Initializing and modifying Storage Classes Deletion of the CR reflecting deletion of the realm, zone group, & zone The status of the ceph-object-{realm, zone group, zone} reflecting the status of the realm, zone group, and zone. Changing the master zone group in a realm Changing the master zone in a zone group"
}
] |
{
"category": "Runtime",
"file_name": "ceph-multisite-overview.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
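Since Day 2 changes must currently be made with the toolbox (as noted above), these `radosgw-admin` commands are a useful starting point for inspecting the realm, zone group, and zone that Rook created; the realm and zone names are placeholders.

```bash
# Run from the rook-ceph toolbox pod; realm/zone names are placeholders.
radosgw-admin realm list
radosgw-admin zonegroup list --rgw-realm=my-realm
radosgw-admin zone get --rgw-zone=my-zone
# Check replication progress between zones.
radosgw-admin sync status
```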
[
{
"data": "A non-exhaustive list of containerd adopters is provided below. _Docker/Moby engine_ - Containerd began life prior to its CNCF adoption as a lower-layer runtime manager for `runc` processes below the Docker engine. Continuing today, containerd has extremely broad production usage as a component of the stack. Note that this includes any use of the open source ; including the Balena project listed below. __ - offers containerd as the CRI runtime for v1.11 and higher versions. __ - IBM's on-premises cloud offering has containerd as a \"tech preview\" CRI runtime for the Kubernetes offered within this product for the past two releases, and plans to fully migrate to containerd in a future release. __ - Container-Optimized OS is a Linux Operating System from Google that is optimized for running containers. COS has used containerd as container runtime when containerd was part of Docker's core container runtime. __ - containerd has been offered in GKE since version 1.14 and has been the default runtime since version 1.19. It is also the only supported runtime for GKE Autopilot from the launch. __ - uses containerd + Firecracker (noted below) as the runtime and isolation technology for containers run in the Fargate platform. Fargate is a serverless, container-native compute offering from Amazon Web Services. __ - EKS optionally offers containerd as a CRI runtime starting with Kubernetes version 1.21. In Kubernetes 1.22 the default CRI runtime will be containerd. __ - Bottlerocket is a Linux distribution from Amazon Web Services purpose-built for containers using containerd as the core system runtime. _Cloud Foundry_ - The for CF has been using OCI runC directly with additional code from CF managing the container image and filesystem interactions, but have recently migrated to use containerd as a replacement for the extra code they had written around runC. _Alibaba's PouchContainer_ - The Alibaba project uses containerd as its runtime for a cloud native offering that has unique isolation and image distribution capabilities. _Rancher's k3s project_ - Rancher Labs is a lightweight Kubernetes distribution; in their words: \"Easy to install, half the memory, all in a binary less than 40mb.\" k8s uses containerd as the embedded runtime for this popular lightweight Kubernetes variant. _Rancher's Rio project_ - Rancher Labs project uses containerd as the runtime for a combined Kubernetes, Istio, and container \"Cloud Native Container Distribution\" platform. _Eliot_ - The container project for IoT device container management uses containerd as the"
},
{
"data": "_Balena_ - Resin's container engine, based on moby/moby but for edge, embedded, and IoT use cases, uses the containerd and runc stack in the same way that the Docker engine uses containerd. _LinuxKit_ - the Moby project's for building secure, minimal Linux OS images in a container-native model uses containerd as the core runtime for system and service containers. _BuildKit_ - The Moby project's can use either runC or containerd as build execution backends for building container images. BuildKit support has also been built into the Docker engine in recent releases, making BuildKit provide the backend to the `docker build` command. __ - Microsoft's managed Kubernetes offering uses containerd for Linux nodes running v1.19 and greater, and Windows nodes running 1.20 and greater. _Amazon Firecracker_ - The AWS has extended containerd with a new snapshotter and v2 shim to allow containerd to drive virtualized container processes via their VMM implementation. More details on their containerd integration are available in . _Kata Containers_ - The lightweight-virtualized container runtime project integrates with containerd via a custom v2 shim implementation that drives the Kata container runtime. _D2iQ Konvoy_ - D2iQ Inc product uses containerd as the container runtime for its Kubernetes distribution. _Inclavare Containers_ - is an innovation of container runtime with the novel approach for launching protected containers in hardware-assisted Trusted Execution Environment (TEE) technology, aka Enclave, which can prevent the untrusted entity, such as Cloud Service Provider (CSP), from accessing the sensitive and confidential assets in use. _VMware TKG_ - VMware's Multicloud Kubernetes offering uses containerd as the default CRI runtime. _VMware TCE_ - VMware's fully-featured, easy to manage, Kubernetes platform for learners and users. It is a freely available, community supported, and open source distribution of VMware Tanzu. It uses containerd as the default CRI runtime. __ - Talos Linux is Linux designed for Kubernetes secure, immutable, and minimal. Talos Linux is using containerd as the core system runtime and CRI implementation. _Deckhouse_ - from Flant allows you to manage Kubernetes clusters anywhere in a fully automatic and uniform fashion. It uses containerd as the default CRI runtime. _Other Projects_ - While the above list provides a cross-section of well known uses of containerd, the simplicity and clear API layer for containerd has inspired many smaller projects around providing simple container management platforms. Several examples of building higher layer functionality on top of the containerd base have come from various containerd community participants: Michael Crosby's project, Evan Hazlett's project, Paul Knopf's immutable Linux image builder project: ."
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This guide will quickly get you started running your first gVisor sandbox container using the runtime directly with the default platform. First, follow the . Now we will create an container bundle to run our container. First we will create a root directory for our bundle. ```bash mkdir bundle cd bundle ``` Create a root file system for the container. We will use the Docker `hello-world` image as the basis for our container. ```bash mkdir --mode=0755 rootfs docker export $(docker create hello-world) | sudo tar -xf - -C rootfs --same-owner --same-permissions ``` Next, create an specification file called `config.json` that contains our container specification. We tell the container to run the `/hello` program. ```bash runsc spec -- /hello ``` Finally run the container. ```bash sudo runsc run hello ``` Next try or ."
}
] |
{
"category": "Runtime",
"file_name": "oci.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
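If the container does not behave as expected, one possible next step is to re-run the same bundle with runsc's debug logging and then clean up; the flags shown are standard runsc options, and the container name is arbitrary.

```bash
# Re-run the same bundle with debug logs, then list and delete the container.
sudo mkdir -p /tmp/runsc
sudo runsc --debug --debug-log=/tmp/runsc/ run hello-debug
sudo runsc list
sudo runsc delete hello-debug
```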
[
{
"data": "sidebar_position: 10 sidebar_label: \"System Audit\" It's important to record the information about the system operation history. HwameiStor provides a feature of audit to record the operations on all the system resources, including Cluster, Node, StoragePool, Volume, etc... The audit information is easier for user to understant and parse for various purposes. HwameiStor designs a new CRD for every resource as below: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: Event name: spec: resourceType: <Cluster | Node | StoragePool | Volume> resourceName: records: action: actionContent: # in JSON format time: state: stateContent: # in JSON format ``` For instance, let's look at audit information of a volume: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: Event metadata: creationTimestamp: \"2023-08-08T15:52:55Z\" generation: 5 name: volume-pvc-34e3b086-2d95-4980-beb6-e175fd79a847 resourceVersion: \"10221888\" uid: d3ebaffb-eddb-4c84-93be-efff350688af spec: resourceType: Volume resourceName: pvc-34e3b086-2d95-4980-beb6-e175fd79a847 records: action: Create actionContent: '{\"requiredCapacityBytes\":5368709120,\"volumeQoS\":{},\"poolName\":\"LocalStorage_PoolHDD\",\"replicaNumber\":2,\"convertible\":true,\"accessibility\":{\"nodes\":[\"k8s-node1\",\"k8s-master\"],\"zones\":[\"default\"],\"regions\":[\"default\"]},\"pvcNamespace\":\"default\",\"pvcName\":\"mysql-data-volume\",\"volumegroup\":\"db890e34-a092-49ac-872b-f2a422439c81\"}' time: \"2023-08-08T15:52:55Z\" action: Mount actionContent: '{\"allocatedCapacityBytes\":5368709120,\"replicas\":[\"pvc-34e3b086-2d95-4980-beb6-e175fd79a847-krp927\",\"pvc-34e3b086-2d95-4980-beb6-e175fd79a847-wm7p56\"],\"state\":\"Ready\",\"publishedNode\":\"k8s-node1\",\"fsType\":\"xfs\",\"rawblock\":false}' time: \"2023-08-08T15:53:07Z\" action: Unmount actionContent: '{\"allocatedCapacityBytes\":5368709120,\"usedCapacityBytes\":33783808,\"totalInode\":2621120,\"usedInode\":3,\"replicas\":[\"pvc-34e3b086-2d95-4980-beb6-e175fd79a847-krp927\",\"pvc-34e3b086-2d95-4980-beb6-e175fd79a847-wm7p56\"],\"state\":\"Ready\",\"publishedNode\":\"k8s-node1\",\"fsType\":\"xfs\",\"rawblock\":false}' time: \"2023-08-08T16:03:03Z\" action: Delete actionContent: '{\"requiredCapacityBytes\":5368709120,\"volumeQoS\":{},\"poolName\":\"LocalStorage_PoolHDD\",\"replicaNumber\":2,\"convertible\":true,\"accessibility\":{\"nodes\":[\"k8s-node1\",\"k8s-master\"],\"zones\":[\"default\"],\"regions\":[\"default\"]},\"pvcNamespace\":\"default\",\"pvcName\":\"mysql-data-volume\",\"volumegroup\":\"db890e34-a092-49ac-872b-f2a422439c81\",\"config\":{\"version\":1,\"volumeName\":\"pvc-34e3b086-2d95-4980-beb6-e175fd79a847\",\"requiredCapacityBytes\":5368709120,\"convertible\":true,\"resourceID\":2,\"readyToInitialize\":true,\"initialized\":true,\"replicas\":[{\"id\":1,\"hostname\":\"k8s-node1\",\"ip\":\"10.6.113.101\",\"primary\":true},{\"id\":2,\"hostname\":\"k8s-master\",\"ip\":\"10.6.113.100\",\"primary\":false}]},\"delete\":true}' time: \"2023-08-08T16:03:38Z\" ```"
}
] |
{
"category": "Runtime",
"file_name": "system_audit.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
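A hedged way to query these audit records with kubectl. The resource name `events.hwameistor.io` is an assumption based on the CRD group/kind shown above and may differ in your installation; verify it first with `kubectl api-resources`.

```bash
# List HwameiStor audit events and inspect one volume's history.
kubectl api-resources --api-group=hwameistor.io
kubectl get events.hwameistor.io
kubectl get events.hwameistor.io volume-pvc-34e3b086-2d95-4980-beb6-e175fd79a847 -o yaml
```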
[
{
"data": "This directory contains useful packaging scripts. This script generates the official set of QEMU-based hypervisor build configuration options. All repositories that need to build a hypervisor from source MUST use this script to ensure the hypervisor is built in a known way since using a different set of options can impact many areas including performance, memory footprint and security. Example usage: ``` $ configure-hypervisor.sh qemu ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
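One possible way to consume the script's output when building QEMU from source, assuming `configure-hypervisor.sh` is on PATH and you are inside a QEMU source tree; the exact invocation used by Kata's build tooling may differ.

```bash
# Sketch: feed the generated option set to QEMU's configure script.
configure-hypervisor.sh qemu > kata-qemu-options.txt
eval ./configure "$(cat kata-qemu-options.txt)"
make -j "$(nproc)"
```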
[
{
"data": "containerd is built with OCI support and with support for advanced features provided by the . Development (`-dev`) and pre-releases of containerd may depend features in `runc` that have not yet been released, and may require a specific runc build. The version of runc that is tested against in our CI can be found in the file, which may point to a git-commit (for pre releases) or tag in the runc repository. For regular (non-pre-)releases of containerd releases, we attempt to use released (tagged) versions of runc. We recommend using a version of runc that's equal to or higher than the version of runc described in . If you encounter any runtime errors, make sure your runc is in sync with the commit or tag provided in that file. If you do not have the correct version of `runc` installed, you can refer to the to learn how to build `runc` from source. runc builds have , , and support enabled by default. Note that \"seccomp\" can be disabled by passing an empty `BUILDTAGS` make variable, but is highly recommended to keep enabled. Use the output of the `runc --version` output to verify if your version of runc has seccomp enabled. For example: ```sh $ runc --version runc version 1.0.1 commit: v1.0.1-0-g4144b638 spec: 1.0.2-dev go: go1.16.6 libseccomp: 2.4.4 ```"
}
] |
{
"category": "Runtime",
"file_name": "RUNC.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
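A hedged sketch for building the runc commit or tag that your containerd checkout pins (the version file lives under `script/setup/` in the containerd repository), keeping seccomp enabled via build tags. Paths assume the containerd source is checked out alongside your working directory.

```bash
# Sketch: build the runc version pinned by your containerd checkout.
RUNC_VERSION="$(cat containerd/script/setup/runc-version)"   # path assumes a containerd checkout
git clone https://github.com/opencontainers/runc
cd runc && git checkout "$RUNC_VERSION"
make BUILDTAGS='seccomp' && sudo make install
runc --version        # the libseccomp line confirms seccomp support
```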
[
{
"data": "Want to contribute? Great! First, read this page. Contributions to this project must be accompanied by a Contributor License Agreement. You (or your employer) retain the copyright to your contribution; this simply gives us permission to use and redistribute your contributions as part of the project. Head over to <https://cla.developers.google.com/> to see your current agreements on file or to sign a new one. You generally only need to submit a CLA once, so if you've already submitted one (even if it was for a different project), you probably don't need to do it again. Some editors may require the code to be structured in a `GOPATH` directory tree. In this case, you may use the `:gopath` target to generate a directory tree with symlinks to the original source files. ``` bazel build :gopath ``` You can then set the `GOPATH` in your editor to `bazel-bin/gopath`. If you use this mechanism, keep in mind that the generated tree is not the canonical source. You will still need to build and test with `bazel`. New files will need to be added to the appropriate `BUILD` files, and the `:gopath` target will need to be re-run to generate appropriate symlinks in the `GOPATH` directory tree. Dependencies can be added by using `go get`. In order to keep the `WORKSPACE` file in sync, run `bazel run //:gazelle -- update-repos -from_file=go.mod` in place of `go mod`. All code should comply with the . Note that code may be automatically formatted per the guidelines when merged. As a secure runtime, we need to maintain the safety of all of code included in gVisor. The following rules help mitigate issues. Definitions for the rules below: `core`: `//pkg/sentry/...` Transitive dependencies in `//pkg/...`, etc. `runsc`: `//runsc/...` Rules: No cgo in `core` or"
},
{
"data": "The final binary must be a statically-linked pure Go binary. Any files importing \"unsafe\" must have a name ending in `_unsafe.go`. `core` may only depend on the following packages: Itself. Go standard library. `@orggolangxsys//unix:godefault_library` (Go import `golang.org/x/sys/unix`). `@orggolangxtime//rate:godefault_library` (Go import `golang.org/x/time/rate`). `@comgithubgooglebtree//:godefault_library\"` (Go import `github.com/google/btree`). Generated Go protobuf packages. `@orggolanggoogleprotobuf//proto:godefault_library` (Go import `google.golang.org/protobuf`). `runsc` may only depend on the following packages: All packages allowed for `core`. `@comgithubgooglesubcommands//:godefault_library` (Go import `github.com/google/subcommands`). `@comgithubopencontainersruntimespec//specsgo:godefault_library` (Go import `github.com/opencontainers/runtime-spec/specs_go`). For performance reasons, `runsc boot` may not run the `netpoller` goroutine. Before sending code reviews, run `bazel test ...` to ensure tests are passing. Code changes are accepted via . When approved, the change will be submitted by a team member and automatically merged into the repository. Accessing check logs may require membership in the , which is public. Some TODOs and NOTEs sprinkled throughout the code have associated IDs of the form `b/1234`. These correspond to bugs in our internal bug tracker. Eventually these bugs will be moved to the GitHub Issues, but until then they can simply be ignored. Running `make dev` is a convenient way to build and install `runsc` as a Docker runtime. The output of this command will show the runtimes installed. You may use `make refresh` to refresh the binary after any changes. For example: ```bash make dev docker run --rm --runtime=my-branch --rm hello-world make refresh ``` First, we need to update dependencies in the go.mod and go.sum files. To do that, we should checkout the go branch and update dependencies using the go get tool. ```bash git checkout origin/go go get golang.org/x/net ``` Next, we should checkout the master branch and update dependencies in the WORKSPACE file using the Gazelle tool. ```bash git checkout origin/master bazel run //:gazelle -- update-repos -from_file=go.mod ``` Contributions made by corporations are covered by a different agreement than the one above, the ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
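A hedged pre-review check tying together the test command and the static-linking rule mentioned above. The bazel output path follows the usual rules_go layout and is an assumption; it may differ in your checkout.

```bash
# Run the test suite, then confirm runsc is a statically-linked pure Go binary.
bazel test ...
bazel build //runsc
file bazel-bin/runsc/runsc_/runsc     # path is an assumption; expect "statically linked"
```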
[
{
"data": "All resources are used for demonstration purposes and are not intended for production. You can check if your system meets the requirements by running `firecracker/tools/devtool checkenv`. An opinionated way to run Firecracker is to launch an `c5.metal` instance with Ubuntu 22.04. Firecracker requires to perform its virtualization and emulation tasks. We exclusively use `.metal` instance types, because EC2 only supports KVM on `.metal` instance types. Firecracker supports x86_64 and aarch64 Linux, see . Firecracker requires read/write access to `/dev/kvm` exposed by the KVM module. The presence of the KVM module can be checked with: ```bash lsmod | grep kvm ``` An example output where it is enabled: ```bash kvm_intel 348160 0 kvm 970752 1 kvm_intel irqbypass 16384 1 kvm ``` Some Linux distributions use the `kvm` group to manage access to `/dev/kvm`, while others rely on access control lists. If you have the ACL package for your distro installed, you can grant Read+Write access with: ```bash sudo setfacl -m u:${USER}:rw /dev/kvm ``` Otherwise, if access is managed via the `kvm` group: ```bash [ $(stat -c \"%G\" /dev/kvm) = kvm ] && sudo usermod -aG kvm ${USER} \\ && echo \"Access granted.\" ``` If none of the above works, you will need to either install the file system ACL package for your distro and use the `setfacl` command as above, or run Firecracker as `root` (via `sudo`). You can check if you have access to `/dev/kvm` with: ```bash [ -r /dev/kvm ] && [ -w /dev/kvm ] && echo \"OK\" || echo \"FAIL\" ``` In production, Firecracker is designed to be run securely inside an execution jail, set up by the binary. This is how our does it. For simplicity, this guide will not use the . To successfully start a microVM, you will need an uncompressed Linux kernel binary, and an ext4 file system image (to use as rootfs). This guide uses a 5.10 kernel image with a Ubuntu 22.04 rootfs from our CI: ```bash ARCH=\"$(uname -m)\" wget https://s3.amazonaws.com/spec.ccfc.min/firecracker-ci/v1.8/${ARCH}/vmlinux-5.10.210 wget https://s3.amazonaws.com/spec.ccfc.min/firecracker-ci/v1.8/${ARCH}/ubuntu-22.04.ext4 wget https://s3.amazonaws.com/spec.ccfc.min/firecracker-ci/v1.8/${ARCH}/ubuntu-22.04.id_rsa chmod 400 ./ubuntu-22.04.id_rsa ``` There are two options for getting a firecracker binary: Downloading an official firecracker release from our , or Building firecracker from source. To download the latest firecracker release, run: ```bash ARCH=\"$(uname -m)\" release_url=\"https://github.com/firecracker-microvm/firecracker/releases\" latest=$(basename $(curl -fsSLI -o /dev/null -w %{urleffective} ${releaseurl}/latest)) curl -L ${release_url}/download/${latest}/firecracker-${latest}-${ARCH}.tgz \\ | tar -xz mv release-${latest}-$(uname -m)/firecracker-${latest}-${ARCH} firecracker ``` To instead build firecracker from source, you will need to have `docker` installed: ```bash ARCH=\"$(uname -m)\" git clone https://github.com/firecracker-microvm/firecracker firecracker_src sudo systemctl start docker sudo"
},
{
"data": "build sudo cp ./firecrackersrc/build/cargotarget/${ARCH}-unknown-linux-musl/debug/firecracker firecracker ``` Running firecracker will require two terminals, the first one running the firecracker binary, and a second one for communicating with the firecracker process via HTTP requests: ```bash API_SOCKET=\"/tmp/firecracker.socket\" sudo rm -f $API_SOCKET sudo ./firecracker --api-sock \"${API_SOCKET}\" ``` In a new terminal (do not close the 1st one): ```bash TAP_DEV=\"tap0\" TAP_IP=\"172.16.0.1\" MASK_SHORT=\"/30\" sudo ip link del \"$TAP_DEV\" 2> /dev/null || true sudo ip tuntap add dev \"$TAP_DEV\" mode tap sudo ip addr add \"${TAPIP}${MASKSHORT}\" dev \"$TAP_DEV\" sudo ip link set dev \"$TAP_DEV\" up sudo sh -c \"echo 1 > /proc/sys/net/ipv4/ip_forward\" HOST_IFACE=\"eth0\" sudo iptables -t nat -D POSTROUTING -o \"$HOST_IFACE\" -j MASQUERADE || true sudo iptables -D FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT \\ || true sudo iptables -D FORWARD -i tap0 -o \"$HOST_IFACE\" -j ACCEPT || true sudo iptables -t nat -A POSTROUTING -o \"$HOST_IFACE\" -j MASQUERADE sudo iptables -I FORWARD 1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT sudo iptables -I FORWARD 1 -i tap0 -o \"$HOST_IFACE\" -j ACCEPT API_SOCKET=\"/tmp/firecracker.socket\" LOGFILE=\"./firecracker.log\" touch $LOGFILE sudo curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"log_path\\\": \\\"${LOGFILE}\\\", \\\"level\\\": \\\"Debug\\\", \\\"show_level\\\": true, \\\"showlogorigin\\\": true }\" \\ \"http://localhost/logger\" KERNEL=\"./vmlinux-5.10.210\" KERNELBOOTARGS=\"console=ttyS0 reboot=k panic=1 pci=off\" ARCH=$(uname -m) if [ ${ARCH} = \"aarch64\" ]; then KERNELBOOTARGS=\"keepbootcon ${KERNELBOOT_ARGS}\" fi sudo curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"kernelimagepath\\\": \\\"${KERNEL}\\\", \\\"bootargs\\\": \\\"${KERNELBOOT_ARGS}\\\" }\" \\ \"http://localhost/boot-source\" ROOTFS=\"./ubuntu-22.04.ext4\" sudo curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"drive_id\\\": \\\"rootfs\\\", \\\"pathonhost\\\": \\\"${ROOTFS}\\\", \\\"isrootdevice\\\": true, \\\"isreadonly\\\": false }\" \\ \"http://localhost/drives/rootfs\" FC_MAC=\"06:00:AC:10:00:02\" sudo curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"iface_id\\\": \\\"net1\\\", \\\"guestmac\\\": \\\"$FCMAC\\\", \\\"hostdevname\\\": \\\"$TAP_DEV\\\" }\" \\ \"http://localhost/network-interfaces/net1\" sleep 0.015s sudo curl -X PUT --unix-socket \"${API_SOCKET}\" \\ --data \"{ \\\"action_type\\\": \\\"InstanceStart\\\" }\" \\ \"http://localhost/actions\" sleep 2s ssh -i ./ubuntu-22.04.id_rsa [email protected] \"ip route add default via 172.16.0.1 dev eth0\" ssh -i ./ubuntu-22.04.id_rsa [email protected] \"echo 'nameserver 8.8.8.8' > /etc/resolv.conf\" ssh -i ./ubuntu-22.04.id_rsa [email protected] ``` Issuing a `reboot` command inside the guest will gracefully shutdown Firecracker. This is due to the fact that Firecracker doesn't implement guest power management. You can boot a guest without using the API socket by passing the parameter `--config-file` to the Firecracker process. E.g.: ```wrap sudo ./firecracker --api-sock /tmp/firecracker.socket --config-file <pathtotheconfigurationfile> ``` `pathtotheconfigurationfile` is the path to a JSON file with the configuration for all of the microVM's resources. The JSON must contain the configuration for the guest kernel and rootfs, all of the other resources are"
},
{
"data": "This configuration method will also start the microVM, so you need to specify all desired pre-boot configurable resources in the JSON. The names of the resources can be seen in [`firecracker.yaml`](../src/firecracker/swagger/firecracker.yaml) and the names of their fields are the same as those used in the API requests. An example configuration file is provided: . Once the guest is booted, refer to the network setup steps above to bring up the network in the guest machine. After the microVM is started, you can still use the socket to send API requests for post-boot operations. SSH can be used to work with libraries from private git repos by passing the `--ssh-keys` flag to specify the paths to your public and private SSH keys on the host. Both are required for git authentication when fetching the repositories. ```bash tools/devtool build --ssh-keys ~/.ssh/id_rsa.pub ~/.ssh/id_rsa ``` Only a single set of credentials is supported. `devtool` cannot fetch multiple private repos which rely on different credentials. `tools/devtool build` builds in `debug`; to build release binaries, pass `--release`, e.g. `tools/devtool build --release`. Documentation on `devtool` can be seen with `tools/devtool --help`. Integration tests can be run with `tools/devtool test`. The test suite is designed to ensure our as measured on EC2 .metal instances, as such performance tests may fail when not run on these machines. Specifically, don't be alarmed if you see `tests/integration_tests/performance/test_process_startup_time.py` failing when not run on an EC2 .metal instance. You can skip performance tests with: ```bash ./tools/devtool test -- --ignore integration_tests/performance ``` If you run the integration tests on an EC2 .metal instance and encounter failures such as `FAILED integration_tests/style/test_markdown.py::test_markdown_style - requests.exceptions.ReadTimeout: HTTPConnectionPool(host='169.254.169.254', port=80): Read timed out. (read timeout=2)`, try running `aws ec2 modify-instance-metadata-options --instance-id i-<your instance id> --http-put-response-hop-limit 2`. The integration tests framework uses IMDSv2 to determine information such as instance type. The additional hop is needed because the IMDS requests will pass through docker. Points to check to confirm the API socket is running and accessible: Check that the user running the Firecracker process and the user using `curl` have equivalent privileges. For example, if you run Firecracker with sudo, make sure you run `curl` with sudo as well. SELinux can regulate access to sockets on RHEL-based distributions. How users' permissions are configured is environment-specific, but for the purposes of troubleshooting you can check if it is enabled in `/etc/selinux/config`. With the Firecracker process running using `--api-sock /tmp/firecracker.socket`, confirm that the socket is open: `ss -a | grep '/tmp/firecracker.socket'` If you have socat available, try `socat - UNIX-CONNECT:/tmp/firecracker.socket` This will throw an explicit error if the socket is inaccessible, or it will pause and wait for input to continue."
}
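To make the `--config-file` flow described above concrete, here is a minimal sketch of such a JSON file. It is illustrative only: the file name `vm_config.json` and the machine sizing are made up for the example, while the section and field names mirror the API requests shown earlier (`boot-source`, `drives`, `machine-config`) and the paths assume the kernel and rootfs downloaded above.

```bash
# Write a hypothetical vm_config.json next to the downloaded kernel and rootfs.
cat > vm_config.json <<'EOF'
{
  "boot-source": {
    "kernel_image_path": "./vmlinux-5.10.210",
    "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
  },
  "drives": [
    {
      "drive_id": "rootfs",
      "path_on_host": "./ubuntu-22.04.ext4",
      "is_root_device": true,
      "is_read_only": false
    }
  ],
  "machine-config": {
    "vcpu_count": 2,
    "mem_size_mib": 1024
  }
}
EOF

# Boot the microVM from the file instead of issuing the individual PUT requests.
sudo ./firecracker --api-sock /tmp/firecracker.socket --config-file vm_config.json
```

Because the configuration file already describes the boot source and root drive, the microVM starts immediately; the API socket remains available for post-boot requests as noted above.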
] |
{
"category": "Runtime",
"file_name": "getting-started.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Some deprecated APIs have been removed in Antrea v2.0. Before upgrading, please read these carefully. Support `LoadBalancerIPMode` in AntreaProxy to implement K8s . (, [@hongliangl]) Add `sameLabels` field support for Antrea ClusterNetworkPolicy peer Namespace selection to allow users to create ACNPs that isolate Namespaces based on their label values. (, [@Dyanngg]) Add multiple physical interfaces support for the secondary network bridge. (, [@aroradaman]) Use a Node's primary NIC as the secondary OVS bridge physical interface. (, [@aroradaman]) Add user documentation for Antrea native secondary network support. ( , [@jianjuns] [@antoninbas]) Add a new versioned API `NetworkPolicyEvaluation` and a new antctl sub-command for querying the effective policy rule applied to particular traffic. ( , [@qiyueyao]) Multiple deprecated APIs, fields and options have been removed from Antrea. Remove deprecated v1alpha1 CRDs `Tier`, `ClusterNetworkPolicy`, `NetworkPolicy`, `Traceflow` and `ExternalEntity`. ( , [@luolanzone] [@hjiajing] [@antoninbas]) Remove deprecated v1alpha2 and v1alpha3 CRDs `ClusterGroups`, `ExternalIPPool`, `ClusterGroup` and `Group`. ( , [@luolanzone] [@antoninbas]) Remove deprecated `ServiceAccount` field in `ClusterSet` type for Antrea Multi-cluster. (, [@luolanzone]) Remove deprecated options `enableIPSecTunnel`,`multicastInterfaces`, `multicluster.enable` and `legacyCRDMirroring`. (, [@luolanzone]) Clean up unused code for NodePortLocal and remove the deprecated `nplPortRange` config. (, [@luolanzone]) Clean up deprecated APIServices. (, [@tnqn]) Documentation has been updated to reflect recent changes and provide better guidance to users. Add upgrade instructions for Antrea v2.0. (, [@antoninbas]) Update the OVS pipeline document and workflow diagram to keep them up to date. (, [@hongliangl]) Clarify documentation for `IPPool` and `ExternalIPPool` CRDs. (, [@antoninbas]) Document Pods using FQDN based policies must respect DNS TTL. (, [@tnqn]) Document the limitations of Audit Logging for policy rules. (, [@antoninbas]) Optimizing Antrea binaries size. Optimize package organization to reduce antctl binary size. (, [@tnqn]) Reduce antrea-cni binary size by removing unnecessary import packages. (, [@tnqn]) Strip all debug symbols from Go binaries by default. (, [@antoninbas]) Disable cgo for all Antrea binaries. (, [@antoninbas]) Increase the minimum supported Kubernetes version to v1.19. (, [@hjiajing]) Add OVS groups dump information to support bundle to help troubleshooting. (, [@shikharish]) Add `egressNodeName` in flow records for Antrea Flow Aggregator. (, [@Atish-iaf]) Add `EgressNode` field in the Traceflow Egress observation to include the name of the Egress Node. (, [@Atish-iaf]) Upgrade `IPPool` CRD to v1beta1 and make the subnet definition consistent with the one in `ExternalIPPool` CRD. (, [@mengdie-song]) Request basic memory for antrea-controller to improve its scheduling and reduce its OOM adjustment score, enhancing overall robustness. (, [@tnqn]) Increase default rate limit of antrea-controller to improve performance for batch requests. (, [@tnqn]) Remove Docker support for antrea-agent on Windows, update Windows documentation to remove all Docker-specific instructions, and all mentions of (userspace)"
},
{
"data": "( , [@XinShuYang] [@antoninbas]) Stop publishing the legacy unified image. (, [@antoninbas]) Avoid unnecessary DNS queries for FQDN rule of NetworkPolicy in antrea-agent. (, [@tnqn]) Stop using `projects.registry.vmware.com` for user-facing images. (, [@antoninbas]) Fall back to lenient decoding when strict decoding config fails to tolerate unknown fields and duplicate fields, ensuring forward compatibility of configurations. (, [@tnqn]) Skip loading `openvswitch` kernel module if it's already built-in. (, [@antoninbas]) Persist TLS certificate and key of antrea-controller and sync the CA cert periodically to improve robustness. ( , [@tnqn]) Add more validations for `ExternalIPPool` CRD to improve robustness. (, [@aroradaman]) Add Antrea L7 NetworkPolicy logs for `allowed` HTTP traffic. (, [@qiyueyao]) Update maximum number of buckets to 700 in OVS group add/insert_bucket message. (, [@hongliangl]) Add a flag for antctl to print OVS table names when users run `antctl get ovsflows --table-names-only`. ( , [@luolanzone]) Improve log message when antrea-agent fails to join a new Node. (, [@roopeshsn]) Remove the prefix `rancher-wins` when collecting antrea-agent logs on Windows. (, [@wenyingd]) Upgrade K8s libraries to v0.29.2. (, [@hjiajing]) Upgrade base image from UBI8 to UBI9 for Antrea UBI images. (, [@xliuxu]) Fix nil pointer dereference when `ClusterGroup`/`Group` is used in NetworkPolicy controller. (, [@tnqn]) Disable `libcapng` to make logrotate run as root in UBI images to fix an OVS crash issue. (, [@xliuxu]) Fix a race condition in antrea-agent Traceflow controller when a tag is associated again with a new Traceflow before the old Traceflow deletion event is processed. (, [@tnqn]) Change the maximum flags from 7 to 255 to fix the wrong TCP flags validation issue in `Traceflow` CRD. (, [@gran-vmv]) Use 65000 MTU upper bound for interfaces in `encap` mode to account for the MTU automatically configured by OVS on tunnel ports, and avoid packet drops on some clusters. (, [@antoninbas]) Install multicast related iptables rules only on IPv4 chains to fix the antrea-agent initialization failure occurred when the Multicast feature is enabled in dual-stack clusters. (, [@wenyingd]) Remove incorrect AntreaProxy warning on Windows when `proxyAll` is disabled. (, [@antoninbas]) Explicitly set kubelet's log files in Prepare-Node.ps1 on Windows, to ensure that they are included in support bundle collections. (, [@wenyingd]) Add validation on antrea-agent options to fail immediately when encryption is requested and the Multicast feature enabled. (, [@wenyingd]) Don't print the incorrect warning message when users run `antrea-controller --version` outside of K8s. (, [@prakrit55]) Record event when EgressIP is uninstalled from a Node and remains unassigned. (, [@jainpulkit22]) Fix a bug that the local traffic cannot be identified on `networkPolicyOnly` mode. (, [@hongliangl]) Use reserved OVS controller ports for the default Antrea ports to fix a potential `ofport` mismatch issue. (, [@antoninbas])"
}
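As an illustration of the `sameLabels` Namespace selection called out in the changelog above, an Antrea ClusterNetworkPolicy of roughly the following shape could isolate Namespaces by the value of a shared label. This is a hedged sketch only: the policy name, tier, priority, and the `tenant` label key are invented for the example, so check the Antrea ClusterNetworkPolicy documentation for the authoritative schema.

```bash
# Hypothetical policy: allow ingress only from Namespaces whose "tenant" label
# has the same value as the Namespace of the selected Pods; deny the rest.
cat <<'EOF' | kubectl apply -f -
apiVersion: crd.antrea.io/v1beta1
kind: ClusterNetworkPolicy
metadata:
  name: intra-tenant-only          # example name
spec:
  priority: 5
  tier: securityops
  appliedTo:
    - namespaceSelector:
        matchExpressions:
          - key: tenant
            operator: Exists
  ingress:
    - action: Allow
      from:
        - namespaces:
            sameLabels: [tenant]
    - action: Deny
      from:
        - namespaceSelector: {}
EOF
```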
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-2.0.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Automated Deployment sidebar_position: 7 Automated deployment is recommended when the JuiceFS Client is to be installed on a large number of hosts. The examples below only demonstrate the mount process; you should create the file system before getting started. Below is an example that installs and mounts JuiceFS on localhost: ```yaml hosts: localhost tasks: set_fact: meta_url: sqlite3:///tmp/myjfs.db jfs_path: /jfs jfs_pkg: /tmp/juicefs-ce.tar.gz jfs_bin_dir: /usr/local/bin get_url: url: https://d.juicefs.com/juicefs/releases/download/v1.0.2/juicefs-1.0.2-linux-amd64.tar.gz dest: \"{{jfs_pkg}}\" ansible.builtin.unarchive: src: \"{{jfs_pkg}}\" dest: \"{{jfs_bin_dir}}\" include: juicefs name: Create symbolic link for fstab ansible.builtin.file: src: \"{{jfs_bin_dir}}/juicefs\" dest: \"/sbin/mount.juicefs\" state: link name: Mount JuiceFS and create fstab entry mount: path: \"{{jfs_path}}\" src: \"{{meta_url}}\" fstype: juicefs opts: _netdev state: mounted ```"
}
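To run the play above, something along the following lines could be used. The file name `juicefs.yml` is hypothetical, and the inventory simply targets localhost to match `hosts: localhost` in the play.

```bash
# Save the play above as juicefs.yml (hypothetical name) and run it locally.
ansible-playbook -i 'localhost,' -c local juicefs.yml

# Verify the mount point and the generated fstab entry.
df -h /jfs
grep juicefs /etc/fstab
```

For a real fleet, the same play would be pointed at an inventory of hosts instead of localhost.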
] |
{
"category": "Runtime",
"file_name": "automation.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Indicate the PAL API version number implemented by runelet and the enclave runtime; runelet is compatible with any enclave runtime whose version is equal to or less than the indicated value. If this symbol is undefined in the enclave runtime, version 1 is assumed by runelet. ```c int pal_get_version(); ``` ``` N/A ``` ``` @int: the PAL API version of the current enclave runtime. ``` Do libos initialization according to the incoming attr parameters. ```c struct pal_attr_t { const char *args; const char *log_level; }; int pal_init(struct pal_attr_t *attr); ``` ``` @args: Pass the required parameters of libos (can be instance path etc.). @log_level: Log level. ``` ``` 0: Success -EINVAL: Invalid argument -ENOSYS: The function is not supported ``` Create a new process, but do not run it; the real run is triggered by pal_exec(). ```c struct pal_stdio_fds { int stdin, stdout, stderr; }; struct pal_create_process_args { char *path; char *argv[]; char *env[]; struct pal_stdio_fds *stdio; int *pid; } __attribute__((packed)); int pal_create_process(struct pal_create_process_args *args); ``` ``` @path: The path of the binary file to be run (relative path in the libos file system). @argv: Binary parameters, ending with a null element. @env: Binary environment variables, ending with a null element. @stdio: The fds of stdio. @pid: If the function return value is 0, pid stores the pid of the new process in libos. ``` ``` 0: Success -EINVAL: Invalid argument -ENOSYS: The function is not supported ``` Execute the program corresponding to pid. ```c struct pal_exec_args { int pid; int *exit_value; } __attribute__((packed)); int pal_exec(struct pal_exec_args *attr); ``` ``` @pid: The pid of the process created by pal_create_process(). @exit_value: The exit value of the process. ``` ``` 0: Success -EINVAL: Invalid argument -ENOSYS: The function is not supported ``` Send signals to processes running in the enclave runtime. ```c int pal_kill(int pid, int sig); ``` ``` @pid: Send to all processes if equal to -1, or send to the current process if equal to 0, or send to the process that owns the pid otherwise. @sig: Signal number to be sent. ``` ``` 0: Success -EINVAL: Invalid argument -ENOSYS: The function is not supported ``` Destroy the libos instance. ```c int pal_destroy(); ``` ``` N/A ``` ``` 0: Success -ENOSYS: The function is not supported ```"
}
] |
{
"category": "Runtime",
"file_name": "spec_v2.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Carina-node uses the host network and listens on port `8080`. The port number can be changed in the deployment YAML. ```shell \"--metrics-addr=:8080\" ``` carina-controller listens on ports `8080` and `8443`; those ports can be changed in the deployment YAML. ```shell \"--metrics-addr=:8080\" \"--webhook-addr=:8443\" ```"
}
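As a quick check that the endpoints are serving, the metrics can be scraped directly. This assumes the standard Prometheus-style `/metrics` path behind `--metrics-addr`; the node IP, namespace, and pod name below are placeholders.

```bash
# carina-node runs on the host network, so its metrics port is reachable on the node IP.
curl -s http://<node-ip>:8080/metrics | head

# carina-controller runs as a regular pod; reach it via a temporary port-forward.
kubectl -n <carina-namespace> port-forward pod/<carina-controller-pod> 18080:8080 &
curl -s http://127.0.0.1:18080/metrics | head
```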
] |
{
"category": "Runtime",
"file_name": "api.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The following covers the default \"bindir\" `ImageVerifier` plugin implementation. To enable image verification, add a stanza like the following to the containerd config: ```toml [plugins] [plugins.\"io.containerd.image-verifier.v1.bindir\"] bin_dir = \"/opt/containerd/image-verifier/bin\" max_verifiers = 10 per_verifier_timeout = \"10s\" ``` All files in `bin_dir`, if it exists, must be verifier executables which conform to the following API. `-name`: The given reference to the image that may be pulled. `-digest`: The resolved digest of the image that may be pulled. `-stdin-media-type`: The media type of the JSON data passed to stdin. A JSON encoded payload is passed to the verifier binary's standard input. The media type of this payload is specified by the `-stdin-media-type` CLI argument, and may change in future versions of containerd. Currently, the payload has a media type of `application/vnd.oci.descriptor.v1+json` and represents the OCI Content Descriptor of the image that may be pulled. See for more details. Print to standard output a reason for the image pull judgement. Return an exit code of 0 to allow the image to be pulled and any other exit code to block the image from being pulled. If `bin_dir` does not exist or contains no files, the image verifier does not block image pulls. An image is pulled only if all verifiers that are called return an \"ok\" judgement (exit with status code 0). In other words, image pull judgements are combined with an `AND` operator. If any verifier exceeds the `per_verifier_timeout` or fails to exec, the verification fails with an error and a `nil` judgement is returned. If `max_verifiers < 0`, there is no imposed limit on the number of image verifiers called. If `max_verifiers >= 0`, there is a limit imposed on the number of image verifiers called. The entries in `bin_dir` are lexicographically sorted by name, and the first `n = max_verifiers` of the verifiers will be called, and the rest will be skipped. There is no guarantee for the order of execution of verifier binaries. Standard error output of verifier binaries is logged at debug level by containerd, subject to truncation. Standard output of verifier binaries (the \"reason\" for the judgement) is subject to truncation. System resources used by verifier binaries are currently accounted for in and constrained by containerd's own cgroup, but this is subject to change."
}
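For illustration, a verifier executable dropped into `bin_dir` could be as small as the following shell script. This is a sketch rather than anything shipped with containerd, and the allow-listed registry is just an example policy; it relies only on the CLI arguments and exit-code contract described above.

```bash
#!/usr/bin/env bash
# Example verifier: allow images from one registry, block everything else.
# containerd invokes it as: <binary> -name <ref> -digest <digest> -stdin-media-type <mt>
set -euo pipefail

NAME="" DIGEST="" MEDIA_TYPE=""
while [ $# -gt 0 ]; do
  case "$1" in
    -name) NAME="$2"; shift 2 ;;
    -digest) DIGEST="$2"; shift 2 ;;
    -stdin-media-type) MEDIA_TYPE="$2"; shift 2 ;;
    *) shift ;;
  esac
done

# The OCI descriptor arrives on stdin; read it here only to drain the pipe.
DESCRIPTOR="$(cat)"

if [[ "$NAME" == registry.example.com/* ]]; then
  echo "image ${NAME}@${DIGEST} comes from an approved registry"
  exit 0   # exit code 0 allows the pull
fi

echo "image ${NAME} is not from an approved registry"
exit 1     # any non-zero exit code blocks the pull
```

Make the script executable and place it in the configured `bin_dir` so containerd can run it.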
] |
{
"category": "Runtime",
"file_name": "image-verification.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "layout: global title: Storage Integrations Overview This guide will cover general prerequisites and running Alluxio locally with your desired under storage system. To learn how to configure Alluxio with each individual storage system, please look at their respective pages. In preparation for using your chosen storage system with Alluxio, please be sure you have all the required location, credentials, and additional properties before you begin configuring Alluxio to your under storage system. For the purposes of this guide, the following are placeholders. <table class=\"table table-striped\"> <tr> <th>Storage System</th> <th>Location</th> <th>Credentials</th> <th>Additional Properties</th> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`S3BUCKET`, `S3DIRECTORY`</td> <td markdown=\"span\">`S3ACCESSKEYID`, `S3SECRET_KEY`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`HDFSNAMENODE`, `HDFSPORT`</td> <td markdown=\"span\"></td> <td markdown=\"span\"> Specify Hadoop version: <br /> `HADOOP_VERSION`</td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`OSSBUCKET`, `OSSDIRECTORY`</td> <td markdown=\"span\">`OSSACCESSKEYID`, `OSSACCESSKEYSECRET`, `OSS_ENDPOINT`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`AZURECONTAINER`, `AZUREDIRECTORY`</td> <td markdown=\"span\">`AZUREACCOUNT`, `AZUREACCOUNT_KEY`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`AZURE_DIRECTORY`</td> <td markdown=\"span\">`AZURE_ACCOUNT`</td> <td markdown=\"span\">OAuth credentials: <br /> `CLIENTID`, `AUTHENTICATIONKEY`, `TENANT_ID`</td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`AZURECONTAINER`, `AZUREDIRECTORY`</td> <td markdown=\"span\">`AZUREACCOUNT`, `AZURESHARED_KEY`</td> <td markdown=\"span\"> OAuth credentials: <br /> `OAUTHENDPOINT`, `CLIENTID`, `CLIENTSECRET`, `MSIENDPOINT`, `MSI_TENANT`</td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\"></td> <td markdown=\"span\">`CEPHFSCONFFILE`, `CEPHFSNAME`, `CEPHFSDIRECTORY`, `CEPHFSAUTHID`, `CEPHFSKEYRINGFILE`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`CEPHBUCKET`, `CEPHDIRECTORY`</td> <td markdown=\"span\"> `S3ACCESSKEYID`, `S3SECRETKEYID` </td> <td markdown=\"span\"> `RGWHOSTNAME`, `RGWPORT`, `INHERIT_ACL` </td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`GCSBUCKET`, `GCSDIRECTORY`</td> <td markdown=\"span\">For GCS Version 1: `GCSACCESSKEYID`, `GCSSECRETACCESSKEY`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`OBSBUCKET`, `OBSDIRECTORY`</td> <td markdown=\"span\">`OBSACCESSKEY`, `OBSSECRETKEY`, `OBS_ENDPOINT`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`MINIOBUCKET`, `MINIODIRECTORY`</td> <td markdown=\"span\">`S3ACCESSKEYID`, `S3SECRETKEY`, `MINIOENDPOINT`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\"></td> <td markdown=\"span\"></td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\"> : `OZONEBUCKET`, `OZONEVOLUME` <br /> : `OZONEMANAGER`, `OZONEBUCKET`, `OZONEDIRECTORY`, `OZONEVOLUME`</td> <td markdown=\"span\"> `OMSERVICEIDS`</td> <td markdown=\"span\"> Mount specific version: <br /> `OZONE_VERSION`</td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`COSBUCKET`, `COSDIRECTORY`</td> <td 
markdown=\"span\">`COSACCESSKEY`, `COSSECRETKEY`</td> <td markdown=\"span\"> Specify COS region: <br /> `COSREGION`, `COSAPPID` </td> </tr> <tr> <td markdown=\"span\"></td> <td markdown=\"span\">`COSNBUCKET`, `COSNDIRECTORY`</td> <td markdown=\"span\">`COSNSECRETID`, `COSNSECRETKEY`</td> <td markdown=\"span\"> Specify COSN region: <br /> `COSN_REGION` </td> </tr> </table> Once you have configured Alluxio to your desired under storage system, start up Alluxio locally to see that everything works. ```shell $ ./bin/alluxio init format $ ./bin/alluxio process start local ``` This should start an Alluxio master and an Alluxio worker. You can see the master UI at . Run a simple example program: ```shell $ ./bin/alluxio exec basicIOTest ``` Visit your container `<CONTAINER>/<DIRECTORY>` or bucket `<BUCKET>/<DIRECTORY>` to verify the files and directories created by Alluxio exist. If there are no errors, then you have successfully configured your storage system! To stop Alluxio, you can run: ``` shell $ ./bin/alluxio process stop local ```"
}
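As one concrete instance of the placeholders in the table above, wiring up the S3 connector before the local start could look roughly like the following. The property keys shown are the commonly used ones for an S3 root under storage and may differ between Alluxio versions, so treat this as a sketch and confirm the exact keys on the S3 configuration page.

```bash
# Sketch: point the root under storage at S3 and supply credentials,
# substituting the S3_BUCKET/S3_DIRECTORY and key placeholders from the table.
cat >> conf/alluxio-site.properties <<'EOF'
alluxio.master.mount.table.root.ufs=s3://S3_BUCKET/S3_DIRECTORY
s3a.accessKeyId=S3_ACCESS_KEY_ID
s3a.secretKey=S3_SECRET_KEY
EOF

./bin/alluxio init format
./bin/alluxio process start local
```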
] |
{
"category": "Runtime",
"file_name": "Storage-Overview.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Authors: Jie Yu <<[email protected]>> (@jieyu) Saad Ali <<[email protected]>> (@saad-ali) James DeFelice <<[email protected]>> (@jdef) <[email protected]> The keywords \"MUST\", \"MUST NOT\", \"REQUIRED\", \"SHALL\", \"SHALL NOT\", \"SHOULD\", \"SHOULD NOT\", \"RECOMMENDED\", \"NOT RECOMMENDED\", \"MAY\", and \"OPTIONAL\" are to be interpreted as described in (Bradner, S., \"Key words for use in RFCs to Indicate Requirement Levels\", BCP 14, RFC 2119, March 1997). The key words \"unspecified\", \"undefined\", and \"implementation-defined\" are to be interpreted as described in the . An implementation is not compliant if it fails to satisfy one or more of the MUST, REQUIRED, or SHALL requirements for the protocols it implements. An implementation is compliant if it satisfies all the MUST, REQUIRED, and SHALL requirements for the protocols it implements. | Term | Definition | |-|--| | Volume | A unit of storage that will be made available inside of a CO-managed container, via the CSI. | | Block Volume | A volume that will appear as a block device inside the container. | | Mounted Volume | A volume that will be mounted using the specified file system and appear as a directory inside the container. | | CO | Container Orchestration system, communicates with Plugins using CSI service RPCs. | | SP | Storage Provider, the vendor of a CSI plugin implementation. | | RPC | . | | Node | A host where the user workload will be running, uniquely identifiable from the perspective of a Plugin by a node ID. | | Plugin | Aka plugin implementation, a gRPC endpoint that implements the CSI Services. | | Plugin Supervisor | Process that governs the lifecycle of a Plugin, MAY be the CO. | | Workload | The atomic unit of \"work\" scheduled by a CO. This MAY be a container or a collection of containers. | To define an industry standard Container Storage Interface (CSI) that will enable storage vendors (SP) to develop a plugin once and have it work across a number of container orchestration (CO) systems. The Container Storage Interface (CSI) will Enable SP authors to write one CSI compliant Plugin that just works across all COs that implement CSI. Define API (RPCs) that enable: Dynamic provisioning and deprovisioning of a volume. Attaching or detaching a volume from a node. Mounting/unmounting a volume from a node. Consumption of both block and mountable volumes. Local storage providers (e.g., device mapper, lvm). Creating and deleting a snapshot (source of the snapshot is a volume). Provisioning a new volume from a snapshot (reverting snapshot, where data in the original volume is erased and replaced with data in the snapshot, is out of scope). Define plugin protocol RECOMMENDATIONS. Describe a process by which a Supervisor configures a Plugin. Container deployment considerations (`CAPSYSADMIN`, mount namespace, etc.). The Container Storage Interface (CSI) explicitly will not define, provide, or dictate: Specific mechanisms by which a Plugin Supervisor manages the lifecycle of a Plugin, including: How to maintain state (e.g. what is attached, mounted, etc.). How to deploy, install, upgrade, uninstall, monitor, or respawn (in case of unexpected termination) Plugins. A first class message structure/field to represent \"grades of storage\" (aka \"storage class\"). Protocol-level authentication and authorization. Packaging of a Plugin. POSIX compliance: CSI provides no guarantee that volumes provided are POSIX compliant filesystems. 
Compliance is determined by the Plugin implementation (and any backend storage system(s) upon which it depends). CSI SHALL NOT obstruct a Plugin Supervisor or CO from interacting with Plugin-managed volumes in a POSIX-compliant manner. This specification defines an interface along with the minimum operational and packaging recommendations for a storage provider (SP) to implement a CSI compatible"
},
{
"data": "The interface declares the RPCs that a plugin MUST expose: this is the primary focus of the CSI specification. Any operational and packaging recommendations offer additional guidance to promote cross-CO compatibility. The primary focus of this specification is on the protocol between a CO and a Plugin. It SHOULD be possible to ship cross-CO compatible Plugins for a variety of deployment architectures. A CO SHOULD be equipped to handle both centralized and headless plugins, as well as split-component and unified plugins. Several of these possibilities are illustrated in the following figures. ``` CO \"Master\" Host +-+ | | | ++ ++ | | | CO | gRPC | Controller | | | | +--> Plugin | | | ++ ++ | | | +-+ CO \"Node\" Host(s) +-+ | | | ++ ++ | | | CO | gRPC | Node | | | | +--> Plugin | | | ++ ++ | | | +-+ Figure 1: The Plugin runs on all nodes in the cluster: a centralized Controller Plugin is available on the CO master host and the Node Plugin is available on all of the CO Nodes. ``` ``` CO \"Node\" Host(s) +-+ | | | ++ ++ | | | CO | gRPC | Controller | | | | +--+--> Plugin | | | ++ | ++ | | | | | | | | | ++ | | | | Node | | | +--> Plugin | | | ++ | | | +-+ Figure 2: Headless Plugin deployment, only the CO Node hosts run Plugins. Separate, split-component Plugins supply the Controller Service and the Node Service respectively. ``` ``` CO \"Node\" Host(s) +-+ | | | ++ ++ | | | CO | gRPC | Controller | | | | +--> Node | | | ++ | Plugin | | | ++ | | | +-+ Figure 3: Headless Plugin deployment, only the CO Node hosts run Plugins. A unified Plugin component supplies both the Controller Service and Node Service. ``` ``` CO \"Node\" Host(s) +-+ | | | ++ ++ | | | CO | gRPC | Node | | | | +--> Plugin | | | ++ ++ | | | +-+ Figure 4: Headless Plugin deployment, only the CO Node hosts run Plugins. A Node-only Plugin component supplies only the Node Service. Its GetPluginCapabilities RPC does not report the CONTROLLER_SERVICE capability. ``` ``` CreateVolume ++ DeleteVolume +->| CREATED +--+ | ++-^+ | | Controller | | Controller v +++ Publish | | Unpublish +++ |X| Volume | | Volume | | +-+ +v-++ +-+ | NODE_READY | ++-^+ Node | | Node Publish | | Unpublish Volume | | Volume +v-++ | PUBLISHED | ++ Figure 5: The lifecycle of a dynamically provisioned volume, from creation to destruction. ``` ``` CreateVolume ++ DeleteVolume +->| CREATED +--+ | ++-^+ | | Controller | | Controller v +++ Publish | | Unpublish +++ |X| Volume | | Volume | | +-+ +v-++ +-+ | NODE_READY | ++-^+ Node | | Node Stage | | Unstage Volume | | Volume +v-++ | VOL_READY | ++-^+ Node | | Node Publish | | Unpublish Volume | | Volume +v-++ | PUBLISHED | ++ Figure 6: The lifecycle of a dynamically provisioned volume, from creation to destruction, when the Node Plugin advertises the STAGEUNSTAGEVOLUME"
},
{
"data": "``` ``` Controller Controller Publish Unpublish Volume ++ Volume +->+ NODE_READY +--+ | ++-^+ | | Node | | Node v +++ Publish | | Unpublish +++ |X| <-+ Volume | | Volume | | +++ | +v-++ +-+ | | | PUBLISHED | | | ++ +-+ Validate Volume Capabilities Figure 7: The lifecycle of a pre-provisioned volume that requires controller to publish to a node (`ControllerPublishVolume`) prior to publishing on the node (`NodePublishVolume`). ``` ``` +-+ +-+ |X| | | +++ +^+ | | Node | | Node Publish | | Unpublish Volume | | Volume +v-++ | PUBLISHED | ++ Figure 8: Plugins MAY forego other lifecycle steps by contraindicating them via the capabilities API. Interactions with the volumes of such plugins is reduced to `NodePublishVolume` and `NodeUnpublishVolume` calls. ``` The above diagrams illustrate a general expectation with respect to how a CO MAY manage the lifecycle of a volume via the API presented in this specification. Plugins SHOULD expose all RPCs for an interface: Controller plugins SHOULD implement all RPCs for the `Controller` service. Unsupported RPCs SHOULD return an appropriate error code that indicates such (e.g. `CALLNOTIMPLEMENTED`). The full list of plugin capabilities is documented in the `ControllerGetCapabilities` and `NodeGetCapabilities` RPCs. This section describes the interface between COs and Plugins. A CO interacts with an Plugin through RPCs. Each SP MUST provide: Node Plugin*: A gRPC endpoint serving CSI RPCs that MUST be run on the Node whereupon an SP-provisioned volume will be published. Controller Plugin*: A gRPC endpoint serving CSI RPCs that MAY be run anywhere. In some circumstances a single gRPC endpoint MAY serve all CSI RPCs (see Figure 3 in ). ```protobuf syntax = \"proto3\"; package csi.v1; import \"google/protobuf/descriptor.proto\"; import \"google/protobuf/timestamp.proto\"; import \"google/protobuf/wrappers.proto\"; option go_package = \"github.com/container-storage-interface/spec/lib/go/csi\"; extend google.protobuf.EnumOptions { // Indicates that this enum is OPTIONAL and part of an experimental // API that may be deprecated and eventually removed between minor // releases. bool alpha_enum = 1060; } extend google.protobuf.EnumValueOptions { // Indicates that this enum value is OPTIONAL and part of an // experimental API that may be deprecated and eventually removed // between minor releases. bool alphaenumvalue = 1060; } extend google.protobuf.FieldOptions { // Indicates that a field MAY contain information that is sensitive // and MUST be treated as such (e.g. not logged). bool csi_secret = 1059; // Indicates that this field is OPTIONAL and part of an experimental // API that may be deprecated and eventually removed between minor // releases. bool alpha_field = 1060; } extend google.protobuf.MessageOptions { // Indicates that this message is OPTIONAL and part of an experimental // API that may be deprecated and eventually removed between minor // releases. bool alpha_message = 1060; } extend google.protobuf.MethodOptions { // Indicates that this method is OPTIONAL and part of an experimental // API that may be deprecated and eventually removed between minor // releases. bool alpha_method = 1060; } extend google.protobuf.ServiceOptions { // Indicates that this service is OPTIONAL and part of an experimental // API that may be deprecated and eventually removed between minor // releases. bool alpha_service = 1060; } ``` There are three sets of RPCs: Identity Service*: Both the Node Plugin and the Controller Plugin MUST implement this sets of RPCs. 
Controller Service*: The Controller Plugin MUST implement this sets of RPCs. Node Service*: The Node Plugin MUST implement this sets of"
},
{
"data": "```protobuf service Identity { rpc GetPluginInfo(GetPluginInfoRequest) returns (GetPluginInfoResponse) {} rpc GetPluginCapabilities(GetPluginCapabilitiesRequest) returns (GetPluginCapabilitiesResponse) {} rpc Probe (ProbeRequest) returns (ProbeResponse) {} } service Controller { rpc CreateVolume (CreateVolumeRequest) returns (CreateVolumeResponse) {} rpc DeleteVolume (DeleteVolumeRequest) returns (DeleteVolumeResponse) {} rpc ControllerPublishVolume (ControllerPublishVolumeRequest) returns (ControllerPublishVolumeResponse) {} rpc ControllerUnpublishVolume (ControllerUnpublishVolumeRequest) returns (ControllerUnpublishVolumeResponse) {} rpc ValidateVolumeCapabilities (ValidateVolumeCapabilitiesRequest) returns (ValidateVolumeCapabilitiesResponse) {} rpc ListVolumes (ListVolumesRequest) returns (ListVolumesResponse) {} rpc GetCapacity (GetCapacityRequest) returns (GetCapacityResponse) {} rpc ControllerGetCapabilities (ControllerGetCapabilitiesRequest) returns (ControllerGetCapabilitiesResponse) {} rpc CreateSnapshot (CreateSnapshotRequest) returns (CreateSnapshotResponse) {} rpc DeleteSnapshot (DeleteSnapshotRequest) returns (DeleteSnapshotResponse) {} rpc ListSnapshots (ListSnapshotsRequest) returns (ListSnapshotsResponse) {} rpc ControllerExpandVolume (ControllerExpandVolumeRequest) returns (ControllerExpandVolumeResponse) {} rpc ControllerGetVolume (ControllerGetVolumeRequest) returns (ControllerGetVolumeResponse) { option (alpha_method) = true; } rpc ControllerModifyVolume (ControllerModifyVolumeRequest) returns (ControllerModifyVolumeResponse) { option (alpha_method) = true; } } service GroupController { option (alpha_service) = true; rpc GroupControllerGetCapabilities ( GroupControllerGetCapabilitiesRequest) returns (GroupControllerGetCapabilitiesResponse) {} rpc CreateVolumeGroupSnapshot(CreateVolumeGroupSnapshotRequest) returns (CreateVolumeGroupSnapshotResponse) { option (alpha_method) = true; } rpc DeleteVolumeGroupSnapshot(DeleteVolumeGroupSnapshotRequest) returns (DeleteVolumeGroupSnapshotResponse) { option (alpha_method) = true; } rpc GetVolumeGroupSnapshot( GetVolumeGroupSnapshotRequest) returns (GetVolumeGroupSnapshotResponse) { option (alpha_method) = true; } } service SnapshotMetadata { option (alpha_service) = true; rpc GetMetadataAllocated(GetMetadataAllocatedRequest) returns (stream GetMetadataAllocatedResponse) {} rpc GetMetadataDelta(GetMetadataDeltaRequest) returns (stream GetMetadataDeltaResponse) {} } service Node { rpc NodeStageVolume (NodeStageVolumeRequest) returns (NodeStageVolumeResponse) {} rpc NodeUnstageVolume (NodeUnstageVolumeRequest) returns (NodeUnstageVolumeResponse) {} rpc NodePublishVolume (NodePublishVolumeRequest) returns (NodePublishVolumeResponse) {} rpc NodeUnpublishVolume (NodeUnpublishVolumeRequest) returns (NodeUnpublishVolumeResponse) {} rpc NodeGetVolumeStats (NodeGetVolumeStatsRequest) returns (NodeGetVolumeStatsResponse) {} rpc NodeExpandVolume(NodeExpandVolumeRequest) returns (NodeExpandVolumeResponse) {} rpc NodeGetCapabilities (NodeGetCapabilitiesRequest) returns (NodeGetCapabilitiesResponse) {} rpc NodeGetInfo (NodeGetInfoRequest) returns (NodeGetInfoResponse) {} } ``` In general the Cluster Orchestrator (CO) is responsible for ensuring that there is no more than one call in-flight per volume at a given time. However, in some circumstances, the CO MAY lose state (for example when the CO crashes and restarts), and MAY issue multiple calls simultaneously for the same volume. 
The plugin SHOULD handle this as gracefully as possible. The error code `ABORTED` MAY be returned by the plugin in this case (see the section for details). The requirements documented herein apply equally and without exception, unless otherwise noted, for the fields of all protobuf message types defined by this specification. Violation of these requirements MAY result in RPC message data that is not compatible with all CO, Plugin, and/or CSI middleware implementations. CSI defines general size limits for fields of various types (see table below). The general size limit for a particular field MAY be overridden by specifying a different size limit in said field's description. Unless otherwise specified, fields SHALL NOT exceed the limits documented here. These limits apply for messages generated by both COs and plugins. | Size | Field Type | ||| | 128 bytes | string | | 4 KiB | map<string, string> | A field noted as `REQUIRED` MUST be specified, subject to any per-RPC caveats; caveats SHOULD be rare. A `repeated` or `map` field listed as `REQUIRED` MUST contain at least 1 element. A field noted as `OPTIONAL` MAY be specified and the specification SHALL clearly define expected behavior for the default, zero-value of such fields. Scalar fields, even REQUIRED ones, will be defaulted if not specified and any field set to the default value will not be serialized over the wire as per . Any of the RPCs defined in this spec MAY timeout and MAY be retried. The CO MAY choose the maximum time it is willing to wait for a call, how long it waits between retries, and how many time it retries (these values are not negotiated between plugin and CO). Idempotency requirements ensure that a retried call with the same fields continues where it left off when retried. The only way to cancel a call is to issue a \"negation\" call if one exists. For example, issue a `ControllerUnpublishVolume` call to cancel a pending `ControllerPublishVolume` operation, etc. In some cases, a CO MAY NOT be able to cancel a pending operation because it depends on the result of the pending operation in order to execute the \"negation\" call. For example, if a `CreateVolume` call never completes then a CO MAY NOT have the `volume_id` to call `DeleteVolume`"
},
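As an illustration of exercising these services during development, the Identity RPCs can be called over the plugin's Unix socket with a generic gRPC client such as grpcurl. The socket path and proto location below are assumptions, and `-proto` is passed because most plugins do not enable gRPC server reflection.

```bash
# Assumed paths: adjust to where your plugin listens and where csi.proto lives.
CSI_SOCK=/var/lib/kubelet/plugins/example/csi.sock
CSI_PROTO_DIR=./spec          # directory containing csi.proto

grpcurl -unix -plaintext \
  -import-path "${CSI_PROTO_DIR}" -proto csi.proto \
  "${CSI_SOCK}" csi.v1.Identity/GetPluginInfo

grpcurl -unix -plaintext \
  -import-path "${CSI_PROTO_DIR}" -proto csi.proto \
  "${CSI_SOCK}" csi.v1.Identity/Probe
```

Error responses map onto the gRPC status codes described in this specification, for example `UNIMPLEMENTED` for RPCs the plugin does not support and `ABORTED` when an operation is already pending for a volume.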
{
"data": "All CSI API calls defined in this spec MUST return a . Most gRPC libraries provide helper methods to set and read the status fields. The status `code` MUST contain a . COs MUST handle all valid error codes. Each RPC defines a set of gRPC error codes that MUST be returned by the plugin when specified conditions are encountered. In addition to those, if the conditions defined below are encountered, the plugin MUST return the associated gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Missing required field | 3 INVALID_ARGUMENT | Indicates that a required field is missing from the request. More human-readable information MAY be provided in the `status.message` field. | Caller MUST fix the request by adding the missing required field before retrying. | | Invalid or unsupported field in the request | 3 INVALID_ARGUMENT | Indicates that the one or more fields in this field is either not allowed by the Plugin or has an invalid value. More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST fix the field before retrying. | | Permission denied | 7 PERMISSION_DENIED | The Plugin is able to derive or otherwise infer an identity from the secrets present within an RPC, but that identity does not have permission to invoke the RPC. | System administrator SHOULD ensure that requisite permissions are granted, after which point the caller MAY retry the attempted RPC. | | Operation pending for volume | 10 ABORTED | Indicates that there is already an operation pending for the specified volume. In general the Cluster Orchestrator (CO) is responsible for ensuring that there is no more than one call \"in-flight\" per volume at a given time. However, in some circumstances, the CO MAY lose state (for example when the CO crashes and restarts), and MAY issue multiple calls simultaneously for the same volume. The Plugin, SHOULD handle this as gracefully as possible, and MAY return this error code to reject secondary calls. | Caller SHOULD ensure that there are no other calls pending for the specified volume, and then retry with exponential back off. | | Call not implemented | 12 UNIMPLEMENTED | The invoked RPC is not implemented by the Plugin or disabled in the Plugin's current mode of operation. | Caller MUST NOT retry. Caller MAY call `GetPluginCapabilities`, `ControllerGetCapabilities`, or `NodeGetCapabilities` to discover Plugin capabilities. | | Not authenticated | 16 UNAUTHENTICATED | The invoked RPC does not carry secrets that are valid for authentication. | Caller SHALL either fix the secrets provided in the RPC, or otherwise regalvanize said secrets such that they will pass authentication by the Plugin for the attempted RPC, after which point the caller MAY retry the attempted RPC. | The status `message` MUST contain a human readable description of error, if the status `code` is not `OK`. This string MAY be surfaced by CO to end users. The status `details` MUST be empty. In the future, this spec MAY require `details` to return a machine-parsable protobuf message if the status `code` is not `OK` to enable CO's to implement smarter error handling and fault resolution. Secrets MAY be required by plugin to complete a RPC request. A secret is a string to string map where the key identifies the name of the secret (e.g. \"username\" or \"password\"), and the value contains the secret data (e.g. \"bob\" or \"abc123\"). Each key MUST consist of alphanumeric characters, '-', '_' or '.'. Each value MUST contain a valid"
},
{
"data": "An SP MAY choose to accept binary (non-string) data by using a binary-to-text encoding scheme, like base64. An SP SHALL advertise the requirements for required secret keys and values in documentation. CO SHALL permit passing through the required secrets. A CO MAY pass the same secrets to all RPCs, therefore the keys for all unique secrets that an SP expects MUST be unique across all CSI operations. This information is sensitive and MUST be treated as such (not logged, etc.) by the CO. Identity service RPCs allow a CO to query a plugin for capabilities, health, and other metadata. The general flow of the success case MAY be as follows (protos illustrated in YAML for brevity): CO queries metadata via Identity RPC. ``` request: response: name: org.foo.whizbang.super-plugin vendor_version: blue-green manifest: baz: qaz ``` CO queries available capabilities of the plugin. ``` request: response: capabilities: service: type: CONTROLLER_SERVICE ``` CO queries the readiness of the plugin. ``` request: response: {} ``` ```protobuf message GetPluginInfoRequest { // Intentionally empty. } message GetPluginInfoResponse { // The name MUST follow domain name notation format // (https://tools.ietf.org/html/rfc1035#section-2.3.1). It SHOULD // include the plugin's host company name and the plugin name, // to minimize the possibility of collisions. It MUST be 63 // characters or less, beginning and ending with an alphanumeric // character ([a-z0-9A-Z]) with dashes (-), dots (.), and // alphanumerics between. This field is REQUIRED. string name = 1; // This field is REQUIRED. Value of this field is opaque to the CO. string vendor_version = 2; // This field is OPTIONAL. Values are opaque to the CO. map<string, string> manifest = 3; } ``` If the plugin is unable to complete the GetPluginInfo call successfully, it MUST return a non-ok gRPC code in the gRPC status. This REQUIRED RPC allows the CO to query the supported capabilities of the Plugin \"as a whole\": it is the grand sum of all capabilities of all instances of the Plugin software, as it is intended to be deployed. All instances of the same version (see `vendor_version` of `GetPluginInfoResponse`) of the Plugin SHALL return the same set of capabilities, regardless of both: (a) where instances are deployed on the cluster as well as; (b) which RPCs an instance is serving. ```protobuf message GetPluginCapabilitiesRequest { // Intentionally empty. } message GetPluginCapabilitiesResponse { // All the capabilities that the controller service supports. This // field is OPTIONAL. repeated PluginCapability capabilities = 1; } // Specifies a capability of the plugin. message PluginCapability { message Service { enum Type { UNKNOWN = 0; // CONTROLLER_SERVICE indicates that the Plugin provides RPCs for // the ControllerService. Plugins SHOULD provide this capability. // In rare cases certain plugins MAY wish to omit the // ControllerService entirely from their implementation, but such // SHOULD NOT be the common case. // The presence of this capability determines whether the CO will // attempt to invoke the REQUIRED ControllerService RPCs, as well // as specific RPCs as indicated by ControllerGetCapabilities. CONTROLLER_SERVICE = 1; // VOLUMEACCESSIBILITYCONSTRAINTS indicates that the volumes for // this plugin MAY NOT be equally accessible by all nodes in the // cluster. 
The CO MUST use the topology information returned by // CreateVolumeRequest along with the topology information // returned by NodeGetInfo to ensure that a given volume is // accessible from a given node when scheduling workloads. VOLUMEACCESSIBILITYCONSTRAINTS = 2; // GROUPCONTROLLERSERVICE indicates that the Plugin provides // RPCs for operating on groups of volumes. Plugins MAY provide // this"
},
{
"data": "// The presence of this capability determines whether the CO will // attempt to invoke the REQUIRED GroupController service RPCs, as // well as specific RPCs as indicated by // GroupControllerGetCapabilities. GROUPCONTROLLERSERVICE = 3 [(alphaenumvalue) = true]; // SNAPSHOTMETADATASERVICE indicates that the Plugin provides // RPCs to retrieve metadata on the allocated blocks of a single // snapshot, or the changed blocks between a pair of snapshots of // the same block volume. // The presence of this capability determines whether the CO will // attempt to invoke the OPTIONAL SnapshotMetadata service RPCs. SNAPSHOTMETADATASERVICE = 4 [(alphaenumvalue) = true]; } Type type = 1; } message VolumeExpansion { enum Type { UNKNOWN = 0; // ONLINE indicates that volumes may be expanded when published to // a node. When a Plugin implements this capability it MUST // implement either the EXPAND_VOLUME controller capability or the // EXPAND_VOLUME node capability or both. When a plugin supports // ONLINE volume expansion and also has the EXPAND_VOLUME // controller capability then the plugin MUST support expansion of // volumes currently published and available on a node. When a // plugin supports ONLINE volume expansion and also has the // EXPAND_VOLUME node capability then the plugin MAY support // expansion of node-published volume via NodeExpandVolume. // // Example 1: Given a shared filesystem volume (e.g. GlusterFs), // the Plugin may set the ONLINE volume expansion capability and // implement ControllerExpandVolume but not NodeExpandVolume. // // Example 2: Given a block storage volume type (e.g. EBS), the // Plugin may set the ONLINE volume expansion capability and // implement both ControllerExpandVolume and NodeExpandVolume. // // Example 3: Given a Plugin that supports volume expansion only // upon a node, the Plugin may set the ONLINE volume // expansion capability and implement NodeExpandVolume but not // ControllerExpandVolume. ONLINE = 1; // OFFLINE indicates that volumes currently published and // available on a node SHALL NOT be expanded via // ControllerExpandVolume. When a plugin supports OFFLINE volume // expansion it MUST implement either the EXPAND_VOLUME controller // capability or both the EXPAND_VOLUME controller capability and // the EXPAND_VOLUME node capability. // // Example 1: Given a block storage volume type (e.g. Azure Disk) // that does not support expansion of \"node-attached\" (i.e. // controller-published) volumes, the Plugin may indicate // OFFLINE volume expansion support and implement both // ControllerExpandVolume and NodeExpandVolume. OFFLINE = 2; } Type type = 1; } oneof type { // Service that the plugin supports. Service service = 1; VolumeExpansion volume_expansion = 2; } } ``` If the plugin is unable to complete the GetPluginCapabilities call successfully, it MUST return a non-ok gRPC code in the gRPC status. A Plugin MUST implement this RPC call. The primary utility of the Probe RPC is to verify that the plugin is in a healthy and ready state. If an unhealthy state is reported, via a non-success response, a CO MAY take action with the intent to bring the plugin to a healthy state. Such actions MAY include, but SHALL NOT be limited to, the following: Restarting the plugin container, or Notifying the plugin supervisor. The Plugin MAY verify that it has the right configurations, devices, dependencies and drivers in order to run and return a success if the validation succeeds. The CO MAY invoke this RPC at any time. 
A CO MAY invoke this call multiple times with the understanding that a plugin's implementation MAY NOT be trivial and there MAY be overhead incurred by such repeated calls. The SP SHALL document guidance and known limitations regarding a particular Plugin's implementation of this RPC. For example, the SP MAY document the maximum frequency at which its Probe implementation SHOULD be"
},
{
"data": "```protobuf message ProbeRequest { // Intentionally empty. } message ProbeResponse { // Readiness allows a plugin to report its initialization status back // to the CO. Initialization for some plugins MAY be time consuming // and it is important for a CO to distinguish between the following // cases: // // 1) The plugin is in an unhealthy state and MAY need restarting. In // this case a gRPC error code SHALL be returned. // 2) The plugin is still initializing, but is otherwise perfectly // healthy. In this case a successful response SHALL be returned // with a readiness value of `false`. Calls to the plugin's // Controller and/or Node services MAY fail due to an incomplete // initialization state. // 3) The plugin has finished initializing and is ready to service // calls to its Controller and/or Node services. A successful // response is returned with a readiness value of `true`. // // This field is OPTIONAL. If not present, the caller SHALL assume // that the plugin is in a ready state and is accepting calls to its // Controller and/or Node services (according to the plugin's reported // capabilities). .google.protobuf.BoolValue ready = 1; } ``` If the plugin is unable to complete the Probe call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Plugin not healthy | 9 FAILED_PRECONDITION | Indicates that the plugin is not in a healthy/ready state. | Caller SHOULD assume the plugin is not healthy and that future RPCs MAY fail because of this condition. | | Missing required dependency | 9 FAILED_PRECONDITION | Indicates that the plugin is missing one or more required dependency. | Caller MUST assume the plugin is not healthy. | A Controller Plugin MUST implement this RPC call if it has `CREATEDELETEVOLUME` controller capability. This RPC will be called by the CO to provision a new volume on behalf of a user (to be consumed as either a block device or a mounted filesystem). This operation MUST be idempotent. If a volume corresponding to the specified volume `name` already exists, is accessible from `accessibilityrequirements`, and is compatible with the specified `capacityrange`, `volumecapabilities`, `parameters` and `mutableparameters` in the `CreateVolumeRequest`, the Plugin MUST reply `0 OK` with the corresponding `CreateVolumeResponse`. The `parameters` field SHALL contain opaque volume attributes to be specified at creation time. The `mutable_parameters` field SHALL contain opaque volume attributes that are defined at creation time but MAY also be changed during the lifetime of the volume via a subsequent `ControllerModifyVolume` RPC. Values specified in `mutable_parameters` MUST take precedence over the values from `parameters`. Plugins MAY create 3 types of volumes: Empty volumes. When plugin supports `CREATEDELETEVOLUME` OPTIONAL capability. From an existing snapshot. When plugin supports `CREATEDELETEVOLUME` and `CREATEDELETESNAPSHOT` OPTIONAL capabilities. From an existing volume. When plugin supports cloning, and reports the OPTIONAL capabilities `CREATEDELETEVOLUME` and `CLONE_VOLUME`. 
If CO requests a volume to be created from existing snapshot or volume and the requested size of the volume is larger than the original snapshotted (or cloned volume), the Plugin can either refuse such a call with `OUTOFRANGE` error or MUST provide a volume that, when presented to a workload by `NodePublish` call, has both the requested (larger) size and contains data from the snapshot (or original"
},
{
"data": "Explicitly, it's the responsibility of the Plugin to resize the filesystem of the newly created volume at (or before) the `NodePublish` call, if the volume has `VolumeCapability` access type `MountVolume` and the filesystem resize is required in order to provision the requested capacity. ```protobuf message CreateVolumeRequest { // The suggested name for the storage space. This field is REQUIRED. // It serves two purposes: // 1) Idempotency - This name is generated by the CO to achieve // idempotency. The Plugin SHOULD ensure that multiple // `CreateVolume` calls for the same name do not result in more // than one piece of storage provisioned corresponding to that // name. If a Plugin is unable to enforce idempotency, the CO's // error recovery logic could result in multiple (unused) volumes // being provisioned. // In the case of error, the CO MUST handle the gRPC error codes // per the recovery behavior defined in the \"CreateVolume Errors\" // section below. // The CO is responsible for cleaning up volumes it provisioned // that it no longer needs. If the CO is uncertain whether a volume // was provisioned or not when a `CreateVolume` call fails, the CO // MAY call `CreateVolume` again, with the same name, to ensure the // volume exists and to retrieve the volume's `volume_id` (unless // otherwise prohibited by \"CreateVolume Errors\"). // 2) Suggested name - Some storage systems allow callers to specify // an identifier by which to refer to the newly provisioned // storage. If a storage system supports this, it can optionally // use this name as the identifier for the new volume. // Any Unicode string that conforms to the length limit is allowed // except those containing the following banned characters: // U+0000-U+0008, U+000B, U+000C, U+000E-U+001F, U+007F-U+009F. // (These are control characters other than commonly used whitespace.) string name = 1; // This field is OPTIONAL. This allows the CO to specify the capacity // requirement of the volume to be provisioned. If not specified, the // Plugin MAY choose an implementation-defined capacity range. If // specified it MUST always be honored, even when creating volumes // from a source; which MAY force some backends to internally extend // the volume after creating it. CapacityRange capacity_range = 2; // The capabilities that the provisioned volume MUST have. SP MUST // provision a volume that will satisfy ALL of the capabilities // specified in this list. Otherwise SP MUST return the appropriate // gRPC error code. // The Plugin MUST assume that the CO MAY use the provisioned volume // with ANY of the capabilities specified in this list. // For example, a CO MAY specify two volume capabilities: one with // access mode SINGLENODEWRITER and another with access mode // MULTINODEREADER_ONLY. In this case, the SP MUST verify that the // provisioned volume can be used in either mode. // This also enables the CO to do early validation: If ANY of the // specified volume capabilities are not supported by the SP, the call // MUST return the appropriate gRPC error code. // This field is REQUIRED. repeated VolumeCapability volume_capabilities = 3; // Plugin specific creation-time parameters passed in as opaque // key-value pairs. This field is OPTIONAL. The Plugin is responsible // for parsing and validating these parameters. COs will treat // these as opaque. map<string, string> parameters = 4; // Secrets required by plugin to complete volume creation request. // This field is OPTIONAL. 
Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 5 [(csi_secret) = true]; // If specified, the new volume will be pre-populated with data from // this"
},
{
"data": "This field is OPTIONAL. VolumeContentSource volumecontentsource = 6; // Specifies where (regions, zones, racks, etc.) the provisioned // volume MUST be accessible from. // An SP SHALL advertise the requirements for topological // accessibility information in documentation. COs SHALL only specify // topological accessibility information supported by the SP. // This field is OPTIONAL. // This field SHALL NOT be specified unless the SP has the // VOLUMEACCESSIBILITYCONSTRAINTS plugin capability. // If this field is not specified and the SP has the // VOLUMEACCESSIBILITYCONSTRAINTS plugin capability, the SP MAY // choose where the provisioned volume is accessible from. TopologyRequirement accessibility_requirements = 7; // Plugin specific creation-time parameters passed in as opaque // key-value pairs. These mutable_parameteres MAY also be // changed during the lifetime of the volume via a subsequent // `ControllerModifyVolume` RPC. This field is OPTIONAL. // The Plugin is responsible for parsing and validating these // parameters. COs will treat these as opaque. // Plugins MUST treat these // as if they take precedence over the parameters field. // This field SHALL NOT be specified unless the SP has the // MODIFY_VOLUME plugin capability. map<string, string> mutableparameters = 8 [(alphafield) = true]; } // Specifies what source the volume will be created from. One of the // type fields MUST be specified. message VolumeContentSource { message SnapshotSource { // Contains identity information for the existing source snapshot. // This field is REQUIRED. Plugin is REQUIRED to support creating // volume from snapshot if it supports the capability // CREATEDELETESNAPSHOT. string snapshot_id = 1; } message VolumeSource { // Contains identity information for the existing source volume. // This field is REQUIRED. Plugins reporting CLONE_VOLUME // capability MUST support creating a volume from another volume. string volume_id = 1; } oneof type { SnapshotSource snapshot = 1; VolumeSource volume = 2; } } message CreateVolumeResponse { // Contains all attributes of the newly created volume that are // relevant to the CO along with information required by the Plugin // to uniquely identify the volume. This field is REQUIRED. Volume volume = 1; } // Specify a capability of a volume. message VolumeCapability { // Indicate that the volume will be accessed via the block device API. message BlockVolume { // Intentionally empty, for now. } // Indicate that the volume will be accessed via the filesystem API. message MountVolume { // The filesystem type. This field is OPTIONAL. // An empty string is equal to an unspecified field value. string fs_type = 1; // The mount options that can be used for the volume. This field is // OPTIONAL. `mount_flags` MAY contain sensitive information. // Therefore, the CO and the Plugin MUST NOT leak this information // to untrusted entities. The total size of this repeated field // SHALL NOT exceed 4 KiB. repeated string mount_flags = 2; // If SP has VOLUMEMOUNTGROUP node capability and CO provides // this field then SP MUST ensure that the volumemountgroup // parameter is passed as the group identifier to the underlying // operating system mount system call, with the understanding // that the set of available mount call parameters and/or // mount implementations may vary across operating systems. 
// Additionally, new file and/or directory entries written to // the underlying filesystem SHOULD be permission-labeled in such a // manner, unless otherwise modified by a workload, that they are // both readable and writable by said mount group identifier. // This is an OPTIONAL field. string volumemountgroup = 3; } // Specify how a volume can be"
},
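As an illustration only (not part of the specification), a plugin might enforce the `mount_flags` size limit described above before acting on a `MountVolume` capability. The types and helper names below are invented for this sketch; only the gRPC `status`/`codes` usage reflects a real API.

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// mountVolume is a local stand-in for VolumeCapability.MountVolume.
type mountVolume struct {
	FsType           string
	MountFlags       []string
	VolumeMountGroup string // passed to the mount call as the group identifier, if supported
}

// validateMountVolume rejects capabilities whose mount_flags exceed the
// 4 KiB budget stated in the spec.
func validateMountVolume(m mountVolume) error {
	total := 0
	for _, f := range m.MountFlags {
		total += len(f)
	}
	if total > 4*1024 {
		return status.Errorf(codes.InvalidArgument,
			"mount_flags total size %d bytes exceeds 4 KiB", total)
	}
	return nil
}

func main() {
	err := validateMountVolume(mountVolume{FsType: "ext4", MountFlags: []string{"noatime"}})
	fmt.Println(err) // <nil>
}
```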
{
"data": "message AccessMode { enum Mode { UNKNOWN = 0; // Can only be published once as read/write on a single node, at // any given time. SINGLENODEWRITER = 1; // Can only be published once as readonly on a single node, at // any given time. SINGLENODEREADER_ONLY = 2; // Can be published as readonly at multiple nodes simultaneously. MULTINODEREADER_ONLY = 3; // Can be published at multiple nodes simultaneously. Only one of // the node can be used as read/write. The rest will be readonly. MULTINODESINGLE_WRITER = 4; // Can be published as read/write at multiple nodes // simultaneously. MULTINODEMULTI_WRITER = 5; // Can only be published once as read/write at a single workload // on a single node, at any given time. SHOULD be used instead of // SINGLENODEWRITER for COs using the experimental // SINGLENODEMULTI_WRITER capability. SINGLENODESINGLEWRITER = 6 [(alphaenum_value) = true]; // Can be published as read/write at multiple workloads on a // single node simultaneously. SHOULD be used instead of // SINGLENODEWRITER for COs using the experimental // SINGLENODEMULTI_WRITER capability. SINGLENODEMULTIWRITER = 7 [(alphaenum_value) = true]; } // This field is REQUIRED. Mode mode = 1; } // Specifies what API the volume will be accessed using. One of the // following fields MUST be specified. oneof access_type { BlockVolume block = 1; MountVolume mount = 2; } // This is a REQUIRED field. AccessMode access_mode = 3; } // The capacity of the storage space in bytes. To specify an exact size, // `requiredbytes` and `limitbytes` SHALL be set to the same value. At // least one of the these fields MUST be specified. message CapacityRange { // Volume MUST be at least this big. This field is OPTIONAL. // A value of 0 is equal to an unspecified field value. // The value of this field MUST NOT be negative. int64 required_bytes = 1; // Volume MUST not be bigger than this. This field is OPTIONAL. // A value of 0 is equal to an unspecified field value. // The value of this field MUST NOT be negative. int64 limit_bytes = 2; } // Information about a specific volume. message Volume { // The capacity of the volume in bytes. This field is OPTIONAL. If not // set (value of 0), it indicates that the capacity of the volume is // unknown (e.g., NFS share). // The value of this field MUST NOT be negative. int64 capacity_bytes = 1; // The identifier for this volume, generated by the plugin. // This field is REQUIRED. // This field MUST contain enough information to uniquely identify // this specific volume vs all other volumes supported by this plugin. // This field SHALL be used by the CO in subsequent calls to refer to // this volume. // The SP is NOT responsible for global uniqueness of volume_id across // multiple SPs. string volume_id = 2; // Opaque static properties of the volume. SP MAY use this field to // ensure subsequent volume validation and publishing calls have // contextual information. // The contents of this field SHALL be opaque to a CO. // The contents of this field SHALL NOT be mutable. // The contents of this field SHALL be safe for the CO to cache. // The contents of this field SHOULD NOT contain sensitive // information. // The contents of this field SHOULD NOT be used for uniquely // identifying a volume. The `volume_id` alone SHOULD be sufficient to // identify the volume. // A volume uniquely identified by `volume_id` SHALL always report the // same"
},
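To make the request shape concrete, here is a hedged sketch of a CO building a `CreateVolumeRequest` with an exact size (`required_bytes` equal to `limit_bytes`), a single-node read/write mount capability, and a snapshot as the content source. It assumes the Go bindings generated from this protobuf (commonly imported from the CSI spec repository); the field and constant names follow standard protoc-gen-go conventions and the IDs are hypothetical.

```go
package main

import (
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

func main() {
	const gib = int64(1) << 30
	req := &csi.CreateVolumeRequest{
		Name: "restored-db-volume", // suggested name, REQUIRED for idempotency
		CapacityRange: &csi.CapacityRange{
			RequiredBytes: 10 * gib,
			LimitBytes:    10 * gib, // equal values request an exact size
		},
		VolumeCapabilities: []*csi.VolumeCapability{{
			AccessType: &csi.VolumeCapability_Mount{
				Mount: &csi.VolumeCapability_MountVolume{FsType: "ext4"},
			},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
			},
		}},
		VolumeContentSource: &csi.VolumeContentSource{
			Type: &csi.VolumeContentSource_Snapshot{
				Snapshot: &csi.VolumeContentSource_SnapshotSource{
					SnapshotId: "snap-example", // hypothetical snapshot ID
				},
			},
		},
	}
	fmt.Println(req.GetName(), req.GetCapacityRange().GetRequiredBytes())
}
```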
{
"data": "// This field is OPTIONAL and when present MUST be passed to volume // validation and publishing calls. map<string, string> volume_context = 3; // If specified, indicates that the volume is not empty and is // pre-populated with data from the specified source. // This field is OPTIONAL. VolumeContentSource content_source = 4; // Specifies where (regions, zones, racks, etc.) the provisioned // volume is accessible from. // A plugin that returns this field MUST also set the // VOLUMEACCESSIBILITYCONSTRAINTS plugin capability. // An SP MAY specify multiple topologies to indicate the volume is // accessible from multiple locations. // COs MAY use this information along with the topology information // returned by NodeGetInfo to ensure that a given volume is accessible // from a given node when scheduling workloads. // This field is OPTIONAL. If it is not specified, the CO MAY assume // the volume is equally accessible from all nodes in the cluster and // MAY schedule workloads referencing the volume on any available // node. // // Example 1: // accessible_topology = {\"region\": \"R1\", \"zone\": \"Z2\"} // Indicates a volume accessible only from the \"region\" \"R1\" and the // \"zone\" \"Z2\". // // Example 2: // accessible_topology = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"} // Indicates a volume accessible from both \"zone\" \"Z2\" and \"zone\" \"Z3\" // in the \"region\" \"R1\". repeated Topology accessible_topology = 5; } message TopologyRequirement { // Specifies the list of topologies the provisioned volume MUST be // accessible from. // This field is OPTIONAL. If TopologyRequirement is specified either // requisite or preferred or both MUST be specified. // // If requisite is specified, the provisioned volume MUST be // accessible from at least one of the requisite topologies. // // Given // x = number of topologies provisioned volume is accessible from // n = number of requisite topologies // The CO MUST ensure n >= 1. The SP MUST ensure x >= 1 // If x==n, then the SP MUST make the provisioned volume available to // all topologies from the list of requisite topologies. If it is // unable to do so, the SP MUST fail the CreateVolume call. // For example, if a volume should be accessible from a single zone, // and requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"} // then the provisioned volume MUST be accessible from the \"region\" // \"R1\" and the \"zone\" \"Z2\". // Similarly, if a volume should be accessible from two zones, and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"} // then the provisioned volume MUST be accessible from the \"region\" // \"R1\" and both \"zone\" \"Z2\" and \"zone\" \"Z3\". // // If x<n, then the SP SHALL choose x unique topologies from the list // of requisite topologies. If it is unable to do so, the SP MUST fail // the CreateVolume call. // For example, if a volume should be accessible from a single zone, // and requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"} // then the SP may choose to make the provisioned volume available in // either the \"zone\" \"Z2\" or the \"zone\" \"Z3\" in the \"region\" \"R1\". 
// Similarly, if a volume should be accessible from two zones, and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"}, // {\"region\": \"R1\", \"zone\": \"Z4\"} // then the provisioned volume MUST be accessible from any combination // of two unique topologies: e.g. \"R1/Z2\" and \"R1/Z3\", or \"R1/Z2\" and // \"R1/Z4\", or \"R1/Z3\" and"
},
{
"data": "// // If x>n, then the SP MUST make the provisioned volume available from // all topologies from the list of requisite topologies and MAY choose // the remaining x-n unique topologies from the list of all possible // topologies. If it is unable to do so, the SP MUST fail the // CreateVolume call. // For example, if a volume should be accessible from two zones, and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"} // then the provisioned volume MUST be accessible from the \"region\" // \"R1\" and the \"zone\" \"Z2\" and the SP may select the second zone // independently, e.g. \"R1/Z4\". repeated Topology requisite = 1; // Specifies the list of topologies the CO would prefer the volume to // be provisioned in. // // This field is OPTIONAL. If TopologyRequirement is specified either // requisite or preferred or both MUST be specified. // // An SP MUST attempt to make the provisioned volume available using // the preferred topologies in order from first to last. // // If requisite is specified, all topologies in preferred list MUST // also be present in the list of requisite topologies. // // If the SP is unable to to make the provisioned volume available // from any of the preferred topologies, the SP MAY choose a topology // from the list of requisite topologies. // If the list of requisite topologies is not specified, then the SP // MAY choose from the list of all possible topologies. // If the list of requisite topologies is specified and the SP is // unable to to make the provisioned volume available from any of the // requisite topologies it MUST fail the CreateVolume call. // // Example 1: // Given a volume should be accessible from a single zone, and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"} // preferred = // {\"region\": \"R1\", \"zone\": \"Z3\"} // then the SP SHOULD first attempt to make the provisioned volume // available from \"zone\" \"Z3\" in the \"region\" \"R1\" and fall back to // \"zone\" \"Z2\" in the \"region\" \"R1\" if that is not possible. // // Example 2: // Given a volume should be accessible from a single zone, and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"}, // {\"region\": \"R1\", \"zone\": \"Z4\"}, // {\"region\": \"R1\", \"zone\": \"Z5\"} // preferred = // {\"region\": \"R1\", \"zone\": \"Z4\"}, // {\"region\": \"R1\", \"zone\": \"Z2\"} // then the SP SHOULD first attempt to make the provisioned volume // accessible from \"zone\" \"Z4\" in the \"region\" \"R1\" and fall back to // \"zone\" \"Z2\" in the \"region\" \"R1\" if that is not possible. If that // is not possible, the SP may choose between either the \"zone\" // \"Z3\" or \"Z5\" in the \"region\" \"R1\". // // Example 3: // Given a volume should be accessible from TWO zones (because an // opaque parameter in CreateVolumeRequest, for example, specifies // the volume is accessible from two zones, aka synchronously // replicated), and // requisite = // {\"region\": \"R1\", \"zone\": \"Z2\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"}, // {\"region\": \"R1\", \"zone\": \"Z4\"}, // {\"region\": \"R1\", \"zone\": \"Z5\"} // preferred = // {\"region\": \"R1\", \"zone\": \"Z5\"}, // {\"region\": \"R1\", \"zone\": \"Z3\"} // then the SP SHOULD first attempt to make the provisioned volume // accessible from the combination of the two \"zones\" \"Z5\" and \"Z3\" in // the \"region\" \"R1\". 
If that's not possible, it should fall back to // a combination of \"Z5\" and other possibilities from the list of // requisite.
},
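The requisite/preferred rules above reduce to: choose the needed number of topologies, honoring the preferred order first, then filling from the remaining requisite entries. The following self-contained sketch shows that selection in simplified form; it uses a local `topology` type rather than generated bindings, and it does not cover the x>n case where the SP may add topologies outside the requisite list.

```go
package main

import (
	"fmt"
	"reflect"
)

// topology is a local stand-in for the Topology message
// (a map of topological domains to segments).
type topology map[string]string

// pickTopologies chooses x topologies for a new volume: preferred entries
// first, in order, then any remaining requisite entries. It fails when the
// requisite list cannot supply x topologies, which corresponds to the case
// where the SP MUST fail the CreateVolume call.
func pickTopologies(requisite, preferred []topology, x int) ([]topology, error) {
	contains := func(list []topology, t topology) bool {
		for _, c := range list {
			if reflect.DeepEqual(c, t) {
				return true
			}
		}
		return false
	}
	var out []topology
	for _, p := range preferred {
		if len(out) == x {
			break
		}
		// preferred entries MUST also appear in requisite when requisite is set
		if len(requisite) == 0 || contains(requisite, p) {
			out = append(out, p)
		}
	}
	for _, r := range requisite {
		if len(out) == x {
			break
		}
		if !contains(out, r) {
			out = append(out, r)
		}
	}
	if len(out) < x {
		return nil, fmt.Errorf("cannot provide %d topologies from %d requisite entries", x, len(requisite))
	}
	return out, nil
}

func main() {
	req := []topology{{"region": "R1", "zone": "Z2"}, {"region": "R1", "zone": "Z3"}}
	pref := []topology{{"region": "R1", "zone": "Z3"}}
	fmt.Println(pickTopologies(req, pref, 1)) // picks Z3 first because it is preferred
}
```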
{
"data": "If that's not possible, it should fall back to a // combination of \"Z3\" and other possibilities from the list of // requisite. If that's not possible, it should fall back to a // combination of other possibilities from the list of requisite. repeated Topology preferred = 2; } // Topology is a map of topological domains to topological segments. // A topological domain is a sub-division of a cluster, like \"region\", // \"zone\", \"rack\", etc. // A topological segment is a specific instance of a topological domain, // like \"zone3\", \"rack3\", etc. // For example {\"com.company/zone\": \"Z1\", \"com.company/rack\": \"R3\"} // Valid keys have two segments: an OPTIONAL prefix and name, separated // by a slash (/), for example: \"com.company.example/zone\". // The key name segment is REQUIRED. The prefix is OPTIONAL. // The key name MUST be 63 characters or less, begin and end with an // alphanumeric character ([a-z0-9A-Z]), and contain only dashes (-), // underscores (_), dots (.), or alphanumerics in between, for example // \"zone\". // The key prefix MUST be 63 characters or less, begin and end with a // lower-case alphanumeric character ([a-z0-9]), contain only // dashes (-), dots (.), or lower-case alphanumerics in between, and // follow domain name notation format // (https://tools.ietf.org/html/rfc1035#section-2.3.1). // The key prefix SHOULD include the plugin's host company name and/or // the plugin name, to minimize the possibility of collisions with keys // from other plugins. // If a key prefix is specified, it MUST be identical across all // topology keys returned by the SP (across all RPCs). // Keys MUST be case-insensitive. Meaning the keys \"Zone\" and \"zone\" // MUST not both exist. // Each value (topological segment) MUST contain 1 or more strings. // Each string MUST be 63 characters or less and begin and end with an // alphanumeric character with '-', '_', '.', or alphanumerics in // between. message Topology { map<string, string> segments = 1; } ``` If the plugin is unable to complete the CreateVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Source incompatible or not supported | 3 INVALIDARGUMENT | Besides the general cases, this code MUST also be used to indicate when plugin supporting CREATEDELETE_VOLUME cannot create a volume from the requested source (`SnapshotSource` or `VolumeSource`). Failure MAY be caused by not supporting the source (CO SHOULD NOT have provided that source) or incompatibility between `parameters` from the source and the ones requested for the new volume. More human-readable information SHOULD be provided in the gRPC `status.message` field if the problem is the source. | On source related issues, caller MUST use different parameters, a different source, or no source at all. | | Source does not exist | 5 NOTFOUND | Indicates that the specified source does not exist. | Caller MUST verify that the `volumecontent_source` is correct, the source is accessible, and has not been deleted before retrying with exponential back off. 
| | Volume already exists but is incompatible | 6 ALREADY_EXISTS | Indicates that a volume corresponding to the specified volume `name` already exists but is incompatible with the specified `capacity_range`, `volume_capabilities`, `parameters`, `accessibility_requirements` or `volume_content_source`. | Caller MUST fix the arguments or use a different `name` before retrying. | | Unable to provision in `accessible_topology` | 8 RESOURCE_EXHAUSTED | Indicates that although the `accessible_topology` field is valid, a new volume can not be provisioned with the specified topology constraints.
},
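The topology key format rules above can be checked mechanically. Below is a hypothetical validation helper; the regular expressions are one interpretation of the prose (they do not cover every rule, e.g. case-insensitive key uniqueness across a map), not normative patterns from the specification.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// name: <=63 chars, alphanumeric at both ends, with '-', '_', '.'
	// or alphanumerics in between.
	nameRE = regexp.MustCompile(`^[A-Za-z0-9]([A-Za-z0-9._-]{0,61}[A-Za-z0-9])?$`)
	// prefix: <=63 chars, lower-case alphanumeric at both ends, with '-',
	// '.' or lower-case alphanumerics in between (domain-name style).
	prefixRE = regexp.MustCompile(`^[a-z0-9]([a-z0-9.-]{0,61}[a-z0-9])?$`)
)

// validateTopologyKey checks a single "prefix/name" or bare "name" key.
func validateTopologyKey(key string) error {
	name := key
	if i := strings.LastIndex(key, "/"); i >= 0 {
		prefix := key[:i]
		name = key[i+1:]
		if !prefixRE.MatchString(prefix) {
			return fmt.Errorf("invalid topology key prefix %q", prefix)
		}
	}
	if !nameRE.MatchString(name) {
		return fmt.Errorf("invalid topology key name %q", name)
	}
	return nil
}

func main() {
	fmt.Println(validateTopologyKey("com.example.csi/zone")) // <nil>
	fmt.Println(validateTopologyKey("com.example.csi/-bad")) // error
}
```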
{
"data": "More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST ensure that whatever is preventing volumes from being provisioned in the specified location (e.g. quota issues) is addressed before retrying with exponential backoff. | | Unsupported `capacityrange` | 11 OUTOF_RANGE | Indicates that the capacity range is not allowed by the Plugin, for example when trying to create a volume smaller than the source snapshot or the Plugin does not support creating volumes larger than the source snapshot or source volume. More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST fix the capacity range before retrying. | A Controller Plugin MUST implement this RPC call if it has `CREATEDELETEVOLUME` capability. This RPC will be called by the CO to deprovision a volume. This operation MUST be idempotent. If a volume corresponding to the specified `volume_id` does not exist or the artifacts associated with the volume do not exist anymore, the Plugin MUST reply `0 OK`. CSI plugins SHOULD treat volumes independent from their snapshots. If the Controller Plugin supports deleting a volume without affecting its existing snapshots, then these snapshots MUST still be fully operational and acceptable as sources for new volumes as well as appear on `ListSnapshot` calls once the volume has been deleted. When a Controller Plugin does not support deleting a volume without affecting its existing snapshots, then the volume MUST NOT be altered in any way by the request and the operation must return the `FAILED_PRECONDITION` error code and MAY include meaningful human-readable information in the `status.message` field. ```protobuf message DeleteVolumeRequest { // The ID of the volume to be deprovisioned. // This field is REQUIRED. string volume_id = 1; // Secrets required by plugin to complete volume deletion request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 2 [(csi_secret) = true]; } message DeleteVolumeResponse { // Intentionally empty. } ``` If the plugin is unable to complete the DeleteVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume in use | 9 FAILEDPRECONDITION | Indicates that the volume corresponding to the specified `volumeid` could not be deleted because it is in use by another resource or has snapshots and the plugin doesn't treat them as independent entities. | Caller SHOULD ensure that there are no other resources using the volume and that it has no snapshots, and then retry with exponential back off. | A Controller Plugin MUST implement this RPC call if it has `PUBLISHUNPUBLISHVOLUME` controller capability. This RPC will be called by the CO when it wants to place a workload that uses the volume onto a node. The Plugin SHOULD perform the work that is necessary for making the volume available on the given node. The Plugin MUST NOT assume that this RPC will be executed on the node where the volume will be used. This operation MUST be idempotent. 
If the volume corresponding to the `volume_id` has already been published at the node corresponding to the `node_id`, and is compatible with the specified `volume_capability` and `readonly` flag, the Plugin MUST reply `0 OK`.
},
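As a sketch of the DeleteVolume semantics above (idempotent success for missing volumes, `FAILED_PRECONDITION` when dependent snapshots exist), here is a hypothetical handler. The `volumeStore` interface and its methods are invented for the example; only the gRPC `status`/`codes` usage reflects a real API.

```go
package main

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// volumeStore is a hypothetical backend abstraction.
type volumeStore interface {
	Exists(ctx context.Context, volumeID string) (bool, error)
	HasDependentSnapshots(ctx context.Context, volumeID string) (bool, error)
	Delete(ctx context.Context, volumeID string) error
}

type emptyResponse struct{} // stand-in for DeleteVolumeResponse

func deleteVolume(ctx context.Context, store volumeStore, volumeID string) (*emptyResponse, error) {
	exists, err := store.Exists(ctx, volumeID)
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	if !exists {
		// Idempotency: a missing volume is a successful deletion.
		return &emptyResponse{}, nil
	}
	// This hypothetical plugin does not treat snapshots as independent of
	// their source volume, so deletion is refused while snapshots exist.
	dep, err := store.HasDependentSnapshots(ctx, volumeID)
	if err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	if dep {
		return nil, status.Error(codes.FailedPrecondition,
			"volume has snapshots and cannot be deleted until they are removed")
	}
	if err := store.Delete(ctx, volumeID); err != nil {
		return nil, status.Error(codes.Internal, err.Error())
	}
	return &emptyResponse{}, nil
}

func main() {}
```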
{
"data": "If the operation failed or the CO does not know if the operation has failed or not, it MAY choose to call `ControllerPublishVolume` again or choose to call `ControllerUnpublishVolume`. The CO MAY call this RPC for publishing a volume to multiple nodes if the volume has `MULTINODE` capability (i.e., `MULTINODEREADERONLY`, `MULTINODESINGLEWRITER` or `MULTINODEMULTIWRITER`). ```protobuf message ControllerPublishVolumeRequest { // The ID of the volume to be used on a node. // This field is REQUIRED. string volume_id = 1; // The ID of the node. This field is REQUIRED. The CO SHALL set this // field to match the node ID returned by `NodeGetInfo`. string node_id = 2; // Volume capability describing how the CO intends to use this volume. // SP MUST ensure the CO can use the published volume as described. // Otherwise SP MUST return the appropriate gRPC error code. // This is a REQUIRED field. VolumeCapability volume_capability = 3; // Indicates SP MUST publish the volume in readonly mode. // CO MUST set this field to false if SP does not have the // PUBLISH_READONLY controller capability. // This is a REQUIRED field. bool readonly = 4; // Secrets required by plugin to complete controller publish volume // request. This field is OPTIONAL. Refer to the // `Secrets Requirements` section on how to use this field. map<string, string> secrets = 5 [(csi_secret) = true]; // Volume context as returned by SP in // CreateVolumeResponse.Volume.volume_context. // This field is OPTIONAL and MUST match the volume_context of the // volume identified by `volume_id`. map<string, string> volume_context = 6; } message ControllerPublishVolumeResponse { // Opaque static publish properties of the volume. SP MAY use this // field to ensure subsequent `NodeStageVolume` or `NodePublishVolume` // calls calls have contextual information. // The contents of this field SHALL be opaque to a CO. // The contents of this field SHALL NOT be mutable. // The contents of this field SHALL be safe for the CO to cache. // The contents of this field SHOULD NOT contain sensitive // information. // The contents of this field SHOULD NOT be used for uniquely // identifying a volume. The `volume_id` alone SHOULD be sufficient to // identify the volume. // This field is OPTIONAL and when present MUST be passed to // subsequent `NodeStageVolume` or `NodePublishVolume` calls map<string, string> publish_context = 1; } ``` If the plugin is unable to complete the ControllerPublishVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Node does not exist | 5 NOTFOUND | Indicates that a node corresponding to the specified `nodeid` does not exist. | Caller MUST verify that the `node_id` is correct and that the node is available and has not been terminated or deleted before retrying with exponential backoff. 
| | Volume published but is incompatible | 6 ALREADY_EXISTS | Indicates that a volume corresponding to the specified `volume_id` has already been published at the node corresponding to the specified `node_id` but is incompatible with the specified `volume_capability` or `readonly` flag. | Caller MUST fix the arguments before retrying. |
},
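To illustrate how a plugin might map the ControllerPublishVolume conditions above onto gRPC codes, here is a hedged sketch for a single-node volume (no `MULTI_NODE` capability). The `attachments` bookkeeping and the compatibility check (reduced to the `readonly` flag) are invented simplifications; a real SP must compare the full `volume_capability`.

```go
package main

import (
	"context"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// attachments is a hypothetical in-memory view of what is published where.
type attachments struct {
	volumes map[string]bool            // known volume IDs
	nodes   map[string]bool            // known node IDs
	attach  map[string]map[string]bool // volumeID -> nodeID -> readonly
}

func (a *attachments) publish(ctx context.Context, volumeID, nodeID string, readonly bool) (map[string]string, error) {
	if !a.volumes[volumeID] {
		return nil, status.Error(codes.NotFound, "volume does not exist")
	}
	if !a.nodes[nodeID] {
		return nil, status.Error(codes.NotFound, "node does not exist")
	}
	if byNode, ok := a.attach[volumeID]; ok {
		if ro, published := byNode[nodeID]; published {
			if ro != readonly {
				return nil, status.Error(codes.AlreadyExists,
					"volume already published at this node with a different readonly flag")
			}
			// Idempotency: already published and compatible.
			return map[string]string{"device": "/dev/example"}, nil
		}
		if len(byNode) > 0 {
			return nil, status.Error(codes.FailedPrecondition,
				"volume is published to another node and has no MULTI_NODE capability")
		}
	}
	if a.attach[volumeID] == nil {
		a.attach[volumeID] = map[string]bool{}
	}
	a.attach[volumeID][nodeID] = readonly
	return map[string]string{"device": "/dev/example"}, nil // publish_context
}

func main() {}
```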
{
"data": "| | Volume published to another node | 9 FAILEDPRECONDITION | Indicates that a volume corresponding to the specified `volumeid` has already been published at another node and does not have MULTINODE volume capability. If this error code is returned, the Plugin SHOULD specify the `nodeid` of the node at which the volume is published as part of the gRPC `status.message`. | Caller SHOULD ensure the specified volume is not published at any other node before retrying with exponential back off. | | Max volumes attached | 8 RESOURCE_EXHAUSTED | Indicates that the maximum supported number of volumes that can be attached to the specified node are already attached. Therefore, this operation will fail until at least one of the existing attached volumes is detached from the node. | Caller MUST ensure that the number of volumes already attached to the node is less then the maximum supported number of volumes before retrying with exponential backoff. | Controller Plugin MUST implement this RPC call if it has `PUBLISHUNPUBLISHVOLUME` controller capability. This RPC is a reverse operation of `ControllerPublishVolume`. It MUST be called after all `NodeUnstageVolume` and `NodeUnpublishVolume` on the volume are called and succeed. The Plugin SHOULD perform the work that is necessary for making the volume ready to be consumed by a different node. The Plugin MUST NOT assume that this RPC will be executed on the node where the volume was previously used. This RPC is typically called by the CO when the workload using the volume is being moved to a different node, or all the workload using the volume on a node has finished. This operation MUST be idempotent. If the volume corresponding to the `volumeid` is not attached to the node corresponding to the `nodeid`, the Plugin MUST reply `0 OK`. If the volume corresponding to the `volumeid` or the node corresponding to `nodeid` cannot be found by the Plugin and the volume can be safely regarded as ControllerUnpublished from the node, the plugin SHOULD return `0 OK`. If this operation failed, or the CO does not know if the operation failed or not, it can choose to call `ControllerUnpublishVolume` again. ```protobuf message ControllerUnpublishVolumeRequest { // The ID of the volume. This field is REQUIRED. string volume_id = 1; // The ID of the node. This field is OPTIONAL. The CO SHOULD set this // field to match the node ID returned by `NodeGetInfo` or leave it // unset. If the value is set, the SP MUST unpublish the volume from // the specified node. If the value is unset, the SP MUST unpublish // the volume from all nodes it is published to. string node_id = 2; // Secrets required by plugin to complete controller unpublish volume // request. This SHOULD be the same secrets passed to the // ControllerPublishVolume call for the specified volume. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 3 [(csi_secret) = true]; } message ControllerUnpublishVolumeResponse { // Intentionally empty. } ``` If the plugin is unable to complete the ControllerUnpublishVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. 
| Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist and volume not assumed ControllerUnpublished from node | 5 NOT_FOUND | Indicates that a volume corresponding to the specified `volume_id` does not exist and is not assumed to be ControllerUnpublished from node corresponding to the specified `node_id`.
},
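A minimal sketch of the ControllerUnpublishVolume semantics above: an unset `node_id` means "unpublish from every node", and a missing volume, node, or attachment is still a success. The `detacher` interface is hypothetical; only the gRPC `status`/`codes` usage reflects a real API.

```go
package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// detacher is a hypothetical backend hook; Detach reports whether the
// attachment was found at all.
type detacher interface {
	Detach(volumeID, nodeID string) (found bool, err error)
	NodesFor(volumeID string) []string
}

func controllerUnpublish(d detacher, volumeID, nodeID string) error {
	nodes := []string{nodeID}
	if nodeID == "" {
		nodes = d.NodesFor(volumeID) // unpublish from every node it is published to
	}
	for _, n := range nodes {
		if _, err := d.Detach(volumeID, n); err != nil {
			return status.Error(codes.Internal, err.Error())
		}
		// found == false is still OK: the volume can safely be regarded as
		// ControllerUnpublished from that node.
	}
	return nil
}

func main() {}
```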
{
"data": "| Caller SHOULD verify that the `volumeid` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Node does not exist and volume not assumed ControllerUnpublished from node | 5 NOTFOUND | Indicates that a node corresponding to the specified `nodeid` does not exist and the volume corresponding to the specified `volumeid` is not assumed to be ControllerUnpublished from node. | Caller SHOULD verify that the `nodeid` is correct and that the node is available and has not been terminated or deleted before retrying with exponential backoff. | A Controller Plugin MUST implement this RPC call. This RPC will be called by the CO to check if a pre-provisioned volume has all the capabilities that the CO wants. This RPC call SHALL return `confirmed` only if all the volume capabilities specified in the request are supported (see caveat below). This operation MUST be idempotent. NOTE: Older plugins will parse but likely not \"process\" newer fields that MAY be present in capability-validation messages (and sub-messages) sent by a CO that is communicating using a newer, backwards-compatible version of the CSI protobufs. Therefore, the CO SHALL reconcile successful capability-validation responses by comparing the validated capabilities with those that it had originally requested. ```protobuf message ValidateVolumeCapabilitiesRequest { // The ID of the volume to check. This field is REQUIRED. string volume_id = 1; // Volume context as returned by SP in // CreateVolumeResponse.Volume.volume_context. // This field is OPTIONAL and MUST match the volume_context of the // volume identified by `volume_id`. map<string, string> volume_context = 2; // The capabilities that the CO wants to check for the volume. This // call SHALL return \"confirmed\" only if all the volume capabilities // specified below are supported. This field is REQUIRED. repeated VolumeCapability volume_capabilities = 3; // See CreateVolumeRequest.parameters. // This field is OPTIONAL. map<string, string> parameters = 4; // Secrets required by plugin to complete volume validation request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 5 [(csi_secret) = true]; // See CreateVolumeRequest.mutable_parameters. // This field is OPTIONAL. map<string, string> mutableparameters = 6 [(alphafield) = true]; } message ValidateVolumeCapabilitiesResponse { message Confirmed { // Volume context validated by the plugin. // This field is OPTIONAL. map<string, string> volume_context = 1; // Volume capabilities supported by the plugin. // This field is REQUIRED. repeated VolumeCapability volume_capabilities = 2; // The volume creation parameters validated by the plugin. // This field is OPTIONAL. map<string, string> parameters = 3; // The volume creation mutable_parameters validated by the plugin. // This field is OPTIONAL. map<string, string> mutableparameters = 4 [(alphafield) = true]; } // Confirmed indicates to the CO the set of capabilities that the // plugin has validated. This field SHALL only be set to a non-empty // value for successful validation responses. // For successful validation responses, the CO SHALL compare the // fields of this message to the originally requested capabilities in // order to guard against an older plugin reporting \"valid\" for newer // capability fields that it does not yet understand. // This field is OPTIONAL. 
Confirmed confirmed = 1; // Message to the CO if `confirmed` above is empty. This field is // OPTIONAL. // An empty string is equal to an unspecified field value. string message = 2; } ``` If the plugin is unable to complete the ValidateVolumeCapabilities call successfully, it MUST return a non-ok gRPC code in the gRPC status.
},
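The reconciliation rule above (the CO must compare the confirmed capabilities against what it originally requested, because an older plugin may silently ignore newer fields) can be sketched as follows. The `capability` type is a deliberately simplified, comparable stand-in for `VolumeCapability`, not a generated binding.

```go
package main

import "fmt"

// capability is a simplified stand-in for VolumeCapability
// (access type + access mode only).
type capability struct {
	AccessType string // "block" or "mount"
	AccessMode string // e.g. "SINGLE_NODE_WRITER"
}

// confirmedByPlugin returns true only if every requested capability appears
// in the plugin's confirmed list. An empty confirmed list means the plugin
// did not confirm, and the CO must not treat the volume as compatible.
func confirmedByPlugin(requested, confirmed []capability) bool {
	if len(confirmed) == 0 {
		return false
	}
	set := map[capability]bool{}
	for _, c := range confirmed {
		set[c] = true
	}
	for _, r := range requested {
		if !set[r] {
			return false // an older plugin may have ignored a newer field
		}
	}
	return true
}

func main() {
	req := []capability{{"mount", "SINGLE_NODE_WRITER"}}
	fmt.Println(confirmedByPlugin(req, req)) // true
	fmt.Println(confirmedByPlugin(req, nil)) // false
}
```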
{
"data": "If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | A Controller Plugin MUST implement this RPC call if it has `LIST_VOLUMES` capability. The Plugin SHALL return the information about all the volumes that it knows about. If volumes are created and/or deleted while the CO is concurrently paging through `ListVolumes` results then it is possible that the CO MAY either witness duplicate volumes in the list, not witness existing volumes, or both. The CO SHALL NOT expect a consistent \"view\" of all volumes when paging through the volume list via multiple calls to `ListVolumes`. ```protobuf message ListVolumesRequest { // If specified (non-zero value), the Plugin MUST NOT return more // entries than this number in the response. If the actual number of // entries is more than this number, the Plugin MUST set `next_token` // in the response which can be used to get the next page of entries // in the subsequent `ListVolumes` call. This field is OPTIONAL. If // not specified (zero value), it means there is no restriction on the // number of entries that can be returned. // The value of this field MUST NOT be negative. int32 max_entries = 1; // A token to specify where to start paginating. Set this field to // `next_token` returned by a previous `ListVolumes` call to get the // next page of entries. This field is OPTIONAL. // An empty string is equal to an unspecified field value. string starting_token = 2; } message ListVolumesResponse { message VolumeStatus{ // A list of all `node_id` of nodes that the volume in this entry // is controller published on. // This field is OPTIONAL. If it is not specified and the SP has // the LISTVOLUMESPUBLISHED_NODES controller capability, the CO // MAY assume the volume is not controller published to any nodes. // If the field is not specified and the SP does not have the // LISTVOLUMESPUBLISHED_NODES controller capability, the CO MUST // not interpret this field. // publishednodeids MAY include nodes not published to or // reported by the SP. The CO MUST be resilient to that. repeated string publishednodeids = 1; // Information about the current condition of the volume. // This field is OPTIONAL. // This field MUST be specified if the // VOLUME_CONDITION controller capability is supported. VolumeCondition volumecondition = 2 [(alphafield) = true]; } message Entry { // This field is REQUIRED Volume volume = 1; // This field is OPTIONAL. This field MUST be specified if the // LISTVOLUMESPUBLISHED_NODES controller capability is // supported. VolumeStatus status = 2; } repeated Entry entries = 1; // This token allows you to get the next page of entries for // `ListVolumes` request. If the number of entries is larger than // `maxentries`, use the `nexttoken` as a value for the // `starting_token` field in the next `ListVolumes` request. This // field is OPTIONAL. // An empty string is equal to an unspecified field value. string next_token = 2; } ``` If the plugin is unable to complete the ListVolumes call successfully, it MUST return a non-ok gRPC code in the gRPC"
},
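The pagination contract above (`max_entries`, `starting_token`, `next_token`, and restarting on an invalid token) can be exercised from the CO side roughly as follows. This sketch assumes the Go bindings generated from this protobuf; getter and field names follow standard protoc-gen-go conventions and should be treated as assumptions.

```go
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// listAllVolumes pages through ListVolumes until next_token is empty,
// restarting from scratch if the plugin reports an invalid starting_token.
func listAllVolumes(ctx context.Context, c csi.ControllerClient) ([]string, error) {
	var ids []string
	token := ""
	for {
		resp, err := c.ListVolumes(ctx, &csi.ListVolumesRequest{
			MaxEntries:    100,
			StartingToken: token,
		})
		if status.Code(err) == codes.Aborted && token != "" {
			// Invalid starting_token: start the operation again from the top.
			ids, token = nil, ""
			continue
		}
		if err != nil {
			return nil, err
		}
		for _, e := range resp.GetEntries() {
			ids = append(ids, e.GetVolume().GetVolumeId())
		}
		token = resp.GetNextToken()
		if token == "" {
			return ids, nil
		}
	}
}

func main() { fmt.Println("see listAllVolumes") }
```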
{
"data": "If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Invalid `startingtoken` | 10 ABORTED | Indicates that `startingtoken` is not valid. | Caller SHOULD start the `ListVolumes` operation again with an empty `starting_token`. | ALPHA FEATURE This optional RPC MAY be called by the CO to fetch current information about a volume. A Controller Plugin MUST implement this `ControllerGetVolume` RPC call if it has `GET_VOLUME` capability. A Controller Plugin MUST provide a non-empty `volumecondition` field in `ControllerGetVolumeResponse` if it has `VOLUMECONDITION` capability. `ControllerGetVolumeResponse` should contain current information of a volume if it exists. If the volume does not exist any more, `ControllerGetVolume` should return gRPC error code `NOT_FOUND`. ```protobuf message ControllerGetVolumeRequest { option (alpha_message) = true; // The ID of the volume to fetch current volume information for. // This field is REQUIRED. string volume_id = 1; } message ControllerGetVolumeResponse { option (alpha_message) = true; message VolumeStatus{ // A list of all the `node_id` of nodes that this volume is // controller published on. // This field is OPTIONAL. // This field MUST be specified if the LISTVOLUMESPUBLISHED_NODES // controller capability is supported. // publishednodeids MAY include nodes not published to or // reported by the SP. The CO MUST be resilient to that. repeated string publishednodeids = 1; // Information about the current condition of the volume. // This field is OPTIONAL. // This field MUST be specified if the // VOLUME_CONDITION controller capability is supported. VolumeCondition volume_condition = 2; } // This field is REQUIRED Volume volume = 1; // This field is REQUIRED. VolumeStatus status = 2; } ``` If the plugin is unable to complete the ControllerGetVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | A Controller plugin MUST implement this RPC call if the plugin has the MODIFY_VOLUME controller capability. This RPC allows the CO to change mutable key attributes of a volume. This operation MUST be idempotent. The new mutable parameters in ControllerModifyVolume can be different from the existing mutable parameters. ```protobuf message ControllerModifyVolumeRequest { option (alpha_message) = true; // Contains identity information for the existing volume. // This field is REQUIRED. string volume_id = 1; // Secrets required by plugin to complete modify volume request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 2 [(csi_secret) = true]; // Plugin specific volume attributes to mutate, passed in as // opaque key-value pairs. // This field is REQUIRED. 
The Plugin is responsible for // parsing and validating these parameters. COs will treat these // as opaque. The CO SHOULD specify the intended values of all mutable // parameters it intends to modify. SPs MUST NOT modify volumes based // on the absence of keys, only keys that are specified should result // in modifications to the volume.
},
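The "only specified keys are modified" rule above amounts to a simple overlay of the requested keys onto the volume's current mutable parameters. A small illustrative sketch (parameter names are hypothetical):

```go
package main

import "fmt"

// applyMutableParameters overlays only the keys present in the request;
// keys absent from the request are left untouched.
func applyMutableParameters(current, requested map[string]string) map[string]string {
	out := make(map[string]string, len(current))
	for k, v := range current {
		out[k] = v
	}
	for k, v := range requested {
		out[k] = v
	}
	return out
}

func main() {
	cur := map[string]string{"iops": "3000", "throughput": "125"}
	fmt.Println(applyMutableParameters(cur, map[string]string{"iops": "6000"}))
	// "throughput" is unchanged because it was not specified in the request
}
```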
{
"data": "map<string, string> mutable_parameters = 3; } message ControllerModifyVolumeResponse { option (alpha_message) = true; } ``` | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Parameters not supported | 3 INVALID_ARGUMENT | Indicates that the CO has specified mutable parameters not supported by the volume. | Caller MAY verify mutable parameters. | | Exceeds capabilities | 3 INVALID_ARGUMENT | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY verify volume capabilities by calling ValidateVolumeCapabilities and retry with matching capabilities. | | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified volumeid does not exist. | Caller MUST verify that the volume_id is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | A Controller Plugin MUST implement this RPC call if it has `GET_CAPACITY` controller capability. The RPC allows the CO to query the capacity of the storage pool from which the controller provisions volumes. ```protobuf message GetCapacityRequest { // If specified, the Plugin SHALL report the capacity of the storage // that can be used to provision volumes that satisfy ALL of the // specified `volume_capabilities`. These are the same // `volume_capabilities` the CO will use in `CreateVolumeRequest`. // This field is OPTIONAL. repeated VolumeCapability volume_capabilities = 1; // If specified, the Plugin SHALL report the capacity of the storage // that can be used to provision volumes with the given Plugin // specific `parameters`. These are the same `parameters` the CO will // use in `CreateVolumeRequest`. This field is OPTIONAL. map<string, string> parameters = 2; // If specified, the Plugin SHALL report the capacity of the storage // that can be used to provision volumes that in the specified // `accessible_topology`. This is the same as the // `accessible_topology` the CO returns in a `CreateVolumeResponse`. // This field is OPTIONAL. This field SHALL NOT be set unless the // plugin advertises the VOLUMEACCESSIBILITYCONSTRAINTS capability. Topology accessible_topology = 3; } message GetCapacityResponse { // The available capacity, in bytes, of the storage that can be used // to provision volumes. If `volume_capabilities` or `parameters` is // specified in the request, the Plugin SHALL take those into // consideration when calculating the available capacity of the // storage. This field is REQUIRED. // The value of this field MUST NOT be negative. int64 available_capacity = 1; // The largest size that may be used in a // CreateVolumeRequest.capacityrange.requiredbytes field // to create a volume with the same parameters as those in // GetCapacityRequest. // // If `volume_capabilities` or `parameters` is // specified in the request, the Plugin SHALL take those into // consideration when calculating the minimum volume size of the // storage. // // This field is OPTIONAL. MUST NOT be negative. // The Plugin SHOULD provide a value for this field if it has // a maximum size for individual volumes and leave it unset // otherwise. COs MAY use it to make decision about // where to create volumes. google.protobuf.Int64Value maximumvolumesize = 2; // The smallest size that may be used in a // CreateVolumeRequest.capacityrange.limitbytes field // to create a volume with the same parameters as those in // GetCapacityRequest. 
// // If `volume_capabilities` or `parameters` is // specified in the request, the Plugin SHALL take those into // consideration when calculating the maximum volume size of the // storage. // // This field is OPTIONAL. MUST NOT be negative. // The Plugin SHOULD provide a value for this field if it has // a minimum size for individual volumes and leave it unset //"
},
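On the CO side, the GetCapacity report can be used as a pre-flight check before CreateVolume. The sketch below uses local mirror types (a pointer stands in for the `google.protobuf.Int64Value` wrapper, with nil meaning unset) and treats the reported bounds loosely; it is an illustration, not a normative interpretation.

```go
package main

import "fmt"

// getCapacityResult is a local mirror of GetCapacityResponse.
type getCapacityResult struct {
	AvailableCapacity int64
	MaximumVolumeSize *int64 // nil == unset
	MinimumVolumeSize *int64 // nil == unset
}

// canProvision is a rough CO-side sanity check before issuing CreateVolume
// with the given required_bytes for the same parameters/topology.
func canProvision(r getCapacityResult, requiredBytes int64) bool {
	if requiredBytes > r.AvailableCapacity {
		return false
	}
	if r.MaximumVolumeSize != nil && requiredBytes > *r.MaximumVolumeSize {
		return false
	}
	if r.MinimumVolumeSize != nil && requiredBytes < *r.MinimumVolumeSize {
		return false
	}
	return true
}

func main() {
	max := int64(1) << 40 // plugin reports a 1 TiB per-volume cap
	r := getCapacityResult{AvailableCapacity: 10 << 40, MaximumVolumeSize: &max}
	fmt.Println(canProvision(r, 2<<40)) // false: exceeds the per-volume maximum
}
```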
{
"data": "COs MAY use it to make decision about // where to create volumes. google.protobuf.Int64Value minimumvolumesize = 3 [(alpha_field) = true]; } ``` If the plugin is unable to complete the GetCapacity call successfully, it MUST return a non-ok gRPC code in the gRPC status. A Controller Plugin MUST implement this RPC call. This RPC allows the CO to check the supported capabilities of controller service provided by the Plugin. ```protobuf message ControllerGetCapabilitiesRequest { // Intentionally empty. } message ControllerGetCapabilitiesResponse { // All the capabilities that the controller service supports. This // field is OPTIONAL. repeated ControllerServiceCapability capabilities = 1; } // Specifies a capability of the controller service. message ControllerServiceCapability { message RPC { enum Type { UNKNOWN = 0; CREATEDELETEVOLUME = 1; PUBLISHUNPUBLISHVOLUME = 2; LIST_VOLUMES = 3; GET_CAPACITY = 4; // Currently the only way to consume a snapshot is to create // a volume from it. Therefore plugins supporting // CREATEDELETESNAPSHOT MUST support creating volume from // snapshot. CREATEDELETESNAPSHOT = 5; LIST_SNAPSHOTS = 6; // Plugins supporting volume cloning at the storage level MAY // report this capability. The source volume MUST be managed by // the same plugin. Not all volume sources and parameters // combinations MAY work. CLONE_VOLUME = 7; // Indicates the SP supports ControllerPublishVolume.readonly // field. PUBLISH_READONLY = 8; // See VolumeExpansion for details. EXPAND_VOLUME = 9; // Indicates the SP supports the // ListVolumesResponse.entry.publishednodeids field and the // ControllerGetVolumeResponse.publishednodeids field. // The SP MUST also support PUBLISHUNPUBLISHVOLUME. LISTVOLUMESPUBLISHED_NODES = 10; // Indicates that the Controller service can report volume // conditions. // An SP MAY implement `VolumeCondition` in only the Controller // Plugin, only the Node Plugin, or both. // If `VolumeCondition` is implemented in both the Controller and // Node Plugins, it SHALL report from different perspectives. // If for some reason Controller and Node Plugins report // misaligned volume conditions, CO SHALL assume the worst case // is the truth. // Note that, for alpha, `VolumeCondition` is intended be // informative for humans only, not for automation. VOLUMECONDITION = 11 [(alphaenum_value) = true]; // Indicates the SP supports the ControllerGetVolume RPC. // This enables COs to, for example, fetch per volume // condition after a volume is provisioned. GETVOLUME = 12 [(alphaenum_value) = true]; // Indicates the SP supports the SINGLENODESINGLE_WRITER and/or // SINGLENODEMULTI_WRITER access modes. // These access modes are intended to replace the // SINGLENODEWRITER access mode to clarify the number of writers // for a volume on a single node. Plugins MUST accept and allow // use of the SINGLENODEWRITER access mode when either // SINGLENODESINGLEWRITER and/or SINGLENODEMULTIWRITER are // supported, in order to permit older COs to continue working. SINGLENODEMULTIWRITER = 13 [(alphaenum_value) = true]; // Indicates the SP supports modifying volume with mutable // parameters. See ControllerModifyVolume for details. MODIFYVOLUME = 14 [(alphaenum_value) = true]; } Type type = 1; } oneof type { // RPC that the controller supports. RPC rpc = 1; } } ``` If the plugin is unable to complete the ControllerGetCapabilities call successfully, it MUST return a non-ok gRPC code in the gRPC status. 
A Controller Plugin MUST implement this RPC call if it has `CREATE_DELETE_SNAPSHOT` controller capability. This RPC will be called by the CO to create a new snapshot from a source volume on behalf of a user. This operation MUST be idempotent. If a snapshot corresponding to the specified snapshot `name` is successfully cut and ready to use (meaning it MAY be specified as a `volume_content_source` in a `CreateVolumeRequest`), the Plugin MUST reply `0 OK` with the corresponding `CreateSnapshotResponse`.
},
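Before calling CreateSnapshot, a CO would normally confirm that the controller advertises `CREATE_DELETE_SNAPSHOT` via ControllerGetCapabilities. A hedged sketch of that check, assuming the Go bindings generated from this protobuf (names follow standard protoc-gen-go conventions):

```go
package main

import (
	"context"
	"fmt"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// supportsSnapshots reports whether the controller advertises the
// CREATE_DELETE_SNAPSHOT capability.
func supportsSnapshots(ctx context.Context, c csi.ControllerClient) (bool, error) {
	resp, err := c.ControllerGetCapabilities(ctx, &csi.ControllerGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, capability := range resp.GetCapabilities() {
		if capability.GetRpc().GetType() == csi.ControllerServiceCapability_RPC_CREATE_DELETE_SNAPSHOT {
			return true, nil
		}
	}
	return false, nil
}

func main() { fmt.Println("see supportsSnapshots") }
```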
{
"data": "If an error occurs before a snapshot is cut, `CreateSnapshot` SHOULD return a corresponding gRPC error code that reflects the error condition. For plugins that supports snapshot post processing such as uploading, `CreateSnapshot` SHOULD return `0 OK` and `readytouse` SHOULD be set to `false` after the snapshot is cut but still being processed. CO SHOULD then reissue the same `CreateSnapshotRequest` periodically until boolean `readytouse` flips to `true` indicating the snapshot has been \"processed\" and is ready to use to create new volumes. If an error occurs during the process, `CreateSnapshot` SHOULD return a corresponding gRPC error code that reflects the error condition. A snapshot MAY be used as the source to provision a new volume. A CreateVolumeRequest message MAY specify an OPTIONAL source snapshot parameter. Reverting a snapshot, where data in the original volume is erased and replaced with data in the snapshot, is an advanced functionality not every storage system can support and therefore is currently out of scope. Some SPs MAY \"process\" the snapshot after the snapshot is cut, for example, maybe uploading the snapshot somewhere after the snapshot is cut. The post-cut process MAY be a long process that could take hours. The CO MAY freeze the application using the source volume before taking the snapshot. The purpose of `freeze` is to ensure the application data is in consistent state. When `freeze` is performed, the container is paused and the application is also paused. When `thaw` is performed, the container and the application start running again. During the snapshot processing phase, since the snapshot is already cut, a `thaw` operation can be performed so application can start running without waiting for the process to complete. The `readytouse` parameter of the snapshot will become `true` after the process is complete. For SPs that do not do additional processing after cut, the `readytouse` parameter SHOULD be `true` after the snapshot is cut. `thaw` can be done when the `readytouse` parameter is `true` in this case. The `readytouse` parameter provides guidance to the CO on when it can \"thaw\" the application in the process of snapshotting. If the cloud provider or storage system needs to process the snapshot after the snapshot is cut, the `readytouse` parameter returned by CreateSnapshot SHALL be `false`. CO MAY continue to call CreateSnapshot while waiting for the process to complete until `readytouse` becomes `true`. Note that CreateSnapshot no longer blocks after the snapshot is cut. A gRPC error code SHALL be returned if an error occurs during any stage of the snapshotting process. A CO SHOULD explicitly delete snapshots when an error occurs. Based on this information, CO can issue repeated (idempotent) calls to CreateSnapshot, monitor the response, and make decisions. Note that CreateSnapshot is a synchronous call and it MUST block until the snapshot is cut. ```protobuf message CreateSnapshotRequest { // The ID of the source volume to be snapshotted. // This field is REQUIRED. string sourcevolumeid = 1; // The suggested name for the snapshot. This field is REQUIRED for // idempotency. // Any Unicode string that conforms to the length limit is allowed // except those containing the following banned characters: // U+0000-U+0008, U+000B, U+000C, U+000E-U+001F, U+007F-U+009F. // (These are control characters other than commonly used whitespace.) string name = 2; // Secrets required by plugin to complete snapshot creation request. // This field is OPTIONAL. 
Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 3 [(csi_secret) = true]; // Plugin specific parameters passed in as opaque key-value pairs. // This field is OPTIONAL. The Plugin is responsible for parsing and // validating these parameters.
},
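The `ready_to_use` polling pattern described above (re-issue the same idempotent CreateSnapshot by `name` until post-cut processing finishes) could look roughly like this on the CO side. The sketch assumes the Go bindings generated from this protobuf; getter and field names are standard protoc-gen-go conventions and the backoff policy is arbitrary.

```go
package main

import (
	"context"
	"fmt"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// waitForSnapshot re-issues the same CreateSnapshot request until
// ready_to_use flips to true, i.e. until any post-cut processing
// (such as uploading) has finished.
func waitForSnapshot(ctx context.Context, c csi.ControllerClient, sourceVolumeID, name string) (*csi.Snapshot, error) {
	backoff := time.Second
	for {
		resp, err := c.CreateSnapshot(ctx, &csi.CreateSnapshotRequest{
			SourceVolumeId: sourceVolumeID,
			Name:           name, // same name keeps the call idempotent
		})
		if err != nil {
			return nil, err // an error at any stage of snapshotting surfaces here
		}
		if resp.GetSnapshot().GetReadyToUse() {
			return resp.GetSnapshot(), nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
		}
		if backoff < time.Minute {
			backoff *= 2
		}
	}
}

func main() { fmt.Println("see waitForSnapshot") }
```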
{
"data": "COs will treat these as opaque. // Use cases for opaque parameters: // - Specify a policy to automatically clean up the snapshot. // - Specify an expiration date for the snapshot. // - Specify whether the snapshot is readonly or read/write. // - Specify if the snapshot should be replicated to some place. // - Specify primary or secondary for replication systems that // support snapshotting only on primary. map<string, string> parameters = 4; } message CreateSnapshotResponse { // Contains all attributes of the newly created snapshot that are // relevant to the CO along with information required by the Plugin // to uniquely identify the snapshot. This field is REQUIRED. Snapshot snapshot = 1; } // Information about a specific snapshot. message Snapshot { // This is the complete size of the snapshot in bytes. The purpose of // this field is to give CO guidance on how much space is needed to // create a volume from this snapshot. The size of the volume MUST NOT // be less than the size of the source snapshot. This field is // OPTIONAL. If this field is not set, it indicates that this size is // unknown. The value of this field MUST NOT be negative and a size of // zero means it is unspecified. int64 size_bytes = 1; // The identifier for this snapshot, generated by the plugin. // This field is REQUIRED. // This field MUST contain enough information to uniquely identify // this specific snapshot vs all other snapshots supported by this // plugin. // This field SHALL be used by the CO in subsequent calls to refer to // this snapshot. // The SP is NOT responsible for global uniqueness of snapshot_id // across multiple SPs. string snapshot_id = 2; // Identity information for the source volume. Note that creating a // snapshot from a snapshot is not supported here so the source has to // be a volume. This field is REQUIRED. string sourcevolumeid = 3; // Timestamp when the point-in-time snapshot is taken on the storage // system. This field is REQUIRED. .google.protobuf.Timestamp creation_time = 4; // Indicates if a snapshot is ready to use as a // `volumecontentsource` in a `CreateVolumeRequest`. The default // value is false. This field is REQUIRED. bool readytouse = 5; // The ID of the volume group snapshot that this snapshot is part of. // It uniquely identifies the group snapshot on the storage system. // This field is OPTIONAL. // If this snapshot is a member of a volume group snapshot, and it // MUST NOT be deleted as a stand alone snapshot, then the SP // MUST provide the ID of the volume group snapshot in this field. // If provided, CO MUST use this field in subsequent volume group // snapshot operations to indicate that this snapshot is part of the // specified group snapshot. // If not provided, CO SHALL treat the snapshot as independent, // and SP SHALL allow it to be deleted separately. // If this message is inside a VolumeGroupSnapshot message, the value // MUST be the same as the groupsnapshotid in that message. string groupsnapshotid = 6 [(alpha_field) = true]; } ``` If the plugin is unable to complete the CreateSnapshot call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error"
},
{
"data": "| Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Snapshot already exists but is incompatible | 6 ALREADYEXISTS | Indicates that a snapshot corresponding to the specified snapshot `name` already exists but is incompatible with the specified `volumeid`. | Caller MUST fix the arguments or use a different `name` before retrying. | | Operation pending for snapshot | 10 ABORTED | Indicates that there is already an operation pending for the specified snapshot. In general the Cluster Orchestrator (CO) is responsible for ensuring that there is no more than one call \"in-flight\" per snapshot at a given time. However, in some circumstances, the CO MAY lose state (for example when the CO crashes and restarts), and MAY issue multiple calls simultaneously for the same snapshot. The Plugin, SHOULD handle this as gracefully as possible, and MAY return this error code to reject secondary calls. | Caller SHOULD ensure that there are no other calls pending for the specified snapshot, and then retry with exponential back off. | | Not enough space to create snapshot | 13 RESOURCE_EXHAUSTED | There is not enough space on the storage system to handle the create snapshot request. | Caller SHOULD fail this request. Future calls to CreateSnapshot MAY succeed if space is freed up. | A Controller Plugin MUST implement this RPC call if it has `CREATEDELETESNAPSHOT` capability. This RPC will be called by the CO to delete a snapshot. This operation MUST be idempotent. If a snapshot corresponding to the specified `snapshot_id` does not exist or the artifacts associated with the snapshot do not exist anymore, the Plugin MUST reply `0 OK`. The CO SHALL NOT call this RPC with a snapshot for which SP provided a non-empty groupsnapshotid field at creation time. A snapshot for which SP provided a non-empty groupsnapshotid indicates a snapshot that CAN NOT be deleted stand alone. The SP MAY refuse to delete such snapshots with this RPC call and return an error instead. For such snapshots SP MUST delete the entire snapshot group via a DeleteVolumeGroupSnapshotRequest call. ```protobuf message DeleteSnapshotRequest { // The ID of the snapshot to be deleted. // This field is REQUIRED. string snapshot_id = 1; // Secrets required by plugin to complete snapshot deletion request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 2 [(csi_secret) = true]; } message DeleteSnapshotResponse {} ``` If the plugin is unable to complete the DeleteSnapshot call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Snapshot is part of a group | 3 INVALIDARGUMENT | Indicates that the snapshot corresponding to the specified `snapshotid` could not be deleted because it is part of a group snapshot and CAN NOT be deleted stand alone. | Caller SHOULD stop calling DeleteSnapshot and call DeleteVolumeGroupSnapshot instead. | | Snapshot in use | 9 FAILEDPRECONDITION | Indicates that the snapshot corresponding to the specified `snapshotid` could not be deleted because it is in use by another resource. | Caller SHOULD ensure that there are no other resources using the snapshot, and then retry with exponential back off. 
| | Operation pending for snapshot | 10 ABORTED | Indicates that there is already an operation pending for the specified snapshot.
},
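A hypothetical DeleteSnapshot handler illustrating the rules above: missing snapshots are a success (idempotency), group members cannot be deleted stand-alone, and in-use snapshots are refused. The lookup/remove callbacks and the `snapshotInfo` record are invented for the sketch; only the gRPC `status`/`codes` usage reflects a real API.

```go
package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// snapshotInfo is a hypothetical backend record for a snapshot.
type snapshotInfo struct {
	GroupSnapshotID string
	InUse           bool
}

func deleteSnapshot(lookup func(id string) (*snapshotInfo, bool), remove func(id string) error, id string) error {
	snap, ok := lookup(id)
	if !ok {
		return nil // already gone: reply 0 OK
	}
	if snap.GroupSnapshotID != "" {
		return status.Error(codes.InvalidArgument,
			"snapshot is part of a group snapshot; delete the group instead")
	}
	if snap.InUse {
		return status.Error(codes.FailedPrecondition, "snapshot is in use by another resource")
	}
	if err := remove(id); err != nil {
		return status.Error(codes.Internal, err.Error())
	}
	return nil
}

func main() {}
```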
{
"data": "In general the Cluster Orchestrator (CO) is responsible for ensuring that there is no more than one call \"in-flight\" per snapshot at a given time. However, in some circumstances, the CO MAY lose state (for example when the CO crashes and restarts), and MAY issue multiple calls simultaneously for the same snapshot. The Plugin, SHOULD handle this as gracefully as possible, and MAY return this error code to reject secondary calls. | Caller SHOULD ensure that there are no other calls pending for the specified snapshot, and then retry with exponential back off. | A Controller Plugin MUST implement this RPC call if it has `LIST_SNAPSHOTS` capability. The Plugin SHALL return the information about all snapshots on the storage system within the given parameters regardless of how they were created. `ListSnapshots` SHALL NOT list a snapshot that is being created but has not been cut successfully yet. If snapshots are created and/or deleted while the CO is concurrently paging through `ListSnapshots` results then it is possible that the CO MAY either witness duplicate snapshots in the list, not witness existing snapshots, or both. The CO SHALL NOT expect a consistent \"view\" of all snapshots when paging through the snapshot list via multiple calls to `ListSnapshots`. ```protobuf // List all snapshots on the storage system regardless of how they were // created. message ListSnapshotsRequest { // If specified (non-zero value), the Plugin MUST NOT return more // entries than this number in the response. If the actual number of // entries is more than this number, the Plugin MUST set `next_token` // in the response which can be used to get the next page of entries // in the subsequent `ListSnapshots` call. This field is OPTIONAL. If // not specified (zero value), it means there is no restriction on the // number of entries that can be returned. // The value of this field MUST NOT be negative. int32 max_entries = 1; // A token to specify where to start paginating. Set this field to // `next_token` returned by a previous `ListSnapshots` call to get the // next page of entries. This field is OPTIONAL. // An empty string is equal to an unspecified field value. string starting_token = 2; // Identity information for the source volume. This field is OPTIONAL. // It can be used to list snapshots by volume. string sourcevolumeid = 3; // Identity information for a specific snapshot. This field is // OPTIONAL. It can be used to list only a specific snapshot. // ListSnapshots will return with current snapshot information // and will not block if the snapshot is being processed after // it is cut. string snapshot_id = 4; // Secrets required by plugin to complete ListSnapshot request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 5 [(csi_secret) = true]; } message ListSnapshotsResponse { message Entry { Snapshot snapshot = 1; } repeated Entry entries = 1; // This token allows you to get the next page of entries for // `ListSnapshots` request. If the number of entries is larger than // `maxentries`, use the `nexttoken` as a value for the // `starting_token` field in the next `ListSnapshots` request. This // field is OPTIONAL. // An empty string is equal to an unspecified field value. string next_token = 2; } ``` If the plugin is unable to complete the ListSnapshots call successfully, it MUST return a non-ok gRPC code in the gRPC status. 
If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code."
},
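As a non-normative illustration of the pagination contract above (`max_entries`, `next_token`, `starting_token`), here is a minimal CO-side sketch. It assumes the Go bindings generated from these protobuf definitions (the `github.com/container-storage-interface/spec/lib/go/csi` package); the helper name and page size are illustrative only.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// listAllSnapshots pages through ListSnapshots using max_entries and
// next_token. As noted above, duplicate or missing entries are possible
// if snapshots are created or deleted while paging.
func listAllSnapshots(ctx context.Context, cc *grpc.ClientConn) ([]*csi.Snapshot, error) {
	client := csi.NewControllerClient(cc)
	var all []*csi.Snapshot
	token := "" // empty string means "start from the beginning"
	for {
		resp, err := client.ListSnapshots(ctx, &csi.ListSnapshotsRequest{
			MaxEntries:    100, // page size; zero would mean "no restriction"
			StartingToken: token,
		})
		if err != nil {
			return nil, fmt.Errorf("ListSnapshots failed: %w", err)
		}
		for _, e := range resp.Entries {
			all = append(all, e.Snapshot)
		}
		if resp.NextToken == "" {
			return all, nil // no more pages
		}
		token = resp.NextToken
	}
}
```

A real CO would additionally handle the `ABORTED` code returned for an invalid `starting_token` by restarting the listing with an empty token, per the error table below.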
{
"data": "The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Invalid `startingtoken` | 10 ABORTED | Indicates that `startingtoken` is not valid. | Caller SHOULD start the `ListSnapshots` operation again with an empty `starting_token`. | A Controller plugin MUST implement this RPC call if plugin has `EXPAND_VOLUME` controller capability. This RPC allows the CO to expand the size of a volume. This operation MUST be idempotent. If a volume corresponding to the specified volume ID is already larger than or equal to the target capacity of the expansion request, the plugin SHOULD reply 0 OK. This call MAY be made by the CO during any time in the lifecycle of the volume after creation if plugin has `VolumeExpansion.ONLINE` capability. If plugin has `EXPANDVOLUME` node capability, then `NodeExpandVolume` MUST be called after successful `ControllerExpandVolume` and `nodeexpansion_required` in `ControllerExpandVolumeResponse` is `true`. If specified, the `volume_capability` in `ControllerExpandVolumeRequest` should be same as what CO would pass in `ControllerPublishVolumeRequest`. If the plugin has only `VolumeExpansion.OFFLINE` expansion capability and volume is currently published or available on a node then `ControllerExpandVolume` MUST be called ONLY after either: The plugin has controller `PUBLISHUNPUBLISHVOLUME` capability and `ControllerUnpublishVolume` has been invoked successfully. OR ELSE The plugin does NOT have controller `PUBLISHUNPUBLISHVOLUME` capability, the plugin has node `STAGEUNSTAGEVOLUME` capability, and `NodeUnstageVolume` has been completed successfully. OR ELSE The plugin does NOT have controller `PUBLISHUNPUBLISHVOLUME` capability, nor node `STAGEUNSTAGEVOLUME` capability, and `NodeUnpublishVolume` has completed successfully. Examples: Offline Volume Expansion: Given an ElasticSearch process that runs on Azure Disk and needs more space. The administrator takes the Elasticsearch server offline by stopping the workload and CO calls `ControllerUnpublishVolume`. The administrator requests more space for the volume from CO. The CO in turn first makes `ControllerExpandVolume` RPC call which results in requesting more space from Azure cloud provider for volume ID that was being used by ElasticSearch. Once `ControllerExpandVolume` is completed and successful, the CO will inform administrator about it and administrator will resume the ElasticSearch workload. On the node where the ElasticSearch workload is scheduled, the CO calls `NodeExpandVolume` after calling `NodeStageVolume`. Calling `NodeExpandVolume` on volume results in expanding the underlying file system and added space becomes available to workload when it starts up. Online Volume Expansion: Given a Mysql server running on Openstack Cinder and needs more space. The administrator requests more space for volume from the CO. The CO in turn first makes `ControllerExpandVolume` RPC call which results in requesting more space from Openstack Cinder for given volume. On the node where the mysql workload is running, the CO calls `NodeExpandVolume` while volume is in-use using the path where the volume is staged. Calling `NodeExpandVolume` on volume results in expanding the underlying file system and added space automatically becomes available to mysql workload without any downtime. ```protobuf message ControllerExpandVolumeRequest { // The ID of the volume to expand. This field is REQUIRED. 
string volume_id = 1; // This allows CO to specify the capacity requirements of the volume // after expansion. This field is REQUIRED. CapacityRange capacity_range = 2; // Secrets required by the plugin for expanding the volume. // This field is OPTIONAL. map<string, string> secrets = 3 [(csi_secret) = true]; // Volume capability describing how the CO intends to use this volume. // This allows SP to determine if volume is being used as a block // device or mounted file system. For example - if volume is // being used as a block device - the SP MAY set // node_expansion_required to false in ControllerExpandVolumeResponse // to skip invocation of NodeExpandVolume on the node by the CO."
},
{
"data": "// This is an OPTIONAL field. VolumeCapability volume_capability = 4; } message ControllerExpandVolumeResponse { // Capacity of volume after expansion. This field is REQUIRED. int64 capacity_bytes = 1; // Whether node expansion is required for the volume. When true // the CO MUST make NodeExpandVolume RPC call on the node. This field // is REQUIRED. bool nodeexpansionrequired = 2; } ``` | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Exceeds capabilities | 3 INVALID_ARGUMENT | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY verify volume capabilities by calling ValidateVolumeCapabilities and retry with matching capabilities. | | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified volumeid does not exist. | Caller MUST verify that the volume_id is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Volume in use | 9 FAILEDPRECONDITION | Indicates that the volume corresponding to the specified `volumeid` could not be expanded because it is currently published on a node but the plugin does not have ONLINE expansion capability. | Caller SHOULD ensure that volume is not published and retry with exponential back off. | | Unsupported `capacityrange` | 11 OUTOF_RANGE | Indicates that the capacity range is not allowed by the Plugin. More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST fix the capacity range before retrying. | It is worth noting that the plugin-generated `volume_id` is a REQUIRED field for the `DeleteVolume` RPC, as opposed to the CO-generated volume `name` that is REQUIRED for the `CreateVolume` RPC: these fields MAY NOT contain the same value. If a `CreateVolume` operation times out, leaving the CO without an ID with which to reference a volume, and the CO also decides that it no longer needs/wants the volume in question then the CO MAY choose one of the following paths: Replay the `CreateVolume` RPC that timed out; upon success execute `DeleteVolume` using the known volume ID (from the response to `CreateVolume`). Execute the `ListVolumes` RPC to possibly obtain a volume ID that may be used to execute a `DeleteVolume` RPC; upon success execute `DeleteVolume`. The CO takes no further action regarding the timed out RPC, a volume is possibly leaked and the operator/user is expected to clean up. It is NOT REQUIRED for a controller plugin to implement the `LISTVOLUMES` capability if it supports the `CREATEDELETE_VOLUME` capability: the onus is upon the CO to take into consideration the full range of plugin capabilities before deciding how to proceed in the above scenario. The plugin-generated `snapshot_id` is a REQUIRED field for the `DeleteSnapshot` RPC, as opposed to the CO-generated snapshot `name` that is REQUIRED for the `CreateSnapshot` RPC. A `CreateSnapshot` operation SHOULD return with a `snapshot_id` when the snapshot is cut successfully. If a `CreateSnapshot` operation times out before the snapshot is cut, leaving the CO without an ID with which to reference a snapshot, and the CO also decides that it no longer needs/wants the snapshot in question then the CO MAY choose one of the following paths: Execute the `ListSnapshots` RPC to possibly obtain a snapshot ID that may be used to execute a `DeleteSnapshot` RPC; upon success execute `DeleteSnapshot`. 
The CO takes no further action regarding the timed out RPC, a snapshot is possibly leaked and the operator/user is expected to clean up."
},
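A minimal, non-normative sketch of the expansion flow described above: the CO calls `ControllerExpandVolume` with the target `capacity_range` and, when `node_expansion_required` is true, must follow up with `NodeExpandVolume` on the node. It assumes the generated Go bindings from the CSI spec repository; the helper name and arguments are illustrative.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// expandVolume asks the controller plugin to grow a volume and reports
// whether the CO still has to call NodeExpandVolume on the node where the
// volume is staged or published.
func expandVolume(ctx context.Context, cc *grpc.ClientConn, volumeID string, newSizeBytes int64) (nodeExpansionRequired bool, err error) {
	ctrl := csi.NewControllerClient(cc)
	resp, err := ctrl.ControllerExpandVolume(ctx, &csi.ControllerExpandVolumeRequest{
		VolumeId: volumeID,
		CapacityRange: &csi.CapacityRange{
			RequiredBytes: newSizeBytes, // target size after expansion
		},
	})
	if err != nil {
		return false, fmt.Errorf("ControllerExpandVolume failed: %w", err)
	}
	fmt.Printf("volume %s capacity is now %d bytes\n", volumeID, resp.CapacityBytes)
	// When true, the CO MUST follow up with NodeExpandVolume on the node
	// (after NodeStageVolume, for plugins with STAGE_UNSTAGE_VOLUME).
	return resp.NodeExpansionRequired, nil
}
```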
{
"data": "It is NOT REQUIRED for a controller plugin to implement the `LISTSNAPSHOTS` capability if it supports the `CREATEDELETE_SNAPSHOT` capability: the onus is upon the CO to take into consideration the full range of plugin capabilities before deciding how to proceed in the above scenario. ListSnapshots SHALL return with current information regarding the snapshots on the storage system. When processing is complete, the `readytouse` parameter of the snapshot from ListSnapshots SHALL become `true`. The downside of calling ListSnapshots is that ListSnapshots will not return a gRPC error code if an error occurs during the processing. So calling CreateSnapshot repeatedly is the preferred way to check if the processing is complete. A Node Plugin MUST implement this RPC call if it has `STAGEUNSTAGEVOLUME` node capability. This RPC is called by the CO prior to the volume being consumed by any workloads on the node by `NodePublishVolume`. The Plugin SHALL assume that this RPC will be executed on the node where the volume will be used. This RPC SHOULD be called by the CO when a workload that wants to use the specified volume is placed (scheduled) on the specified node for the first time or for the first time since a `NodeUnstageVolume` call for the specified volume was called and returned success on that node. If the corresponding Controller Plugin has `PUBLISHUNPUBLISHVOLUME` controller capability and the Node Plugin has `STAGEUNSTAGEVOLUME` capability, then the CO MUST guarantee that this RPC is called after `ControllerPublishVolume` is called for the given volume on the given node and returns a success. The CO MUST guarantee that this RPC is called and returns a success before any `NodePublishVolume` is called for the given volume on the given node. This operation MUST be idempotent. If the volume corresponding to the `volumeid` is already staged to the `stagingtargetpath`, and is identical to the specified `volumecapability` the Plugin MUST reply `0 OK`. If this RPC failed, or the CO does not know if it failed or not, it MAY choose to call `NodeStageVolume` again, or choose to call `NodeUnstageVolume`. ```protobuf message NodeStageVolumeRequest { // The ID of the volume to publish. This field is REQUIRED. string volume_id = 1; // The CO SHALL set this field to the value returned by // `ControllerPublishVolume` if the corresponding Controller Plugin // has `PUBLISHUNPUBLISHVOLUME` controller capability, and SHALL be // left unset if the corresponding Controller Plugin does not have // this capability. This is an OPTIONAL field. map<string, string> publish_context = 2; // The path to which the volume MAY be staged. It MUST be an // absolute path in the root filesystem of the process serving this // request, and MUST be a directory. The CO SHALL ensure that there // is only one `stagingtargetpath` per volume. The CO SHALL ensure // that the path is directory and that the process serving the // request has `read` and `write` permission to that directory. The // CO SHALL be responsible for creating the directory if it does not // exist. // This is a REQUIRED field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string stagingtargetpath = 3; // Volume capability describing how the CO intends to use this volume. // SP MUST ensure the CO can use the staged volume as described. 
// Otherwise SP MUST return the appropriate gRPC error code. // This is a REQUIRED field. VolumeCapability volume_capability = 4; // Secrets required by plugin to complete node stage volume request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field."
},
{
"data": "map<string, string> secrets = 5 [(csi_secret) = true]; // Volume context as returned by SP in // CreateVolumeResponse.Volume.volume_context. // This field is OPTIONAL and MUST match the volume_context of the // volume identified by `volume_id`. map<string, string> volume_context = 6; } message NodeStageVolumeResponse { // Intentionally empty. } ``` If the plugin is unable to complete the NodeStageVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Volume published but is incompatible | 6 ALREADYEXISTS | Indicates that a volume corresponding to the specified `volumeid` has already been published at the specified `stagingtargetpath` but is incompatible with the specified `volume_capability` flag. | Caller MUST fix the arguments before retrying. | | Exceeds capabilities | 9 FAILED_PRECONDITION | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY choose to call `ValidateVolumeCapabilities` to validate the volume capabilities, or wait for the volume to be unpublished on the node. | A Node Plugin MUST implement this RPC call if it has `STAGEUNSTAGEVOLUME` node capability. This RPC is a reverse operation of `NodeStageVolume`. This RPC MUST undo the work by the corresponding `NodeStageVolume`. This RPC SHALL be called by the CO once for each `stagingtargetpath` that was successfully setup via `NodeStageVolume`. If the corresponding Controller Plugin has `PUBLISHUNPUBLISHVOLUME` controller capability and the Node Plugin has `STAGEUNSTAGEVOLUME` capability, the CO MUST guarantee that this RPC is called and returns success before calling `ControllerUnpublishVolume` for the given node and the given volume. The CO MUST guarantee that this RPC is called after all `NodeUnpublishVolume` have been called and returned success for the given volume on the given node. The Plugin SHALL assume that this RPC will be executed on the node where the volume is being used. This RPC MAY be called by the CO when the workload using the volume is being moved to a different node, or all the workloads using the volume on a node have finished. This operation MUST be idempotent. If the volume corresponding to the `volumeid` is not staged to the `stagingtarget_path`, the Plugin MUST reply `0 OK`. If this RPC failed, or the CO does not know if it failed or not, it MAY choose to call `NodeUnstageVolume` again. ```protobuf message NodeUnstageVolumeRequest { // The ID of the volume. This field is REQUIRED. string volume_id = 1; // The path at which the volume was staged. It MUST be an absolute // path in the root filesystem of the process serving this request. // This is a REQUIRED field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string stagingtargetpath = 2; } message NodeUnstageVolumeResponse { // Intentionally empty. 
} ``` If the plugin is unable to complete the NodeUnstageVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status."
},
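For illustration only, a CO-side sketch of the stage/unstage pair described above, assuming the generated Go bindings; the staging path, filesystem type, and access mode are placeholders. Because both RPCs are idempotent, the CO can safely retry them after a timeout.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// stageVolume stages a volume once per node before any NodePublishVolume.
func stageVolume(ctx context.Context, cc *grpc.ClientConn, volumeID, stagingPath string, publishContext map[string]string) error {
	node := csi.NewNodeClient(cc)
	_, err := node.NodeStageVolume(ctx, &csi.NodeStageVolumeRequest{
		VolumeId:          volumeID,
		PublishContext:    publishContext, // from ControllerPublishVolume, if any
		StagingTargetPath: stagingPath,    // directory created by the CO
		VolumeCapability: &csi.VolumeCapability{
			AccessType: &csi.VolumeCapability_Mount{
				Mount: &csi.VolumeCapability_MountVolume{FsType: "ext4"},
			},
			AccessMode: &csi.VolumeCapability_AccessMode{
				Mode: csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
			},
		},
	})
	if err != nil {
		return fmt.Errorf("NodeStageVolume failed: %w", err)
	}
	return nil
}

// unstageVolume reverses staging after all publishes have been undone.
func unstageVolume(ctx context.Context, cc *grpc.ClientConn, volumeID, stagingPath string) error {
	node := csi.NewNodeClient(cc)
	// Replies 0 OK even if the volume is no longer staged at stagingPath.
	_, err := node.NodeUnstageVolume(ctx, &csi.NodeUnstageVolumeRequest{
		VolumeId:          volumeID,
		StagingTargetPath: stagingPath,
	})
	return err
}
```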
{
"data": "If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | `NodeStageVolume`, `NodeUnstageVolume`, `NodePublishVolume`, `NodeUnpublishVolume` The following interaction semantics ARE REQUIRED if the plugin advertises the `STAGEUNSTAGEVOLUME` capability. `NodeStageVolume` MUST be called and return success once per volume per node before any `NodePublishVolume` MAY be called for the volume. All `NodeUnpublishVolume` MUST be called and return success for a volume before `NodeUnstageVolume` MAY be called for the volume. Note that this requires that all COs MUST support reference counting of volumes so that if `STAGEUNSTAGEVOLUME` is advertised by the SP, the CO MUST fulfill the above interaction semantics. This RPC is called by the CO when a workload that wants to use the specified volume is placed (scheduled) on a node. The Plugin SHALL assume that this RPC will be executed on the node where the volume will be used. If the corresponding Controller Plugin has `PUBLISHUNPUBLISHVOLUME` controller capability, the CO MUST guarantee that this RPC is called after `ControllerPublishVolume` is called for the given volume on the given node and returns a success. This operation MUST be idempotent. If the volume corresponding to the `volumeid` has already been published at the specified `targetpath`, and is compatible with the specified `volume_capability` and `readonly` flag, the Plugin MUST reply `0 OK`. If this RPC failed, or the CO does not know if it failed or not, it MAY choose to call `NodePublishVolume` again, or choose to call `NodeUnpublishVolume`. This RPC MAY be called by the CO multiple times on the same node for the same volume with a possibly different `targetpath` and/or other arguments if the volume supports either `MULTINODE...` or `SINGLENODEMULTIWRITER` access modes (see second table). The possible `MULTINODE...` access modes are `MULTINODEREADERONLY`, `MULTINODESINGLEWRITER` or `MULTINODEMULTI_WRITER`. COs SHOULD NOT call `NodePublishVolume` a second time with a different `volume_capability`. If this happens, the Plugin SHOULD return `FAILED_PRECONDITION`. The following table shows what the Plugin SHOULD return when receiving a second `NodePublishVolume` on the same volume on the same node: (T<sub>n</sub>: target path of the n<sup>th</sup> `NodePublishVolume`; P<sub>n</sub>: other arguments of the n<sup>th</sup> `NodePublishVolume` except `secrets`) | | T1=T2, P1=P2 | T1=T2, P1!=P2 | T1!=T2, P1=P2 | T1!=T2, P1!=P2 | |--|--|-||--| | MULTINODE... | OK (idempotent) | ALREADY_EXISTS | OK | OK | | Non MULTINODE... 
| OK (idempotent) | ALREADY_EXISTS | FAILED_PRECONDITION | FAILED_PRECONDITION | NOTE: If the Plugin supports the `SINGLE_NODE_MULTI_WRITER` capability, use the following table instead for what the Plugin SHOULD return when receiving a second `NodePublishVolume` on the same volume on the same node: | | T1=T2, P1=P2 | T1=T2, P1!=P2 | T1!=T2, P1=P2 | T1!=T2, P1!=P2 | ||--|-||| | SINGLE_NODE_SINGLE_WRITER | OK (idempotent) | ALREADY_EXISTS | FAILED_PRECONDITION | FAILED_PRECONDITION | | SINGLE_NODE_MULTI_WRITER | OK (idempotent) | ALREADY_EXISTS | OK | OK | | MULTI_NODE_... | OK (idempotent) | ALREADY_EXISTS | OK | OK | | Non MULTI_NODE_... | OK (idempotent) | ALREADY_EXISTS | FAILED_PRECONDITION | FAILED_PRECONDITION | The `SINGLE_NODE_SINGLE_WRITER` and `SINGLE_NODE_MULTI_WRITER` access modes are intended to replace the `SINGLE_NODE_WRITER` access mode to clarify the number of writers for a volume on a single node."
},
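The decision table above can be applied mechanically on the plugin side. The sketch below shows one possible way to do so for a volume that does not support `MULTI_NODE_...` access modes; the in-memory `publishRecord` bookkeeping and the `argsKey` encoding are assumptions made for illustration and are not prescribed by this specification.

```go
package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// publishRecord captures the arguments a volume was already published with
// on this node (illustrative in-memory state, not mandated by the spec).
type publishRecord struct {
	targetPath string
	argsKey    string // canonical encoding of the other request arguments
}

// checkRepublish applies the "second NodePublishVolume" table for a volume
// that does NOT support MULTI_NODE access modes.
func checkRepublish(existing publishRecord, targetPath, argsKey string) error {
	switch {
	case existing.targetPath == targetPath && existing.argsKey == argsKey:
		return nil // T1=T2, P1=P2: idempotent repeat, reply 0 OK
	case existing.targetPath == targetPath:
		// T1=T2, P1!=P2: same target_path, incompatible arguments
		return status.Error(codes.AlreadyExists,
			"volume already published at this target_path with different arguments")
	default:
		// T1!=T2 for a non-MULTI_NODE volume (either P case)
		return status.Error(codes.FailedPrecondition,
			"volume is already published at a different target_path")
	}
}
```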
{
"data": "Plugins MUST accept and allow use of the `SINGLENODEWRITER` access mode (subject to the processing rules above), when either `SINGLENODESINGLEWRITER` and/or `SINGLENODEMULTIWRITER` are supported, in order to permit older COs to continue working. ```protobuf message NodePublishVolumeRequest { // The ID of the volume to publish. This field is REQUIRED. string volume_id = 1; // The CO SHALL set this field to the value returned by // `ControllerPublishVolume` if the corresponding Controller Plugin // has `PUBLISHUNPUBLISHVOLUME` controller capability, and SHALL be // left unset if the corresponding Controller Plugin does not have // this capability. This is an OPTIONAL field. map<string, string> publish_context = 2; // The path to which the volume was staged by `NodeStageVolume`. // It MUST be an absolute path in the root filesystem of the process // serving this request. // It MUST be set if the Node Plugin implements the // `STAGEUNSTAGEVOLUME` node capability. // This is an OPTIONAL field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string stagingtargetpath = 3; // The path to which the volume will be published. It MUST be an // absolute path in the root filesystem of the process serving this // request. The CO SHALL ensure uniqueness of target_path per volume. // The CO SHALL ensure that the parent directory of this path exists // and that the process serving the request has `read` and `write` // permissions to that parent directory. // For volumes with an access type of block, the SP SHALL place the // block device at target_path. // For volumes with an access type of mount, the SP SHALL place the // mounted directory at target_path. // Creation of target_path is the responsibility of the SP. // This is a REQUIRED field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string target_path = 4; // Volume capability describing how the CO intends to use this volume. // SP MUST ensure the CO can use the published volume as described. // Otherwise SP MUST return the appropriate gRPC error code. // This is a REQUIRED field. VolumeCapability volume_capability = 5; // Indicates SP MUST publish the volume in readonly mode. // This field is REQUIRED. bool readonly = 6; // Secrets required by plugin to complete node publish volume request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 7 [(csi_secret) = true]; // Volume context as returned by SP in // CreateVolumeResponse.Volume.volume_context. // This field is OPTIONAL and MUST match the volume_context of the // volume identified by `volume_id`. map<string, string> volume_context = 8; } message NodePublishVolumeResponse { // Intentionally empty. } ``` If the plugin is unable to complete the NodePublishVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. 
| Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOT_FOUND | Indicates that a volume corresponding to the specified `volume_id` does not exist."
},
{
"data": "| Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Volume published but is incompatible | 6 ALREADYEXISTS | Indicates that a volume corresponding to the specified `volumeid` has already been published at the specified `targetpath` but is incompatible with the specified `volumecapability` or `readonly` flag. | Caller MUST fix the arguments before retrying. | | Exceeds capabilities | 9 FAILED_PRECONDITION | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY choose to call `ValidateVolumeCapabilities` to validate the volume capabilities, or wait for the volume to be unpublished on the node. | | Staging target path not set | 9 FAILEDPRECONDITION | Indicates that `STAGEUNSTAGEVOLUME` capability is set but no `stagingtargetpath` was set. | Caller MUST make sure call to `NodeStageVolume` is made and returns success before retrying with valid `stagingtarget_path`. | A Node Plugin MUST implement this RPC call. This RPC is a reverse operation of `NodePublishVolume`. This RPC MUST undo the work by the corresponding `NodePublishVolume`. This RPC SHALL be called by the CO at least once for each `target_path` that was successfully setup via `NodePublishVolume`. If the corresponding Controller Plugin has `PUBLISHUNPUBLISHVOLUME` controller capability, the CO SHOULD issue all `NodeUnpublishVolume` (as specified above) before calling `ControllerUnpublishVolume` for the given node and the given volume. The Plugin SHALL assume that this RPC will be executed on the node where the volume is being used. This RPC is typically called by the CO when the workload using the volume is being moved to a different node, or all the workload using the volume on a node has finished. This operation MUST be idempotent. If this RPC failed, or the CO does not know if it failed or not, it can choose to call `NodeUnpublishVolume` again. ```protobuf message NodeUnpublishVolumeRequest { // The ID of the volume. This field is REQUIRED. string volume_id = 1; // The path at which the volume was published. It MUST be an absolute // path in the root filesystem of the process serving this request. // The SP MUST delete the file or directory it created at this path. // This is a REQUIRED field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string target_path = 2; } message NodeUnpublishVolumeResponse { // Intentionally empty. } ``` If the plugin is unable to complete the NodeUnpublishVolume call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist. | Caller MUST verify that the `volume_id` is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | A Node plugin MUST implement this RPC call if it has GETVOLUMESTATS node capability or VOLUME_CONDITION node capability. 
`NodeGetVolumeStats` RPC call returns the volume capacity statistics available for the volume. If the volume is being used in `BlockVolume` mode then `used` and `available` MAY be omitted from `usage` field of `NodeGetVolumeStatsResponse`. Similarly, inode information MAY be omitted from `NodeGetVolumeStatsResponse` when unavailable. The `staging_target_path` field is not required, for backwards compatibility, but the CO SHOULD supply it."
},
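A minimal, non-normative CO-side sketch of the publish/unpublish pair described above, assuming the generated Go bindings; the paths and the read-only flag are placeholders. Both RPCs are idempotent, so retries after a timeout are safe.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// publishVolume makes a staged volume available at targetPath for a workload.
func publishVolume(ctx context.Context, cc *grpc.ClientConn, volumeID, stagingPath, targetPath string, readOnly bool, cap *csi.VolumeCapability) error {
	node := csi.NewNodeClient(cc)
	_, err := node.NodePublishVolume(ctx, &csi.NodePublishVolumeRequest{
		VolumeId:          volumeID,
		StagingTargetPath: stagingPath, // required if STAGE_UNSTAGE_VOLUME is advertised
		TargetPath:        targetPath,  // created by the SP at publish time
		VolumeCapability:  cap,
		Readonly:          readOnly,
	})
	if err != nil {
		return fmt.Errorf("NodePublishVolume failed: %w", err)
	}
	return nil
}

// unpublishVolume undoes the corresponding NodePublishVolume.
func unpublishVolume(ctx context.Context, cc *grpc.ClientConn, volumeID, targetPath string) error {
	node := csi.NewNodeClient(cc)
	// The SP deletes the file or directory it created at targetPath.
	_, err := node.NodeUnpublishVolume(ctx, &csi.NodeUnpublishVolumeRequest{
		VolumeId:   volumeID,
		TargetPath: targetPath,
	})
	return err
}
```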
{
"data": "Plugins can use this field to determine if `volume_path` is where the volume is published or staged, and setting this field to non-empty allows plugins to function with less stored state on the node. ```protobuf message NodeGetVolumeStatsRequest { // The ID of the volume. This field is REQUIRED. string volume_id = 1; // It can be any valid path where volume was previously // staged or published. // It MUST be an absolute path in the root filesystem of // the process serving this request. // This is a REQUIRED field. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string volume_path = 2; // The path where the volume is staged, if the plugin has the // STAGEUNSTAGEVOLUME capability, otherwise empty. // If not empty, it MUST be an absolute path in the root // filesystem of the process serving this request. // This field is OPTIONAL. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string stagingtargetpath = 3; } message NodeGetVolumeStatsResponse { // This field is OPTIONAL. repeated VolumeUsage usage = 1; // Information about the current condition of the volume. // This field is OPTIONAL. // This field MUST be specified if the VOLUME_CONDITION node // capability is supported. VolumeCondition volumecondition = 2 [(alphafield) = true]; } message VolumeUsage { enum Unit { UNKNOWN = 0; BYTES = 1; INODES = 2; } // The available capacity in specified Unit. This field is OPTIONAL. // The value of this field MUST NOT be negative. int64 available = 1; // The total capacity in specified Unit. This field is REQUIRED. // The value of this field MUST NOT be negative. int64 total = 2; // The used capacity in specified Unit. This field is OPTIONAL. // The value of this field MUST NOT be negative. int64 used = 3; // Units by which values are measured. This field is REQUIRED. Unit unit = 4; } // VolumeCondition represents the current condition of a volume. message VolumeCondition { option (alpha_message) = true; // Normal volumes are available for use and operating optimally. // An abnormal volume does not meet these criteria. // This field is REQUIRED. bool abnormal = 1; // The message describing the condition of the volume. // This field is REQUIRED. string message = 2; } ``` If the plugin is unable to complete the `NodeGetVolumeStats` call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified `volumeid` does not exist on specified `volumepath`. | Caller MUST verify that the `volumeid` is correct and that the volume is accessible on specified `volume_path` and has not been deleted before retrying with exponential back off. | A Node Plugin MUST implement this RPC call. This RPC allows the CO to check the supported capabilities of node service provided by the Plugin. ```protobuf message NodeGetCapabilitiesRequest { // Intentionally"
},
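A short, non-normative sketch of consuming `NodeGetVolumeStats`, assuming the generated Go bindings; the paths are placeholders. It prints the OPTIONAL usage entries and surfaces an abnormal volume condition when the plugin reports one.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// printVolumeStats queries capacity statistics (and the volume condition,
// if the plugin reports it) for a staged or published volume.
func printVolumeStats(ctx context.Context, cc *grpc.ClientConn, volumeID, volumePath, stagingPath string) error {
	node := csi.NewNodeClient(cc)
	resp, err := node.NodeGetVolumeStats(ctx, &csi.NodeGetVolumeStatsRequest{
		VolumeId:          volumeID,
		VolumePath:        volumePath,  // where the volume was staged or published
		StagingTargetPath: stagingPath, // optional, helps the plugin disambiguate
	})
	if err != nil {
		return err
	}
	for _, u := range resp.Usage {
		// used/available are OPTIONAL and may be absent in BlockVolume mode.
		fmt.Printf("unit=%s total=%d used=%d available=%d\n", u.Unit, u.Total, u.Used, u.Available)
	}
	if c := resp.VolumeCondition; c != nil && c.Abnormal {
		fmt.Printf("volume is abnormal: %s\n", c.Message)
	}
	return nil
}
```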
{
"data": "} message NodeGetCapabilitiesResponse { // All the capabilities that the node service supports. This field // is OPTIONAL. repeated NodeServiceCapability capabilities = 1; } // Specifies a capability of the node service. message NodeServiceCapability { message RPC { enum Type { UNKNOWN = 0; STAGEUNSTAGEVOLUME = 1; // If Plugin implements GETVOLUMESTATS capability // then it MUST implement NodeGetVolumeStats RPC // call for fetching volume statistics. GETVOLUMESTATS = 2; // See VolumeExpansion for details. EXPAND_VOLUME = 3; // Indicates that the Node service can report volume conditions. // An SP MAY implement `VolumeCondition` in only the Node // Plugin, only the Controller Plugin, or both. // If `VolumeCondition` is implemented in both the Node and // Controller Plugins, it SHALL report from different // perspectives. // If for some reason Node and Controller Plugins report // misaligned volume conditions, CO SHALL assume the worst case // is the truth. // Note that, for alpha, `VolumeCondition` is intended to be // informative for humans only, not for automation. VOLUMECONDITION = 4 [(alphaenum_value) = true]; // Indicates the SP supports the SINGLENODESINGLE_WRITER and/or // SINGLENODEMULTI_WRITER access modes. // These access modes are intended to replace the // SINGLENODEWRITER access mode to clarify the number of writers // for a volume on a single node. Plugins MUST accept and allow // use of the SINGLENODEWRITER access mode (subject to the // processing rules for NodePublishVolume), when either // SINGLENODESINGLEWRITER and/or SINGLENODEMULTIWRITER are // supported, in order to permit older COs to continue working. SINGLENODEMULTIWRITER = 5 [(alphaenum_value) = true]; // Indicates that Node service supports mounting volumes // with provided volume group identifier during node stage // or node publish RPC calls. VOLUMEMOUNTGROUP = 6; } Type type = 1; } oneof type { // RPC that the controller supports. RPC rpc = 1; } } ``` If the plugin is unable to complete the NodeGetCapabilities call successfully, it MUST return a non-ok gRPC code in the gRPC status. A Node Plugin MUST implement this RPC call if the plugin has `PUBLISHUNPUBLISHVOLUME` controller capability. The Plugin SHALL assume that this RPC will be executed on the node where the volume will be used. The CO SHOULD call this RPC for the node at which it wants to place the workload. The CO MAY call this RPC more than once for a given node. The SP SHALL NOT expect the CO to call this RPC more than once. The result of this call will be used by CO in `ControllerPublishVolume`. ```protobuf message NodeGetInfoRequest { } message NodeGetInfoResponse { // The identifier of the node as understood by the SP. // This field is REQUIRED. // This field MUST contain enough information to uniquely identify // this specific node vs all other nodes supported by this plugin. // This field SHALL be used by the CO in subsequent calls, including // `ControllerPublishVolume`, to refer to this node. // The SP is NOT responsible for global uniqueness of node_id across // multiple SPs. // This field overrides the general CSI size limit. // The size of this field SHALL NOT exceed 256 bytes. The general // CSI size limit, 128 byte, is RECOMMENDED for best backwards // compatibility. string node_id = 1; // Maximum number of volumes that controller can publish to the node. // If value is not set or zero CO SHALL decide how many volumes of // this type can be published by the controller to the node. The // plugin MUST NOT set negative values here. 
// This field is OPTIONAL. int64 max_volumes_per_node = 2; // Specifies where (regions, zones, racks, etc.)"
},
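Since the set of node RPCs a CO may call depends on the advertised node capabilities, a capability probe is typically the first node call a CO makes. A minimal sketch, assuming the generated Go bindings; the helper name is illustrative.

```go
package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// nodeSupportsStageUnstage reports whether the node plugin advertises
// STAGE_UNSTAGE_VOLUME, which determines whether the CO must call
// NodeStageVolume/NodeUnstageVolume around publish and unpublish.
func nodeSupportsStageUnstage(ctx context.Context, cc *grpc.ClientConn) (bool, error) {
	node := csi.NewNodeClient(cc)
	resp, err := node.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.Capabilities {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}
```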
{
"data": "the node is // accessible from. // A plugin that returns this field MUST also set the // VOLUMEACCESSIBILITYCONSTRAINTS plugin capability. // COs MAY use this information along with the topology information // returned in CreateVolumeResponse to ensure that a given volume is // accessible from a given node when scheduling workloads. // This field is OPTIONAL. If it is not specified, the CO MAY assume // the node is not subject to any topological constraint, and MAY // schedule workloads that reference any volume V, such that there are // no topological constraints declared for V. // // Example 1: // accessible_topology = // {\"region\": \"R1\", \"zone\": \"Z2\"} // Indicates the node exists within the \"region\" \"R1\" and the \"zone\" // \"Z2\". Topology accessible_topology = 3; } ``` If the plugin is unable to complete the NodeGetInfo call successfully, it MUST return a non-ok gRPC code in the gRPC status. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. A Node Plugin MUST implement this RPC call if it has `EXPAND_VOLUME` node capability. This RPC call allows CO to expand volume on a node. This operation MUST be idempotent. If a volume corresponding to the specified volume ID is already larger than or equal to the target capacity of the expansion request, the plugin SHOULD reply 0 OK. `NodeExpandVolume` ONLY supports expansion of already node-published or node-staged volumes on the given `volume_path`. If plugin has `STAGEUNSTAGEVOLUME` node capability then: `NodeExpandVolume` MUST be called after successful `NodeStageVolume`. `NodeExpandVolume` MAY be called before or after `NodePublishVolume`. Otherwise `NodeExpandVolume` MUST be called after successful `NodePublishVolume`. If a plugin only supports expansion via the `VolumeExpansion.OFFLINE` capability, then the volume MUST first be taken offline and expanded via `ControllerExpandVolume` (see `ControllerExpandVolume` for more details), and then node-staged or node-published before it can be expanded on the node via `NodeExpandVolume`. The `stagingtargetpath` field is not required, for backwards compatibility, but the CO SHOULD supply it. Plugins can use this field to determine if `volume_path` is where the volume is published or staged, and setting this field to non-empty allows plugins to function with less stored state on the node. ```protobuf message NodeExpandVolumeRequest { // The ID of the volume. This field is REQUIRED. string volume_id = 1; // The path on which volume is available. This field is REQUIRED. // This field overrides the general CSI size limit. // SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes. string volume_path = 2; // This allows CO to specify the capacity requirements of the volume // after expansion. If capacity_range is omitted then a plugin MAY // inspect the file system of the volume to determine the maximum // capacity to which the volume can be expanded. In such cases a // plugin MAY expand the volume to its maximum capacity. // This field is OPTIONAL. CapacityRange capacity_range = 3; // The path where the volume is staged, if the plugin has the // STAGEUNSTAGEVOLUME capability, otherwise empty. // If not empty, it MUST be an absolute path in the root // filesystem of the process serving this request. // This field is OPTIONAL. // This field overrides the general CSI size limit. 
// SP SHOULD support the maximum path length allowed by the operating // system/filesystem, but, at a minimum, SP MUST accept a max path // length of at least 128 bytes."
},
{
"data": "string stagingtargetpath = 4; // Volume capability describing how the CO intends to use this volume. // This allows SP to determine if volume is being used as a block // device or mounted file system. For example - if volume is being // used as a block device the SP MAY choose to skip expanding the // filesystem in NodeExpandVolume implementation but still perform // rest of the housekeeping needed for expanding the volume. If // volume_capability is omitted the SP MAY determine // accesstype from given volumepath for the volume and perform // node expansion. This is an OPTIONAL field. VolumeCapability volume_capability = 5; // Secrets required by plugin to complete node expand volume request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 6 [(csisecret) = true, (alphafield) = true]; } message NodeExpandVolumeResponse { // The capacity of the volume in bytes. This field is OPTIONAL. int64 capacity_bytes = 1; } ``` | Condition | gRPC code | Description | Recovery Behavior | |--|--|--|--| | Exceeds capabilities | 3 INVALID_ARGUMENT | Indicates that the CO has specified capabilities not supported by the volume. | Caller MAY verify volume capabilities by calling ValidateVolumeCapabilities and retry with matching capabilities. | | Volume does not exist | 5 NOTFOUND | Indicates that a volume corresponding to the specified volumeid does not exist. | Caller MUST verify that the volume_id is correct and that the volume is accessible and has not been deleted before retrying with exponential back off. | | Volume in use | 9 FAILEDPRECONDITION | Indicates that the volume corresponding to the specified `volumeid` could not be expanded because it is node-published or node-staged and the underlying filesystem does not support expansion of published or staged volumes. | Caller MUST NOT retry. | | Unsupported capacityrange | 11 OUTOF_RANGE | Indicates that the capacity range is not allowed by the Plugin. More human-readable information MAY be provided in the gRPC `status.message` field. | Caller MUST fix the capacity range before retrying. | A Plugin that implements GroupController MUST implement this RPC call. This RPC allows the CO to check the supported capabilities of group controller service provided by the Plugin. ```protobuf message GroupControllerGetCapabilitiesRequest { option (alpha_message) = true; // Intentionally empty. } message GroupControllerGetCapabilitiesResponse { option (alpha_message) = true; // All the capabilities that the group controller service supports. // This field is OPTIONAL. repeated GroupControllerServiceCapability capabilities = 1; } // Specifies a capability of the group controller service. message GroupControllerServiceCapability { option (alpha_message) = true; message RPC { enum Type { UNKNOWN = 0; // Indicates that the group controller plugin supports // creating, deleting, and getting details of a volume // group snapshot. CREATEDELETEGETVOLUMEGROUP_SNAPSHOT = 1 [(alphaenumvalue) = true]; } Type type = 1; } oneof type { // RPC that the controller supports. RPC rpc = 1; } } ``` If the plugin is unable to complete the GroupControllerGetCapabilities call successfully, it MUST return a non-ok gRPC code in the gRPC status. ALPHA FEATURE A Group Controller Plugin MUST implement this RPC call if it has `CREATEDELETEGETVOLUMEGROUP_SNAPSHOT` controller capability. This RPC will be called by the CO to create a new volume group snapshot from a list of source volumes on behalf of a user. 
The purpose of this call is to request the creation of a multi-volume group snapshot. This group snapshot MUST give a write-order consistency guarantee or fail if that's not possible."
},
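Before attempting any volume group snapshot RPC, a CO would check the group controller capabilities reported above. A minimal sketch, assuming the alpha group controller bindings are generated with standard protoc naming in the CSI Go package; the helper name is illustrative.

```go
package main

import (
	"context"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// supportsGroupSnapshots reports whether the group controller service
// advertises CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT. (Alpha API; generated
// identifier names are assumed, not normative.)
func supportsGroupSnapshots(ctx context.Context, cc *grpc.ClientConn) (bool, error) {
	gc := csi.NewGroupControllerClient(cc)
	resp, err := gc.GroupControllerGetCapabilities(ctx, &csi.GroupControllerGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.Capabilities {
		if c.GetRpc().GetType() == csi.GroupControllerServiceCapability_RPC_CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT {
			return true, nil
		}
	}
	return false, nil
}
```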
{
"data": "That is to say, all the of the volume snapshots in the group MUST be taken at the same point-in-time relative to a stream of write traffic to the specified volumes. Note that calls to this function MUST be idempotent - the function may be called multiple times for the same name, with the same `sourcevolumeids` and `parameters` - the group snapshot MUST only be created once. If a group snapshot corresponding to the specified group snapshot `name`, `sourcevolumeids`, and `parameters` is successfully created (meaning all snapshots associated with the group are successfully cut), the Plugin MUST reply `0 OK` with the corresponding `CreateVolumeGroupSnapshotResponse`. If an error occurs before a group snapshot is cut, `CreateVolumeGroupSnapshot` SHOULD return a corresponding gRPC error code that reflects the error condition. For plugins that support snapshot post processing such as uploading, CreateVolumeGroupSnapshot SHOULD return 0 OK and the readytouse field SHOULD be set to false after the group snapshot is cut but still being processed. The CO SHOULD issue the CreateVolumeGroupSnapshotRequest RPC with the same arguments again periodically until the readytouse field has a value of true indicating all the snapshots have been \"processed\" and are ready to use to create new volumes. If an error occurs during the process for any individual snapshot, CreateVolumeGroupSnapshot SHOULD return a corresponding gRPC error code that reflects the error condition. The readytouse field for each individual snapshot SHOULD have a value of false until the snapshot has been \"processed\" and is ready to use to create new volumes. After snapshot creation, any individual snapshot from the group MAY be used as a source to provision a new volume. In the VolumeGroupSnapshot message, both snapshots and groupsnapshotid are required fields. If an error occurs before all the individual snapshots are cut when creating a group snapshot of multiple volumes and a groupsnapshotid is not yet available for CO to do clean up, SP MUST return an error, and SP SHOULD also do clean up and make sure no snapshots are leaked. ```protobuf message CreateVolumeGroupSnapshotRequest { option (alpha_message) = true; // The suggested name for the group snapshot. This field is REQUIRED // for idempotency. // Any Unicode string that conforms to the length limit is allowed // except those containing the following banned characters: // U+0000-U+0008, U+000B, U+000C, U+000E-U+001F, U+007F-U+009F. // (These are control characters other than commonly used whitespace.) string name = 1; // volume IDs of the source volumes to be snapshotted together. // This field is REQUIRED. repeated string sourcevolumeids = 2; // Secrets required by plugin to complete // ControllerCreateVolumeGroupSnapshot request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. // The secrets provided in this field SHOULD be the same for // all group snapshot operations on the same group snapshot. map<string, string> secrets = 3 [(csi_secret) = true]; // Plugin specific parameters passed in as opaque key-value pairs. // This field is OPTIONAL. The Plugin is responsible for parsing and // validating these parameters. COs will treat these as opaque. map<string, string> parameters = 4; } message CreateVolumeGroupSnapshotResponse { option (alpha_message) = true; // Contains all attributes of the newly created group snapshot. // This field is REQUIRED. 
VolumeGroupSnapshot group_snapshot = 1; } message VolumeGroupSnapshot { option (alpha_message) = true; // The identifier for this group snapshot, generated by the plugin. // This field MUST contain enough information to uniquely identify // this specific snapshot vs all other group snapshots supported by // this plugin. // This field SHALL be used by the CO in subsequent calls to refer to // this group snapshot. // The SP is NOT responsible for global uniqueness of // group_snapshot_id across multiple SPs. // This field is REQUIRED."
},
{
"data": "string groupsnapshotid = 1; // A list of snapshots belonging to this group. // This field is REQUIRED. repeated Snapshot snapshots = 2; // Timestamp of when the volume group snapshot was taken. // This field is REQUIRED. .google.protobuf.Timestamp creation_time = 3; // Indicates if all individual snapshots in the group snapshot // are ready to use as a `volumecontentsource` in a // `CreateVolumeRequest`. The default value is false. // If any snapshot in the list of snapshots in this message have // readytouse set to false, the SP MUST set this field to false. // If all of the snapshots in the list of snapshots in this message // have readytouse set to true, the SP SHOULD set this field to // true. // This field is REQUIRED. bool readytouse = 4; } ``` If the plugin is unable to complete the CreateVolumeGroupSnapshot call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Group snapshot already exists but is incompatible | 6 ALREADYEXISTS | Indicates that a group snapshot corresponding to the specified group snapshot `name` already exists but is incompatible with the specified `sourcevolume_ids` or `parameters`. | Caller MUST fix the arguments or use a different `name` before retrying. | | Cannot snapshot multiple volumes together | 9 FAILED_PRECONDITION | Indicates that the specified volumes cannot be snapshotted together because the volumes are not configured properly based on requirements from the SP. | Caller MUST fix the configuration of the volumes so that they meet the requirements for group snapshotting before retrying. | | Not enough space to create group snapshot | 13 RESOURCE_EXHAUSTED | There is not enough space on the storage system to handle the create group snapshot request. | Future calls to CreateVolumeGroupSnapshot MAY succeed if space is freed up. | ALPHA FEATURE A Controller Plugin MUST implement this RPC call if it has `CREATEDELETEGETVOLUMEGROUP_SNAPSHOT` capability. This RPC will be called by the CO to delete a volume group snapshot. This operation will delete a volume group snapshot as well as all individual snapshots that are part of this volume group snapshot. This operation MUST be idempotent. If a group snapshot corresponding to the specified `groupsnapshotid` does not exist or the artifacts associated with the group snapshot do not exist anymore, the Plugin MUST reply `0 OK`. ```protobuf message DeleteVolumeGroupSnapshotRequest { option (alpha_message) = true; // The ID of the group snapshot to be deleted. // This field is REQUIRED. string groupsnapshotid = 1; // A list of snapshot IDs that are part of this group snapshot. // If SP does not need to rely on this field to delete the snapshots // in the group, it SHOULD check this field and report an error // if it has the ability to detect a mismatch. // Some SPs require this list to delete the snapshots in the group. // If SP needs to use this field to delete the snapshots in the // group, it MUST report an error if it has the ability to detect // a mismatch. // This field is REQUIRED. repeated string snapshot_ids = 2; // Secrets required by plugin to complete group snapshot deletion // request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this"
},
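Because `CreateVolumeGroupSnapshot` is idempotent for the same `name`, `source_volume_ids`, and `parameters`, the CO can re-issue the same request to poll for `ready_to_use`, as described above. A non-normative sketch, assuming the alpha group controller Go bindings; the polling interval and helper name are illustrative.

```go
package main

import (
	"context"
	"time"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// createGroupSnapshot issues CreateVolumeGroupSnapshot and re-issues the same
// idempotent request until ready_to_use reports that post-processing is done.
func createGroupSnapshot(ctx context.Context, cc *grpc.ClientConn, name string, sourceVolumeIDs []string) (*csi.VolumeGroupSnapshot, error) {
	gc := csi.NewGroupControllerClient(cc)
	req := &csi.CreateVolumeGroupSnapshotRequest{
		Name:            name, // REQUIRED for idempotency
		SourceVolumeIds: sourceVolumeIDs,
	}
	for {
		resp, err := gc.CreateVolumeGroupSnapshot(ctx, req)
		if err != nil {
			return nil, err
		}
		if resp.GroupSnapshot.ReadyToUse {
			return resp.GroupSnapshot, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(10 * time.Second): // illustrative polling interval
		}
	}
}
```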
{
"data": "// The secrets provided in this field SHOULD be the same for // all group snapshot operations on the same group snapshot. map<string, string> secrets = 3 [(csi_secret) = true]; } message DeleteVolumeGroupSnapshotResponse { // Intentionally empty. option (alpha_message) = true; } ``` If the plugin is unable to complete the DeleteVolumeGroupSnapshot call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Snapshot list mismatch | 3 INVALIDARGUMENT | Besides the general cases, this code SHOULD also be used to indicate when plugin supporting CREATEDELETEGETVOLUMEGROUPSNAPSHOT detects a mismatch in the `snapshotids`. | If a mismatch is detected in the `snapshotids`, caller SHOULD use different `snapshot_ids`. | | Volume group snapshot in use | 9 FAILEDPRECONDITION | Indicates that the volume group snapshot corresponding to the specified `groupsnapshot_id` could not be deleted because it is in use by another resource. | Caller SHOULD ensure that there are no other resources using the volume group snapshot, and then retry with exponential back off. | ALPHA FEATURE This optional RPC MAY be called by the CO to fetch current information about a volume group snapshot. A Controller Plugin MUST implement this `GetVolumeGroupSnapshot` RPC call if it has `CREATEDELETEGETVOLUMEGROUP_SNAPSHOT` capability. `GetVolumeGroupSnapshotResponse` should contain current information of a volume group snapshot if it exists. If the volume group snapshot does not exist any more, `GetVolumeGroupSnapshot` should return gRPC error code `NOT_FOUND`. ```protobuf message GetVolumeGroupSnapshotRequest { option (alpha_message) = true; // The ID of the group snapshot to fetch current group snapshot // information for. // This field is REQUIRED. string groupsnapshotid = 1; // A list of snapshot IDs that are part of this group snapshot. // If SP does not need to rely on this field to get the snapshots // in the group, it SHOULD check this field and report an error // if it has the ability to detect a mismatch. // Some SPs require this list to get the snapshots in the group. // If SP needs to use this field to get the snapshots in the // group, it MUST report an error if it has the ability to detect // a mismatch. // This field is REQUIRED. repeated string snapshot_ids = 2; // Secrets required by plugin to complete // GetVolumeGroupSnapshot request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. // The secrets provided in this field SHOULD be the same for // all group snapshot operations on the same group snapshot. map<string, string> secrets = 3 [(csi_secret) = true]; } message GetVolumeGroupSnapshotResponse { option (alpha_message) = true; // This field is REQUIRED VolumeGroupSnapshot group_snapshot = 1; } ``` If the plugin is unable to complete the GetVolumeGroupSnapshot call successfully, it MUST return a non-ok gRPC code in the gRPC status. If the conditions defined below are encountered, the plugin MUST return the specified gRPC error code. The CO MUST implement the specified error recovery behavior when it encounters the gRPC error code. 
| Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Snapshot list mismatch | 3 INVALID_ARGUMENT | Besides the general cases, this code SHOULD also be used to indicate when plugin supporting CREATE_DELETE_GET_VOLUME_GROUP_SNAPSHOT detects a mismatch in the `snapshot_ids`. | If a mismatch is detected in the `snapshot_ids`, caller SHOULD use different `snapshot_ids`."
},
{
"data": "| | Volume group snapshot does not exist | 5 NOTFOUND | Indicates that a volume group snapshot corresponding to the specified `groupsnapshotid` does not exist. | Caller MUST verify that the `groupsnapshot_id` is correct and that the volume group snapshot is accessible and has not been deleted before retrying with exponential back off. | ALPHA FEATURE The Snapshot Metadata service is an optional service that is used to retrieve metadata on the allocated blocks of a single snapshot, or the changed blocks between a pair of snapshots of the same block volume. Retrieval of the data blocks of a snapshot is not addressed by this service and it is assumed that existing mechanisms to read snapshot data will suffice. Block volume data ranges are specified by a sequence of `BlockMetadata` tuples. Tuples in a sequence are in ascending order of `byte_offset`, with no overlap between adjacent tuples. ```protobuf // BlockMetadata specifies a data range. message BlockMetadata { // This is the zero based byte position in the volume or snapshot, // measured from the start of the object. // This field is REQUIRED. int64 byte_offset = 1; // This is the size of the data range. // size_bytes MUST be greater than zero. // This field is REQUIRED. int64 size_bytes = 2; } ``` The `BlockMetadataType` enumerated type describes how the `size_bytes` fields of `BlockMetadata` tuples may vary in a sequence. There are two prevalent styles in which data ranges are described: The FIXED_LENGTH style requires a fixed value for the `size_bytes` field in all tuples in a given sequence. The VARIABLE_LENGTH style does not constrain the value of the `size_bytes` field in a sequence. The Snapshot Metadata service permits either style at the discretion of the plugin as long as the style does not change mid-stream in any given RPC. The style is represented by the following data type: ```protobuf enum BlockMetadataType { UNKNOWN = 0; // The FIXED_LENGTH value indicates that data ranges are // returned in fixed size blocks. FIXED_LENGTH = 1; // The VARIABLE_LENGTH value indicates that data ranges // are returned in potentially variable sized extents. VARIABLE_LENGTH = 2; } ``` The remote procedure calls of this service return snapshot metadata within a gRPC stream of response messages. There are some important things for the SP and CO to consider here: In normal operation an SP MUST terminate the stream only after ALL metadata is transmitted, using the language specific idiom for terminating a gRPC stream source. This results in the CO receiving its language specific notification idiom for the end of a gRPC stream, which provides a definitive indication that all available metadata has been received. If the SP encounters an error while recovering the metadata it MUST abort transmission of the stream with its language specific error idiom. This results in the CO receiving the error in its language specific idiom, which will be different from the language specific idiom for the end of a gRPC stream. It is possible that the gRPC stream gets interrupted for arbitrary reasons beyond the control of either the SP or the CO. The SP will get an error when writing to the stream and MUST abort its transmission. The CO will receive an error in its language specific idiom, which will be different from the language specific idiom for the end of a gRPC stream. 
In all circumstances where the CO receives an error when reading from the gRPC stream it MAY attempt to continue the operation by re-sending its request message but with a `starting_offset` adjusted to the NEXT byte position beyond that of any metadata already received."
},
{
"data": "The SP MUST always ensure that the `startingoffset` requested be considered in the computation of the data range for the first message in the returned gRPC stream, though the data range of the first message is not required to actually include the `startingoffset` if there is no applicable data between the `starting_offset` and the start of the data range returned by the first message. The plugin must implement this RPC if it provides the SnapshotMetadata service. ```protobuf // The GetMetadataAllocatedRequest message is used to solicit metadata // on the allocated blocks of a snapshot: i.e. this identifies the // data ranges that have valid data as they were the target of some // previous write operation on the volume. message GetMetadataAllocatedRequest { // This is the identifier of the snapshot. // This field is REQUIRED. string snapshot_id = 1; // This indicates the zero based starting byte position in the volume // snapshot from which the result should be computed. // It is intended to be used to continue a previously interrupted // call. // The CO SHOULD specify this value to be the offset of the byte // position immediately after the last byte of the last data range // received, if continuing an interrupted operation, or zero if not. // The SP MUST ensure that the returned response stream does not // contain BlockMetadata tuples that end before the requested // startingoffset: i.e. if S is the requested startingoffset, and // B0 is block_metadata[0] of the first message in the response // stream, then (S < B0.byteoffset + B0.sizebytes) must be true. // This field is REQUIRED. int64 starting_offset = 2; // This is an optional parameter, and if non-zero it specifies the // maximum number of tuples to be returned in each // GetMetadataAllocatedResponse message returned by the RPC stream. // The plugin will determine an appropriate value if 0, and is // always free to send less than the requested value. // This field is OPTIONAL. int32 max_results = 3; // Secrets required by plugin to complete the request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 4 [(csi_secret) = true]; } // GetMetadataAllocatedResponse messages are returned in a gRPC stream. // Cumulatively, they provide information on the allocated data // ranges in the snapshot. message GetMetadataAllocatedResponse { // This specifies the style used in the BlockMetadata sequence. // This value must be the same in all such messages returned by // the stream. // If blockmetadatatype is FIXEDLENGTH, then the sizebytes field // of each message in the block_metadata list MUST be constant. // This field is REQUIRED. BlockMetadataType blockmetadatatype = 1; // This returns the capacity of the underlying volume in bytes. // This value must be the same in all such messages returned by // the stream. // This field is REQUIRED. int64 volumecapacitybytes = 2; // This is a list of data range tuples. // If the value of max_results in the GetMetadataAllocatedRequest // message is greater than zero, then the number of entries in this // list MUST be less than or equal to that value. // The SP MUST respect the value of starting_offset in the request. // The byte_offset fields of adjacent BlockMetadata messages // MUST be strictly increasing and messages MUST NOT overlap: // i.e. for any two BlockMetadata messages, A and B, if A is returned // before B, then (A.byteoffset + A.sizebytes <= B.byte_offset) // MUST be"
},
{
"data": "// This MUST also be true if A and B are from block_metadata lists in // different GetMetadataAllocatedResponse messages in the gRPC stream. // This field is OPTIONAL. repeated BlockMetadata block_metadata = 3; } ``` If the plugin is unable to complete the `GetMetadataAllocated` call successfully it must return a non-OK gRPC code in the gRPC status. The following conditions are well defined: | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Missing or otherwise invalid argument | 3 INVALID_ARGUMENT | Indicates that a required argument field was not specified or an argument value is invalid | The caller should correct the error and resubmit the call. | | Invalid `snapshotid` | 5 NOTFOUND | Indicates that the snapshot specified was not found. | The caller should re-check that this object exists. | | Invalid `startingoffset` | 11 OUTOF_RANGE | The starting offset is negative or exceeds the volume size. | The caller should specify a valid offset. | The plugin must implement this RPC if it provides the SnapshotMetadata service. ```protobuf // The GetMetadataDeltaRequest message is used to solicit metadata on // the data ranges that have changed between two snapshots. message GetMetadataDeltaRequest { // This is the identifier of the snapshot against which changes // are to be computed. // This field is REQUIRED. string basesnapshotid = 1; // This is the identifier of a second snapshot in the same volume, // created after the base snapshot. // This field is REQUIRED. string targetsnapshotid = 2; // This indicates the zero based starting byte position in the volume // snapshot from which the result should be computed. // It is intended to be used to continue a previously interrupted // call. // The CO SHOULD specify this value to be the offset of the byte // position immediately after the last byte of the last data range // received, if continuing an interrupted operation, or zero if not. // The SP MUST ensure that the returned response stream does not // contain BlockMetadata tuples that end before the requested // startingoffset: i.e. if S is the requested startingoffset, and // B0 is block_metadata[0] of the first message in the response // stream, then (S < B0.byteoffset + B0.sizebytes) must be true. // This field is REQUIRED. int64 starting_offset = 3; // This is an optional parameter, and if non-zero it specifies the // maximum number of tuples to be returned in each // GetMetadataDeltaResponse message returned by the RPC stream. // The plugin will determine an appropriate value if 0, and is // always free to send less than the requested value. // This field is OPTIONAL. int32 max_results = 4; // Secrets required by plugin to complete the request. // This field is OPTIONAL. Refer to the `Secrets Requirements` // section on how to use this field. map<string, string> secrets = 5 [(csi_secret) = true]; } // GetMetadataDeltaResponse messages are returned in a gRPC stream. // Cumulatively, they provide information on the data ranges that // have changed between the base and target snapshots specified // in the GetMetadataDeltaRequest message. message GetMetadataDeltaResponse { // This specifies the style used in the BlockMetadata sequence. // This value must be the same in all such messages returned by // the stream. // If blockmetadatatype is FIXEDLENGTH, then the sizebytes field // of each message in the block_metadata list MUST be constant. // This field is REQUIRED. 
BlockMetadataType block_metadata_type = 1; // This returns the capacity of the underlying volume in bytes. // This value must be the same in all such messages returned by // the stream. // This field is REQUIRED."
},
{
"data": "int64 volumecapacitybytes = 2; // This is a list of data range tuples. // If the value of max_results in the GetMetadataDeltaRequest message // is greater than zero, then the number of entries in this list MUST // be less than or equal to that value. // The SP MUST respect the value of starting_offset in the request. // The byte_offset fields of adjacent BlockMetadata messages // MUST be strictly increasing and messages MUST NOT overlap: // i.e. for any two BlockMetadata messages, A and B, if A is returned // before B, then (A.byteoffset + A.sizebytes <= B.byte_offset) // MUST be true. // This MUST also be true if A and B are from block_metadata lists in // different GetMetadataDeltaResponse messages in the gRPC stream. // This field is OPTIONAL. repeated BlockMetadata block_metadata = 3; } ``` If the plugin is unable to complete the `GetMetadataDelta` call successfully it must return a non-OK gRPC code in the gRPC status. The following conditions are well defined: | Condition | gRPC Code | Description | Recovery Behavior | |--|--|-|-| | Missing or otherwise invalid argument | 3 INVALID_ARGUMENT | Indicates that a required argument field was not specified or an argument value is invalid | The caller should correct the error and resubmit the call. | | Invalid `basesnapshotid` or `targetsnapshotid` | 5 NOT_FOUND | Indicates that the snapshots specified were not found. | The caller should re-check that these objects exist. | | Invalid `startingoffset` | 11 OUTOF_RANGE | The starting offset is negative or exceeds the volume size. | The caller should specify a valid offset. | A CO SHALL communicate with a Plugin using gRPC to access the `Identity`, and (optionally) the `Controller` and `Node` services. proto3 SHOULD be used with gRPC, as per the . All Plugins SHALL implement the REQUIRED Identity service RPCs. Support for OPTIONAL RPCs is reported by the `ControllerGetCapabilities` and `NodeGetCapabilities` RPC calls. The CO SHALL provide the listen-address for the Plugin by way of the `CSI_ENDPOINT` environment variable. Plugin components SHALL create, bind, and listen for RPCs on the specified listen address. Only UNIX Domain Sockets MAY be used as endpoints. This will likely change in a future version of this specification to support non-UNIX platforms. All supported RPC services MUST be available at the listen address of the Plugin. The CO operator and Plugin Supervisor SHOULD take steps to ensure that any and all communication between the CO and Plugin Service are secured according to best practices. Communication between a CO and a Plugin SHALL be transported over UNIX Domain Sockets. gRPC is compatible with UNIX Domain Sockets; it is the responsibility of the CO operator and Plugin Supervisor to properly secure access to the Domain Socket using OS filesystem ACLs and/or other OS-specific security context tooling. SPs supplying stand-alone Plugin controller appliances, or other remote components that are incompatible with UNIX Domain Sockets MUST provide a software component that proxies communication between a UNIX Domain Socket and the remote component(s). Proxy components transporting communication over IP networks SHALL be responsible for securing communications over such networks. Both the CO and Plugin SHOULD avoid accidental leakage of sensitive information (such as redacting such information from log files). Debugging and tracing are supported by external, CSI-independent additions and extensions to gRPC APIs, such as . 
The `CSI_ENDPOINT` environment variable SHALL be supplied to the Plugin by the Plugin Supervisor. An operator SHALL configure the CO to connect to the Plugin via the listen address identified by `CSI_ENDPOINT`"
},
{
"data": "With exception to sensitive data, Plugin configuration SHOULD be specified by environment variables, whenever possible, instead of by command line flags or bind-mounted/injected files. Supervisor -> Plugin: `CSI_ENDPOINT=unix:///path/to/unix/domain/socket.sock`. Operator -> CO: use plugin at endpoint `unix:///path/to/unix/domain/socket.sock`. CO: monitor `/path/to/unix/domain/socket.sock`. Plugin: read `CSI_ENDPOINT`, create UNIX socket at specified path, bind and listen. CO: observe that socket now exists, establish connection. CO: invoke `GetPluginCapabilities`. Plugins SHALL NOT specify requirements that include or otherwise reference directories and/or files on the root filesystem of the CO. Plugins SHALL NOT create additional files or directories adjacent to the UNIX socket specified by `CSI_ENDPOINT`; violations of this requirement constitute \"abuse\". The Plugin Supervisor is the ultimate authority of the directory in which the UNIX socket endpoint is created and MAY enforce policies to prevent and/or mitigate abuse of the directory by Plugins. For Plugins packaged in software form: Plugin Packages SHOULD use a well-documented container image format (e.g., Docker, OCI). The chosen package image format MAY expose configurable Plugin properties as environment variables, unless otherwise indicated in the section below. Variables so exposed SHOULD be assigned default values in the image manifest. A Plugin Supervisor MAY programmatically evaluate or otherwise scan a Plugin Packages image manifest in order to discover configurable environment variables. A Plugin SHALL NOT assume that an operator or Plugin Supervisor will scan an image manifest for environment variables. Variables defined by this specification SHALL be identifiable by their `CSI_` name prefix. Configuration properties not defined by the CSI specification SHALL NOT use the same `CSI_` name prefix; this prefix is reserved for common configuration properties defined by the CSI specification. The Plugin Supervisor SHOULD supply all RECOMMENDED CSI environment variables to a Plugin. The Plugin Supervisor SHALL supply all REQUIRED CSI environment variables to a Plugin. Network endpoint at which a Plugin SHALL host CSI RPC services. The general format is: {scheme}://{authority}{endpoint} The following address types SHALL be supported by Plugins: unix:///path/to/unix/socket.sock Note: All UNIX endpoints SHALL end with `.sock`. See . This variable is REQUIRED. The Plugin Supervisor expects that a Plugin SHALL act as a long-running service vs. an on-demand, CLI-driven process. Supervised plugins MAY be isolated and/or resource-bounded. Plugins SHOULD generate log messages to ONLY standard output and/or standard error. In this case the Plugin Supervisor SHALL assume responsibility for all log lifecycle management. Plugin implementations that deviate from the above recommendation SHALL clearly and unambiguously document the following: Logging configuration flags and/or variables, including working sample configurations. Default log destination(s) (where do the logs go if no configuration is specified?) Log lifecycle management ownership and related guidance (size limits, rate limits, rolling, archiving, expunging, etc.) applicable to the logging mechanism embedded within the Plugin. Plugins SHOULD NOT write potentially sensitive data to logs (e.g. secrets). Plugin Packages MAY support all or a subset of CSI services; service combinations MAY be configurable at runtime by the Plugin Supervisor. 
A plugin MUST know the \"mode\" in which it is operating (e.g. node, controller, or both). This specification does not dictate the mechanism by which the mode of operation MUST be discovered, and instead places that burden upon the SP. Misconfigured plugin software SHOULD fail-fast with an OS-appropriate error code. The Plugin Supervisor SHALL guarantee that plugins will have `CAP_SYS_ADMIN` capability on Linux when running on Nodes. Plugins SHOULD clearly document any additionally required capabilities and/or security context. A Plugin SHOULD NOT assume that it is in the same Linux namespaces as the Plugin Supervisor. The CO MUST clearly document the requirements for Node Plugins and the Plugin Supervisor SHALL satisfy the CO's requirements. A Plugin MAY be constrained by cgroups. An operator or Plugin
}
] |
{
"category": "Runtime",
"file_name": "spec.md",
"project_name": "Container Storage Interface (CSI)",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Properly adjusting the kernel FUSE parameters can achieve better performance in sequential writing and high-concurrency scenarios. You can refer to the following steps: Obtain the Linux kernel source code. Download the corresponding Linux kernel source code package and install the source code. The source code installation directory is `~/rpmbuild/BUILD/`. ``` bash rpm -i kernel-3.10.0-327.28.3.el7.src.rpm 2>&1 | grep -v exist cd ~/rpmbuild/SPECS rpmbuild -bp --target=$(uname -m) kernel.spec ``` Optimize the Linux FUSE kernel module parameters. To achieve optimal performance, you can modify the FUSE parameters `FUSEMAXPAGESPERREQ` and `FUSEDEFAULTMAX_BACKGROUND`. The optimized reference values are as follows: ``` C / fs/fuse/fuse_i.h / / fs/fuse/inode.c / ``` Compile the corresponding version of the Linux kernel module. ``` bash yum install kernel-devel-3.10.0-327.28.3.el7.x86_64 cd ~/rpmbuild/BUILD/kernel-3.10.0-327.28.3.el7/linux-3.10.0-327.28.3.el7.x86_64/fs/fuse make -C /lib/modules/`uname -r`/build M=$PWD ``` Insert the kernel module. ``` bash cp fuse.ko /lib/modules/`uname -r`/kernel/fs/fuse rmmod fuse depmod -a modprobe fuse ```"
}
] |
{
"category": "Runtime",
"file_name": "fuse.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- Insert PR description--> Please check the type of change your PR introduces: [ ] :construction: Work in Progress [ ] :rainbow: Refactoring (no functional changes, no api changes) [ ] :hamster: Trivial/Minor [ ] :bug: Bugfix [ ] :sunflower: Feature [ ] :world_map: Documentation [ ] :robot: Test fixes #issue-number <!-- Will run prior to merging.--> <!-- Include example how to run.--> [ ] :muscle: Manual [ ] :zap: Unit test [ ] :green_heart: E2E"
}
] |
{
"category": "Runtime",
"file_name": "pull_request_template.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Concepts menu_order: 10 search_type: Documentation This section describes some of the essential concepts with which you will need to be familiar before continuing to the example deployment scenarios. The following concepts are described: * * * * For the purposes of this documentation a host is an installation of the Linux operating system that is running an instance of the Docker Engine. The host may be executing directly on bare hardware or inside a virtual machine. A peer is a running instance of Weave Net, typically one per host. Weave Net peers are identified by a 48-bit value formatted like an ethernet MAC address, for example, `01:23:45:67:89:ab`. The 'peer name' is used for various purposes: Routing of packets between containers on the overlay network Recording the origin peer of DNS entries Recording ownership of IP address ranges While it is desirable for the peer name to remain stable across restarts, it is essential that it is unique. If two or more peers share the same name chaos will ensue, which includes but is not limited to double allocation of addresses and the inability to route packets on the overlay network. When the router is launched on a host, it derives its peer name in order of preference: From the command line, where the user is responsible for uniqueness and stability From Weave's data volume container (`weavedb`), if you upgrade your Weave Net install from a pre-1.9 version and the underlying machine IDs still match. From a unique machine ID. This is generated by combining IDs from or with the BIOS product UUID or hypervisor UUID. Most machines have one or two of these IDs, and the combination is practically stable across restarts unless a user re-generates it. If none of the machine IDs are found, a random value is generated. This is practically unique across different physical hardware and cloned VMs but not stable across restarts. The appropriate strategy for assigning peer names depends on the type and method of your particular deployment and is discussed in more detail below. Peer discovery is a mechanism that allows peers to learn about new Weave Net hosts from existing peers without being explicitly told. Peer discovery is . A network partition is a transient condition whereby some arbitrary subsets of peers are unable to communicate with each other for the duration - perhaps because a network switch has failed, or a fibre optic line"
},
{
"data": "Weave Net is designed to allow peers and their containers to make maximum safe progress under conditions of partition, healing automatically once the partition is over. is the subsystem responsible for dividing up a large contiguous block of IP addresses (known as the IP allocation range) amongst peers so that individual addresses may be uniquely assigned to containers anywhere on the overlay network. When a new network is formed an initial division of the IP allocation range must be made. Two (mutually exclusive) mechanisms with different tradeoffs are provided to perform this task: seeding and consensus. Seeding requires each peer to be told the list of peer names amongst which the address space is to be divided initially. There are some constraints and consequences: Every peer added to the network must receive the same seed list, for all time, or they will not be able to join together to form a single cohesive whole. Because the 'product UUID' and 'random value' methods of peer name assignment are unpredictable, the end user must by necessity also specify peer names. Even though every peer must receive the same seed, that seed does not have to include every peer in the network, nor does it have to be updated when new peers are added (in fact due to the first constraint above it may not be). Example configurations are given in the section on deployment scenarios: Alternatively, when a new network is formed for the first time, peers can be configured to co-ordinate amongst themselves to automatically divide up the IP allocation range. This process is known as consensus and it requires each peer to be told the total number of expected peers (the 'initial peer count') in order to prevent the formation of disjoint peer groups which would, ultimately, result in duplicate IP addresses. Example configurations are given in the section on deployment scenarios: Finally, an option is provided to start a peer as an observer. Such peers do not require a seed peer name list or an initial peer count; instead they rely on the existence of other peers in the network which have been so configured. When an observer needs address space, it asks for it from one of the peers which partook of the initial division, triggering consensus if necessary. Example configurations are given in the section on deployment scenarios: Certain information is remembered between launches of Weave Net (for example across reboots): The division of the IP allocation range amongst peers Allocation of addresses to containers The persistence of this information is managed transparently in a volume container but can be if necessary."
}
] |
{
"category": "Runtime",
"file_name": "concepts.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "In our , we covered some secure design principles and how they guided the architecture of gVisor as a whole. In this post, we will cover how these principles guided the networking architecture of gVisor, and the tradeoffs involved. In particular, we will cover how these principles culminated in two networking modes, how they work, and the properties of each. Linux networking is complicated. The TCP protocol is over 40 years old, and has been repeatedly extended over the years to keep up with the rapid pace of network infrastructure improvements, all while maintaining compatibility. On top of that, Linux networking has a fairly large API surface. Linux supports for the most common socket types alone. In fact, the net subsystem is one of the largest and fastest growing in Linux at approximately 1.1 million lines of code. For comparison, that is several times the size of the entire gVisor codebase. At the same time, networking is increasingly important. The cloud era is arguably about making everything a network service, and in order to make that work, the interconnect performance is critical. Adding networking support to gVisor was difficult, not just due to the inherent complexity, but also because it has the potential to significantly weaken gVisor's security model. As outlined in the previous blog post, gVisor's are: Defense in Depth: each component of the software stack trusts each other component as little as possible. Least Privilege: each software component has only the permissions it needs to function, and no more. Attack Surface Reduction: limit the surface area of the host exposed to the sandbox. Secure by Default: the default choice for a user should be safe. gVisor manifests these principles as a multi-layered system. An application running in the sandbox interacts with the Sentry, a userspace kernel, which mediates all interactions with the Host OS and beyond. The Sentry is written in pure Go with minimal unsafe code, making it less vulnerable to buffer overflows and related memory bugs that can lead to a variety of compromises including code injection. It emulates Linux using only a minimal and audited set of Host OS syscalls that limit the Host OS's attack surface exposed to the Sentry itself. The syscall restrictions are enforced by running the Sentry with seccomp filters, which enforce that the Sentry can only use the expected set of syscalls. The Sentry runs as an unprivileged user and in namespaces, which, along with the seccomp filters, ensure that the Sentry is run with the Least Privilege required. gVisor's multi-layered design provides Defense in Depth. The Sentry, which does not trust the application because it may attack the Sentry and try to bypass it, is the first layer. The sandbox that the Sentry runs in is the second layer. If the Sentry were compromised, the attacker would still be in a highly restrictive sandbox which they must also break out of in order to compromise the Host"
},
{
"data": "To enable networking functionality while preserving gVisor's security properties, we implemented a in the Sentry, which we creatively named Netstack. Netstack is also written in Go, not only to avoid unsafe code in the network stack itself, but also to avoid a complicated and unsafe Foreign Function Interface. Having its own integrated network stack allows the Sentry to implement networking operations using up to three Host OS syscalls to read and write packets. These syscalls allow a very minimal set of operations which are already allowed (either through the same or a similar syscall). Moreover, because packets typically come from off-host (e.g. the internet), the Host OS's packet processing code has received a lot of scrutiny, hopefully resulting in a high degree of hardening. Netstack was written from scratch specifically for gVisor. Because Netstack was designed and implemented to be modular, flexible and self-contained, there are now several more projects using Netstack in creative and exciting ways. As we discussed, a custom network stack has enabled a variety of security-related goals which would not have been possible any other way. This came at a cost though. Network stacks are complex and writing a new one comes with many challenges, mostly related to application compatibility and performance. Compatibility issues typically come in two forms: missing features, and features with behavior that differs from Linux (usually due to bugs). Both of these are inevitable in an implementation of a complex system spanning many quickly evolving and ambiguous standards. However, we have invested heavily in this area, and the vast majority of applications have no issues using Netstack. For example, versus . We are continuing to make good progress in this area. Performance issues typically come from TCP behavior and packet processing speed. To improve our TCP behavior, we are working on implementing the full set of TCP RFCs. There are many RFCs which are significant to performance (e.g. and ) that we have yet to implement. This mostly affects TCP performance with non-ideal network conditions (e.g. cross continent connections). Faster packet processing mostly improves TCP performance when network conditions are very good (e.g. within a datacenter). Our primary strategy here is to reduce interactions with the Go runtime, specifically the garbage collector (GC) and scheduler. We are currently optimizing buffer management to reduce the amount of garbage, which will lower the GC cost. To reduce scheduler interactions, we are re-architecting the TCP implementation to use fewer goroutines. Performance today is good enough for most applications and we are making steady improvements. For example, since May of 2019, we have improved the Netstack runsc score by roughly 15% and upload score by around 10,000X. Current numbers are about 17 Gbps download and about 8 Gbps upload versus about 42 Gbps and 43 Gbps for native (Linux) respectively. We also offer an alternative network mode: passthrough. This name can be misleading as syscalls are never passed through from the app to the Host"
},
{
"data": "Instead, the passthrough mode implements networking in gVisor using the Host OS's network stack. (This mode is called in the codebase.) Passthrough mode can improve performance for some use cases as the Host OS's network stack has had an enormous number of person-years poured into making it highly performant. However, there is a rather large downside to using passthrough mode: it weakens gVisor's security model by increasing the Host OS's Attack Surface. This is because using the Host OS's network stack requires the Sentry to use the Host OS's . The Berkeley socket interface is a much larger API surface than the packet interface that our network stack uses. When passthrough mode is in use, the Sentry is allowed to use . Further, this set of syscalls includes some that allow the Sentry to create file descriptors, something that as it opens up classes of file-based attacks. There are some networking features that we can't implement on top of syscalls that we feel are safe (most notably those behind ) and therefore are not supported. Because of this, we actually support fewer networking features in passthrough mode than we do in Netstack, reducing application compatibility. That's right: using our networking stack provides better overall application compatibility than using our passthrough mode. That said, gVisor with passthrough networking still provides a high level of isolation. Applications cannot specify host syscall arguments directly, and the sentry's seccomp policy restricts its syscall use significantly more than a general purpose seccomp policy. The goal of the Secure by Default principle is to make it easy to securely sandbox containers. Of course, disabling network access entirely is the most secure option, but that is not practical for most applications. To make gVisor Secure by Default, we have made Netstack the default networking mode in gVisor as we believe that it provides significantly better isolation. For this reason we strongly caution users from changing the default unless Netstack flat out won't work for them. The passthrough mode option is still provided, but we want users to make an informed decision when selecting it. Another way in which gVisor makes it easy to securely sandbox containers is by allowing applications to run unmodified, with no special configuration needed. In order to do this, gVisor needs to support all of the features and syscalls that applications use. Neither seccomp nor gVisor's passthrough mode can do this as applications commonly use syscalls which are too dangerous to be included in a secure policy. Even if this dream isn't fully realized today, gVisor's architecture with Netstack makes this possible. If you haven't already, try running a workload in gVisor with Netstack. You can find instructions on how to get started in our . We want to hear about both your successes and any issues you encounter. We welcome your contributions, whether that be verbal feedback or code contributions, via our , , , and . Feel free to express interest in an , or reach out if you aren't sure where to start."
}
] |
{
"category": "Runtime",
"file_name": "2020-04-02-networking-security.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "carina-node runs in hostNetwork mode and listens to `8080`. ```shell \"--metrics-addr=:8080\" ``` If changing those ports, please also change the csi-carina-node service. carina-controller listens to `8080 8443`. ```shell \"--metrics-addr=:8080\" \"--webhook-addr=:8443\" ``` carina metrics | Metrics | Description | | - | - | | carinascrapecollectordurationseconds | carinacsiexporter: Duration of a collector scrape | | carinascrapecollectorsuccess | carinacsi_exporter: Whether a collector succeeded | | carinavolumegroupstatscapacitybytestotal | The number of lvm vg total bytes | | carinavolumegroupstatscapacitybytesused | The number of lvm vg used bytes | | carinavolumegroupstatslv_total | The number of lv total | | carinavolumegroupstatspv_total | The number of pv total | | carinavolumestatsreadscompleted_total | The total number of reads completed successfully | | carinavolumestatsreadsmerged_total | The total number of reads merged | | carinavolumestatsreadbytes_total | The total number of bytes read successfully | | carinavolumestatsreadtimesecondstotal | The total number of seconds spent by all reads | | carinavolumestatswritescompleted_total | The total number of writes completed successfully | | carinavolumestatswritesmerged_total | The number of writes merged | | carinavolumestatswritebytes_total | The total number of bytes write successfully | | carinavolumestatswritetimesecondstotal | This is the total number of seconds spent by all writes | | carinavolumestatsionow | The number of I/Os currently in progress | | carinavolumestatsiotimesecondstotal | Total seconds spent doing I/Os | carina provides a wealth of storage volume metrics, and kubelet itself also exposes PVC capacity and other metrics, as seen in the Grafana Kubernetes built-in view of this template. Notice The storage capacity indicator of the PVC is displayed only when the PVC is in use and mounted to the node"
}
] |
{
"category": "Runtime",
"file_name": "metrics.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Tag | string | | Socket | string | | NumQueues | int32 | | [default to 1] QueueSize | int32 | | [default to 1024] PciSegment | Pointer to int32 | | [optional] Id | Pointer to string | | [optional] `func NewFsConfig(tag string, socket string, numQueues int32, queueSize int32, ) *FsConfig` NewFsConfig instantiates a new FsConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewFsConfigWithDefaults() *FsConfig` NewFsConfigWithDefaults instantiates a new FsConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *FsConfig) GetTag() string` GetTag returns the Tag field if non-nil, zero value otherwise. `func (o FsConfig) GetTagOk() (string, bool)` GetTagOk returns a tuple with the Tag field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetTag(v string)` SetTag sets Tag field to given value. `func (o *FsConfig) GetSocket() string` GetSocket returns the Socket field if non-nil, zero value otherwise. `func (o FsConfig) GetSocketOk() (string, bool)` GetSocketOk returns a tuple with the Socket field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetSocket(v string)` SetSocket sets Socket field to given value. `func (o *FsConfig) GetNumQueues() int32` GetNumQueues returns the NumQueues field if non-nil, zero value otherwise. `func (o FsConfig) GetNumQueuesOk() (int32, bool)` GetNumQueuesOk returns a tuple with the NumQueues field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetNumQueues(v int32)` SetNumQueues sets NumQueues field to given value. `func (o *FsConfig) GetQueueSize() int32` GetQueueSize returns the QueueSize field if non-nil, zero value otherwise. `func (o FsConfig) GetQueueSizeOk() (int32, bool)` GetQueueSizeOk returns a tuple with the QueueSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetQueueSize(v int32)` SetQueueSize sets QueueSize field to given value. `func (o *FsConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o FsConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *FsConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *FsConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o FsConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *FsConfig) SetId(v string)` SetId sets Id field to given value. `func (o *FsConfig) HasId() bool` HasId returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "FsConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Enhancements as per Kubernetes Network Plumbing Working Group conclusions/decisions CNI version upgrade based on new CNI version release Support pod level network policy to co-exist with network level policy Enhancement of network crd objects to provide more CNI customizations Enhance network smart selection mechanism New requirement/usecase support based on users demands Integrate genie with other ecosystem projects (e.g., kubespray) Helm charts based on updated features Verification and user guide update for usage of SR-IOV, DPDK E2E test suite additions/improvements Enhance logging mechanisms"
}
] |
{
"category": "Runtime",
"file_name": "ROADMAP-old.md",
"project_name": "CNI-Genie",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This guide will show you how to use remote attestation in skeleton with rune and enclave-tls. Build and install `rune` according to this . Build and install `enclave-tls` according to this . Please refer to to install the dependencies of skeleton enclave runtime. Then type the following commands to build and install the PAL of the skeleton enclave runtime. ```shell cd \"${pathtoinclavare_containers}/rune/libenclave/internal/runtime/pal/skeleton\" make TLS_SERVER=1 cp liberpal-skeleton-v3.so /usr/lib ``` Type the following commands to create a Dockerfile: ```Shell cp /opt/enclave-tls/bin/sgxstubenclave.signed.so ./ cp /etc/sgxdefaultqcnl.conf ./ cp /usr/lib/x8664-linux-gnu/libsgxpce.signed.so ./ cp /usr/lib/x8664-linux-gnu/libsgxqe3.signed.so ./ cat >Dockerfile <<EOF FROM ubuntu:18.04 WORKDIR / COPY sgxstubenclave.signed.so / COPY sgxdefaultqcnl.conf /etc/ COPY libsgxpce.signed.so /usr/lib/x8664-linux-gnu COPY libsgxqe3.signed.so /usr/lib/x8664-linux-gnu EOF ``` Then build the skeleton docker image with the command: ```shell docker build . -t skeleton-enclave ``` Type the following commands to create a Dockerfile: ```Shell cp /opt/enclave-tls/bin/sgxstubenclave.signed.so ./ cat >Dockerfile <<EOF FROM ubuntu:18.04 WORKDIR / COPY sgxstubenclave.signed.so / EOF ``` Then build the skeleton docker image with the command: ```shell docker build . -t skeleton-enclave ``` Please refer to to integrate OCI runtime rune with docker. At present, TLS server based on `enclave-tls` is only implemented in skeleton v3. ```shell docker run -i --rm --net=host --runtime=rune \\ -e ENCLAVE_TYPE=intelSgx \\ -e ENCLAVERUNTIMEPATH=/usr/lib/liberpal-skeleton-v3.so \\ -e ENCLAVERUNTIMEARGS=\"debug attester=sgx_ecdsa tls=openssl crypto=openssl\" \\ skeleton-enclave:latest ``` ```shell docker run -i --rm --net=host --runtime=rune \\ -e ENCLAVE_TYPE=intelSgx \\ -e ENCLAVERUNTIMEPATH=/usr/lib/liberpal-skeleton-v3.so \\ -e ENCLAVERUNTIMEARGS=\"debug attester=sgx_la tls=openssl crypto=openssl\" \\ skeleton-enclave:latest ``` The following method to run skeleton bundle with rune is usually provided for development purposes. Assuming you have an OCI bundle according to , please add config into config.json as following: ```shell \"cwd\": \"/\", \"annotations\": { \"enclave.type\": \"intelSgx\", \"enclave.runtime.path\": \"/usr/lib/liberpal-skeleton-v3.so\", \"enclave.runtime.args\": \"debug attester=sgx_ecdsa tls=openssl crypto=openssl\" } ``` If you do NOT set runtime parameters in `enclave.runtime.args`, TLS server will run the highest priority `enclave quote/tls wrapper/crypto` instance. Please refer to this for more information. Remember that you also need to delete the network namespace configuration in config.json to ensure you run skeleton in host network mode. After doing this, your namespaces are as following without the network type namespace: ```shell \"namespaces\": [ { \"type\": \"pid\" }, { \"type\": \"ipc\" }, { \"type\": \"uts\" }, { \"type\": \"mount\" } ], ``` Assuming you have an OCI bundle from the previous step you can execute the container in this way. ```shell cd \"$HOME/rune_workdir/rune-container\" sudo rune run skeleton-enclave-container ``` If you run the skeleton image in the docker environment, you might meet the problem as follow. ``` [getplatformquotecertdata ../qelogic.cpp:346] Error returned from the psgxgetquote_config API. 0xe019 [ERROR] sgxqegettargetinfo() with error code 0xe019 [getplatformquotecertdata ../qelogic.cpp:346] Error returned from the psgxgetquote_config API. 
0xe019 [ERROR] sgx_qe_get_quote_size(): 0xe019 ``` Run the following command to solve the problem: ```shell cp /etc/resolv.conf rootfs/etc/ ``` ```shell cd /opt/enclave-tls/bin ./enclave-tls-client -a sgx_ecdsa -t openssl -c openssl ./enclave-tls-client -a sgx_la -t openssl -c openssl ``` Only remote attestation based on ECDSA is supported. ```shell cd /opt/enclave-tls/bin ./enclave-tls-client -v sgx_ecdsa -t openssl -c openssl ```"
}
] |
{
"category": "Runtime",
"file_name": "intergrate_skeleton_with_enclave_tls.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Status as of 2023-06-23: Under review Allow applications running within gVisor sandboxes to use CUDA on GPUs by providing implementations of Nvidia GPU kernel driver files that proxy ioctls to their host equivalents. Non-goals: Provide additional isolation of, or multiplexing between, GPU workloads beyond that provided by the driver and hardware. Support use of GPUs for graphics rendering. gVisor executes unmodified Linux applications in a sandboxed environment. Application system calls are intercepted by gVisor and handled by (in essence) a Go implementation of the Linux kernel called the sentry, which in turn executes as a sandboxed userspace process running on a Linux host. gVisor can execute application code via a variety of mechanisms, referred to as \"platforms\". Most platforms can broadly be divided into process-based (ptrace, systrap) and KVM-based (kvm). Process-based platforms execute application code in sandboxed host processes, and establish application memory mappings by invoking the `mmap` syscall from application process context; sentry and application processes share a file descriptor (FD) table, allowing application `mmap` to use sentry FDs. KVM-based platforms execute application code in the guest userspace of a virtual machine, and establish application memory mappings by establishing mappings in the sentry's address space, then forwarding those mappings into the guest physical address space using KVM memslots and finally setting guest page table entries to point to the relevant guest physical addresses. provides code for preparing a container for GPU use, and serves as a useful reference for the environment that applications using GPUs expect. In particular, contains a helpful list of relevant filesystem paths, while is the primary entry point into container configuration. Of these paths, `/dev/nvidiactl`, `/dev/nvidia#` (per-device, numbering from 0), `/dev/nvidia-uvm`, and `/proc/driver/nvidia/params` are kernel-driver-backed and known to be required. Most \"control\" interactions between applications and the driver consist of invocations of the `ioctl` syscall on `/dev/nvidiactl`, `/dev/nvidia#`, or `/dev/nvidia-uvm`. Application data generally does not flow through ioctls; instead, applications access driver-provided memory mappings. `/proc/driver/nvidia/params` is informational and read-only. `/dev/nvidiactl` and `/dev/nvidia#` are backed by the same `struct fileoperations nvfrontend_fops` in kernel module `nvidia.ko`, rooted in `kernel-open/nvidia` in the . The top-level `ioctl` implementation for both, `kernel-open/nvidia/nv.c:nvidia_ioctl()`, handles a small number of ioctl commands but delegates the majority to the \"resource manager\" (RM) subsystem, `src/nvidia/arch/nvalloc/unix/src/escape.c:RmIoctl()`. Both functions constrain most commands to either `/dev/nvidiactl` or `/dev/nvidia#`, as indicated by the presence of the `NVCTLDEVICEONLY` or `NVACTUALDEVICEONLY` macros respectively. `/dev/nvidia-uvm` is implemented in kernel module `nvidia-uvm.ko`, rooted in `kernel-open/nvidia-uvm` in the OSS driver source; its `ioctl` implementation is `kernel-open/nvidia-uvm/uvm.c:uvm_ioctl()`. The driver API models a collection of objects, using numeric handles as references (akin to the relationship between file descriptions and file descriptors). 
Objects are instances of classes, which exist in a C++-like inheritance hierarchy that is implemented in C via code generation; for example, the `RsResource` class inherits from the `Object` class, which is the hierarchy's root. Objects exist in a tree of parent-child relationships, defined by methods on the `Object` class. API-accessed objects are most frequently created by invocations of `ioctl(NVESCRM_ALLOC)`, which is parameterized by `hClass`. `src/nvidia/src/kernel/rmapi/resource_list.h` specifies the mapping from `hClass` to instantiated (\"internal\") class, as well as the type of the pointee of `NVOS21PARAMETERS::pAllocParms` or `NVOS64PARAMETERS::pAllocParms` which the object's constructor takes as input (\"alloc param"
},
{
"data": "Most application ioctls to GPU drivers can be proxied straightforwardly by the sentry: The sentry copies the ioctl's parameter struct, and the transitive closure of structs it points to, from application to sentry memory; reissues the ioctl to the host, passing pointers in the sentry's address space rather than the application's; and copies updated fields (or whole structs for simplicity) back to application memory. Below we consider complications to this basic idea. GPUs are equipped with \"device\" memory that is much faster for the GPU to access than \"system\" memory (as used by CPUs). CUDA supports two basic memory models: `cudaMalloc()` allocates device memory, which is not generally usable by the CPU; instead `cudaMemcpy()` is used to copy between system and device memory. `cudaMallocManaged()` allocates \"unified memory\", which can be used by both CPU and GPU. `nvidia-uvm.ko` backs mappings returned by `cudaMallocManaged()`, migrating pages from system to device memory on GPU page faults and from device to system memory on CPU page faults. We cannot implement UVM by substituting a sentry-controlled buffer and copying to/from UVM-controlled memory mappings \"on demand\", since GPU-side demand is driven by GPU page faults which the sentry cannot intercept directly; instead, we must map `/dev/nvidia-uvm` into application address spaces as in native execution. UVM requires that the virtual addresses of all mappings of `nvidia-uvm` match their respective mapped file offset, which in conjunction with the FD uniquely identify a shared memory segment[^cite-uvm-mmap]. Since this constraint also applies to sentry mappings of `nvidia-uvm`, if an application happens to request a mapping of `nvidia-uvm` at a virtual address that overlaps with an existing sentry memory mapping, then `memmap.File.MapInternal()` is unimplementable. On KVM-based platforms, this means that we cannot implement the application mapping, since `MapInternal` is a required step to propagating the mapping into application address spaces. On process-based platforms, this only means that we cannot support e.g. `read(2)` syscalls targeting UVM memory; if this is required, we can perform buffered copies from/to UVM memory using `ioctl(UVMTOOLSREAD/WRITEPROCESSMEMORY)`, at the cost of requiring `MapInternal` users to explicitly indicate fill/flush points before/after I/O. The extent to which applications use `cudaMallocManaged()` is unclear; use of `cudaMalloc()` and explicit copying appears to predominate in performance-sensitive code. PyTorch contains one non-test use of `cudaMallocManaged()`[^cite-pytorch-uvm], but it is not immediately clear what circumstances cause the containing function to be invoked. Tensorflow does not appear to use `cudaMallocManaged()` outside of test code. For both `cudaMalloc()` and \"control plane\" purposes, applications using CUDA map some device memory into application address spaces, as follows: The application opens a new `/dev/nvidiactl` or `/dev/nvidia#` FD, depending on the memory being mapped. The application invokes `ioctl(NVESCRMMAPMEMORY)` on an existing `/dev/nvidiactl` FD, passing the new FD as an ioctl parameter (`nvioctlnvos33parameterswith_fd::fd`). This ioctl stores information for the mapping in the new FD (`nvlinuxfileprivatet::mmap_context`), but does not modify the application's address space. The application invokes `mmap` on the new FD to actually establish the mapping into its address"
},
{
"data": "Conveniently, it is apparently permissible for the `ioctl` in step 2 to be invoked from a different process than the `mmap` in step 3, so no gVisor changes are required to support this pattern in general; we can invoke the `ioctl` in the sentry and implement `mmap` as usual. However, mappings of device memory often need to disable or constrain processor caching for correct behavior. In modern x86 processors, caching behavior is specified by page table entry flags[^cite-sdm-pat]. On process-based platforms, application page tables are defined by the host kernel, whose `mmap` will choose the correct caching behavior by delegating to the driver's implementation. On KVM-based platforms, the sentry maintains guest page tables and consequently must set caching behavior correctly. Caching behavior for mappings obtained as described above is decided during `NVESCRMMAPMEMORY`, by the \"method `RsResource::resMap`\" for the driver object specified by ioctl parameter `NVOS33_PARAMETERS::hMemory`. In most cases, this eventually results in a call to on an associated `Memory` object. Caching behavior thus depends on the logic of that function and the `MEMORY_DESCRIPTOR` associated with the `Memory` object, which is typically determined during object creation. Therefore, to support KVM-based platforms, the sentry could track allocated driver objects and emulate the driver's logic to determine appropriate caching behavior. Alternatively, could we replicate the caching behavior of the host kernel's mapping in the sentry's address space (in `vmareastruct::vmpageprot`)? There is no apparent way for userspace to obtain this information, so this would necessitate a Linux kernel patch or upstream change. `ioctl(NVESCRMALLOCMEMORY, hClass=NV01MEMORYSYSTEMOSDESCRIPTOR)` and `ioctl(NVESCRMVIDHEAP_CONTROL, function=NVOS32FUNCTIONALLOCOSDESCRIPTOR)` create `OsDescMem` objects, which are `Memory` objects backed by application anonymous memory. The ioctls treat `NVOS02PARAMETERS::pMemory` or `NVOS32PARAMETERS::data.AllocOsDesc.descriptor` respectively as an application virtual address and call Linux's `pinuserpages()` or `getuserpages()` to get `struct page` pointers representing pages starting at that address[^cite-osdesc-rmapi]. Pins are held on those pages for the lifetime of the `OsDescMem` object. The proxy driver will need to replicate this behavior in the sentry, though doing so should not require major changes outside of the driver. When one of these ioctls is invoked by an application: Invoke `mmap` to create a temporary `PROT_NONE` mapping in the sentry's address space of the size passed by the application. Call `mm.MemoryManager.Pin()` to acquire file-page references on the given application memory. Call `memmap.File.MapInternal()` to get sentry mappings of pinned file-pages. Use `mremap(oldsize=0, flags=MREMAPFIXED)` to replicate mappings returned by `MapInternal()` into the temporary mapping, resulting in a virtually-contiguous sentry mapping of the application-specified address range. Invoke the host ioctl using the sentry mapping. `munmap` the temporary mapping, which is no longer required after the host ioctl. Hold the file-page references returned by `mm.MemoryManager.Pin()` until an application ioctl is observed freeing the corresponding `OsDescMem`, then call `mm.Unpin()`. Since ioctl parameter structs must be copied into the sentry in order to proxy them, gVisor implicitly restrict the set of application requests to those that are explicitly implemented. 
We can impose additional restrictions based on parameter values in order to further reduce attack surface, although possibly at the cost of reduced development velocity; introducing new restrictions after launch is difficult due to the risk of regressing existing users. Intuitively, limiting the scope of our support to GPU compute should allow us to narrow API usage to that of the CUDA runtime. [Nvidia GPU driver CVEs are published in moderately large batches every ~3-4 months](https://www.nvidia.com/en-us/security/), but insufficient information regarding these CVEs is available for us to determine how many of these vulnerabilities we could mitigate via parameter"
},
{
"data": "By default, the driver prevents a `/dev/nvidiactl` FD from using objects created by other `/dev/nvidiactl` FDs[^cite-rm-validate], providing driver-level resource isolation between applications. Since we need to track at least a subset of object allocations for OS-described memory, and possibly for determining memory caching type, we can optionally track all objects and further constrain ioctls to using valid object handles if driver-level isolation is believed inadequate. While `seccomp-bpf` filters allow us to limit the set of ioctl requests that the sentry can make, they cannot filter based on ioctl parameters passed via memory such as allocation `hClass`, `NVESCRM_CONTROL` command, or `NVESCRMVIDHEAP_CONTROL` function, limiting the extent to which they can protect the host from a compromised sentry. The contains code to configure an unstarted container based on , [invoking `nvidia-container-cli` from `libnvidia-container` (described above) to do most of the actual work](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/arch-overview.html). It is used ubiquitously for this purpose, including by the . The simplest way for `runsc` to obtain Nvidia Container Toolkit's behavior is obviously to use it, either by invoking `nvidia-container-runtime-hook` or by using the Toolkit's code (which is written in Go) directly. However, filesystem modifications made to the container's `/dev` and `/proc` directories on the host will not be application-visible since `runsc` necessarily injects sentry `devtmpfs` and `procfs` mounts at these locations, requiring that `runsc` internally replicate the effects of `libnvidia-container` in these directories. Note that host filesystem modifications are still necessary, since the sentry itself needs access to relevant host device files and MIG capabilities. Conversely, we can attempt to emulate the behavior of `nvidia-container-toolkit` and `libnvidia-container` within `runsc`; however, note that `libnvidia-container` executes `ldconfig` to regenerate the container's runtime linker cache after mounting the driver's shared libraries into the container[^cite-nvc-ldcache_update], which is more difficult if said mounts exist within the sentry's VFS rather than on the host. When running on the proprietary kernel driver, applications invoke `ioctl(NVESCRM_CONTROL)` commands that do not appear to exist in the OSS driver. The OSS driver lacks support for GPU virtualization[^cite-oss-vgpu]; however, Google Compute Engine (GCE) GPUs are exposed to VMs in passthrough mode[^cite-oss-gce], and Container-Optimized OS (COS) switched to the OSS driver in Milestone 105[^cite-oss-cos], suggesting that OSS-driver-only support may be sufficient. If support for the proprietary driver is required, we can request documentation from Nvidia. Nvidia requires that the kernel and userspace components of the driver match versions[^cite-abi-readme], and does not guarantee kernel ABI stability[^cite-abi-discuss], so we may need to support multiple ABI versions in the proxy. It is not immediately clear if this will be a problem in practice. To simplify the initial implementation, we will focus immediate efforts on process-based platforms and defer support for KVM-based platforms to future work. In the sentry: Add structure and constant definitions from the Nvidia open-source kernel driver to new package `//pkg/abi/nvidia`. 
Implement the proxy driver under `//pkg/sentry/devices/nvproxy`, initially comprising `FileDescriptionImpl` implementations proxying `/dev/nvidiactl`, `/dev/nvidia#`, and `/dev/nvidia-uvm`. `/proc/driver/nvidia/params` can probably be (optionally) read once during startup and implemented as a static file in the sentry. Each ioctl command and object class is associated with its own parameters type and logic; thus, each needs to be implemented"
},
{
"data": "We can generate lists of required commands/classes by running representative applications under on a variety of GPUs; a list derived from a minimal CUDA workload run on a single VM follows below. The proxy driver itself should also log unimplemented commands/classes for iterative development. For the most part, known-required commands/classes should be implementable incrementally and in parallel. Concurrently, at the API level, i.e. within `//runsc`: Add an option to enable Nvidia GPU support. When this option is enabled, and `runsc` detects that GPU support is requested by the container, it enables the proxy driver (by calling `nvproxy.Register(vfsObj)`) and configures the container consistently with `nvidia-container-toolkit` and `libnvidia-container`. Since setting the wrong caching behavior for device memory mappings will fail in unpredictable ways, `runsc` must ensure that GPU support cannot be enabled when an unsupported platform is selected. To support Nvidia Multi-Process Service (MPS), we need: Support for `SCM_CREDENTIALS` on host Unix domain sockets; already implemented as part of previous MPS investigation, but not merged. Optional pass-through of `statfs::f_type` through `fsimpl/gofer`; needed for a runsc bind mount of the host's `/dev/shm`, through which MPS shares memory; previously hacked in (optionality not implemented). Features required to support Nvidia Persistence Daemon and Nvidia Fabric Manager are currently unknown, but these are not believed to be critical, and we may choose to deliberately deny access to them (and/or MPS) to reduce attack surface. , so it is not clear that granting MPS access to sandboxed containers is safe. Implementation notes: Each application `open` of `/dev/nvidictl`, `/dev/nvidia#`, or `/dev/nvidia-uvm` must be backed by a distinct host FD. Furthermore, the proxy driver cannot go through sentry VFS to obtain this FD since doing so would recursively attempt to open the proxy driver. Instead, we must allow the proxy driver to invoke host `openat`, and ensure that the mount namespace in which the sentry executes contains the required device special files. `/dev/nvidia-uvm` FDs may need to be `UVM_INITIALIZE`d with `UVMINITFLAGSMULTIPROCESSSHARINGMODE` to be used from both sentry and application processes[^cite-uvm-vaspacemm_enabled]. Known-used `nvidia.ko` ioctls: `NVESCCHECKVERSIONSTR`, `NVESCSYSPARAMS`, `NVESCCARDINFO`, `NVESCNUMA_INFO`, `NVESCREGISTERFD`, `NVESCRMALLOC`, `NVESCRMALLOCMEMORY`, `NVESCRMALLOCOSEVENT`, `NVESCRMCONTROL`, `NVESCRM_FREE`, `NVESCRMMAPMEMORY`, `NVESCRMVIDHEAP_CONTROL`, `NVESCRMDUPOBJECT`, `NVESCRMUPDATEDEVICEMAPPINGINFO` `NVESCRM_CONTROL` is essentially another level of ioctls. 
Known-used `NVOS54PARAMETERS::cmd`: `NV0000CTRLCMDSYSTEMGETBUILD_VERSION`, `NV0000CTRLCMDCLIENTSETINHERITEDSHARE_POLICY`, `NV0000CTRLCMDSYSTEMGETFABRICSTATUS`, `NV0000CTRLCMDGPUGETPROBEDIDS`, `NV0000CTRLCMDSYNCGPUBOOSTGROUP_INFO`, `NV0000CTRLCMDGPUATTACHIDS`, `NV0000CTRLCMDGPUGETID_INFO`, `NV0000CTRLCMDGPUGETATTACHEDIDS`, `NV2080CTRLCMDGPUGETACTIVEPARTITION_IDS`, `NV2080CTRLCMDGPUGETGIDINFO`, `NV0080CTRLCMDGPUGETVIRTUALIZATIONMODE`, `NV2080CTRLCMDFBGETINFO`, `NV2080CTRLCMDGPUGETINFO`, `NV0080CTRLCMDMCGETARCHINFO`, `NV2080CTRLCMDBUSGET_INFO`, `NV2080CTRLCMDBUSGETPCIINFO`, `NV2080CTRLCMDBUSGETPCIBAR_INFO`, `NV2080CTRLCMDGPUQUERYECCSTATUS`, `NV0080CTRLFIFOGETCAPS`, `NV0080CTRLCMDGPUGETCLASSLIST`, `NV2080CTRLCMDGPUGETENGINES`, `NV2080CTRLCMDGPUGETSIMULATIONINFO`, `NV0000CTRLCMDGPUGETMEMOPENABLE`, `NV2080CTRLCMDGRGET_INFO`, `NV2080CTRLCMDGRGETGPCMASK`, `NV2080CTRLCMDGRGETTPCMASK`, `NV2080CTRLCMDGRGETCAPSV2`, `NV2080CTRLCMDCEGET_CAPS`, `NV2080CTRLCMDGPUGETCOMPUTEPOLICY_CONFIG`, `NV2080CTRLCMDGRGETGLOBALSMORDER`, `NV0080CTRLCMDFBGETCAPS`, `NV0000CTRLCMDCLIENTGETADDRSPACE_TYPE`, `NV2080CTRLCMDGSPGET_FEATURES`, `NV2080CTRLCMDGPUGETSHORTNAME_STRING`, `NV2080CTRLCMDGPUGETNAMESTRING`, `NV2080CTRLCMDGPUQUERYCOMPUTEMODE_RULES`, `NV2080CTRLCMDRCRELEASEWATCHDOGREQUESTS`, `NV2080CTRLCMDRCSOFTDISABLEWATCHDOG`, `NV2080CTRLCMDNVLINKGETNVLINKSTATUS`, `NV2080CTRLCMDRCGETWATCHDOGINFO`, `NV2080CTRLCMDPERFBOOST`, `NV0080CTRLCMDFIFOGETCHANNELLIST`, `NVC36FCTRLGETCLASS_ENGINEID`, `NVC36FCTRLCMDGPFIFOGETWORKSUBMIT_TOKEN`, `NV2080CTRLCMDGRGETCTXBUFFERSIZE`, `NVA06FCTRLCMDGPFIFO_SCHEDULE` Known-used `NVOS54_PARAMETERS::cmd` that are apparently unimplemented and may be proprietary-driver-only (or just well-hidden?): 0x20800159, 0x20800161, 0x20801001, 0x20801009, 0x2080100a, 0x20802016, 0x20802084, 0x503c0102, 0x90e60102 Known-used `nvidia-uvm.ko` ioctls: `UVM_INITIALIZE`, `UVMPAGEABLEMEMACCESS`, `UVMREGISTERGPU`, `UVMCREATERANGEGROUP`, `UVMREGISTERGPUVASPACE`, `UVMCREATEEXTERNALRANGE`, `UVMMAPEXTERNALALLOCATION`, `UVMREGISTER_CHANNEL`, `UVMALLOCSEMAPHOREPOOL`, `UVMVALIDATEVARANGE` Known-used `NVESCRM_ALLOC` `hClass`, i.e. allocated object classes: `NV01ROOTCLIENT`, `MPSCOMPUTE`, `NV01DEVICE0`, `NV20SUBDEVICE_0`, `TURINGUSERMODEA`, `FERMIVASPACEA`, `NV50THIRDPARTY_P2P`, `FERMICONTEXTSHAREA`, `TURINGCHANNELGPFIFOA`, `TURINGCOMPUTEA`, `TURINGDMACOPYA`, `NV01EVENTOSEVENT`, `KEPLERCHANNELGROUP_A` guarantees whatsoever, and even API compatibility breaks occasionally.\" - https://github.com/NVIDIA/open-gpu-kernel-modules/discussions/157#discussioncomment-2757388 kernel modules, version 530.41.03. ... Note that the kernel modules built here must be used with GSP firmware and user-space NVIDIA GPU driver components from a corresponding 530.41.03 driver release.\" - https://github.com/NVIDIA/open-gpu-kernel-modules/blob/6dd092ddb7c165fb1ec48b937fa6b33daa37f9c1/README.md => (OSS).\" - https://cloud.google.com/container-optimized-os/docs/release-notes/m105#cos-beta-105-17412-1-2vsmilestone101. Also see b/235364591, go/cos-oss-gpu. passthrough mode so that your VMs have direct control over the GPUs and their associated memory.\" - https://cloud.google.com/compute/docs/gpus virtualization, neither as a host nor a guest.\" - https://github.com/NVIDIA/open-gpu-kernel-modules/discussions/157#discussioncomment-2752052 => . `APISECURITYINFO::clientOSInfo` is set by . Both `PDBPROPSYSVALIDATECLIENT_HANDLE` and `PDBPROPSYSVALIDATECLIENTHANDLESTRICT` are enabled by default by ."
}
] |
{
"category": "Runtime",
"file_name": "nvidia_driver_proxy.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Throughout this document `$var` is used to refer to the directory `/var/lib/rkt/pods`, and `$uuid` refers to a pod's UUID e.g. \"076292e6-54c4-4cc8-9fa7-679c5f7dcfd3\". Due to rkt's - and specifically its lack of any management daemon process - a combination of advisory file locking and atomic directory renames (via ) is used to represent and transition the basic pod states. At times where a state must be reliably coupled to an executing process, that process is executed with an open file descriptor possessing an exclusive advisory lock on the respective pod's directory. Should that process exit for any reason, its open file descriptors will automatically be closed by the kernel, implicitly unlocking the pod's directory. By attempting to acquire a shared non-blocking advisory lock on a pod directory we're able to poll for these process-bound states, additionally by employing a blocking acquisition mode we may reliably synchronize indirectly with the exit of such processes, effectively providing us with a wake-up event the moment such a state transitions. For more information on advisory locks see the man page. At this time there are four distinct phases of a pod's life which involve process-bound states: Prepare Run ExitedGarbage Garbage Each of these phases involves an exclusive lock on a given pod's directory. As an exclusive lock by itself cannot express both the phase and process-bound activity within that phase, we combine the lock with the pod's directory location to represent the whole picture: | Phase | Directory | Locked exclusively | Unlocked | ||--|-|--| | Prepare | \"$var/prepare/$uuid\" | preparing | prepare-failed | | Run | \"$var/run/$uuid\" | running | exited | | ExitedGarbage | \"$var/exited-garbage/$uuid\" | exited+deleting | exited+gc-marked | | Garbage | \"$var/garbage/$uuid\" | prepare-failed+deleting | prepare-failed+gc-marked | To prevent the period between first creating a pod's directory and acquiring its lock from appearing as prepare-failed in the Prepare phase, and to provide a phase for prepared pods where they may dwell and the lock may be acquired prior to entering the Run phase, two additional directories are employed where locks have no meaning: | Phase | Directory | Locked exclusively | Unlocked | |--|--|-|--| | Embryo | \"$var/embryo/$uuid\" | - | - | | Prepare | \"$var/prepare/$uuid\" | preparing | prepare-failed | | Prepared | \"$var/prepared/$uuid\" | - | - | | Run | \"$var/run/$uuid\" | running | exited | | ExitedGarbage | \"$var/exited-garbage/$uuid\" | exited+deleting | exited+gc-marked | | Garbage | \"$var/garbage/$uuid\" | prepare-failed+deleting | prepare-failed+gc-marked | The `rkt app` experimental family of subcommands allow mutating operations on a running pod: namely, adding, starting, stopping, and removing applications. To be able to use these subcommands the environment variable `RKTEXPERIMENTAPP=true` must be set. The `rkt app sandbox` subcommand transitions to the Run phase as described above, whereas the remaining subcommands mutate the pod while staying in the Run phase. To synchronize operations inside the Run phase an additional advisory lock `$var/run/$uuid/pod.lck` is being introduced. Locking on the `$var/run/$uuid/pod` manifest won't work because changes on it need to be atomic, realized by overwriting the original manifest. If this file is locked, the pod is undergoing a mutation. Note that only `rkt add/rm` operations are synchronized. To retain consistency for all other operations (i.e. 
`rkt list`) that need to read the `$var/run/$uuid/pod` manifest all mutating operations are atomic. The `app add/start/stop/rm` subcommands all run within the Run phase where the exclusive advisory lock on the `$var/run/$uuid` directory is held by the systemd-nspawn"
},
{
"data": "The following table gives an overview of the states when a lock on `$var/run/$uuid/pod.lck` is being held: | Phase | Locked exclusively | Unlocked | |--|--|-| | Add | adding | added | | Start | - | - | | Stop | - | - | | Remove | removing | removed | These phases, their function, and how they proceed through their respective states is explained in more detail below. `rkt run` and `rkt prepare` instantiate a new pod by creating an empty directory at `$var/embryo/$uuid`. An exclusive lock is immediately acquired on the created directory which is then renamed to `$var/prepare/$uuid`, transitioning to the `Prepare` phase. `rkt run` and `rkt prepare` enter this phase identically; holding an exclusive lock on the pod directory `$var/prepare/$uuid`. After preparation completes, while still holding the exclusive lock (the lock is held for the duration): `rkt prepare` transitions to `Prepared` by renaming `$var/prepare/$uuid` to `$var/prepared/$uuid`. `rkt run` transitions directly from `Prepare` to `Run` by renaming `$var/prepare/$uuid` to `$var/run/$uuid`, entirely skipping the `Prepared` phase. Should `Prepare` fail or be interrupted, `$var/prepare/$uuid` will be left in an unlocked state. Any directory in `$var/prepare` in an unlocked state is considered a failed prepare. `rkt gc` identifies failed prepares in need of clean up by trying to acquire a shared lock on all directories in `$var/prepare`, renaming successfully locked directories to `$var/garbage` where they are then deleted. `rkt prepare` concludes successfully by leaving the pod directory at `$var/prepared/$uuid` in an unlocked state before returning `$uuid` to the user. `rkt run-prepared` resumes where `rkt prepare` concluded by exclusively locking the pod at `$var/prepared/$uuid` before renaming it to `$var/run/$uuid`, specifically acquiring the lock prior to entering the `Run` phase. `rkt run` never enters this phase, skipping directly from `Prepare` to `Run` with the lock held. `rkt run` and `rkt run-prepared` both arrive here with the pod at `$var/run/$uuid` while holding the exclusive lock. The pod is then executed while holding this lock. It is required that the stage1 `coreos.com/rkt/stage1/run` entrypoint keep the file descriptor representing the exclusive lock open for the lifetime of the pod's process. All this requires is that the stage1 implementation not close the inherited file descriptor. This is facilitated by supplying stage1 its number in the RKTLOCKFD environment variable. What follows applies equally to `rkt run` and `rkt run-prepared`. A pod is considered exited if a shared lock can be acquired on `$var/run/$uuid`. Upon exit of a pod's process, the exclusive lock acquired before entering the `Run` phase becomes released by the kernel. Exited pods are discarded using a common mark-and-sweep style of garbage collection by invoking the `rkt gc` command. This relatively simple approach lends itself well to a minimal file-system based implementation utilizing no additional daemons or record keeping with good efficiency. The process is performed in two distinct passes explained in detail below. All directories found in `$var/run` are tested for exited status by trying to acquire a shared advisory lock on each directory. When a directory's lock cannot be acquired, the directory is skipped as it indicates the pod is currently executing. When the lock is successfully acquired, the directory is renamed from `$var/run/$uuid` to `$var/exited-garbage/$uuid`. 
This renaming effectively implements the \"mark\" operation. Since the locks are immediately released, operations like `rkt status` may safely execute concurrently with `rkt gc`. Marked exited pods dwell in the `$var/exited-garbage` directory for a grace period during which their status may continue to be queried by `rkt"
},
{
"data": "The rename from `$var/run/$uuid` to `$var/exited-garbage/$uuid` serves in part to keep marked pods from cluttering the `$var/run` directory during their respective dwell periods. A side-effect of the rename operation responsible for moving a pod from `$var/run` to `$var/exited-garbage` is an update to the pod directory's change time. The sweep operation takes this updated file change time as the beginning of the \"dwell\" grace period, and discards exited pods at the expiration of that period. This grace period currently defaults to 30 minutes, and may be explicitly specified using the `--grace-period=duration` flag with `rkt gc`. Note that this grace period begins from the time a pod was marked by `rkt gc`, not when the pod exited. A pod becomes eligible for marking when it exits, but will not actually be marked for collection until a subsequent `rkt gc`. The change times of all directories found in `$var/exited-garbage` are compared against the current time. Directories having sufficiently old change times are locked exclusively and cleaned up. If a lock acquisition fails, the directory is skipped. `rkt gc` may fail to acquire an exclusive lock if the pod to be collected is currently being accessed, by `rkt status` or another `rkt gc`, for example. The skipped pods will be revisited on a subsequent `rkt gc` invocation's sweep pass. During the cleanup, the pod's stage1 gc entry point is first executed. This gives the stage1 a chance to clean up anything related to the environment shared between containers. The default stage1 uses the gc entrypoint to clean up the private networking artifacts. After the completion of the gc entrypoint, the pod directory is recursively deleted. To answer the questions \"Has this pod exited?\" and \"Is this pod being deleted?\" the pod's UUID is looked for in `$var/run` and `$var/exited-garbage`, respectively. Pods found in the `$var/exited-garbage` directory must already be exited, and a shared lock acquisition may be used to determine if the garbage pod is actively being deleted. Those found in the `$var/run` directory may be exited or running, and a failed shared lock acquisition indicates a pod in `$var/run` is alive at the time of the failed acquisition. Care must be taken when acting on what is effectively always going to be stale knowledge of pod state; though a pod's status may be found to be \"running\" by the mechanisms documented here, this was an instantaneously sampled state that was true at the time sampled (failed lock attempt at `$var/run/$uuid`), and may cease to be true by the time code execution progressed to acting on that sample. Pod exit is totally asynchronous and cannot be prevented, relevant code must take this into consideration (e.g. `rkt enter`) and be tolerant of states progressing. For example, two `rkt run-prepared` invocations for the same UUID may occur simultaneously. Only one of these will successfully transition the pod from `Prepared` to `Run` due to rename's atomicity, which is exactly what we want. The loser of this race needs to simply inform the user of the inability to transition the pod to the run state, perhaps with a check to see if the pod transitioned independently and a useful message mentioning it. Another example would be two `rkt gc` commands finding the same exited pods and attempting to transition them to the `Garbage` phase concurrently. They can't both perform the transitions, one will lose the race at each pod. 
This needs to be considered in the error handling of the transition callers as perfectly normal. Simply ignoring ENOENT errors propagated from the loser's rename calls can suffice."
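As an illustration of the locking scheme described above, here is a minimal sketch (not rkt's actual code, with simplified path handling and error reporting) of how the pod-exited test can be performed by attempting a shared, non-blocking advisory lock on the pod directory:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// podExited reports whether the pod directory's exclusive lock has been
// released, i.e. whether the pod's process has exited.
func podExited(podDir string) (bool, error) {
	f, err := os.Open(podDir)
	if err != nil {
		return false, err
	}
	defer f.Close()

	// LOCK_SH|LOCK_NB: succeeds only if no process holds the exclusive lock.
	err = unix.Flock(int(f.Fd()), unix.LOCK_SH|unix.LOCK_NB)
	if err == unix.EWOULDBLOCK {
		return false, nil // still locked exclusively => pod still running
	}
	if err != nil {
		return false, err
	}
	// Release immediately; we only wanted to sample the state.
	unix.Flock(int(f.Fd()), unix.LOCK_UN)
	return true, nil
}

func main() {
	exited, err := podExited("/var/lib/rkt/pods/run/076292e6-54c4-4cc8-9fa7-679c5f7dcfd3")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("exited:", exited)
}
```

Using a blocking `LOCK_SH` instead of `LOCK_NB` gives the wake-up-on-exit behavior mentioned earlier, since the call returns only once the exclusive lock has been dropped.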
}
] |
{
"category": "Runtime",
"file_name": "pod-lifecycle.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "All notable changes to Linstor OPENAPI(REST) will be documented in this file. Added force_restore parameter to backup ship, restore and schedule backups Added volume_passphrases to resource-group spawn Added volume_passphrases to ResourceDefinitionCloneRequest Added passphrase to volume-definition create Added volume definition PUT /encryption-passphrase endpoint Added initial resource definition properties to spawn command Removed everything OpenFlex related (entry-points, schemas, etc..) Added peerSlots to ResourceGroup (create, modify and spawn) Added storpoolfreecapacityoversubscriptionratio and storpooltotalcapacityoversubscriptionratio to QuerySizeInfoSpawnResult Added GET /v1/files/{extFileName}/check/{node} Added ExtFileCheckResult Added storpool_rename to SnapshotRestore and BackupSchedule Deprecated EXOS entry points Added EffectivePropertiesMap, EffectivePropertiesMapValue and PropertyWithDescription objects Added maxrollbackentries to ControllerConfigDbK8s Added GET /v1/view/backup/queue Added SnapQueue, NodeQueue, and BackupQueues objects Added POST /v1/queries/resource-groups/query-all-size-info Added QueryAllSizeInfoRequest and QueryAllSizeInfoResponse objects Added POST /v1/action/snapshot/multi as well as CreateMultiSnapshotRequest and CreateMultiSnapshotResponse objects Added POST /v1/resource-groups/{resource_group}/query-size-info Added QuerySizeInfoRequest and QuerySizeInfoResponse objects Added GET /v1/node-connections/ with filter-options ?nodea and ?nodeb Added PUT /v1/node-connections/{nodeA}/{nodeB} Added NodeConnection and NodeConnectionModify objects Added GET, POST /v1/remotes/ebs Added PUT /v1/remotes/ebs/{remoteName} Extended RemoteList with ebs_remotes Added POST /v1/nodes/ebs Extended SatelliteConfig with ebs boolean Added SnapshotVolumeNode which includes an optional state Added /v1/physical-storage/{node} to view devices for a single node Added /v1/schedules/ and /v1/schedules/{scheduleName} endpoints as well as Schedule to create, delete, modify and list schedules Added /v1/remotes/{remotename}/backups/schedule/{schedulename}/enable, disable and delete to create, delete and modify schedule-remote-rscDfn-triples Added /v1/schedules/list and /v1/schedules/list/{rscName} endpoints as well as ScheduledRscs and ScheduleDetails to list schedule-remote-rscDfn-triples Added k8s options to DB config Added NetCom properties to whitelist Added usezfsclone to clone request Added snap_name to backup create Added /v1/events/nodes SSE node events stream Added ?cached=true option for all storage-pool list APIs Added ?limit and ?offset to /v1/nodes/{node}/storage-pools/{storagepool} Added /v1/controller/backup/db to do an database online backup Added /v1/resource-definitions/{resource}/sync-status to check if resource is ready to be used Added /v1/nodes/{node}/evacuate Added /v1/stats/* endpoints for all objects Added /v1/remotes/{remote-name}/backups/info as well as BackupInfo, BackupInfoRequest, BackupInfoStorPool and BackupInfoVolume Added download_only to BackupShip and BackupRestore Added query-param \"nodes\" for DELETE /v1/resource-definitions/{resource}/snapshots/{snapshot} Added optional"
},
{
"data": "Added Schema NodeRestore Extended PUT /v1/nodes/{node_name}/restore to accept new NodeRestore body Added resource-definition clone API POST /v1/resource-definitions/{rscname}/clone GET /v1/resource-definitions/{rscname}/clone/{clonedrsc} Added /v1/resource-groups/{resource_group}/adjust Added /v1/resource-groups/adjustall Added Backup and BackupList, as well as the entrypoint /v1/remotes/{remote_name}/backups Added S3Remote and the corresponding entrypoint /v1/remotes Added LinstorRemote and the corresponding entrypoint /v1/remotes Extended SatelliteConfig with remote_spdk boolean Added layer_stack to toggle-disk API Added layer BCACHE Schemas BCacheResource and BCacheVolume Added shared_name to ResourceWithVolumes Added makeResourceAvailable API Added diskless storage pool filter to AutoSelectFilter Added external_locking to StoragePool Added sharedspace and externallocking to PhysicalStorageStoragePoolCreate Added shared_space to StoragePool (for listing) Added EXOS API /v1/vendor/seagate/exos/defaults /v1/vendor/seagate/exos/enclosure/ /v1/vendor/seagate/exos/enclosure/{enclosure} /v1/vendor/seagate/exos/enclosure/{enclosure}/events /v1/vendor/seagate/exos/map Added ExternalFiles API with corresponding data structure /v1/files /v1/files/{extFileName} /v1/resource-definitions/{resource}/files/{extFileName} Added de-/activate to resource API /v1/resource-definitions/{resource}/resources/{node_name}/activate /v1/resource-definitions/{resource}/resources/{node_name}/deactivate Added the PropsInfo API which exposes meta information about properties: /v1/controller/properties/info /v1/controller/properties/info/all /v1/nodes/properties/info /v1/storage-pool-definitions/properties/info /v1/nodes/{node}/storage-pools/properties/info /v1/resource-definitions/properties/info /v1/resource-definitions/{resource}/resources/properties/info /v1/resource-definitions/{resource}/volume-definitions/properties/info /v1/resource-definitions/{resource}/resources/{node}/volumes/properties/info /v1/resource-definitions/{resource}/resource-connections/properties/info /v1/resource-groups/properties/info /v1/resource-groups/{resource_group}/volume-groups/properties/info /v1/resource-definitions/{resource}/drbd-proxy/properties/info Added /v1/nodes/{node}/restore Added additionalplacecount to AutoSelectFilter Added etcd.prefix to ControllerConfigDbEtcd parameters Added promotionscore and maypromote to DrbdResource object Added /v1/error-reports DELETE method, to delete a range of error reports or single ones Added SSE (Server Sent Events) url /v1/events/drbd/promotion Added /v1/view/snapshot-shippings Added optional AutoSelectFilter to resource-group/spawn Added /v1/nodes/{node}/config, that allows you to get and set the satellite config Added /v1/sos-report to create bug reports you can send to linbit Added new fields to the ErrorReport object Added /v1/resource-definitions/{resource}/snapshot-shipping Allow to modify the resource group in Resource definitions Added createTimestamp to Resource and Snapshot Added default value (null) for AutoPlaceRequest's layer_list Added /v1/view/snapshots for a faster all in one snapshot list Filter lists by properties: /v1/nodes /v1/resource-definitions /v1/resource-groups /v1/view/storage-pools /v1/view/resources Added CacheResource and CacheVolume schemas AutSelectFilter arrays are now null per default Added connections map to the DRBD resource layer data Added support for Openflex Added /v1/controller/config, that gives you the controller config information Fixed 
broken volume definition modify `flags` handling Added flags to volume groups (create/modify) Added WritecacheResource and WritecacheVolume schemas. Removed support for swordfish Added `withstoragepool` to PhysicalStorageCreate post request, allowing to create linstor storage pools too Added `gross` flag for volume-definition size Added flags to VolumeDefinitionModify (so that `gross` flag can be changed) Added query-max-volume-size to resource-groups Added /v1/physical-storage endpoint, that lets you query and create lvm/zfs pools Extended Node with list of supported providers and layers as well as lists of reasons for unsupported providers and layers Added `reports` array field to Volume object, contains ApiCallRcs for problems Changed `ResourceDefinitions` can now include `VolumeDefinitions` in `volume_definitions` field Added various filter query parameters Added supports_snapshots to StoragePool Added /v1/resource-groups Added /v1/resource-groups/{rscgrp}/volume-groups Moved AutoSelectFilter::place_count default indirectly to create resource implementation Added disklessonremaining to AutoSelectFilter Changed /v1/view/resources return type to ResourceWithVolumes ResourceWithVolumes is now a child type of Resource (removed volumes from Resource) Added extmetastor_pool to DrbdVolume Added is_active field to the NetInterface type Added /v1/resource-definitions/{rscName}/resources/{nodeName}/volumes/{vlmnr} PUT Added `reports` field to StoragePool object Added /v1/view/storage-pools overview path Added uuid fields for objects Added /v1/view/resources overview path documentation schema extraction Added /v1/storage-pool-definitions object path added NVME layer object type Documentation review and updates no functional changes Initial REST API v1"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-OPENAPI.md",
"project_name": "LINSTOR",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "When in noop mode target configuration files will not be modified. ``` confd -noop ``` ``` noop = true ``` ``` confd -onetime -noop ``` ``` 2014-07-08T22:30:10-07:00 confd[16397]: INFO /tmp/myconfig.conf has md5sum c1924fc5c5f2698e2019080b7c043b7a should be 8e76340b541b8ee29023c001a5e4da18 2014-07-08T22:30:10-07:00 confd[16397]: WARNING Noop mode enabled /tmp/myconfig.conf will not be modified ```"
}
] |
{
"category": "Runtime",
"file_name": "noop-mode.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "We added an events service for getting events across various services including containers, content, execution, images, namespaces, and snapshots. Additionally we added `ctr events` to view events emitted by the service. ``` $ ctr events 2017-06-23 23:21:30.271802153 +0000 UTC /snapshot/prepare key=registry parent=sha256:dc22a13eb565d14bfe2b16f6fa731a05da0eeff02a52059c7b59cdc2c232a2b2 2017-06-23 23:21:30.28045193 +0000 UTC /containers/create id=registry image=docker.io/library/registry:latest runtime=&ContainerCreate_Runtime{Name:io.containerd.runtime.v1.linux,Options:map[string]string{},} 2017-06-23 23:21:30.347842352 +0000 UTC /runtime/task-create id=registry type=CREATE pid=30411 status=0 exited=0001-01-01 00:00:00 +0000 UTC 2017-06-23 23:21:30.355290368 +0000 UTC /runtime/create id=registry bundle=/var/lib/containerd/io.containerd.runtime.v1.linux/default/registry rootfs=type=overlay:src=overlay checkpoint= 2017-06-23 23:21:30.362012776 +0000 UTC /tasks/create id=registry 2017-06-23 23:21:30.369742117 +0000 UTC /runtime/task-start id=registry type=START pid=30411 status=0 exited=0001-01-01 00:00:00 +0000 UTC 2017-06-23 23:21:30.369793151 +0000 UTC /tasks/start id=registry 2017-06-23 23:21:40.169884207 +0000 UTC /runtime/task-exit id=registry type=EXIT pid=30411 status=130 exited=2017-06-23 23:21:40.16962998 +0000 UTC 2017-06-23 23:21:40.185850194 +0000 UTC /runtime/delete id=registry runtime=io.containerd.runtime.v1.linux status=130 exited=2017-06-23 23:21:40.16962998 +0000 UTC 2017-06-23 23:21:40.225633455 +0000 UTC /tasks/delete id=registry pid=30411 status=130 2017-06-23 23:21:42.053154149 +0000 UTC /snapshot/remove key=registry 2017-06-23 23:21:42.061579495 +0000 UTC /containers/delete id=registry ``` We added the syntax to use for filtration of items over the containerd API. The `filter` package defines a syntax and parser that can be used across types and use cases in a uniform manner. This will be used commonly across the API for images, containers, events, snapshots, etc. The syntax is fairly familiar, if you've used container ecosystem projects. At the core, we base it on the concept of protobuf field paths, augmenting with the ability to quote portions of the field path to match arbitrary labels. These \"selectors\" come in the following syntax: ``` <fieldpath>[<operator><value>] ``` A basic example is as follows: ``` name==foo ``` This would match all objects that have a field `name` with the value `foo`. If we only want to test if the field is present, we can omit the operator. This is most useful for matching labels in containerd. The following will match objects that has the field labels and have the label \"foo\" defined: ``` labels.foo ``` We also allow for quoting of parts of the field path to allow matching of arbitrary items: ``` labels.\"very complex label\"==something ``` We also define `!=` and `~=` as operators. The `!=` operator will match all objects that don't match the value for a field and `~=` will compile the target value as a regular expression and match the field value against that. Selectors can be combined using a comma, such that the resulting selector will require all selectors are matched for the object to match. The following example will match objects that are named `foo` and have the label `bar`: ```"
},
{
"data": "``` We added support for pushing and pulling OCI indexes. Currently all content referenced by the list are pulled and further work on the client will be done to allow selection of the pulled manifest to extract and run. We added `ctr snapshot list` to snapshots from containerd. This will output all snapshots, not just the active snapshots used by containers. ``` $ ctr snapshot list ID Parent State Readonly registry2 sha256:dc22a13eb565d14bfe2b16f6fa731a05da0eeff02a52059c7b59cdc2c232a2b2 active false registry3 sha256:dc22a13eb565d14bfe2b16f6fa731a05da0eeff02a52059c7b59cdc2c232a2b2 active false sha256:4ac69ce655ab8aa97362915793348d31361fb3c047e223c2b58be706e89c48fc sha256:ba2cc2690e31f63847e4bc0d266b354f8f11dc04474d45d44312ff70edae9c98 committed true sha256:ba2cc2690e31f63847e4bc0d266b354f8f11dc04474d45d44312ff70edae9c98 committed true sha256:bfe0b04fc169b94099b29dbf5a527f6a11db627cd0a6126803edf8f42bd7b4b3 sha256:4ac69ce655ab8aa97362915793348d31361fb3c047e223c2b58be706e89c48fc committed true sha256:d959def87dadbb9ba85070c09e99b46d994967b12f5748f617c377073b8d1e39 sha256:bfe0b04fc169b94099b29dbf5a527f6a11db627cd0a6126803edf8f42bd7b4b3 committed true sha256:dc22a13eb565d14bfe2b16f6fa731a05da0eeff02a52059c7b59cdc2c232a2b2 sha256:d959def87dadbb9ba85070c09e99b46d994967b12f5748f617c377073b8d1e39 committed true ``` As part of our API review process we have started implementing some changes to make the API clearer and more consistent. At the Moby summit on 06/19/2017 there was a containerd round table meeting. This was a good opportunity to discuss the upcoming API freeze and release of containerd with others working with it in the community. Always remember that these summit round tables are not the only opportunity to have these topics discussed and everyone is encouraged to open issues and engage the community on Slack. \"What are the plans for a resolver-resolver and image reference namespacing?\" Maintainers are trying to figure out what everyones plans/needs are for a resolver-resolver. A resolver-resolver allows configuring where push/pull happens. Could take in a configuration format which specifies how to push, pull, and authenticate. Needed in order to do discover of names to registry configurations. Stephen confirms we are thinking of more configuration driven rather programmatic. The resolver-resolver and any configuration is always client side, allowing the integrator to design any interface or configuration suits their needs. But we are also looking to define something that could be consistently applied by containerd clients. The resolver-resolver will be compatible with Docker, but could allow Docker to expand its resolution support. \"What is the plan for garbage collection?\" Current design is oriented around being garbage collected. The data model is designed around references which are used to walk a root set. Everything unreferenced would be garbage collected. Another more complicated aspect of garbage collection is around policy, allowing to clean up resources based on age or client specified policies. Client side implementations would allow establishing policies. Containerd will require a stop the world to do the garbage collection. A heavy delete which does not garbage collect is an option and similar to to the interface"
},
{
"data": "The API does not guarantee that disk is cleaned up after a deletion, only that the resource is freed. Inline deletion would require reference counting to know when to delete. This would also require global locking to protect the references. How to handle content which is active but unreferenced, leasing vs pre-allocate. This has not been decided on. \"What will need to change in Docker in regards to graphdrivers for accommodating the containerd snapshotters?\" The goal is to have graphdrivers be able to be used as snapshot drivers. To accomplish this graphdrivers need to be able to return mounts without needing to require action on unmount. Unmount will no longer contact the graphdriver since snapshotters to do not control the mount/unmount lifecycle. For implementation, in the repository tree only overlayfs and btrfs are supported. Everything else will be out of tree and require special builds or proxying. \"Version numbers on events and API objects\" Are objects mutable? Some objects are mutable but not necessarily relevant to clients. Are containers mutable? It can have multiple tasks, can be started and stopped. We may need to have task id separate from the container to differentiate between tasks within a container. Tasks have a pid which could be used to identify the task. Someone had a customer that ran out of memory from running too many tasks, could be caused by repeated execs in the same container. Getting state of a containers task involves requesting containers and tasks, or just tasks could be listed. What are the API costs, is there measurements of the cost of API calls and comparisons with Docker? Calling docker ps can be expensive due to locks, containerd should be much cheaper and faster to call. This need to be verified and measurements added. \"How can clients understand the capabilities of the containerd instance it is talking to?\" As an orchestrator, interested in understanding what can be done with the plugins. Example, docker log drivers change feature set but the log driver names do not change. Stuck on checking docker version. There have been problems in the past with requiring multiple clients of docker to handle changes. GRPC is adding introspection, plan is to wait for this work rather than making something ourselves. The maintainers would like to understand what were the really bad changes in Docker that caused problems with version support for users. Problems around needing to bump the whole API to get a new feature. Containerd API versions each service and v1 interface will be stable and supported."
}
] |
{
"category": "Runtime",
"file_name": "2017-06-23.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document describes the versioning policy for this repository. This policy is designed so the following goals can be achieved. Users are provided a codebase of value that is stable and secure. Versioning of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used. Versions will comply with . If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/otel/v2/trace\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/otel/[email protected]`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. Modules will be used to encapsulate signals and components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API will be versioned with a major version greater than `v0`. The decision to make a module stable will be made on a case-by-case basis by the maintainers of this project. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. All stable modules that use the same major version number will use the same entire version number. Stable modules may be released with an incremented minor or patch version even though that module has not been changed, but rather so that it will remain at the same version as other stable modules that did undergo change. When an experimental module becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable modules as well as the newly stable module being released. Versioning of the associated [contrib repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used. Versions will comply with"
},
{
"data": "If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/contrib/instrumentation/host/v2`, `require go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/contrib/instrumentation/host/v2\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/contrib/instrumentation/host/[email protected]`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. In addition to public APIs, telemetry produced by stable instrumentation will remain stable and backwards compatible. This is to avoid breaking alerts and dashboard. Modules will be used to encapsulate instrumentation, detectors, exporters, propagators, and any other independent sets of related components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API and telemetry will be versioned with a major version greater than `v0`. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. Stable contrib modules cannot depend on experimental modules from this project. All stable contrib modules of the same major version with this project will use the same entire version as this project. Stable modules may be released with an incremented minor or patch version even though that module's code has not been changed. Instead the only change that will have been included is to have updated that modules dependency on this project's stable APIs. When an experimental module in contrib becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable contrib modules, this project's modules, and the newly stable module being released. Contrib modules will be kept up to date with this project's releases. Due to the dependency contrib modules will implicitly have on this project's modules the release of stable contrib modules to match the released version number will be staggered after this project's release. There is no explicit time guarantee for how long after this projects release the contrib release will be. Effort should be made to keep them as close in time as possible. No additional stable release in this project can be made until the contrib repository has a matching stable"
},
{
"data": "No release can be made in the contrib repository after this project's stable release except for a stable release of the contrib repository. GitHub releases will be made for all releases. Go modules will be made available at Go package mirrors. To better understand the implementation of the above policy the following example is provided. This project is simplified to include only the following modules and their versions: `otel`: `v0.14.0` `otel/trace`: `v0.14.0` `otel/metric`: `v0.14.0` `otel/baggage`: `v0.14.0` `otel/sdk/trace`: `v0.14.0` `otel/sdk/metric`: `v0.14.0` These modules have been developed to a point where the `otel/trace`, `otel/baggage`, and `otel/sdk/trace` modules have reached a point that they should be considered for a stable release. The `otel/metric` and `otel/sdk/metric` are still under active development and the `otel` module depends on both `otel/trace` and `otel/metric`. The `otel` package is refactored to remove its dependencies on `otel/metric` so it can be released as stable as well. With that done the following release candidates are made: `otel`: `v1.0.0-rc.1` `otel/trace`: `v1.0.0-rc.1` `otel/baggage`: `v1.0.0-rc.1` `otel/sdk/trace`: `v1.0.0-rc.1` The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`. A few minor issues are discovered in the `otel/trace` package. These issues are resolved with some minor, but backwards incompatible, changes and are released as a second release candidate: `otel`: `v1.0.0-rc.2` `otel/trace`: `v1.0.0-rc.2` `otel/baggage`: `v1.0.0-rc.2` `otel/sdk/trace`: `v1.0.0-rc.2` Notice that all module version numbers are incremented to adhere to our versioning policy. After these release candidates have been evaluated to satisfaction, they are released as version `v1.0.0`. `otel`: `v1.0.0` `otel/trace`: `v1.0.0` `otel/baggage`: `v1.0.0` `otel/sdk/trace`: `v1.0.0` Since both the `go` utility and the Go module system support [the semantic versioning definition of precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release will correctly be interpreted as the successor to the previous release candidates. Active development of this project continues. The `otel/metric` module now has backwards incompatible changes to its API that need to be released and the `otel/baggage` module has a minor bug fix that needs to be released. The following release is made: `otel`: `v1.0.1` `otel/trace`: `v1.0.1` `otel/metric`: `v0.15.0` `otel/baggage`: `v1.0.1` `otel/sdk/trace`: `v1.0.1` `otel/sdk/metric`: `v0.15.0` Notice that, again, all stable module versions are incremented in unison and the `otel/sdk/metric` package, which depends on the `otel/metric` package, also bumped its version. This bump of the `otel/sdk/metric` package makes sense given their coupling, though it is not explicitly required by our versioning policy. As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a point where they should be evaluated for stability. The `otel` module is reintegrated with the `otel/metric` package and the following release is made: `otel`: `v1.1.0-rc.1` `otel/trace`: `v1.1.0-rc.1` `otel/metric`: `v1.1.0-rc.1` `otel/baggage`: `v1.1.0-rc.1` `otel/sdk/trace`: `v1.1.0-rc.1` `otel/sdk/metric`: `v1.1.0-rc.1` All the modules are evaluated and determined to a viable stable release. They are then released as version `v1.1.0` (the minor version is incremented to indicate the addition of new signal). 
`otel`: `v1.1.0` `otel/trace`: `v1.1.0` `otel/metric`: `v1.1.0` `otel/baggage`: `v1.1.0` `otel/sdk/trace`: `v1.1.0` `otel/sdk/metric`: `v1.1.0`"
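To make the semantic import versioning rule above concrete, here is a small Go sketch. Assumptions: the `/v2` module shown is the hypothetical example used by the policy itself, not a published release.

```go
package versioningexample

// v0/v1 modules omit the major version from the import path.
import "go.opentelemetry.io/otel/trace"

// A hypothetical v2 module would carry the /v2 suffix in both go.mod and imports:
//
//	module  go.opentelemetry.io/otel/v2
//	import "go.opentelemetry.io/otel/v2/trace"

// Reference the imported package so this sketch compiles as-is.
var _ = trace.SpanKindServer
```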
}
] |
{
"category": "Runtime",
"file_name": "VERSIONING.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Feature request about: Suggest an idea for this project title: '' labels: enhancement assignees: '' Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Environment Kubernetes Version/Provider: ... Storage Provider: ... Cluster Size (#nodes): ... Data Size: ... Additional context Add any other context or screenshots about the feature request here."
}
] |
{
"category": "Runtime",
"file_name": "feature_request.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Path | string | | Readonly | Pointer to bool | | [optional] [default to false] Direct | Pointer to bool | | [optional] [default to false] Iommu | Pointer to bool | | [optional] [default to false] NumQueues | Pointer to int32 | | [optional] [default to 1] QueueSize | Pointer to int32 | | [optional] [default to 128] VhostUser | Pointer to bool | | [optional] [default to false] VhostSocket | Pointer to string | | [optional] RateLimiterConfig | Pointer to | | [optional] PciSegment | Pointer to int32 | | [optional] Id | Pointer to string | | [optional] Serial | Pointer to string | | [optional] `func NewDiskConfig(path string, ) *DiskConfig` NewDiskConfig instantiates a new DiskConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewDiskConfigWithDefaults() *DiskConfig` NewDiskConfigWithDefaults instantiates a new DiskConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *DiskConfig) GetPath() string` GetPath returns the Path field if non-nil, zero value otherwise. `func (o DiskConfig) GetPathOk() (string, bool)` GetPathOk returns a tuple with the Path field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetPath(v string)` SetPath sets Path field to given value. `func (o *DiskConfig) GetReadonly() bool` GetReadonly returns the Readonly field if non-nil, zero value otherwise. `func (o DiskConfig) GetReadonlyOk() (bool, bool)` GetReadonlyOk returns a tuple with the Readonly field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetReadonly(v bool)` SetReadonly sets Readonly field to given value. `func (o *DiskConfig) HasReadonly() bool` HasReadonly returns a boolean if a field has been set. `func (o *DiskConfig) GetDirect() bool` GetDirect returns the Direct field if non-nil, zero value otherwise. `func (o DiskConfig) GetDirectOk() (bool, bool)` GetDirectOk returns a tuple with the Direct field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetDirect(v bool)` SetDirect sets Direct field to given value. `func (o *DiskConfig) HasDirect() bool` HasDirect returns a boolean if a field has been set. `func (o *DiskConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o DiskConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *DiskConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *DiskConfig) GetNumQueues() int32` GetNumQueues returns the NumQueues field if non-nil, zero value otherwise. `func (o DiskConfig) GetNumQueuesOk() (int32, bool)` GetNumQueuesOk returns a tuple with the NumQueues field if it's non-nil, zero value otherwise and a boolean to check if the value has been"
},
{
"data": "`func (o *DiskConfig) SetNumQueues(v int32)` SetNumQueues sets NumQueues field to given value. `func (o *DiskConfig) HasNumQueues() bool` HasNumQueues returns a boolean if a field has been set. `func (o *DiskConfig) GetQueueSize() int32` GetQueueSize returns the QueueSize field if non-nil, zero value otherwise. `func (o DiskConfig) GetQueueSizeOk() (int32, bool)` GetQueueSizeOk returns a tuple with the QueueSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetQueueSize(v int32)` SetQueueSize sets QueueSize field to given value. `func (o *DiskConfig) HasQueueSize() bool` HasQueueSize returns a boolean if a field has been set. `func (o *DiskConfig) GetVhostUser() bool` GetVhostUser returns the VhostUser field if non-nil, zero value otherwise. `func (o DiskConfig) GetVhostUserOk() (bool, bool)` GetVhostUserOk returns a tuple with the VhostUser field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetVhostUser(v bool)` SetVhostUser sets VhostUser field to given value. `func (o *DiskConfig) HasVhostUser() bool` HasVhostUser returns a boolean if a field has been set. `func (o *DiskConfig) GetVhostSocket() string` GetVhostSocket returns the VhostSocket field if non-nil, zero value otherwise. `func (o DiskConfig) GetVhostSocketOk() (string, bool)` GetVhostSocketOk returns a tuple with the VhostSocket field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetVhostSocket(v string)` SetVhostSocket sets VhostSocket field to given value. `func (o *DiskConfig) HasVhostSocket() bool` HasVhostSocket returns a boolean if a field has been set. `func (o *DiskConfig) GetRateLimiterConfig() RateLimiterConfig` GetRateLimiterConfig returns the RateLimiterConfig field if non-nil, zero value otherwise. `func (o DiskConfig) GetRateLimiterConfigOk() (RateLimiterConfig, bool)` GetRateLimiterConfigOk returns a tuple with the RateLimiterConfig field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetRateLimiterConfig(v RateLimiterConfig)` SetRateLimiterConfig sets RateLimiterConfig field to given value. `func (o *DiskConfig) HasRateLimiterConfig() bool` HasRateLimiterConfig returns a boolean if a field has been set. `func (o *DiskConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o DiskConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *DiskConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *DiskConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o DiskConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetId(v string)` SetId sets Id field to given value. `func (o *DiskConfig) HasId() bool` HasId returns a boolean if a field has been set. `func (o *DiskConfig) GetSerial() string` GetSerial returns the Serial field if non-nil, zero value otherwise. 
`func (o DiskConfig) GetSerialOk() (string, bool)` GetSerialOk returns a tuple with the Serial field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *DiskConfig) SetSerial(v string)` SetSerial sets Serial field to given value. `func (o *DiskConfig) HasSerial() bool` HasSerial returns a boolean if a field has been set."
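For illustration, a small usage sketch built only from the constructor and accessors documented above; the import path is a placeholder assumption, so substitute the generated client package used in your tree:

```go
package main

import (
	"fmt"

	openapi "example.com/cloud-hypervisor/client" // hypothetical import path
)

func main() {
	// Path is the only required argument; other fields keep their documented
	// defaults (readonly=false, num_queues=1, queue_size=128, ...).
	disk := openapi.NewDiskConfig("/var/lib/images/rootfs.raw")
	disk.SetReadonly(true)
	disk.SetNumQueues(2)
	disk.SetQueueSize(256)

	// Optional fields expose Has*/Get* pairs.
	if disk.HasSerial() {
		fmt.Println("serial:", disk.GetSerial())
	}
	fmt.Println("path:", disk.GetPath(), "queues:", disk.GetNumQueues())
}
```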
}
] |
{
"category": "Runtime",
"file_name": "DiskConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(howto-storage-create-instance)= Instance storage volumes are created in the storage pool that is specified by the instance's root disk device. This configuration is normally provided by the profile or profiles applied to the instance. See {ref}`storage-default-pool` for detailed information. To use a different storage pool when creating or launching an instance, add the `--storage` flag. This flag overrides the root disk device from the profile. For example: incus launch <image> <instancename> --storage <storagepool> % Include content from ```{include} storagemovevolume.md :start-after: (storage-move-instance)= ```"
}
] |
{
"category": "Runtime",
"file_name": "storage_create_instance.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "In the old , the Kata was an executable called `kata-runtime`. The container manager called this executable multiple times when creating each container. Each time the runtime was called a different OCI command-line verb was provided. This architecture was simple, but not well suited to creating VM based containers due to the issue of handling state between calls. Additionally, the architecture suffered from performance issues related to continually having to spawn new instances of the runtime binary, and and processes for systems that did not provide VSOCK. See the section of the architecture document. | Kata version | Kata Runtime process calls | Kata shim processes | Kata proxy processes (if no VSOCK) | |-|-|-|-| | 1.x | multiple per container | 1 per container connection | 1 | | 2.x | 1 per VM (hosting any number of containers) | 0 | 0 | Notes: A single VM can host one or more containers. The \"Kata shim processes\" column refers to the old (`kata-shim` binary), not the new shimv2 runtime instance (`containerd-shim-kata-v2` binary). The diagram below shows how the original architecture was simplified with the advent of shimv2."
}
] |
{
"category": "Runtime",
"file_name": "history.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Providers\" layout: docs Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase. {{< table caption=\"Velero supported providers\" >}} | Provider | Object Store | Volume Snapshotter | Plugin Provider Repo | Setup Instructions | Parameters | |--|--|-|--|-|| | | AWS S3 | AWS EBS | | | <br/> | | | | | | | <br/> | | | Azure Blob Storage | Azure Managed Disks | | | <br/> | | | | vSphere Volumes | | | | | | | CSI Volumes | | | | {{< /table >}} Contact: , {{< table caption=\"Community supported providers\" >}} | Provider | Object Store | Volume Snapshotter | Plugin Documentation | Contact | |||||| | | Alibaba Cloud OSS | Alibaba Cloud | | | | | DigitalOcean Object Storage | DigitalOcean Volumes Block Storage | | | | | | HPE Storage | | , | | | | OpenEBS CStor Volume | | , | | | Swift | Cinder | | | | | | Portworx Volume | | , | | | Storj Object Storage | | | | {{< /table >}} Velero's AWS Object Store plugin uses to connect to the AWS S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero: Note that these storage providers are not regularly tested by the Velero team. * * * Ceph RADOS v12.2.7 Quobyte Some storage providers, like Quobyte, may need a different . In the case you want to take volume snapshots but didn't find a plugin for your provider, Velero has support for snapshotting using restic. Please see the documentation."
}
] |
{
"category": "Runtime",
"file_name": "supported-providers.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Integrating with the Host Network menu_order: 20 search_type: Documentation Weave application networks can be integrated with an external host network, establishing connectivity between the host and with application containers running anywhere. For example, returning to the , youve now decided that you need to have the application containers that are running on `$HOST2` accessible by other hosts and containers. On `$HOST2` run: host2$ weave expose 10.2.1.132 This command grants the host access to all of the application containers in the default subnet. An IP address is allocated by Weave Net especially for that purpose, and is returned after running `weave expose`. Now you are able to ping the host: host2$ ping 10.2.1.132 And you can also ping the `a1` netcat application container residing on `$HOST1`: host2$ ping $(weave dns-lookup a1) Multiple subnet addresses can be exposed or hidden using a single command: host2$ weave expose net:default net:10.2.2.0/24 10.2.1.132 10.2.2.130 host2$ weave hide net:default net:10.2.2.0/24 10.2.1.132 10.2.2.130 Exposed addresses can also be added to weaveDNS by supplying fully qualified domain names: host2$ weave expose -h exposed.weave.local 10.2.1.132 After running `weave expose`, you can use Linux routing to provide access to the Weave network from hosts that are not running Weave Net: ip route add <network-cidr> via <exposing-host> Where, `<network-cidr>` is an IP address range in use on Weave Net, for example, `10.2.0.0/16` or `10.32.0.0/12` and, `<exposing-host>` is the address of the machine on which you ran `weave expose`. Note: You must ensure that the [IP subnet used by Weave Net](/site/tasks/ipam/ipam.md#range) does not clash with anything on those other hosts. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "host-network-integration.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display cached list of events for a BPF map ``` cilium-dbg map events <name> [flags] ``` ``` cilium map events cilium_ipcache ``` ``` -f, --follow If set then events will be streamed -h, --help help for events -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Access userspace cached content of BPF maps"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_map_events.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Pods can have priority which indicates the importance of a pod relative to other pods. Longhorn pods will face unexpected eviction due to insufficient cluster resources or being preempted by higher-priority pods A PriorityClass defines a mapping from a priority class name to the integer value of the priority, where a higher value signifies greater priority.. Reduce the opportunity of the unexpected eviction of Longhorn pods. `None` Have a default priority class with the highest value to all Longhorn pods. After a fresh installation or an upgrade, users will have Longhorn pods with the field `priorityClassName: longhorn-critical` in `spec` to prevent Longhorn pods from unexpected eviction. For install, the priority class will not be set using the default value if the value has been provided by users. For upgrade, the priority class will not be set using the default value if the settings already exists. For a fresh install, the default priority class will be set if there is no priority class set by users. The setting of the priority class will be updated. For an upgrade, if all volumes are detached and the setting of the priority class is empty, the setting should be applied including user-managed components. The setting of the priority class will be updated because the source of truth of settings is the config map (longhorn-default-setting). For an upgrade, no matter weather all volumes are detached or not, if the setting of the priority class has been set, the setting should not be updated with the default priority class. For an upgrade, if volumes are attached, the setting should be applied to the user-managed components only if they haven't been set. The setting of the priority class will not be updated. NOTE: Before modifying the setting `priority-class`, all Longhorn volumes must be detached. `None` Adding a new template YAML file in the Longhorn chart/templates to create a default priority class. ```yaml apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: longhorn-critical value: 1000000000 description: \"Ensure Longhorn pods have the highest priority to prevent any unexpected eviction by the Kubernetes scheduler under node pressure\" globalDefault: false preemptionPolicy: PreemptLowerPriority ``` Set the `defaultSettings.priorityClass` to `longhorn-critical` and `priorityClass` of Longhorn components in the value.yaml of Longhorn chart. ```yaml ... defaultSettings: ... priorityClass: &defaultPriorityClassNameRef \"longhorn-critical\" ... longhornManager: log: format: plain priorityClass: *defaultPriorityClassNameRef longhornDriver: priorityClass: *defaultPriorityClassNameRef longhornUI: priorityClass: *defaultPriorityClassNameRef ``` Modify the test case `testsettingpriority_class` to check if there is a default priority class `longhorn-critical` existing. Add a test case where the `defaultSettings.priorityClass` has been modified by users and the setting `priority-class` should be the same to the value provided by users after a fresh install. Add a test case where the setting `priority-class` will be updated if all volumes are detached and the setting of the priority class is not set for an upgrade. Add a test case where the setting `priority-class` will not be updated after the upgrade if any volume is attached. Add a test case where the setting `priority-class` will not be updated if the setting of the priority class has been set for an upgrade. When adding a priority class, all Longhorn volumes must be detached. 
Therefore we will not change the setting `priority-class` if there is any volume attached to the node or the setting `priority-class` is set to a priority class. `None`"
}
] |
{
"category": "Runtime",
"file_name": "20231204-default-priority-class.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document summarises a set of proposals triggered by the . This section explains some terminology required to understand the proposals. Further details can be found in the . | Trace mode | Description | Use-case | |-|-|-| | Static | Trace agent from startup to shutdown | Entire lifespan | | Dynamic | Toggle tracing on/off as desired | On-demand \"snapshot\" | | Trace type | Description | Use-case | |-|-|-| | isolated | traces all relate to single component | Observing lifespan | | collated | traces \"grouped\" (runtime+agent) | Understanding component interaction | | Lifespan | trace mode | trace type | |-|-|-| | short-lived | static | collated if possible, else isolated? | | long-running | dynamic | collated? (to see interactions) | Implement all trace types and trace modes for agent. Why? Maximum flexibility. > Counterargument: > > Due to the intrusive nature of adding tracing, we have > learnt that landing small incremental changes is simpler and quicker! Compatibility with . > Counterargument: > > Agent tracing in Kata 1.x was extremely awkward to setup (to the extent > that it's unclear how many users actually used it!) > > This point, coupled with the new architecture for Kata 2.x, suggests > that we may not need to supply the same set of tracing features (in fact > they may not make sense)). All tracing will be static. Why? Because dynamic tracing will always be \"partial\" > In fact, not only would it be only a \"snapshot\" of activity, it may not > even be possible to create a complete \"trace transaction\". If this is > true, the trace output would be partial and would appear \"unstructured\". Agent tracing will be \"isolated\" by default. Agent tracing will be \"collated\" if runtime tracing is also enabled. Why? Offers a graceful fallback for agent tracing if runtime tracing disabled. Simpler code! Are your containers long-running or short-lived? Would you ever need to turn on tracing \"briefly\"? If \"yes\", is a \"partial trace\" useful or useless? > Likely to be considered useless as it is a partial snapshot. > Alternative tracing methods may be more appropriate to dynamic > OpenTelemetry tracing. Are you happy to stop a container to enable tracing? If \"no\", dynamic tracing may be"
},
{
"data": "Would you ever want to trace the agent and the runtime \"in isolation\" at the same time? If \"yes\", we need to fully implement `trace_mode=isolated` > This seems unlikely though. The second set of proposals affect the way traces are collected. Currently: The runtime sends trace spans to Jaeger directly. The agent will send trace spans to the component. The trace forwarder will send trace spans to Jaeger. Kata agent tracing overview: ``` +-+ | Host | | | | +--+ | | | Trace | | | | Collector | | | +--+--+ | | ^ +--+ | | | spans | Kata VM | | | +--+--+ | | | | | Kata | spans | +--+ | | | | Trace |<--|Kata | | | | | Forwarder | VSOCK | |Agent| | | | +--+ Channel | +--+ | | | +--+ | +-+ ``` Currently: If agent tracing is enabled but the trace forwarder is not running, the agent will error. If the trace forwarder is started but Jaeger is not running, the trace forwarder will error. The runtime and agent should: Use the same trace collection implementation. Use the most the common configuration items. Kata should should support more trace collection software or `SaaS` (for example `Zipkin`, `datadog`). Trace collection should not block normal runtime/agent operations (for example if `vsock-exporter`/Jaeger is not running, Kata Containers should work normally). Kata runtime/agent all send spans to trace forwarder, and the trace forwarder, acting as a tracing proxy, sends all spans to a tracing back-end, such as Jaeger or `datadog`. Pros: Runtime/agent will be simple. Could update trace collection target while Kata Containers are running. Cons: Requires the trace forwarder component to be running (that is a pressure to operation). Send spans to collector directly from runtime/agent, this proposal need network accessible to the collector. Pros: No additional trace forwarder component needed. Cons: Need more code/configuration to support all trace collectors. We could add dynamic and fully isolated tracing at a later stage, if required. See the new . gist. . 2021-07-01: A summary of the discussion was . 2021-06-22: These proposals were . 2021-06-18: These proposals where . Nobody opposed the agent proposals, so they are being implemented. The trace collection proposals are still being considered."
}
] |
{
"category": "Runtime",
"file_name": "tracing-proposals.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document outlines how to make hotfix binaries and containers for MinIO?. The main focus in this article is about how to backport patches to a specific branch and finally building binaries/containers. A working knowledge of MinIO codebase and its various components. A working knowledge of AWS S3 API behaviors and corner cases. Fixes that are allowed a backport must satisfy any of the following criteria's: A fix must not be a feature, for example. ``` commit faf013ec84051b92ae0f420a658b8d35bb7bb000 Author: Klaus Post <[email protected]> Date: Thu Nov 18 12:15:22 2021 -0800 Improve performance on multiple versions (#13573) ``` A fix must be a valid fix that was reproduced and seen in a customer environment, for example. ``` commit 886262e58af77ebc7c836ef587c08544e9a0c271 Author: Harshavardhana <[email protected]> Date: Wed Nov 17 15:49:12 2021 -0800 heal legacy objects when versioning is enabled after upgrade (#13671) ``` A security fix must be backported if a customer is affected by it, we have a mechanism in SUBNET to send out notifications to affected customers in such situations, this is a mandatory requirement. ``` commit 99bf4d0c429f04dbd013ba98840d07b759ae1702 (tag: RELEASE.2019-06-15T23-07-18Z) Author: Harshavardhana <[email protected]> Date: Sat Jun 15 11:27:17 2019 -0700 [security] Match ${aws:username} exactly instead of prefix match (#7791) This PR fixes a security issue where an IAM user based on his policy is granted more privileges than restricted by the users IAM policy. This is due to an issue of prefix based Matcher() function which was incorrectly matching prefix based on resource prefixes instead of exact match. ``` There is always a possibility of a fix that is new, it is advised that the developer must make sure that the fix is sent upstream, reviewed and merged to the master branch. Customers in MinIO are allowed LTS on any release they choose to standardize. Production setups seldom change and require maintenance. Hotfix branches are such maintenance branches that allow customers to operate a production cluster without drastic changes to their"
},
{
"data": "Developer is advised to clone the MinIO source and checkout the MinIO release tag customer is currently on. ``` git checkout RELEASE.2021-04-22T15-44-28Z ``` Create a branch and proceed to push the branch upstream (upstream here points to [email protected]:minio/minio.git) ``` git branch -m RELEASE.2021-04-22T15-44-28Z.hotfix git push -u upstream RELEASE.2021-04-22T15-44-28Z.hotfix ``` Pick the relevant commit-id say for example commit-id from the master branch ``` commit 4f3317effea38c203c358af9cb5ce3c0e4173976 Author: Klaus Post <[email protected]> Date: Mon Nov 8 08:41:27 2021 -0800 Close stream on panic (#13605) Always close streamHTTPResponse on panic on main thread to avoid write/flush after response handler has returned. ``` ``` git cherry-pick 4f3317effea38c203c358af9cb5ce3c0e4173976 ``` A self contained patch usually applies fine on the hotfix branch during backports as long it is self contained. There are situations however this may lead to conflicts and the patch will not cleanly apply. Conflicts might be trivial which can be resolved easily, when conflicts seem to be non-trivial or touches the part of the code-base the developer is not confident - to get additional clarity reach out to #hack on MinIOHQ slack channel. Hasty changes must be avoided, minor fixes and logs may be added to hotfix branches but this should not be followed as practice. Once the patch is successfully applied, developer must run tests to validate the fix that was backported by running following tests, locally. Unit tests ``` make test ``` Verify different type of MinIO deployments work ``` make verify ``` Verify if healing and replacing a drive works ``` make verify-healing ``` At this point in time the backport is ready to be submitted as a pull request to the relevant branch. A pull request is recommended to ensure tests are validated. Pull request also ensures code-reviews for the backports in case of any unforeseen regressions. To add a hotfix tag to the binary version and embed the relevant `commit-id` following build helpers are available ``` CRED_DIR=/media/builder/minio make hotfix-push ``` ``` CRED_DIR=/media/builder/minio make docker-hotfix-push ``` ``` REPO=\"registry.min.dev/<customer>\" CRED_DIR=/media/builder/minio make docker-hotfix-push ``` Once this has been provided to the customer relevant binary will be uploaded from our release server securely, directly to <https://dl.minio.io/server/minio/hotfixes/archive/>"
}
] |
{
"category": "Runtime",
"file_name": "hotfixes.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Currently, Longhorn uses the Kubernetes cluster CNI network and share the network with the entire cluster resources. This makes network availability impossible to control. We would like to have a global `Storage Network` setting to allow users to input an existing Multus `NetworkAttachmentDefinition` CR network in `<namespace>/<name>` format. Longhorn can use the storage network for in-cluster data traffics. The segregation can achieve by replacing the engine binary calls in the Longhorn manager with gRPC connections to the instance manager. Then the instance manager will be responsible for handling the requests between the management network and storage network. _NOTE:_ There are other possible approaches we have considered to segregating the networks: Add Longhorn Manager to the storage network. The Manager needs to restart itself to get the secondary storage network IP, and there is no storage network segregation to the Longhorn data plane (engine & replica). Provide Engine/Replica with dual IPs. Code change around this approach is confusing and likely to increase maintenance complexity. https://github.com/longhorn/longhorn/issues/2285 https://github.com/longhorn/longhorn/issues/3546 Have a new `Storage Network` setting. Replace Manager engine binary calls with gRPC client to the instance manager. Keep using the management network for the communication between Manager and Instance Manager. Use the storage network for the data traffic of data plane components to the instance processes. Those are the engines and replicas in Instance Manager pods. Support backward compatibility of the communication between the new Manager and the old Instance Manager after the upgrade. Ensure existing engine/replicas work without issues. Setup and configure the Multus `NetworkAttachmentDefinition` CRs. Monitor for `NetworkAttachmentDefintition` CRs. The user needs to ensure the traffic is reachable between pods and across different nodes. Without monitoring, Longhorn will not get notified of the update of the `NetworkAttachmentDefinition` CRs. Thus the user should create a new `NetworkAttachmentDefinition` CR and update the `storage-network` setting. Out-cluster data traffic. For example, backing image upload and download. Introduce a new gRPC server in Instance Manager. Keep reusable connections between Manager and Instance Managers. Allow Manager to fall back to engine binary call when communicating with old Instance Manager. Add a new `Storage Network` global setting. Add `k8s.v1.cni.cncf.io/networks` annotation to pods that involve data transfer. The annotation will use the value from the storage network setting. Multus will attach a secondary network to pods with this annotation. Engine instance manager pods Replica instance manager pods Backing image data source pods. Data traffic between replicas and backing image data source. Backing image manager pods. Data traffic in-between backing image managers. Add new `storageIP` to `Engine`, `Replica` and `BackingImageManager` CRD status. The storage IP will be use to communicate to the instance processes. As a Longhorn user / System administrator. I have set up Multus `NetworkAttachmentDefinition` for additional network management. And I want to segregate Longhorn in-cluster data traffic with an additional network interface. Longhorn should provide a setting to input the `NetworkAttachmentDefinition` CR name for the storage network. So I can guarantee network availability for Longhorn in-cluster data traffic. 
As a Longhorn user / System administrator. When I upgrade Longhorn, the changes should support existing attached"
},
{
"data": "So I can decide when to upgrade the Engine Image. I have a Kubernetes cluster with Multus installed. I created `NetworkAttachmentDefinition` CR and ensured the configuration is correct. I Added `<namespace>/<NetworkAttachmentDefinition name>` to Longhorn `Storage Network` setting. I see setting update failed when volumes are attached. I detach all volumes. When updating the setting I see engine/replica instance manager pod and backing image manager pods is restarted. I attach the volumes. I describe Engine, Replica, and BackingImageManager, and see the `storageIP` in CR status is in the range of the `NetworkAttachmentDefinition` subnet/CIDR. I also see the `storageIP` is different from the `ip` in CR status. I describe the Engine and see the `replicaAddressMap` in CR spec and status is using the storage IP. I see pod logs indicate the network directions. I Longhorn v1.2.4 cluster. I have healthy volumes attached. I upgrade Longhorn. I see volumes still attached and healthy with available engine image upgrade. I cannot upgrade the volume engine image with the volume attached. After I detach the volume, I can upgrade its engine image. I attached the volumes. I see the volumes are healthy. The new global setting `Storage Network` will use the existing `/v1/settings` API. Start the gRPC proxy server with the next port to the process server. The default should be `localhost:8501`. The gRPC proxy service shares the same `imrpc` package name as the process server. ``` Ping ServerVersionGet VolumeGet VolumeExpand VolumeFrontendStart VolumeFrontendShutdown VolumeSnapshot SnapshotList SnapshotRevert SnapshotPurge SnapshotPurgeStatus SnapshotClone SnapshotCloneStatus SnapshotRemove SnapshotBackup SnapshotBackupStatus BackupRestore BackupRestoreStatus BackupVolumeList BackupVolumeGet BackupGet BackupConfigMetaGet BackupRemove ReplicaAdd ReplicaList ReplicaRebuildingStatus ReplicaVerifyRebuild ReplicaRemove ``` Create a proxyHandler object to map the controller ID to an EngineClient interface. The proxyHandler object is shared between controllers. The Instance Manager Controller is responsible for the life cycle of the proxy gRPC client. For every enqueue: Check for the existing gRPC client in the proxyHandler, and check the connection liveness with the `Ping` request. If the proxy gRPC client connection is dead, stop the proxy gRPC client and error so it will re-queue. If the proxy gRPC client doesn't exist in the proxyHandler, start a new gRPC connection and map it to the current controller ID. Do not create the proxy gRPC connection when the instance manager version is less than the current version. We will provide the fallback interface caller provided when getting the client. The gRPC client will use the EngineClient interface. Provide a fallback interface caller when getting the gRPC client from the proxyHandler. The fallback callers are: the existing `Engine` client used for the binary call `BackupTargetClient`. Use the fallback caller when the instance manager version is less than the current version. Add new `BackupTargetBinaryClient` interface for"
},
{
"data": "``` type BackupTargetBinaryClient interface { BackupGet(destURL string, credential map[string]string) (*Backup, error) BackupVolumeGet(destURL string, credential map[string]string) (volume *BackupVolume, err error) BackupNameList(destURL, volumeName string, credential map[string]string) (names []string, err error) BackupVolumeNameList(destURL string, credential map[string]string) (names []string, err error) BackupDelete(destURL string, credential map[string]string) (err error) BackupVolumeDelete(destURL, volumeName string, credential map[string]string) (err error) BackupConfigMetaGet(destURL string, credential map[string]string) (*ConfigMetadata, error) } ``` Introduce A new `EngineClientProxy` interface for the Proxy, which includes proxy-specific methods and implementation of the existing `EnglineClient` and `BackupTargetClient` interfaces. This will be adaptive when using the EngineClient interface for the proxy or non-proxy/fallback operations. ``` type EngineClientProxy interface { EngineClient BackupTargetBinaryClient IsGRPC() bool Start(longhorn.InstanceManager, logrus.FieldLogger, datastore.DataStore) error Stop(*longhorn.InstanceManager) error Ping() error } ``` Add a new global setting `Storage Network`. The setting is `string`. The default value is `\"\"`. The setting should be in the `danger zone` category. The setting will be validated at admission webhook setting validator. The setting should be in the form of `< NAMESPACE>/<NETWORK-ATTACHMENT-DEFINITION-NAME>`. The setting cannot be updated when volumes are attached. Engine: New `storageIP` in status. Use the replica `status.storageIP` instead of the replica `status.IP` for the replicaAddressMap. Replica: New `storageIP` in status. BackingImageManager: New `storageIP` in status. When creating instance manager pods, add `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as interface name. Use the `storage-network` setting value for the namespace and name. ``` k8s.v1.cni.cncf.io/networks: ' [ { \"namespace\": \"kube-system\", \"name\": \"demo-10-30-0-0\", \"interface\": \"lhnet1\" } ] ' ``` Get the IP from instance manager Pod annotation `k8s.v1.cni.cncf.io/network-status`. Use the IP for `Engine` and `Replica` Storage IP. When the `storage-network` setting is empty, The Storage IP will be the pod IP. When creating backing image manager pods, add `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as interface name. Use the `storage-network` setting value for the namespace and name. ``` k8s.v1.cni.cncf.io/networks: ' [ { \"namespace\": \"kube-system\", \"name\": \"demo-10-30-0-0\", \"interface\": \"lhnet1\" } ] ' ``` Get the IP from backing image manager Pod annotation `k8s.v1.cni.cncf.io/network-status`. Use the IP for `BackingImageManager` Storage IP. When the `storage-network` setting is empty, The Storage IP will be the pod IP. When creating backing image data source pods, add `k8s.v1.cni.cncf.io/networks` annotation with `lhnet1` as interface name. Use the `storage-network` setting value for the namespace and name. ``` k8s.v1.cni.cncf.io/networks: ' [ { \"namespace\": \"kube-system\", \"name\": \"demo-10-30-0-0\", \"interface\": \"lhnet1\" } ] ' ``` get the IPv4 of the `lhnet1` interface and use it as the receiver address. Use the pod IP if the interface doesn't exist. Do not update the `storage-network` setting and return an error when `Volumes` are attached. Delete all backing image manager pods. Delete all instance manager pods. 
All existing tests should pass when the cluster has the storage network configured. We should consider having a new test pipeline for the storage network. Infra Prerequisites: Secondary network interface added to each cluster instance. Multus deployed. Network-attachment-definition created. Routing is configured in all cluster nodes to ensure the network is accessible between instances. For AWS, disable network source/destination checks for each cloud-provider instance. Scenario: `Engine`, `Replica` and `BackingImageManager` should use an IP in the `storage-network` `NetworkAttachmentDefinition` subnet/CIDR range after the setting update. Old engine instance managers do not have the gRPC proxy server for the Manager to communicate with. Hence, we need to support backward compatibility. Manager communication: Bump the instance manager API version. The Manager checks for an incompatible version and falls back to requests through the engine binary. Volume/Engine live upgrade: Keep live upgrade. This will be a soft notice for users to know we will not enforce any change in 1.3, but it will happen in 1.4. `None`"
}
] |
{
"category": "Runtime",
"file_name": "20220428-storage-network-through-grpc-proxy.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- Thanks for sending a pull request! Here are some tips for you: If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide Follow the instructions for writing a release note: https://git.k8s.io/community/contributors/guide/release-notes.md If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests --> What type of PR is this? Uncomment only one ` /kind <>` line, hit enter to put that in a new line, and remove leading whitespaces from that line: /kind api-change /kind bug /kind cleanup /kind design /kind documentation /kind failing-test /kind feature /kind flake What this PR does / why we need it: Which issue(s) this PR fixes: <!-- *Automatically closes linked issue when PR is merged. Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`. If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`* --> Fixes # Special notes for your reviewer: Does this PR introduce a user-facing change?: <!-- If no, just write \"NONE\" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string \"action required\". --> ```release-note none ```"
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "While you are welcome to provide your own organization, typically a Cobra-based application will follow the following organizational structure: ``` appName/ cmd/ add.go your.go commands.go here.go main.go ``` In a Cobra app, typically the main.go file is very bare. It serves one purpose: initializing Cobra. ```go package main import ( \"{pathToYourApp}/cmd\" ) func main() { cmd.Execute() } ``` Cobra provides its own program that will create your application and add any commands you want. It's the easiest way to incorporate Cobra into your application. you can find more information about it. To manually implement Cobra you need to create a bare main.go file and a rootCmd file. You will optionally provide additional commands as you see fit. Cobra doesn't require any special constructors. Simply create your commands. Ideally you place this in app/cmd/root.go: ```go var rootCmd = &cobra.Command{ Use: \"hugo\", Short: \"Hugo is a very fast static site generator\", Long: `A Fast and Flexible Static Site Generator built with love by spf13 and friends in Go. Complete documentation is available at http://hugo.spf13.com`, Run: func(cmd *cobra.Command, args []string) { // Do Stuff Here }, } func Execute() { if err := rootCmd.Execute(); err != nil { fmt.Fprintln(os.Stderr, err) os.Exit(1) } } ``` You will additionally define flags and handle configuration in your init() function. For example cmd/root.go: ```go package cmd import ( \"fmt\" \"os\" \"github.com/spf13/cobra\" \"github.com/spf13/viper\" ) var ( // Used for flags. cfgFile string userLicense string rootCmd = &cobra.Command{ Use: \"cobra\", Short: \"A generator for Cobra based Applications\", Long: `Cobra is a CLI library for Go that empowers applications. This application is a tool to generate the needed files to quickly create a Cobra application.`, } ) // Execute executes the root command. func Execute() error { return rootCmd.Execute() } func init() { cobra.OnInitialize(initConfig) rootCmd.PersistentFlags().StringVar(&cfgFile, \"config\", \"\", \"config file (default is $HOME/.cobra.yaml)\") rootCmd.PersistentFlags().StringP(\"author\", \"a\", \"YOUR NAME\", \"author name for copyright attribution\") rootCmd.PersistentFlags().StringVarP(&userLicense, \"license\", \"l\", \"\", \"name of license for the project\") rootCmd.PersistentFlags().Bool(\"viper\", true, \"use Viper for configuration\") viper.BindPFlag(\"author\", rootCmd.PersistentFlags().Lookup(\"author\")) viper.BindPFlag(\"useViper\", rootCmd.PersistentFlags().Lookup(\"viper\")) viper.SetDefault(\"author\", \"NAME HERE <EMAIL ADDRESS>\") viper.SetDefault(\"license\", \"apache\") rootCmd.AddCommand(addCmd) rootCmd.AddCommand(initCmd) } func initConfig() { if cfgFile != \"\" { // Use config file from the flag. viper.SetConfigFile(cfgFile) } else { // Find home directory. home, err := os.UserHomeDir() cobra.CheckErr(err) // Search config in home directory with name \".cobra\" (without extension). viper.AddConfigPath(home) viper.SetConfigType(\"yaml\") viper.SetConfigName(\".cobra\") } viper.AutomaticEnv() if err := viper.ReadInConfig(); err == nil { fmt.Println(\"Using config file:\", viper.ConfigFileUsed()) } } ``` With the root command you need to have your main function execute it. Execute should be run on the root for clarity, though it can be called on any command. In a Cobra app, typically the main.go file is very bare. It serves one purpose: to initialize Cobra. 
```go package main import ( \"{pathToYourApp}/cmd\" ) func main() { cmd.Execute() } ``` Additional commands can be defined and typically are each given their own file inside of the cmd/"
},
{
"data": "If you wanted to create a version command you would create cmd/version.go and populate it with the following: ```go package cmd import ( \"fmt\" \"github.com/spf13/cobra\" ) func init() { rootCmd.AddCommand(versionCmd) } var versionCmd = &cobra.Command{ Use: \"version\", Short: \"Print the version number of Hugo\", Long: `All software has versions. This is Hugo's`, Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Hugo Static Site Generator v0.9 -- HEAD\") }, } ``` If you wish to return an error to the caller of a command, `RunE` can be used. ```go package cmd import ( \"fmt\" \"github.com/spf13/cobra\" ) func init() { rootCmd.AddCommand(tryCmd) } var tryCmd = &cobra.Command{ Use: \"try\", Short: \"Try and possibly fail at something\", RunE: func(cmd *cobra.Command, args []string) error { if err := someFunc(); err != nil { return err } return nil }, } ``` The error can then be caught at the execute function call. Flags provide modifiers to control how the action command operates. Since the flags are defined and used in different locations, we need to define a variable outside with the correct scope to assign the flag to work with. ```go var Verbose bool var Source string ``` There are two different approaches to assign a flag. A flag can be 'persistent', meaning that this flag will be available to the command it's assigned to as well as every command under that command. For global flags, assign a flag as a persistent flag on the root. ```go rootCmd.PersistentFlags().BoolVarP(&Verbose, \"verbose\", \"v\", false, \"verbose output\") ``` A flag can also be assigned locally, which will only apply to that specific command. ```go localCmd.Flags().StringVarP(&Source, \"source\", \"s\", \"\", \"Source directory to read from\") ``` By default, Cobra only parses local flags on the target command, and any local flags on parent commands are ignored. By enabling `Command.TraverseChildren`, Cobra will parse local flags on each command before executing the target command. ```go command := cobra.Command{ Use: \"print [OPTIONS] [COMMANDS]\", TraverseChildren: true, } ``` You can also bind your flags with : ```go var author string func init() { rootCmd.PersistentFlags().StringVar(&author, \"author\", \"YOUR NAME\", \"Author name for copyright attribution\") viper.BindPFlag(\"author\", rootCmd.PersistentFlags().Lookup(\"author\")) } ``` In this example, the persistent flag `author` is bound with `viper`. Note: the variable `author` will not be set to the value from config, when the `--author` flag is not provided by user. More in . Flags are optional by default. If instead you wish your command to report an error when a flag has not been set, mark it as required: ```go rootCmd.Flags().StringVarP(&Region, \"region\", \"r\", \"\", \"AWS region (required)\") rootCmd.MarkFlagRequired(\"region\") ``` Or, for persistent flags: ```go rootCmd.PersistentFlags().StringVarP(&Region, \"region\", \"r\", \"\", \"AWS region (required)\") rootCmd.MarkPersistentFlagRequired(\"region\") ``` Validation of positional arguments can be specified using the `Args` field of `Command`. The following validators are built in: `NoArgs` - the command will report an error if there are any positional args. `ArbitraryArgs` - the command will accept any"
},
{
"data": "`OnlyValidArgs` - the command will report an error if there are any positional args that are not in the `ValidArgs` field of `Command`. `MinimumNArgs(int)` - the command will report an error if there are not at least N positional args. `MaximumNArgs(int)` - the command will report an error if there are more than N positional args. `ExactArgs(int)` - the command will report an error if there are not exactly N positional args. `ExactValidArgs(int)` - the command will report an error if there are not exactly N positional args OR if there are any positional args that are not in the `ValidArgs` field of `Command` `RangeArgs(min, max)` - the command will report an error if the number of args is not between the minimum and maximum number of expected args. An example of setting the custom validator: ```go var cmd = &cobra.Command{ Short: \"hello\", Args: func(cmd *cobra.Command, args []string) error { if len(args) < 1 { return errors.New(\"requires a color argument\") } if myapp.IsValidColor(args[0]) { return nil } return fmt.Errorf(\"invalid color specified: %s\", args[0]) }, Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Hello, World!\") }, } ``` In the example below, we have defined three commands. Two are at the top level and one (cmdTimes) is a child of one of the top commands. In this case the root is not executable, meaning that a subcommand is required. This is accomplished by not providing a 'Run' for the 'rootCmd'. We have only defined one flag for a single command. More documentation about flags is available at https://github.com/spf13/pflag ```go package main import ( \"fmt\" \"strings\" \"github.com/spf13/cobra\" ) func main() { var echoTimes int var cmdPrint = &cobra.Command{ Use: \"print [string to print]\", Short: \"Print anything to the screen\", Long: `print is for printing anything back to the screen. For many years people have printed back to the screen.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Print: \" + strings.Join(args, \" \")) }, } var cmdEcho = &cobra.Command{ Use: \"echo [string to echo]\", Short: \"Echo anything to the screen\", Long: `echo is for echoing anything back. Echo works a lot like print, except it has a child command.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { fmt.Println(\"Echo: \" + strings.Join(args, \" \")) }, } var cmdTimes = &cobra.Command{ Use: \"times [string to echo]\", Short: \"Echo anything to the screen more times\", Long: `echo things multiple times back to the user by providing a count and a string.`, Args: cobra.MinimumNArgs(1), Run: func(cmd *cobra.Command, args []string) { for i := 0; i < echoTimes; i++ { fmt.Println(\"Echo: \" + strings.Join(args, \" \")) } }, } cmdTimes.Flags().IntVarP(&echoTimes, \"times\", \"t\", 1, \"times to echo the input\") var rootCmd = &cobra.Command{Use: \"app\"} rootCmd.AddCommand(cmdPrint, cmdEcho) cmdEcho.AddCommand(cmdTimes) rootCmd.Execute() } ``` For a more complete example of a larger application, please checkout . Cobra automatically adds a help command to your application when you have subcommands. This will be called when a user runs 'app help'. Additionally, help will also support all other commands as"
},
{
"data": "Say, for instance, you have a command called 'create' without any additional configuration; Cobra will work when 'app help create' is called. Every command will automatically have the '--help' flag added. The following output is automatically generated by Cobra. Nothing beyond the command and flag definitions are needed. $ cobra help Cobra is a CLI library for Go that empowers applications. This application is a tool to generate the needed files to quickly create a Cobra application. Usage: cobra [command] Available Commands: add Add a command to a Cobra Application help Help about any command init Initialize a Cobra Application Flags: -a, --author string author name for copyright attribution (default \"YOUR NAME\") --config string config file (default is $HOME/.cobra.yaml) -h, --help help for cobra -l, --license string name of license for the project --viper use Viper for configuration (default true) Use \"cobra [command] --help\" for more information about a command. Help is just a command like any other. There is no special logic or behavior around it. In fact, you can provide your own if you want. You can provide your own Help command or your own template for the default command to use with following functions: ```go cmd.SetHelpCommand(cmd *Command) cmd.SetHelpFunc(f func(*Command, []string)) cmd.SetHelpTemplate(s string) ``` The latter two will also apply to any children commands. When the user provides an invalid flag or invalid command, Cobra responds by showing the user the 'usage'. You may recognize this from the help above. That's because the default help embeds the usage as part of its output. $ cobra --invalid Error: unknown flag: --invalid Usage: cobra [command] Available Commands: add Add a command to a Cobra Application help Help about any command init Initialize a Cobra Application Flags: -a, --author string author name for copyright attribution (default \"YOUR NAME\") --config string config file (default is $HOME/.cobra.yaml) -h, --help help for cobra -l, --license string name of license for the project --viper use Viper for configuration (default true) Use \"cobra [command] --help\" for more information about a command. You can provide your own usage function or template for Cobra to use. Like help, the function and template are overridable through public methods: ```go cmd.SetUsageFunc(f func(*Command) error) cmd.SetUsageTemplate(s string) ``` Cobra adds a top-level '--version' flag if the Version field is set on the root command. Running an application with the '--version' flag will print the version to stdout using the version template. The template can be customized using the `cmd.SetVersionTemplate(s string)` function. It is possible to run functions before or after the main `Run` function of your command. The `PersistentPreRun` and `PreRun` functions will be executed before `Run`. `PersistentPostRun` and `PostRun` will be executed after `Run`. The `Persistent*Run` functions will be inherited by children if they do not declare their"
},
{
"data": "These functions are run in the following order: `PersistentPreRun` `PreRun` `Run` `PostRun` `PersistentPostRun` An example of two commands which use all of these features is below. When the subcommand is executed, it will run the root command's `PersistentPreRun` but not the root command's `PersistentPostRun`: ```go package main import ( \"fmt\" \"github.com/spf13/cobra\" ) func main() { var rootCmd = &cobra.Command{ Use: \"root [sub]\", Short: \"My root command\", PersistentPreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PersistentPreRun with args: %v\\n\", args) }, PreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PreRun with args: %v\\n\", args) }, Run: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd Run with args: %v\\n\", args) }, PostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PostRun with args: %v\\n\", args) }, PersistentPostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside rootCmd PersistentPostRun with args: %v\\n\", args) }, } var subCmd = &cobra.Command{ Use: \"sub [no options!]\", Short: \"My subcommand\", PreRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PreRun with args: %v\\n\", args) }, Run: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd Run with args: %v\\n\", args) }, PostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PostRun with args: %v\\n\", args) }, PersistentPostRun: func(cmd *cobra.Command, args []string) { fmt.Printf(\"Inside subCmd PersistentPostRun with args: %v\\n\", args) }, } rootCmd.AddCommand(subCmd) rootCmd.SetArgs([]string{\"\"}) rootCmd.Execute() fmt.Println() rootCmd.SetArgs([]string{\"sub\", \"arg1\", \"arg2\"}) rootCmd.Execute() } ``` Output: ``` Inside rootCmd PersistentPreRun with args: [] Inside rootCmd PreRun with args: [] Inside rootCmd Run with args: [] Inside rootCmd PostRun with args: [] Inside rootCmd PersistentPostRun with args: [] Inside rootCmd PersistentPreRun with args: [arg1 arg2] Inside subCmd PreRun with args: [arg1 arg2] Inside subCmd Run with args: [arg1 arg2] Inside subCmd PostRun with args: [arg1 arg2] Inside subCmd PersistentPostRun with args: [arg1 arg2] ``` Cobra will print automatic suggestions when \"unknown command\" errors happen. This allows Cobra to behave similarly to the `git` command when a typo happens. For example: ``` $ hugo srever Error: unknown command \"srever\" for \"hugo\" Did you mean this? server Run 'hugo --help' for usage. ``` Suggestions are automatic based on every subcommand registered and use an implementation of . Every registered command that matches a minimum distance of 2 (ignoring case) will be displayed as a suggestion. If you need to disable suggestions or tweak the string distance in your command, use: ```go command.DisableSuggestions = true ``` or ```go command.SuggestionsMinimumDistance = 1 ``` You can also explicitly set names for which a given command will be suggested using the `SuggestFor` attribute. This allows suggestions for strings that are not close in terms of string distance, but makes sense in your set of commands and for some which you don't want aliases. Example: ``` $ kubectl remove Error: unknown command \"remove\" for \"kubectl\" Did you mean this? delete Run 'kubectl help' for usage. ``` Cobra can generate documentation based on subcommands, flags, etc. Read more about it in the . 
Cobra can generate a shell-completion file for the following shells: bash, zsh, fish, PowerShell. If you add more information to your commands, these completions can be amazingly powerful and flexible. Read more about it in ."
}
] |
{
"category": "Runtime",
"file_name": "user_guide.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "For most of the local storage projects, once pod has been scheduled to node and the pod will use the local pv from that node. That means the pod has been nailed to that node and if node fails, the pod will not migrate to other nodes. With carina, user can allow the pod to migrate in such case with one annotation. Using LVM backend engine, checking current LV first. ```shell $ kubectl get lv NAME SIZE GROUP NODE STATUS pvc-177854eb-f811-4612-92c5-b8bb98126b94 5Gi carina-vg-hdd 10.20.9.154 Success pvc-1fed3234-ff89-4c58-8c65-e21ca338b099 5Gi carina-vg-hdd 10.20.9.153 Success pvc-527b5989-3ac3-4d7a-a64d-24e0f665788b 10Gi carina-vg-hdd 10.20.9.154 Success pvc-b987d27b-39f3-4e74-9465-91b3e6b13837 3Gi carina-vg-hdd 10.20.9.154 Success $ kubectl delete node 10.20.9.154 $ kubectl get lv NAME SIZE GROUP NODE STATUS pvc-177854eb-f811-4612-92c5-b8bb98126b94 5Gi carina-vg-hdd 10.20.9.153 Success pvc-1fed3234-ff89-4c58-8c65-e21ca338b099 5Gi carina-vg-hdd 10.20.9.153 Success pvc-527b5989-3ac3-4d7a-a64d-24e0f665788b 10Gi carina-vg-hdd 10.20.9.153 Success pvc-b987d27b-39f3-4e74-9465-91b3e6b13837 3Gi carina-vg-hdd 10.20.9.153 Success ``` LV has one:one mapping with local volume. Carina will delete local volume if it does't have an associated LV every 600s. Carina will delete LV if it doesn't have an associated PV every 600s. If node been deleted, all volumes will be rebuild on other nodes. Carina will track each node's status. If node enters NotReady state, carina will trigger pod migration policy. Carina will allow pod to migrate if it has annotation `carina.storage.io/allow-pod-migration-if-node-notready` with value of `true`. Carina will not copy data from failed node to other node. So the newly borned pod will have an empty PV. The middleware layer should trigger data migration. For example, master-slave mysql cluster should trigger master-slave replication."
}
] |
{
"category": "Runtime",
"file_name": "failover.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- A few sentences describing the overall goals of the pull request's commits. Please include the type of fix - (e.g. bug fix, new feature, documentation) some details on why this PR should be merged the details of the testing you've done on it (both manual and automated) which components are affected by this PR links to issues that this PR addresses --> <!-- If appropriate, include a link to the issue this fixes. fixes <ISSUE LINK> If appropriate, add links to any number of PRs documented by this PR documents <PR LINK> --> [ ] Tests [ ] Documentation [ ] Release note <!-- Writing a release note: By default, no release note action is required. If you're unsure whether or not your PR needs a note, ask your reviewer for guidance. If this PR requires a release note, update the block below to include a concise note describing the change and any important impacts this PR may have. --> ```release-note TBD ``` Make sure that this PR has the correct labels and milestone set. Every PR needs one `docs-*` label. `docs-pr-required`: This change requires a change to the documentation that has not been completed yet. `docs-completed`: This change has all necessary documentation completed. `docs-not-required`: This change has no user-facing impact and requires no docs. Every PR needs one `release-note-*` label. `release-note-required`: This PR has user-facing changes. Most PRs should have this label. `release-note-not-required`: This PR has no user-facing changes. Other optional labels: `cherry-pick-candidate`: This PR should be cherry-picked to an earlier release. For bug fixes only. `needs-operator-pr`: This PR is related to install and requires a corresponding change to the operator."
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"ark create restore\" layout: docs Create a restore Create a restore ``` ark create restore [RESTORENAME] --from-backup BACKUPNAME [flags] ``` ``` ark restore create restore-1 --from-backup backup-1 ark restore create --from-backup backup-1 ``` ``` --exclude-namespaces stringArray namespaces to exclude from the restore --exclude-resources stringArray resources to exclude from the restore, formatted as resource.group, such as storageclasses.storage.k8s.io --from-backup string backup to restore from -h, --help help for restore --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the restore --include-namespaces stringArray namespaces to include in the restore (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the restore, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the restore --namespace-mappings mapStringString namespace mappings from name in the backup to desired restored name in the form src1:dst1,src2:dst2,... -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. --restore-volumes optionalBool[=true] whether to restore volumes from snapshots -l, --selector labelSelector only restore resources matching this label selector (default <none>) --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Create ark resources"
}
] |
{
"category": "Runtime",
"file_name": "ark_create_restore.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| Author | Ilya Kuksenok | | | - | | Date | 2022-12-19 | | Email | | Right now in Kubectl supports only , protocol for cp and port forwarding. SPDY was deprecated a few years ago and now should be switched to something else, for example or . Visual representation of the problem As you can see on this diagram, SPDY is widely used in communication between all necessary components. Desired communication diagram. This section should help you to understand why websockets approach still wasn't implemented in current k8s implementation. Is this issue community trying to deside is it ok to use HTTP/2 or no and generaly speaking, this is not a best idea because websockets are already used in some parts of k8s, like in kubelet. In this case if HTTP/2 will be implemented consistency in the project will be broken . and nevertheless looks like all faced problems until now have been resolved, but now they dont have any sort of consesus about HTTP/2 or other possible ways of SPDY deprecation. In the issue Kubernetes team discussing faced problems related to SPDY -> websockets migration. As for now, Websockets are highly used in API server and kubelet, for example, starting from version release-1.19, look for details. Right now some big parts of k8s are already using websockets; if you look closely to the source code you will find that main websocket library is a part of standard package , also you can find gorilla/websockets package. In other words, the k8s community is actively trying to remove SPDY. This package uses SPDY for container control and communication, for example . There were several tryouts about untroducing websockets into kubetcl, for example: 1) Protocol related issue 2) will be by the . Lets discuss these problems in a more detailed manner. Problem definition: Websocket protocol does not support half-close feature look at , exactly to and for more detailed info, but at the same time SPDY it. In case of SPDY this behavior is defined by FLAG_FIN, in case of websockets we dont have same possibility out of the box. According to to have some sort of half-close feature implementation would be very useful, for more details please look and . Based on this are required, this will allow to fix existing issues, like half-closed connections without big troubles, and in future it will prevent other possible websocket relaterd issues. 1) Break backward compatibility with old clients 2) Huge amount of refactoring (current implementation is hardly tired to SPDY) As I have already mentioned, already archived, and we or anyone else should not use it in any production scenario, even in case of MEDIUM CVE (described in paragraph 4 table 12 of ) or any other CVE, we cant get a fix from the package developer and we will have to fix it on our end which is bad in terms of user and developer experience. Because of declared issues K8S developers community finally made a decision to drop SPDY and move to websockets , but new protocol still in development stage, as for now development process looks frozen and no concrete plan of development was introduced, but at least process started."
}
] |
{
"category": "Runtime",
"file_name": "k8s_websockets_problem.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- Placeholder page required to add the automatically generated config-options page to the ToC. --> <!-- To update content on that page, edit .sphinx/_templates/domainindex.html. -->"
}
] |
{
"category": "Runtime",
"file_name": "config-options.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Output the dependencies graph in graphviz dot format ``` cilium-operator-generic hive dot-graph [flags] ``` ``` -h, --help help for dot-graph ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. 
--enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in"
},
{
"data": "(default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Inspect the hive"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-generic_hive_dot-graph.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|