[ { "data": "<img align=\"right\" alt=\"MooseFS logo\" src=\"https://moosefs.com/Content/Images/moosefs.png\" /> MooseFS is a Petabyte Open Source Network Distributed File System. It is easy todeploy and maintain, highly reliable, fault tolerant, highly performing, easily scalable and POSIX compliant. MooseFS spreads data over a number of commodity servers, which are visible to the user as one resource. For standard file operations MooseFS acts like ordinary Unix-like file system: A hierarchical structure directory tree* Stores POSIX file attributes* permissions, last access and modification times, etc. Supports ACLs* Supports POSIX and BSD file locks including support for distributed file locking* Supports special files* block and character devices, pipes and sockets Supports symbolic links* file names pointing to target files, not necessarily on MooseFS Supports hard links* different names of files which refer to the same data on MooseFS Distinctive MooseFS features: High reliability* files are stored in several copies on separate servers. The number of copies is a configurable parameter, even per each file No Single Point of Failure* all hardware and software components may be redundant Parallel* data operations many clients can access many files concurrently Capacity can be dynamically expanded* by simply adding new servers/disks on the fly Retired hardware may be removed on the fly* Deleted files are retained for a configurable period of time (a file system level \"trash bin\"*) Coherent, \"atomic\" snapshots* of files, even while the files are being written/accessed Access to the file system can be limited* based on IP address and/or password (similarly as in NFS) Data tiering* supports different storage policies for different files/directories in Storage Classes mechanism Per-directory, \"project\" quotas* configurable per RAW space, usable space, number of inodes with hard and soft quotas support Apart from file system storage, MooseFS also provides block storage* (`mfsbdev`) Efficient, pure C* implementation Ethernet* support MooseFS can be installed on any POSIX compliant operating system including various Linux distributions, FreeBSD andmacOS: Ubuntu Debian RHEL / CentOS OpenSUSE FreeBSD macOS MooseFS Linux Client uses . MooseFS macOS Client uses . There is a separate MooseFS Client for Microsoft Windows available, built on top of . You can install MooseFS using your favourite package manager on one of the following platforms using : Ubuntu 16 / 18 / 20 / 22 Debian 8 / 9 / 10 / 11 RHEL / CentOS 7 / 8 / 9 FreeBSD 11 / 12 / 13 macOS 10.11+ Ubuntu 20 / 22 Raspberry Pi Debian 10 / 11 Raspberry Pi Packages for Ubuntu 14 and CentOS 6 are also available, but no longer supported. Minimal set of packages, which are needed to run MooseFS: `moosefs-master` MooseFS Master Server for metadata servers, `moosefs-chunkserver` MooseFS Chunkserver for data storage servers, `moosefs-client` MooseFS Client client side package to mount the filesystem. Feel free to download the source code from our GitHub code repository! Install the following dependencies before building MooseFS from sources: Debian/Ubuntu: `sudo apt install build-essential libpcap-dev zlib1g-dev libfuse3-dev pkg-config` (if you don't have FUSE v. 3 in your system, use `sudo apt install build-essential libpcap-dev zlib1g-dev libfuse-dev pkg-config`) CentOS/RHEL: `sudo yum install gcc make libpcap-devel zlib-devel fuse3-devel pkgconfig` (if you don't have FUSE v. 
3 in your system, use `sudo yum install gcc make libpcap-devel zlib-devel fuse-devel pkgconfig`) Recommended packages: Debian/Ubuntu: `sudo apt install fuse3` (if you don't have FUSE v. 3 in your system, use `sudo apt install fuse`) CentOS/RHEL: `sudo yum install fuse3` (if you don't have FUSE" }, { "data": "3 in your system, use `sudo yum install fuse`) Building MooseFS on Linux can be easily done by running `./linuxbuild.sh`. Similarly, use `./freebsdbuild.sh` in order to build MooseFS on FreeBSD and respectively `./macosx_build.sh` on macOS. Remember that these scripts do not install binaries (i.e. do not run `make install`) at the end. Run this command manually. Just three steps to have MooseFS up and running: Install `moosefs-master` package Prepare default config (as `root`): ``` cd /etc/mfs cp mfsmaster.cfg.sample mfsmaster.cfg cp mfsexports.cfg.sample mfsexports.cfg ``` Prepare the metadata file (as `root`): ``` cd /var/lib/mfs cp metadata.mfs.empty metadata.mfs chown mfs:mfs metadata.mfs rm metadata.mfs.empty ``` Run Master Server (as `root`): `mfsmaster start` Make this machine visible under `mfsmaster` name, e.g. by adding a DNS entry (recommended) or by adding it in `/etc/hosts` on all servers that run any of MooseFS components. Install `moosefs-chunkserver` package Prepare default config (as `root`): ``` cd /etc/mfs cp mfschunkserver.cfg.sample mfschunkserver.cfg cp mfshdd.cfg.sample mfshdd.cfg ``` At the end of `mfshdd.cfg` file make one or more entries containing paths to HDDs / partitions designated for storing chunks, e.g.: ``` /mnt/chunks1 /mnt/chunks2 /mnt/chunks3 ``` It is recommended to use XFS as an underlying filesystem for disks designated to store chunks. More than two Chunkservers are strongly recommended. Change the ownership and permissions to `mfs:mfs` to above mentioned locations: ``` chown mfs:mfs /mnt/chunks1 /mnt/chunks2 /mnt/chunks3 chmod 770 /mnt/chunks1 /mnt/chunks2 /mnt/chunks3 ``` Start the Chunkserver: `mfschunkserver start` Repeat steps above for second (third, ...) Chunkserver. Install `moosefs-client` package Mount MooseFS (as `root`): ``` mkdir /mnt/mfs mount -t moosefs mfsmaster: /mnt/mfs ``` or: `mfsmount -H mfsmaster /mnt/mfs` if the above method is not supported by your system You can also add an `/etc/fstab` entry to mount MooseFS during the system boot: ``` mfsmaster: /mnt/mfs moosefs defaults,mfsdelayedinit 0 0 ``` There are more configuration parameters available but most of them may stay with defaults. We do our best to keep MooseFS easy to deploy and maintain. MooseFS, for testing purposes, can even be installed on a single machine! Setting up `moosefs-cli` or `moosefs-cgi` with `moosefs-cgiserv` is also recommended it gives you a possibility to monitor the cluster online: Install `moosefs-cli moosefs-cgi moosefs-cgiserv` packages (they are typically set up on the Master Server) Run MooseFS CGI Server (as `root`): `mfscgiserv start` Open http://mfsmaster:9425 in your web browser It is also strongly recommended to set up at least one Metalogger on a different machine than Master Server (e.g. on one of Chunkservers). Metalogger constantly synchronizes and backups the metadata: Install `moosefs-metalogger` package Prepare default config (as `root`): ``` cd /etc/mfs cp mfsmetalogger.cfg.sample mfsmetalogger.cfg ``` Run Metalogger (as `root`): `mfsmetalogger start` Refer to for more details. 
Date of the first public release: 2008-05-30 The project web site: https://moosefs.com Installation and using MooseFS: https://moosefs.com/support (Old) Sourceforge project site: http://sourceforge.net/projects/moosefs Reporting bugs: or General: Copyright (c) 2008-2023 Jakub Kruszona-Zawadzki, Saglabs SA This file is part of MooseFS. MooseFS is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 2 (only). MooseFS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with MooseFS; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02111-1301, USA or visit http://www.gnu.org/licenses/gpl-2.0.html." } ]
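The per-file replication goal and the trash-bin retention mentioned above are controlled with the MooseFS client-side tools on a mounted filesystem. As a small, hedged illustration (the directory path is only an example; consult the `mfssetgoal`/`mfssettrashtime` man pages shipped with your MooseFS version for exact options):

```console
# Keep 3 copies of everything under this (example) directory, recursively:
mfssetgoal -r 3 /mnt/mfs/important
mfsgetgoal /mnt/mfs/important

# Keep deleted files from this subtree in the trash bin for 24 hours (86400 s):
mfssettrashtime -r 86400 /mnt/mfs/important
mfsgettrashtime /mnt/mfs/important
```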
{ "category": "Runtime", "file_name": "README.md", "project_name": "MooseFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Key Management System <!-- markdownlint-disable MD024 --> <!-- allow duplicate headers in this file --> Rook has the ability to encrypt OSDs of clusters running on PVC via the flag (`encrypted: true`) in your `storageClassDeviceSets` . Rook also has the ability to rotate encryption keys of OSDs using a cron job per OSD. By default, the Key Encryption Keys (also known as Data Encryption Keys) are stored in a Kubernetes Secret. However, if a Key Management System exists Rook is capable of using it. The `security` section contains settings related to encryption of the cluster. `security`: `kms`: Key Management System settings `connectionDetails`: the list of parameters representing kms connection details `tokenSecretName`: the name of the Kubernetes Secret containing the kms authentication token `keyRotation`: Key Rotation settings `enabled`: whether key rotation is enabled or not, default is `false` `schedule`: the schedule, written in , with which key rotation is created, default value is `\"@weekly\"`. !!! note Currently key rotation is only supported for the default type, where the Key Encryption Keys are stored in a Kubernetes Secret. Supported KMS providers: - - - - - - Rook supports storing OSD encryption keys in . Rook support two authentication methods: : a token is provided by the user and is stored in a Kubernetes Secret. It's used to authenticate the KMS by the Rook operator. This has several pitfalls such as: when the token expires it must be renewed, so the secret holding it must be updated no token automatic rotation uses [Vault Kubernetes native authentication](https://www.vaultproject.io/docs/auth/kubernetes) mechanism and alleviate some of the limitations from the token authentication such as token automatic renewal. This method is generally recommended over the token-based authentication. When using the token-based authentication, a Kubernetes Secret must be created to hold the token. This is governed by the `tokenSecretName` parameter. Note: Rook supports all the Vault . The Kubernetes Secret `rook-vault-token` should contain: ```yaml apiVersion: v1 kind: Secret metadata: name: rook-vault-token namespace: rook-ceph data: token: <TOKEN> # base64 of a token to connect to Vault, for example: cy5GWXpsbzAyY2duVGVoRjhkWG5Bb3EyWjkK ``` You can create a token in Vault by running the following command: ```console vault token create -policy=rook ``` Refer to the official vault document for more details on [how to create a token](https://www.vaultproject.io/docs/commands/token/create). For which policy to apply see the next section. 
In order for Rook to connect to Vault, you must configure the following in your `CephCluster` template: ```yaml security: kms: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR:" }, { "data": "VAULTBACKENDPATH: rook VAULTSECRETENGINE: kv VAULTAUTHMETHOD: token tokenSecretName: rook-vault-token ``` In order to use the Kubernetes Service Account authentication method, the following must be run to properly configure Vault: ```console ROOK_NAMESPACE=rook-ceph ROOKVAULTSA=rook-vault-auth ROOKSYSTEMSA=rook-ceph-system ROOKOSDSA=rook-ceph-osd VAULTPOLICYNAME=rook kubectl -n \"$ROOKNAMESPACE\" create serviceaccount \"$ROOKVAULT_SA\" kubectl -n \"$ROOKNAMESPACE\" create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=\"$ROOKNAMESPACE\":\"$ROOKVAULTSA\" VAULTSASECRETNAME=$(kubectl -n \"$ROOKNAMESPACE\" get sa \"$ROOKVAULTSA\" -o jsonpath=\"{.secrets}\") SAJWTTOKEN=$(kubectl -n \"$ROOKNAMESPACE\" get secret \"$VAULTSASECRETNAME\" -o jsonpath=\"{.data.token}\" | base64 --decode) SACACRT=$(kubectl -n \"$ROOKNAMESPACE\" get secret \"$VAULTSASECRETNAME\" -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode) K8S_HOST=$(kubectl config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\") vault auth enable kubernetes kubectl proxy & proxy_pid=$! vault write auth/kubernetes/config \\ tokenreviewerjwt=\"$SAJWTTOKEN\" \\ kuberneteshost=\"$K8SHOST\" \\ kubernetescacert=\"$SACACRT\" \\ issuer=\"$(curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill $proxy_pid vault write auth/kubernetes/role/\"$ROOK_NAMESPACE\" \\ boundserviceaccountnames=\"$ROOKSYSTEMSA\",\"$ROOKOSD_SA\" \\ boundserviceaccountnamespaces=\"$ROOKNAMESPACE\" \\ policies=\"$VAULTPOLICYNAME\" \\ ttl=1440h ``` Once done, your `CephCluster` CR should look like: ```yaml security: kms: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR: https://vault.default.svc.cluster.local:8200 VAULTBACKENDPATH: rook VAULTSECRETENGINE: kv VAULTAUTHMETHOD: kubernetes VAULTAUTHKUBERNETES_ROLE: rook-ceph ``` !!! note The `VAULT_ADDR` value above assumes that Vault is accessible within the cluster itself on the default port (8200). If running elsewhere, please update the URL accordingly. As part of the token, here is an example of a policy that can be used: ```hcl path \"rook/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] } ``` You can write the policy like so and then create a token: ```console $ vault policy write rook /tmp/rook.hcl $ vault token create -policy=rook Key Value -- token s.FYzlo02cgnTehF8dXnAoq2Z9 token_accessor oMo7sAXQKbYtxU4HtO8k3pko token_duration 768h token_renewable true token_policies [\"default\" \"rook\"] identity_policies [] policies [\"default\" \"rook\"] ``` In the above example, Vault's secret backend path name is `rook`. It must be enabled with the following: ```console vault secrets enable -path=rook kv ``` If a different path is used, the `VAULTBACKENDPATH` key in `connectionDetails` must be changed. 
This is an advanced but recommended configuration for production deployments, in this case the `vault-connection-details` will look like: ```yaml security: kms: connectionDetails: KMS_PROVIDER: vault VAULT_ADDR: https://vault.default.svc.cluster.local:8200 VAULT_CACERT: <name of the k8s secret containing the PEM-encoded CA certificate> VAULTCLIENTCERT: <name of the k8s secret containing the PEM-encoded client certificate> VAULTCLIENTKEY: <name of the k8s secret containing the PEM-encoded private key> tokenSecretName: rook-vault-token ``` Each secret keys are expected to be: VAULT_CACERT: `cert` VAULTCLIENTCERT: `cert` VAULTCLIENTKEY: `key` For instance `VAULT_CACERT` Secret named `vault-tls-ca-certificate` will look like: ```yaml apiVersion: v1 kind: Secret metadata: name: vault-tls-ca-certificate namespace: rook-ceph data: cert: <PEM base64 encoded CA certificate> ``` Note: if you are using self-signed certificates (not known/approved by a proper CA) you must pass `VAULTSKIPVERIFY: true`. Communications will remain encrypted but the validity of the certificate will not be verified. Rook supports storing OSD encryption keys in [IBM Key Protect](https://www.ibm.com/cloud/key-protect). The current implementation stores OSD encryption keys as using the [Bring Your Own Key](https://cloud.ibm.com/docs/key-protect?topic=key-protect-importing-keys) (BYOK) method. This means that the Key Protect instance policy must have Standard Imported Key enabled. First, you need to on the IBM Cloud. Once completed," }, { "data": "Make a record of it; we need it in the CRD. On the IBM Cloud, the user must create a Service ID, then assign an Access Policy to this service. Ultimately, a Service API Key needs to be generated. All the steps are summarized in the [official documentation](https://www.ibm.com/docs/en/cloud-private/3.2.0?topic=dg-creating-service-id-by-using-cloud-private-management-console). The Service ID must be granted access to the Key Protect Service. Once the Service API Key is generated, store it in a Kubernetes Secret. ```yaml apiVersion: v1 kind: Secret metadata: name: ibm-kp-svc-api-key namespace: rook-ceph data: IBMKPSERVICEAPIKEY: <service API Key> ``` In order for Rook to connect to IBM Key Protect, you must configure the following in your `CephCluster` template: ```yaml security: kms: connectionDetails: KMS_PROVIDER: ibmkeyprotect IBMKPSERVICEINSTANCEID: <instance ID that was retrieved in the first paragraph> tokenSecretName: ibm-kp-svc-api-key ``` More options are supported such as: `IBMBASEURL`: the base URL of the Key Protect instance, depending on your . Defaults to `https://us-south.kms.cloud.ibm.com`. `IBMTOKENURL`: the URL of the Key Protect instance to retrieve the token. Defaults to `https://iam.cloud.ibm.com/oidc/token`. Only needed for private instances. Rook supports storing OSD encryption keys in supported KMS. The current implementation stores OSD encryption keys using the operation. Key is fetched and deleted using and operations respectively. The Secret with credentials for the KMIP KMS is expected to contain the following. 
```yaml apiVersion: v1 kind: Secret metadata: name: kmip-credentials namespace: rook-ceph stringData: CA_CERT: <ca certificate> CLIENT_CERT: <client certificate> CLIENT_KEY: <client key> ``` In order for Rook to connect to KMIP, you must configure the following in your `CephCluster` template: ```yaml security: kms: connectionDetails: KMS_PROVIDER: kmip KMIP_ENDPOINT: <KMIP endpoint address> TLSSERVERNAME: <tls server name> READ_TIMEOUT: <read timeout> WRITE_TIMEOUT: <write timeout> tokenSecretName: kmip-credentials ``` Rook supports storing OSD encryption keys in Different methods are available in Azure to authenticate a client. Rook supports Azure recommended method of authentication with Service Principal and a certificate. Refer the following Azure documentation to set up key vault and authenticate it via service principal and certtificate * `AZUREVAULTURL` can be retrieved at this step * `AZURECLIENTID` and `AZURETENANTID` can be obtained after creating the service principal Ensure that the service principal is authenticated with a certificate and not with a client secret. * Ensure that the role assigned to the key vault should be able to create, retrieve and delete secrets in the key vault. Provide the following KMS connection details in order to connect with Azure Key Vault. ```yaml security: kms: connectionDetails: KMS_PROVIDER: azure-kv AZUREVAULTURL: https://<key-vault name>.vault.azure.net AZURECLIENTID: Application ID of an Azure service principal AZURETENANTID: ID of the application's Microsoft Entra tenant AZURECERTSECRET_NAME: <name of the k8s secret containing the certificate along with the private key (without password protection)> ``` `AZURECERTSECRET_NAME` should hold the name of the k8s secret. The secret data should be base64 encoded certificate along with private key (without password protection)" } ]
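For the KMIP credentials Secret shown earlier in this section, an imperative alternative to the YAML manifest is sketched below. The file paths are placeholders; the Secret name and the `CA_CERT`, `CLIENT_CERT`, and `CLIENT_KEY` keys match the example above.

```console
kubectl -n rook-ceph create secret generic kmip-credentials \
  --from-file=CA_CERT=./kmip-ca.pem \
  --from-file=CLIENT_CERT=./kmip-client.pem \
  --from-file=CLIENT_KEY=./kmip-client-key.pem
```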
{ "category": "Runtime", "file_name": "key-management-system.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Templates define a single application configuration template. Templates are stored under the `/etc/confd/templates` directory by default. Templates are written in Go's . creates a key-value map of string -> interface{} ``` {{$endpoint := map \"name\" \"elasticsearch\" \"privateport\" 9200 \"publicport\" 443}} name: {{index $endpoint \"name\"}} private-port: {{index $endpoint \"private_port\"}} public-port: {{index $endpoint \"public_port\"}} ``` specifically useful if you a sub-template and you want to pass multiple values to it. Alias for the function. ``` {{with get \"/key\"}} key: {{base .Key}} value: {{.Value}} {{end}} ``` Checks if the key exists. Return false if key is not found. ``` {{if exists \"/key\"}} value: {{getv \"/key\"}} {{end}} ``` Returns the KVPair where key matches its argument. Returns an error if key is not found. ``` {{with get \"/key\"}} key: {{.Key}} value: {{.Value}} {{end}} ``` Returns all KVPair, []KVPair, where key matches its argument. Returns an error if key is not found. ``` {{range gets \"/*\"}} key: {{.Key}} value: {{.Value}} {{end}} ``` Returns the value as a string where key matches its argument or an optional default value. Returns an error if key is not found and no default value given. ``` value: {{getv \"/key\"}} ``` ``` value: {{getv \"/key\" \"default_value\"}} ``` Returns all values, []string, where key matches its argument. Returns an error if key is not found. ``` {{range getvs \"/*\"}} value: {{.}} {{end}} ``` Wrapper for . Retrieves the value of the environment variable named by the key. It returns the value, which will be empty if the variable is not present. Optionally, you can give a default value that will be returned if the key is not present. ``` export HOSTNAME=`hostname` ``` ``` hostname: {{getenv \"HOSTNAME\"}} ``` ``` ipaddr: {{getenv \"HOST_IP\" \"127.0.0.1\"}} ``` Alias for ``` ``` Outputs: ``` ``` ``` ``` Outputs: ``` ``` See the time package for more usage: http://golang.org/pkg/time/ Wrapper for . Splits the input string on the separating string and returns a slice of substrings. ``` {{ $url := split (getv \"/deis/service\") \":\" }} host: {{index $url 0}} port: {{index $url 1}} ``` Alias for Returns uppercased string. ``` key: {{toUpper \"value\"}} ``` Alias for . Returns lowercased string. ``` key: {{toLower \"Value\"}} ``` Returns an map[string]interface{} of the json value. Wrapper for . The wrapper also sorts the SRV records alphabetically by combining all the fields of the net.SRV struct to reduce unnecessary config reloads. ``` {{range lookupSRV \"mail\" \"tcp\" \"example.com\"}} target: {{.Target}} port: {{.Port}} priority: {{.Priority}} weight: {{.Weight}} {{end}} ``` Returns a base64 encoded string of the value. ``` key: {{base64Encode \"Value\"}} ``` Returns the string representing the decoded base64 value. 
``` key: {{base64Decode \"VmFsdWU=\"}} ``` ``` etcdctl set /services/zookeeper/host1 '{\"Id\":\"host1\", \"IP\":\"192.168.10.11\"}' etcdctl set /services/zookeeper/host2 '{\"Id\":\"host2\", \"IP\":\"192.168.10.12\"}' ``` ``` [template] src = \"services.conf.tmpl\" dest = \"/tmp/services.conf\" keys = [ \"/services/zookeeper/\" ] ``` ``` {{range gets \"/services/zookeeper/*\"}} {{$data := json .Value}} id: {{$data.Id}} ip: {{$data.IP}} {{end}} ``` Once you have parsed the JSON, it is possible to traverse it with normal Go template functions such as" }, { "data": "A more advanced structure, like this: ``` { \"animals\": [ {\"type\": \"dog\", \"name\": \"Fido\"}, {\"type\": \"cat\", \"name\": \"Misse\"} ] } ``` It can be traversed like this: ``` {{$data := json (getv \"/test/data/\")}} type: {{ (index $data.animals 1).type }} name: {{ (index $data.animals 1).name }} {{range $data.animals}} {{.name}} {{end}} ``` Returns a []interface{} from a json array such as `[\"a\", \"b\", \"c\"]`. ``` {{range jsonArray (getv \"/services/data/\")}} val: {{.}} {{end}} ``` Returns all subkeys, []string, where path matches its argument. Returns an empty list if path is not found. ``` {{range ls \"/deis/services\"}} value: {{.}} {{end}} ``` Returns all subkeys, []string, where path matches its argument. It only returns subkeys that also have subkeys. Returns an empty list if path is not found. ``` {{range lsdir \"/deis/services\"}} value: {{.}} {{end}} ``` Returns the parent directory of a given key. ``` {{with dir \"/services/data/url\"}} dir: {{.}} {{end}} ``` Alias for the function. ``` {{$services := getvs \"/services/elasticsearch/*\"}} services: {{join $services \",\"}} ``` Alias for the function. ``` {{$backend := getv \"/services/backend/nginx\"}} backend = {{replace $backend \"-\" \"_\" -1}} ``` Wrapper for function. The wrapper also sorts (alphabetically) the IP addresses. This is crucial since in dynamic environments DNS servers typically shuffle the addresses linked to domain name. And that would cause unnecessary config reloads. 
``` {{range lookupIP \"some.host.local\"}} server {{.}}; {{end}} ``` ```Bash etcdctl set /nginx/domain 'example.com' etcdctl set /nginx/root '/var/www/example_dotcom' etcdctl set /nginx/worker_processes '2' etcdctl set /app/upstream/app1 \"10.0.1.100:80\" etcdctl set /app/upstream/app2 \"10.0.1.101:80\" ``` `/etc/confd/templates/nginx.conf.tmpl` ```Text workerprocesses {{getv \"/nginx/workerprocesses\"}}; upstream app { {{range getvs \"/app/upstream/*\"}} server {{.}}; {{end}} } server { listen 80; server_name www.{{getv \"/nginx/domain\"}}; access_log /var/log/nginx/{{getv \"/nginx/domain\"}}.access.log; error_log /var/log/nginx/{{getv \"/nginx/domain\"}}.log; location / { root {{getv \"/nginx/root\"}}; index index.html index.htm; proxy_pass http://app; proxy_redirect off; proxysetheader Host $host; proxysetheader X-Real-IP $remote_addr; proxysetheader X-Forwarded-For $proxyaddxforwardedfor; } } ``` Output: `/etc/nginx/nginx.conf` ```Text worker_processes 2; upstream app { server 10.0.1.100:80; server 10.0.1.101:80; } server { listen 80; server_name www.example.com; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; location / { root /var/www/example_dotcom; index index.html index.htm; proxy_pass http://app; proxy_redirect off; proxysetheader Host $host; proxysetheader X-Real-IP $remote_addr; proxysetheader X-Forwarded-For $proxyaddxforwardedfor; } } ``` This examples show how to use a combination of the templates functions to do nested iteration. ``` etcdctl mkdir /services/web/cust1/ etcdctl mkdir /services/web/cust2/ etcdctl set /services/web/cust1/2 '{\"IP\": \"10.0.0.2\"}' etcdctl set /services/web/cust2/2 '{\"IP\": \"10.0.0.4\"}' etcdctl set /services/web/cust2/1 '{\"IP\": \"10.0.0.3\"}' etcdctl set /services/web/cust1/1 '{\"IP\": \"10.0.0.1\"}' ``` ``` [template] src = \"services.conf.tmpl\" dest = \"/tmp/services.conf\" keys = [ \"/services/web\" ] ``` ``` {{range $dir := lsdir \"/services/web\"}} upstream {{base $dir}} { {{$custdir := printf \"/services/web/%s/*\" $dir}}{{range gets $custdir}} server {{$data := json .Value}}{{$data.IP}}:80; {{end}} } server { server_name {{base $dir}}.example.com; location / { proxy_pass {{base $dir}}; } } {{end}} ``` Output:`/tmp/services.conf` ```Text upstream cust1 { server 10.0.0.1:80; server 10.0.0.2:80; } server { server_name cust1.example.com; location / { proxy_pass cust1; } } upstream cust2 { server 10.0.0.3:80; server 10.0.0.4:80; } server { server_name cust2.example.com; location / { proxy_pass cust2; } } ``` Go's package is very powerful. For more details on it's capabilities see its" } ]
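To verify that a template such as the nginx example above renders as expected, confd can be run once against the backend instead of as a daemon. The backend address and interval below are examples only; adjust them to your etcd endpoint.

```console
# Render all template resources a single time and exit:
confd -onetime -backend etcd -node http://127.0.0.1:2379

# Or keep confd running and re-check the watched keys every 30 seconds:
confd -interval 30 -backend etcd -node http://127.0.0.1:2379
```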
{ "category": "Runtime", "file_name": "templates.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "Support KubeEdge/SuperEdge/Openyurt Automatic management of node certificate Air-gap installation Support Flannel/Calico Support IPV4 Support IPSec Tunnel Support Edge Cluster Support topology-aware service discovery Support IPV6 Implement a flexiable way to configure fabedge-agent Support auto networking of edge nodes in LAN Change the naming strategy of fabedge-agent pods Add commonName validation for fabedge-agent certificates Implement node-specific configuration of fabedge-agent arguments Let fabedge-agent configure sysctl parameters needed Let fabedge-operator manage calico ippools for CIDRs Support settings strongswan's port Support strongswan hole punching Release fabctl which is a CLI to help diagnosing networking problems; Integerate fabedge-agent with coredns and kube-proxy Implement connector high availability Improve dual stack implementation Improve iptables rule configuring (ensure the order of rules)" } ]
{ "category": "Runtime", "file_name": "roadmap.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "English Spiderpool reserve some IP addresses for the whole Kubernetes cluster through the ReservedIP CR, ensuring that these addresses are not allocated by IPAM. To avoid IP conflicts when it is known that an IP address is being used externally to the cluster, it can be a time-consuming and labor-intensive task to remove that IP address from existing IPPool instances. Furthermore, network administrators want to ensure that this IP address is not allocated from any current or future IPPool resources. To address these concerns, the ReservedIP CR allows for the specification of IP addresses that should not be utilized by the cluster. Even if an IPPool instance includes those IP addresses, the IPAM plugin will refrain from assigning them to Pods. The IP addresses specified in the ReservedIP CR serve two purposes: Clearly identify those IP addresses already in use by hosts outside the cluster. Explicitly prevent the utilization of those IP addresses for network communication, such as subnet IPs or broadcast IPs. A ready Kubernetes kubernetes. has been already installed. Refer to to install Spiderpool. To simplify the creation of JSON-formatted Multus CNI configurations, Spiderpool introduces the SpiderMultusConfig CR, which automates the management of Multus NetworkAttachmentDefinition CRs. Here is an example of creating a Macvlan SpiderMultusConfig: masterthe interface `ens192` is used as the spec for master. ```bash MACVLANMASTERINTERFACE=\"ens192\" MACVLANMULTUSNAME=\"macvlan-$MACVLANMASTERINTERFACE\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${MACVLANMULTUSNAME} namespace: kube-system spec: cniType: macvlan enableCoordinator: true macvlan: master: ${MACVLANMASTERINTERFACE} EOF ``` With the provided configuration, we create a Macvlan SpiderMultusConfig that will automatically generate the corresponding Multus NetworkAttachmentDefinition CR. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE macvlan-ens192 26m ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system NAME AGE macvlan-ens192 27m ``` To create reserved IPs, use the following YAML to specify `spec.ips` as `10.6.168.131-10.6.168.132`: ```bash cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderReservedIP metadata: name: test-reservedip spec: ips: 10.6.168.131-10.6.168.132 EOF ``` Create an IP pool with `spec.ips` set to `10.6.168.131-10.6.168.133`, containing a total of 3 IP addresses. However, given the previously created reserved IPs, only 1 IP address is available in this IP pool. 
```bash cat <<EOF | kubectl apply -f - apiVersion:" }, { "data": "kind: SpiderIPPool metadata: name: test-ippool spec: subnet: 10.6.0.0/16 ips: 10.6.168.131-10.6.168.133 EOF ``` To allocate IP addresses from this IP pool, use the following YAML to create a Deployment with 2 replicas: `ipam.spidernet.io/ippool`: specify the IP pool for assigning IP addresses to the application ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 2 selector: matchLabels: app: test-app template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"test-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-ens192 labels: app: test-app spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP EOF ``` Because both IP addresses in the IP pool are reserved by the ReservedIP CR, only one IP address is available in the pool. This means that only one Pod of the application can run successfully, while the other Pod fails to create due to the \"all IPs have been exhausted\" error. ```bash ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-67dd9f645-dv8xz 1/1 Running 0 17s 10.6.168.133 node2 <none> <none> test-app-67dd9f645-lpjgs 0/1 ContainerCreating 0 17s <none> node1 <none> <none> ``` If a Pod of the application already has been assigned a reserved IP, adding that IP address to the ReservedIP CR will result in the replica failing to run after restarting. Use the following command to add the Pod's allocated IP address to the ReservedIP CR, and then restart the Pod. As expected, the Pod will fail to start due to the \"all IPs have been exhausted\" error. ```bash ~# kubectl patch spiderreservedip test-reservedip --patch '{\"spec\":{\"ips\":[\"10.6.168.131-10.6.168.133\"]}}' --type=merge pod \"test-app-67dd9f645-dv8xz\" deleted ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-67dd9f645-fvx4m 0/1 ContainerCreating 0 9s <none> node2 <none> <none> test-app-67dd9f645-lpjgs 0/1 ContainerCreating 0 2m18s <none> node1 <none> <none> ``` Once the reserved IP is removed, the Pod can obtain an IP address and run successfully. ```bash ~# kubectl delete sr test-reservedip spiderreservedip.spiderpool.spidernet.io \"test-reservedip\" deleted ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-67dd9f645-fvx4m 1/1 Running 0 4m23s 10.6.168.133 node2 <none> <none> test-app-67dd9f645-lpjgs 1/1 Running 0 6m14s 10.6.168.131 node1 <none> <none> ``` SpiderReservedIP simplifies network planning for infrastructure administrators." } ]
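Before creating the workload, the effect of the reserved IPs can be sanity-checked by inspecting the two CRs directly. Output columns differ between Spiderpool versions, so none are shown here; the point is simply that the pool's free-IP count already reflects the ReservedIP created above.

```console
kubectl get spiderreservedip test-reservedip
kubectl get spiderippool test-ippool
```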
{ "category": "Runtime", "file_name": "reserved-ip.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "[TOC] This guide describes how to obtain Prometheus monitoring data from gVisor sandboxes running with `runsc`. NOTE: These metrics are mostly information about gVisor internals, and do not provide introspection capabilities into the workload being sandboxed. If you would like to monitor the sandboxed workload (e.g. for threat detection), refer to . `runsc` implements a HTTP metric server using the `runsc metric-server` subcommand. This server is meant to run unsandboxed as a sidecar process of your container runtime (e.g. Docker). You can export metric information from running sandboxes using the `runsc export-metrics` subcommand. This does not require special configuration or setting up a Prometheus server. ``` $ docker run -d --runtime=runsc --name=foobar debian sleep 1h c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed $ sudo /path/to/runsc --root=/var/run/docker/runtime-runc/moby export-metrics c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed Command-line export for sandbox c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed Writing data from snapshot containing 175 data points taken at 2023-01-25 15:46:50.469403696 -0800 PST. HELP runscfsopens Number of file opens. TYPE runscfsopens counter runscfsopens{sandbox=\"c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed\"} 62 1674690410469 HELP runscfsread_wait Time waiting on file reads, in nanoseconds. TYPE runscfsread_wait counter runscfsread_wait{sandbox=\"c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed\"} 0 1674690410469 HELP runscfsreads Number of file reads. TYPE runscfsreads counter runscfsreads{sandbox=\"c7ce77796e0ece4c0881fb26261608552ea4a67b2fe5934658b8b4433e5190ed\"} 54 1674690410469 [...] ``` Use the `runsc metric-server` subcommand: ```shell $ sudo runsc \\ --root=/var/run/docker/runtime-runc/moby \\ --metric-server=localhost:1337 \\ metric-server ``` `--root` needs to be set to the OCI runtime root directory that your runtime implementation uses. For Docker, this is typically `/var/run/docker/runtime-runc/moby`; otherwise, if you already have gVisor set up, you can use `ps aux | grep runsc` on the host to find the `--root` that a running sandbox is using. This directory is typically only accessible by the user Docker runs as (usually `root`), hence `sudo`. The metric server uses the `--root` directory to scan for sandboxes running on the system. The `--metric-server` flag is the network address or UDS path to bind to. In this example, this will create a server bound on all interfaces on TCP port `1337`. To listen on `lo` only, you could alternatively use `--metric-server=localhost:1337`. If something goes wrong, you may also want to add `--debug --debug-log=/dev/stderr` to understand the metric server's behavior. You can query the metric server with `curl`: ``` $ curl http://localhost:1337/metrics Data for runsc metric server exporting data for sandboxes in root directory /var/run/docker/runtime-runc/moby [...] HELP processstarttime_seconds Unix timestamp at which the process started. Used by Prometheus for counter resets. TYPE processstarttime_seconds gauge processstarttime_seconds 1674598082.698509 1674598109532 End of metric data. ``` Sandbox metrics are disabled by default. To enable, add the flag `--metric-server={ADDRESS}:{PORT}` to the runtime configuration. 
With Docker, this can be set in `/etc/docker/daemon.json` like so: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/path/to/runsc\", \"runtimeArgs\": [ \"--metric-server=localhost:1337\" ] } } } ``` NOTE: The `--metric-server` flag value must be an exact string match between the runtime configuration and the `runsc metric-server` command. Once you've done this, you can start a container and see that it shows up in the list of Prometheus metrics. ``` $ docker run -d --runtime=runsc --name=foobar debian sleep 1h 32beefcafe $ curl http://localhost:1337/metrics Data for runsc metric server exporting data for sandboxes in root directory /var/run/docker/runtime-runc/moby Writing data from 3 snapshots: [...] HELP processstarttime_seconds Unix timestamp at which the process" }, { "data": "Used by Prometheus for counter resets. TYPE processstarttime_seconds gauge processstarttime_seconds 1674599158.286067 1674599159819 HELP runscfsopens Number of file opens. TYPE runscfsopens counter runscfsopens{iteration=\"42asdf\",sandbox=\"32beefcafe\"} 12 1674599159819 HELP runscfsread_wait Time waiting on file reads, in nanoseconds. TYPE runscfsread_wait counter runscfsread_wait{iteration=\"42asdf\",sandbox=\"32beefcafe\"} 0 1674599159819 [...] End of metric data. ``` Each per-container metric is labeled with at least: `sandbox`: The container ID, in this case `32beefcafe` `iteration`: A randomly-generated string (in this case `42asdf`) that stays constant for the lifetime of the sandbox. This helps distinguish between successive instances of the same sandbox with the same ID. If you'd like to run some containers with metrics turned off and some on within the same system, use two runtime entries in `/etc/docker/daemon.json` with only one of them having the `--metric-server` flag set. The metric server exposes a on the address given by the `--metric-server` flag passed to `runsc metric-server`. Simply point Prometheus at this address. If desired, you can change the (prefix applied to all metric names) using the `--exporter-prefix` flag. It defaults to `runsc_`. The sandbox metrics exported may be filtered by using the optional `GET` parameter `runsc-sandbox-metrics-filter`, e.g. `/metrics?runsc-sandbox-metrics-filter=fs_.*`. Metric names must fully match this regular expression. Note that this filtering is performed before prepending `--exporter-prefix` to metric names. The metric server also supports listening on a . This can be convenient to avoid reserving port numbers on the machine's network interface, or for tighter control over who can read the data. Clients should talk HTTP over this UDS. While Prometheus doesn't natively support reading metrics from a UDS, this feature can be used in conjunction with a tool such as within another context (e.g. a tightly-managed network namespace that Prometheus runs in). ``` $ sudo runsc --root=/var/run/docker/runtime-runc/moby --metric-server=/run/docker/runsc-metrics.sock metric-server & $ sudo curl --unix-socket /run/docker/runsc-metrics.sock http://runsc-metrics/metrics Data for runsc metric server exporting data for sandboxes in root directory /var/run/docker/runtime-runc/moby [...] End of metric data. 
Set up socat to forward requests from *:1337 to /run/docker/runsc-metrics.sock in its own network namespace: $ sudo unshare --net socat TCP-LISTEN:1337,reuseaddr,fork UNIX-CONNECT:/run/docker/runsc-metrics.sock & Set up basic networking for socat's network namespace: $ sudo nsenter --net=\"/proc/$(pidof socat)/ns/net\" sh -c 'ip link set lo up && ip route add default dev lo' Grab metric data from this namespace: $ sudo nsenter --net=\"/proc/$(pidof socat)/ns/net\" curl http://localhost:1337/metrics Data for runsc metric server exporting data for sandboxes in root directory /var/run/docker/runtime-runc/moby [...] End of metric data. ``` If you would like to run the metric server in a gVisor sandbox, you may do so, provided that you give it access to the OCI runtime root directory, forward the network port it binds to for external access, and enable host UDS support. WARNING: Doing this does not provide you the full security of gVisor, as it still grants the metric server full control over all running gVisor sandboxes on the system. This step is only a defense-in-depth measure. To do this, add a runtime with the `--host-uds=all` flag to `/etc/docker/daemon.json`. The metric server needs the ability to open existing UDSs (in order to communicate with running sandboxes), and to create new UDSs (in order to create and listen on" }, { "data": "```json { \"runtimes\": { \"runsc\": { \"path\": \"/path/to/runsc\", \"runtimeArgs\": [ \"--metric-server=/run/docker/runsc-metrics.sock\" ] }, \"runsc-metric-server\": { \"path\": \"/path/to/runsc\", \"runtimeArgs\": [ \"--metric-server=/run/docker/runsc-metrics.sock\", \"--host-uds=all\" ] } } } ``` Then start the metric server with this runtime, passing through the directories containing the control files `runsc` uses to detect and communicate with running sandboxes: ```shell $ docker run -d --runtime=runsc-metric-server --name=runsc-metric-server \\ --volume=\"$(which runsc):/runsc:ro\" \\ --volume=/var/run/docker/runtime-runc/moby:/var/run/docker/runtime-runc/moby \\ --volume=/run/docker:/run/docker \\ --volume=/var/run:/var/run \\ alpine \\ /runsc \\ --root=/var/run/docker/runtime-runc/moby \\ --metric-server=/run/docker/runsc-metrics.sock \\ --debug --debug-log=/dev/stderr \\ metric-server ``` Yes, this means the metric server will report data about its own sandbox: ``` $ metricserverid=\"$(docker inspect --format='{{.ID}}' runsc-metric-server)\" $ sudo curl --unix-socket /run/docker/runsc-metrics.sock http://runsc-metrics/metrics | grep \"$metricserverid\" Snapshot with 175 data points taken at 2023-01-25 15:45:33.70256855 -0800 -0800: map[iteration:2407456650315156914 sandbox:737ce142058561d764ad870d028130a29944821dd918c7979351b249d5d30481] runscfsopens{iteration=\"2407456650315156914\",sandbox=\"737ce142058561d764ad870d028130a29944821dd918c7979351b249d5d30481\"} 54 1674690333702 runscfsread_wait{iteration=\"2407456650315156914\",sandbox=\"737ce142058561d764ad870d028130a29944821dd918c7979351b249d5d30481\"} 0 1674690333702 runscfsreads{iteration=\"2407456650315156914\",sandbox=\"737ce142058561d764ad870d028130a29944821dd918c7979351b249d5d30481\"} 52 1674690333702 [...] ``` When using Kubernetes, users typically deal with pod names and container names. On Kubelet machines, the underlying container names passed to the runtime are non-human-friendly hexadecimal strings. 
In order to provide more user-friendly labels, the metric server will pick up the `io.kubernetes.cri.sandbox-name` and `io.kubernetes.cri.sandbox-namespace` annotations provided by `containerd`, and automatically add these as labels (`podname` and `namespacename` respectively) for each per-sandbox metric. The metric server exports a lot of gVisor-internal metrics, and generates its own metrics as well. All metrics have documentation and type annotations in the `/metrics` output, and this section aims to document some useful ones. `processstarttime_seconds`: Unix timestamp representing the time at which the metric server started. This specific metric name is used by Prometheus, and as such its name is not affected by the `--exporter-prefix` flag. This metric is process-wide and has no labels. `numsandboxestotal`: A process-wide metric representing the total number of sandboxes that the metric server knows about. `numsandboxesrunning`: A process-wide metric representing the number of running sandboxes that the metric server knows about. `numsandboxesbroken_metrics`: A process-wide metric representing the number of sandboxes from which the metric server could not get metric data. `sandbox_presence`: A per-sandbox metric that is set to `1` for each sandbox that the metric server knows about. This can be used to join with other per-sandbox or per-pod metrics for which metric existence is not guaranteed. `sandbox_running`: A per-sandbox metric that is set to `1` for each sandbox that the metric server knows about and that is actively running. This can be used in conjunction with `sandbox_presence` to determine the set of sandboxes that aren't running; useful if you want to alert about sandboxes that are down. `sandbox_metadata`: A per-sandbox metric that carries a superset of the typical per-sandbox labels found on other per-sandbox metrics. These extra labels contain useful metadata about the sandbox, such as the version number, , and being used. `sandbox_capabilities`: A per-sandbox, per-capability metric that carries the union of all capabilities present on at least one container of the sandbox. Can optionally be filtered to only a subset of capabilities using the `runsc-capability-filter` GET parameter on `/metrics` requests (regular expression). Useful for auditing and aggregating the capabilities you rely on across multiple sandboxes. `sandboxcreationtime_seconds`: A per-sandbox Unix timestamp representing the time at which this sandbox was created." } ]
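Putting the pieces above together, the metric server can be queried ad hoc to check the liveness and metadata metrics, or scraped with the documented filter parameter so only a subset of sandbox metrics is returned. The address below assumes the `--metric-server=localhost:1337` setup used throughout this guide.

```console
# Restrict the response to filesystem metrics (the filter applies before the exporter prefix):
curl 'http://localhost:1337/metrics?runsc-sandbox-metrics-filter=fs_.*'

# Pull out the per-sandbox presence/running/metadata series described above:
curl -s http://localhost:1337/metrics | grep -E 'sandbox_(presence|running|metadata)'
```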
{ "category": "Runtime", "file_name": "observability.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "The Vision To create a modern, geometric typeface. Open sourced, and openly available. Influenced by other popular geometric, minimalist sans-serif typefaces of the new millennium. Designed for optimal readability at small point sizes while beautiful at large point sizes. December 2017 update Currently working on greatly improving spacing and kerning of the base typeface. Once this is done, work on other variations (e.g. rounded or slab) can begin in earnest. The License Licensed under Open Font License (OFL). Available to anyone and everyone. Contributions welcome. Contact Contact me via [email protected] or http://twitter.com/ChrisMSimpson for any questions, requests or improvements (or just submit a pull request). Support You can now support work on Metropolis via Patreon at https://www.patreon.com/metropolis. ally-added-mutations.yaml case-b-target-manually-added-mutations.yaml source: https://raw.githubusercontent.com/brito-rafa/k8s-webhooks/master/examples-for-projectvelero/case-b/target/case-b-target-manually-added-mutations.yaml case-c-target-manually-added-mutations.yaml source: https://raw.githubusercontent.com/brito-rafa/k8s-webhooks/master/examples-for-projectvelero/case-a/source/case-a-source.yaml case-c-target-manually-added-mutations.yaml source: https://raw.githubusercontent.com/brito-rafa/k8s-webhooks/master/examples-for-projectvelero/case-c/target/case-c-target-manually-added-mutations.yaml musicv1rockband.yaml source: https://github.com/brito-rafa/k8s-webhooks/blob/master/examples-for-projectvelero/case-a/source/music/config/samples/musicv1rockband.yaml musicv1alpha1rockband.yaml source: https://github.com/brito-rafa/k8s-webhooks/blob/master/examples-for-projectvelero/case-a/source/music/config/samples/musicv1alpha1rockband.yaml musicv2rockband.yaml source: https://github.com/brito-rafa/k8s-webhooks/blob/master/examples-for-projectvelero/case-c/target/music/config/samples/musicv2rockband.yaml musicv2beta1rockband.yaml source: https://github.com/brito-rafa/k8s-webhooks/blob/master/examples-for-projectvelero/case-b/source/music/config/samples/musicv2beta1rockband.yaml musicv2beta2rockband.yaml source: https://github.com/brito-rafa/k8s-webhooks/blob/master/examples-for-projectvelero/case-b/source/music/config/samples/musicv2beta2rockband.yaml" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.9.0 `velero/velero:v1.9.0` https://velero.io/docs/v1.9/ https://velero.io/docs/v1.9/upgrade-to-1.9/ Bump up to the CSI volume snapshot v1 API No VolumeSnapshot will be left in the source namespace of the workload Report metrics for CSI snapshots More improvements please refer to With these improvements we'll provide official support for CSI snapshots on AKS/EKS clusters. (with CSI plugin v0.3.0) In this release we continued our code modernization work, rewriting some controllers using Kubebuilder v3. This work is ongoing and we will continue to make progress in future releases. Options are added to the CLI and Restore spec to control the group of resources whose status will be restored. Users can choose to overwrite or patch the existing resources during restore by setting this policy. Upgrade integrated Restic version, which will resolve some of the CVEs, and support skip TLS validation in Restic backup/restore. With bumping up the API to v1 in CSI plugin, the v0.3.0 CSI plugin will only work for Kubernetes v1.20+ restic: add full support for setting SecurityContext for restore init container from configMap. (#4084, @MatthieuFin) Add metrics backupitemstotal and backupitemserrors (#4296, @tobiasgiese) Convert PodVolumebackup controller to the Kubebuilder framework (#4436, @fgold) Skip not mounted volumes when backing up (#4497, @dkeven) Update doc for v1.8 (#4517, @reasonerjt) Fix bug to make the restic prune frequency configurable (#4518, @ywk253100) Add E2E test of backups sync from BSL (#4545, @mqiu) Fix: OrderedResources in Schedules (#4550, @dbrekau) Skip volumes of non-running pods when backing up (#4584, @bynare) E2E SSR test add retry mechanism and logs (#4591, @mqiu) Add pushing image to GCR in github workflow to facilitate some environments that have rate limitation to docker hub, e.g. vSphere. (#4623, @jxun) Add existingResourcePolicy to Restore API (#4628, @shubham-pampattiwar) Fix E2E backup namespaces test (#4634, @qiuming-best) Update image used by E2E test to gcr.io (#4639, @jxun) Add multiple label selector support to Velero Backup and Restore APIs (#4650, @shubham-pampattiwar) Convert Pod Volume Restore resource/controller to the Kubebuilder framework (#4655, @ywk253100) Update --use-owner-references-in-backup description in velero command line. (#4660, @jxun) Avoid overwritten hook's exec.container parameter when running pod command executor. (#4661, @jxun) Support regional pv for GKE (#4680, @jxun) Bypass the remap CRD version plugin when v1beta1 CRD is not supported (#4686, @reasonerjt) Add GINKGO_SKIP to support skip specific case in e2e test. (#4692, @jxun) Add --pod-labels flag to velero install (#4694, @j4m3s-s) Enable coverage in test.sh and upload to codecov (#4704, @reasonerjt) Mark the BSL as \"Unavailable\" when gets any error and add a new field \"Message\" to the status to record the error" }, { "data": "(#4719, @ywk253100) Support multiple skip option for E2E test (#4725, @jxun) Add PriorityClass to the AdditionalItems of Backup's PodAction and Restore's PodAction plugin to backup and restore PriorityClass if it is used by a Pod. (#4740, @phuongatemc) Insert all restore errors and warnings into restore log. (#4743, @sseago) Refactor schedule controller with kubebuilder (#4748, @ywk253100) Garbage collector now adds labels to backups that failed to delete for BSLNotFound, BSLCannotGet, BSLReadOnly reasons. 
(#4757, @kaovilai) Skip podvolumerestore creation when restore excludes pv/pvc (#4769, @half-life666) Add parameter for e2e test to support modify kibishii install path. (#4778, @jxun) Ensure the restore hook applied to new namespace based on the mapping (#4779, @reasonerjt) Add ability to restore status on selected resources (#4785, @RafaeLeal) Do not take snapshot for PV to avoid duplicated snapshotting, when CSI feature is enabled. (#4797, @jxun) Bump up to v1 API for CSI snapshot (#4800, @reasonerjt) fix: delete empty backups (#4817, @yuvalman) Add CSI VolumeSnapshot related metrics. (#4818, @jxun) Fix default-backup-ttl not work (#4831, @qiuming-best) Make the vsc created by backup sync controller deletable (#4832, @reasonerjt) Make in-progress backup/restore as failed when doing the reconcile to avoid hanging in in-progress status (#4833, @ywk253100) Use controller-gen to generate the deep copy methods for objects (#4838, @ywk253100) Update integrated Restic version and add insecureSkipTLSVerify for Restic CLI. (#4839, @jxun) Modify CSI VolumeSnapshot metric related code. (#4854, @jxun) Refactor backup deletion controller based on kubebuilder (#4855, @reasonerjt) Remove VolumeSnapshots created during backup when CSI feature is enabled. (#4858, @jxun) Convert Restic Repository resource/controller to the Kubebuilder framework (#4859, @qiuming-best) Add ClusterClasses to the restore priority list (#4866, @reasonerjt) Cleanup the .velero folder after restic done (#4872, @big-appled) Delete orphan CSI snapshots in backup sync controller (#4887, @reasonerjt) Make waiting VolumeSnapshot to ready process parallel. (#4889, @jxun) continue rather than return for non-matching restore action label (#4890, @sseago) Make in-progress PVB/PVR as failed when restic controller restarts to avoid hanging backup/restore (#4893, @ywk253100) Refactor BSL controller with periodical enqueue source (#4894, @jxun) Make garbage collection for expired backups configurable (#4897, @ywk253100) Bump up the version of distroless to base-debian11 (#4898, @ywk253100) Add schedule ordered resources E2E test (#4913, @qiuming-best) Make velero completion zsh command output can be used by `source` command. (#4914, @jxun) Enhance the map flag to support parsing input value contains entry delimiters (#4920, @ywk253100) Fix E2E test [Restic] on GCP. (#4968, @jxun) Disable status as sub resource in CRDs (#4972, @ywk253100) Add more information for failing to get path or snapshot in restic backup and restore. (#4988, @jxun)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.9.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Containerd CRI Integration ============= Author: Lantao Liu (@random-liu) This proposal aims to integrate with Kubelet against the . Containerd is a core container runtime, which provides the minimum set of functionalities to manage the complete container lifecycle of its host system, including container execution and supervision, image distribution and storage, etc. Containerd was , used to manage containers on the node. As shown below, it creates a containerd-shim for each container, and the shim manages the lifecycle of its corresponding container. In Dec. 2016, Docker Inc. spun it out into a standalone component, and donated it to in Mar. 2017. Containerd is one potential alternative to Docker as the runtime for Kubernetes clusters. Compared with Docker, containerd has pros and cons. Stability*: Containerd has limited scope and slower feature velocity, which is expected to be more stable. Compatibility*: The scope of containerd aligns with Kubernetes' requirements. It provides the required functionalities and the flexibility for areas like image pulling, networking, volume and logging etc. Performance*: Containerd consumes less resource than Docker at least because it's a subset of Docker; Containerd CRI integration eliminates an extra hop in the stack (as shown below). Neutral Foundation*: Containerd is part of CNCF now. User Adoption*: Ideally, Kubernetes users don't interact with the underlying container runtime directly. However, for the lack of debug toolkits, sometimes users still need to login the node to debug with Docker CLI directly. Containerd provides barebone CLI for development and debugging purpose, but it may not be sufficient and necessary. Additionally, presuming these are sufficient and necessary tools, a plan and time would be needed to sufficiently document these CLIs and educate users in their use. Maturity*: The rescoped containerd is pretty new, and it's still under heavy development. Make sure containerd meets the requirement of Kubernetes, now and into the foreseeable future. Implement containerd CRI shim and make sure it provides equivalent functionality, usability and debuggability. Improve Kubernetes by taking advantage of the flexibility provided by containerd. The following sections discuss the design aspects of the containerd CRI integration. For the purposes of this doc, the containerd CRI integration will be referred to as `CRI-containerd`. CRI-containerd relies on containerd to manage container lifecycle. Ideally, CRI-containerd only needs to do api translation and information reorganization. However, CRI-containerd needs to maintain some metadata because: There is a mismatch between container lifecycle of CRI and containerd - containerd only tracks running processes, once the container and it's corresponding containerd-shim exit, the container is no longer visible in the containerd" }, { "data": "Some sandbox/container metadata is not provided by containerd, and we can not leverage OCI runtime annotation to store it because of the container lifecycle mismatch, e.g. labels/annotations, `PodSandboxID` of a container, `FinishedAt` timestamp, `ExitCode`, `Mounts` etc. CRI-containerd should checkpoint these metadata itself or use if available. Containerd doesn't provide persistent container log. It redirects container STDIO into different FIFOs. CRI-containerd should start a goroutine (process/container in the future) to: Continuously drain the FIFO; Decorate the log line into ; Write the log into . 
Containerd supports creating a process in the container with `Exec`, and the STDIO is also exposed as FIFOs. Containerd also supports resizing console of a specific process with `Pty`. CRI-containerd could reuse the , it should implement the . For different CRI streaming functions: `ExecSync`: CRI-containerd should use `Exec` to create the exec process, collect the stdout/stderr of the process, and wait for the process to terminate. `Exec`: CRI-containerd should use `Exec` to create the exec process, create a goroutine (process/container) to redirect streams, and wait for the process to terminate. `Attach`: CRI-containerd should create a goroutine (process/container) to read the existing container log to the output, redirect streams of the init process, and wait for any stream to be closed. `PortForward`: CRI-containerd could implement this with `socat` and `nsenter`, similar with . Containerd doesn't provide container networking, but OCI runtime spec supports joining a linux container into an existing network namespace. CRI-containerd should: Create a network namespace for a sandbox; Call to update the options of the network namespace; Let the user containers in the same sandbox share the network namespace. Containerd provides , and plans to provide . CRI container metrics api needs to be defined (). After that, CRI-containerd should translate containerd container metrics into CRI container metrics. CRI-containerd relies on containerd to manage images. Containerd should provide all function and information required by CRI, and CRI-containerd only needs to do api translation and information reorganization. Containerd plans to provide . CRI image filesystem metrics needs to be defined (). After that, we should make sure containerd provides the required metrics, and CRI-containerd should translate containerd image filesystem metrics into CRI image filesystem metrics. Following items are out of the scope of this design, we may address them in future version as enhancement or optimization. Debuggability*: One of the biggest concern of CRI-containerd is debuggability. We should provide equivalent debuggability with Docker CLI through `kubectl`, or containerd" }, { "data": "Built-in CRI support*: The provided by containerd makes it possible to directly build CRI support into containerd as a plugin, which will eliminate one more hop from the stack. But because of the , we have to either maintain our own branch or push CRI plugin upstream. Seccomp*: () Seccomp is supported in OCI runtime spec. However, current seccomp implementation in Kubernetes is experimental and docker specific, the api needs to be defined in CRI first before CRI-containerd implements it. Streaming server authentication*: () CRI-containerd will be out-of-process with Kubelet, so it could not reuse Kubelet authentication. Its streaming server should implement its own authentication mechanism. Move container facilities into pod cgroup*: Container facilities including container image puller, container streaming handler, log handler and containerd-shim serve a specific container. They should be moved to the corresponding pod cgroup, and the overhead introduced by them should be charged to the pod. Log rotation*: () Container log rotation is under design. A function may be added in CRI to signal the runtime to reopen log file. CRI-containerd should implement that function after it is defined. 
Exec container*: With the flexibility provided by containerd, it is possible to implement `Exec` with a separate container sharing the same rootfs and mount namespace with the original container. The advantage is that the `Exec` container could have it's own sub-cgroup, so that it will not consume the resource of application container and user could specify dedicated resource for it. Advanced image management*: The image management interface in CRI is relatively simple because the requirement of Kubelet image management is not clearly scoped out. In the future, we may want to leverage the flexibility provided by containerd more, e.g. estimate image size before pulling etc. ... [P0] Basic container lifecycle. [P0] Basic image management. [P0] Container networking. [P1] Container streaming/logging. [P2] Container/ImageFS Metrics. Test Plan: Each feature added should have unit test and pass its corresponding cri validation test. [P0] Feature complete, pass 100% cri validation test. [P0] Integrate CRI-containerd with Kubernetes, and build the e2e/node e2e test framework. [P1] Address the debuggability problem. | Item | 1/2 Mar. | 2/2 Mar. | 1/2 Apr. | 2/2 Apr. | 1/2 May. | 2/2 May. | |:--:|:--:|:--:|:--:|:--:|:--:|:--:| | Survey | | | | | | | | POC | | | | | | | | Proposal | | | | | | | | Containerd Feature Complete | | | | | | | | Runtime Management Integration | | | | | | | | Image Management Integration | | | | | | | | Container Networking Integration | | | | | | |" } ]
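The log-handling scheme described above (drain the container's output FIFO, decorate each line, write it to a persistent file) can be illustrated with a minimal Go sketch. This is not the actual CRI-containerd implementation; the FIFO path, log path, and the exact decoration format are assumptions made only for the example.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"time"
)

// drainFIFO continuously reads raw container output from a FIFO, decorates
// each line with a timestamp and the stream name, and appends it to a log
// file. In the design above, one such goroutine (or helper process) would
// exist per container stream.
func drainFIFO(fifoPath, stream, logPath string) error {
	fifo, err := os.Open(fifoPath) // blocks until the writing side opens the FIFO
	if err != nil {
		return err
	}
	defer fifo.Close()

	logFile, err := os.OpenFile(logPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o640)
	if err != nil {
		return err
	}
	defer logFile.Close()

	scanner := bufio.NewScanner(fifo)
	for scanner.Scan() {
		// Decorate the raw line: RFC3339Nano timestamp, stream name, payload.
		fmt.Fprintf(logFile, "%s %s %s\n",
			time.Now().UTC().Format(time.RFC3339Nano), stream, scanner.Text())
	}
	return scanner.Err()
}

func main() {
	// Hypothetical paths, for illustration only.
	if err := drainFIFO("/run/cri-containerd/c1/stdout", "stdout",
		"/var/log/pods/c1/0.log"); err != nil {
		fmt.Fprintln(os.Stderr, "log drain stopped:", err)
	}
}
```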
{ "category": "Runtime", "file_name": "proposal.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "rkt can automatically prepare `/etc/resolv.conf` and `/etc/hosts` for the apps in the pod. They can either be generated at runtime, or the host's configuration can be used. Four options affect how this file is created: `--dns` : Specify either a DNS server, or one of the \"magic\" values `host` or `none` `--dns-domain` : The resolv.conf `domain` parameter `--dns-opt` : One or more resolv.conf `option` parameters `--dns-search` : One or more domains for the search list The simplest configuration is: ```sh $ sudo rkt run --dns=8.8.8.8 pod.aci ``` Other parameters can be given: ```sh $ sudo rkt run \\ --dns=8.8.8.8 --dns=4.2.2.2 \\ --dns-domain=example.org \\ --dns-opt=debug --dns-opt=rotate \\ --dns-search=example.com --dns-search=example.gov \\ pod.aci ``` This will generate the following `/etc/resolv.conf` for the applications: ``` search example.com example.gov nameserver 8.8.8.8 nameserver 4.2.2.2 options debug rotate domain example.org ``` The magic parameter `host` will bind-mount the host's `/etc/resolv.conf` in to the applications. This will be a read-only mount. The magic parameter `none` will ignore any DNS configuration from CNI. This will ensure that the image's `/etc/resolv.conf` has precedence. `resolv.conf` can be generated by multiple components. The order of precedence is: If `--dns`, et al. are passed to `rkt run` If a CNI plugin returns DNS information, unless `--dns=none` is passed If a volume is mounted on `/etc/resolv.conf` If the application container includes `/etc/resolv.conf` `rkt run` provides one option with two modes: `--hosts-entry <IP>=<HOST>` `--hosts-entry host` Passing `--hosts-entry=host` will bind-mount (read-only) the hosts's `/etc/hosts` in to every application. When passing IP=HOST pairs: ```sh $ rkt run ... --hosts-entry 198.51.100.0=host1,198.51.100.1=host2 --hosts-entry 198.51.100.0=host3 ``` rkt will take some and append the requested entries. ``` < the default entries > 198.51.100.0 host1 host3 198.51.100.1 host2 ``` `/etc/hosts` can be generated by multiple components. The order of precedence is: If `--hosts-entry` is passed to `rkt run` If a volume is mounted on `/etc/hosts` If the app image includes `/etc/hosts` Otherwise, a fallback stub `/etc/hosts` is created The following example shows that the DNS options allow the pod to resolve names successfully: ``` $ sudo rkt run --net=host --dns=8.8.8.8 quay.io/coreos/alpine-sh --exec=/bin/ping --interactive -- -c 1 coreos.com ... PING coreos.com (104.20.47.236): 56 data bytes 64 bytes from 104.20.47.236: seq=0 ttl=63 time=5.421 ms coreos.com ping statistics 1 packets transmitted, 1 packets received, 0% packet loss round-trip min/avg/max = 5.421/5.421/5.421 ms ```" } ]
{ "category": "Runtime", "file_name": "dns.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: Troubleshooting and Present Limitations menu_order: 50 search_type: Documentation The command: weave status reports on the current status of various Weave Net components, including DNS: ``` ... Service: dns Domain: weave.local. Upstream: 8.8.8.8, 8.8.4.4 TTL: 1 Entries: 9 ... ``` The first section covers the router; see the [troubleshooting guide](/site/troubleshooting.md#weave-status) for more details. The 'Service: dns' section is pertinent to weaveDNS, and includes: The local domain suffix which is being served The list of upstream servers used for resolving names not in the local domain The response ttl The total number of entries You may also use `weave status dns` to obtain a [complete dump](/site/troubleshooting.md#weave-status-dns) of all DNS registrations. Information on the processing of queries, and the general operation of weaveDNS, can be obtained from the container logs with docker logs weave The server will not know about restarted containers, but if you re-attach a restarted container to the weave network, it will be re-registered with weaveDNS. The server may give unreachable IPs as answers, since it doesn't try to filter by reachability. If you use subnets, align your hostnames with the subnets." } ]
{ "category": "Runtime", "file_name": "troubleshooting-weavedns.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Upgrading to Velero 1.13\" layout: docs Velero installed. If you're not yet running at least Velero v1.8, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Caution: From Velero v1.10, except for using restic to do file-system level backup and restore, kopia is also been integrated, it could be upgraded from v1.10 or higher to v1.13 directly, but it would be a little bit of difference when upgrading to v1.13 from a version lower than v1.10.0. Install the Velero v1.13 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.13.0 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: Since velero v1.10.0 only v1 CRD will be supported during installation, therefore, the v1.10.0 will only work on Kubernetes version >= v1.16 Update the container image and objects fields used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl get deploy -n velero -ojson \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.13.0\\\"#g\" \\ | sed \"s#\\\"server\\\",#\\\"server\\\",\\\"--uploader-type=$uploader_type\\\",#g\" \\ | sed \"s#default-volumes-to-restic#default-volumes-to-fs-backup#g\" \\ | sed \"s#default-restic-prune-frequency#default-repo-maintain-frequency#g\" \\ | sed \"s#restic-timeout#fs-backup-timeout#g\" \\ | kubectl apply -f - echo $(kubectl get ds -n velero restic -ojson) \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.13.0\\\"#g\" \\ | sed \"s#\\\"name\\\"\\: \\\"restic\\\"#\\\"name\\\"\\: \\\"node-agent\\\"#g\" \\ | sed \"s#\\[ \\\"restic\\\",#\\[ \\\"node-agent\\\",#g\" \\ | kubectl apply -f - kubectl delete ds -n velero restic --force --grace-period 0 ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.13.0 Git commit: <git SHA> Server: Version: v1.13.0 ``` If it's directly upgraded from v1.10 or higher, the other steps remain the same only except for step 3 above. The details as below: Update the container image used by the Velero deployment, plugin and, optionally, the node agent daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.13.0 \\ velero-plugin-for-aws=velero/velero-plugin-for-aws:v1.9.0 \\ --namespace velero kubectl set image daemonset/node-agent \\ node-agent=velero/velero:v1.13.0 \\ --namespace velero ``` If upgraded from v1.9.x, there still remains some resources left over in the cluster and never used in v1.13.x, which could be deleted through kubectl and it is based on your desire: resticrepository CRD and related CRs velero-restic-credentials secret in velero install namespace" } ]
{ "category": "Runtime", "file_name": "upgrade-to-1.13.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Encryption and Weave Net menu_order: 50 search_type: Documentation Weave Net peers . This communication . Encryption of control plane traffic (TCP) and data plane traffic (UDP) of sleeve overlay is accomplished using the crypto libraries, employing Curve25519, XSalsa20 and Poly1305 to encrypt and authenticate messages. Weave Net protects against injection and replay attacks for traffic forwarded between peers. NaCl was selected because of its good reputation both in terms of selection and implementation of ciphers, but equally importantly, its clear APIs, good documentation and high-quality . It is quite difficult to use NaCl incorrectly. Contrast this with libraries such as OpenSSL where the library and its APIs are vast in size, poorly documented, and easily used wrongly. There are some similarities between Weave Net's crypto and . Weave Net does not need to cater for multiple cipher suites, certificate exchange and other requirements emanating from X509, and a number of other features. This simplifies the protocol and implementation considerably. On the other hand, Weave Net needs to support UDP transports, and while there are extensions to TLS such as which can operate over UDP, these are not widely implemented and deployed. In the case of fast datapath, data plane traffic is encrypted by using . The process of encryption is handled by the Linux kernel and is controlled via the IP transformation framework (XFRM). See Also *" } ]
{ "category": "Runtime", "file_name": "encryption.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-operator-generic completion fish | source To load completions for every new session, execute once: cilium-operator-generic completion fish > ~/.config/fish/completions/cilium-operator-generic.fish You will need to start a new shell for this setup to take effect. ``` cilium-operator-generic completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-generic_completion_fish.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "After the microVM is started, the rate limiters assigned to a network interface can be updated via a `PATCH /network-interfaces/{id}` API call. E.g. for a network interface created with: ```console PUT /network-interfaces/iface_1 HTTP/1.1 Host: localhost Content-Type: application/json Accept: application/json { \"ifaceid\": \"iface1\", \"hostdevname\": \"fctap1\", \"guest_mac\": \"06:00:c0:a8:34:02\", \"rxratelimiter\": { \"bandwidth\": { \"size\": 1024, \"onetimeburst\": 1048576, \"refill_time\": 1000 } }, \"txratelimiter\": { \"bandwidth\": { \"size\": 1024, \"onetimeburst\": 1048576, \"refill_time\": 1000 } } } ``` A `PATCH` request can be sent at any future time, to update the rate limiters: ```console PATCH /network-interfaces/iface_1 HTTP/1.1 Host: localhost Content-Type: application/json Accept: application/json { \"ifaceid\": \"iface1\", \"rxratelimiter\": { \"bandwidth\": { \"size\": 1048576, \"refill_time\": 1000 }, \"ops\": { \"size\": 2000, \"refill_time\": 1000 } } } ``` The full specification of the data structures available for this call can be found in our . Note: The data provided for the update is merged with the existing data. In the above example, the RX rate limit is updated, but the TX rate limit remains unchanged. A rate limit can be disabled by providing a 0-sized token bucket. E.g., following the above example, the TX rate limit can be disabled with: ```console PATCH /network-interfaces/iface_1 HTTP/1.1 Host: localhost Content-Type: application/json Accept: application/json { \"ifaceid\": \"iface1\", \"txratelimiter\": { \"bandwidth\": { \"size\": 0, \"refill_time\": 0 }, \"ops\": { \"size\": 0, \"refill_time\": 0 } } } ```" } ]
{ "category": "Runtime", "file_name": "patch-network-interface.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "ziostat(8) -- Report ZFS read I/O activity ============================================= ziostat [-hIMrzZ] [interval [count]] The ziostat utility reports a summary of ZFS read I/O operations. It first prints all activity since boot, then reports activity over a specified interval. When run from a non-global zone (NGZ), only activity from that NGZ can be observed. When run from a the global zone (GZ), activity from the GZ and all other NGZs can be observed. This tool is useful for determining if disk I/O is a source of application latency. Combined with vfsstat(8), ziostat(8) shows the relative contribution of disk I/O latency to overall I/O (and therefore application) latency. The ziostat utility reports the following information: r/s reads per second kr/s kilobytes read per second actv average number of ZFS read I/O operations being handled by the disk wsvc_t average wait time per I/O, in milliseconds asvc_t average disk service time per I/O, in milliseconds %b percent of time there is an I/O operation pending The following options are supported: -h Show help message and exit -I Print results per interval, rather than per second (where applicable) -M Print results in MB/s instead of KB/s -r Show results in a comma-separated format -z Hide zones with no read I/O activity -Z Print results for all zones, not just the current zone interval Specifies the length in seconds to pause between each interval report. If not specified, ziostat will print a summary since boot and exit. count Specifies the number of intervals to report. Defaults to unlimited if not specified. iostat(8), vfsstat(8), mpstat(8) This utility does not show any ZFS write I/O activity. Most write operations are asynchronous, so the latency of those operations committing to disk is much less important that read latency. The output format from ziostat may change over time; use the comma-separated output for a stable output format." } ]
{ "category": "Runtime", "file_name": "ziostat.8.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "When kube-router is used to provide pod-to-pod networking, BGP is used to exchange routes across the nodes. Kube-router provides flexible networking models to support different deployments (public vs private cloud, routable vs non-routable pod IPs, service IPs, etc.). This is the default mode. All nodes in the clusters form iBGP peering relationships with rest of the nodes forming a full node-to-node mesh. Each node advertise the pod CIDR allocated to the nodes with its peers (the rest of the nodes in the cluster). There is no configuration required in this mode. All the nodes in the cluster are associated with the private ASN 64512 implicitly (which can be configured with `--cluster-asn` flag) and users are transparent to use of iBGP. This mode is suitable in public cloud environments or small cluster deployments. This model is used to support more than a single AS per cluster to allow for an AS per rack or an AS per node. Nodes in the cluster do not form full node-to-node meshes. Users have to explicitly select this mode by specifying `--nodes-full-mesh=false` when launching kube-router. In this mode kube-router expects each node will be configured with an ASN number from the node's API object annotations. Kube-router will use the node's `kube-router.io/node.asn` annotation value as the ASN number for the node. Users can annotate node objects with the following command: ```sh kubectl annotate node <kube-node> \"kube-router.io/node.asn=64512\" ``` Only nodes within same ASN form full mesh. Two nodes with different ASNs never get peered. This model supports the common scheme of using a Route Reflector Server node to concentrate peering from client peers. This has the big advantage of not needing full mesh, and will scale better. In this mode kube-router expects each node is configured either in Route Reflector server mode or in Route Reflector client mode. This is done with node `kube-router.io/rr.server=ClusterID`, `kube-router.io/rr.client=ClusterId` respectively. In this mode each route reflector client will only peer with route reflector servers. Each route reflector server will only peer with other route reflector servers and with route reflector clients enabling reflection. Users can annotate node objects with the following command for Route Reflector server mode: ```sh kubectl annotate node <kube-node> \"kube-router.io/rr.server=42\" ``` and for Route Reflector client mode: ```sh kubectl annotate node <kube-node> \"kube-router.io/rr.client=42\" ``` Only nodes with the same ClusterID in client and server mode will peer together. When joining new nodes to the cluster, remember to annotate them with `kube-router.io/rr.client=42`, and then restart kube-router on the new nodes and the route reflector server nodes to let them successfully read the annotations and peer with each other. An optional global BGP peer can be configured by specifying the parameters: `--peer-router-asns` and `--peer-router-ips`. When configured each node in the cluster forms a peer relationship with specified global peer. Pod CIDR and Cluster IPs get advertised to the global BGP peer. For redundancy, you can also configure more than one peer router by specifying a slice of BGP peers. For example: ```sh" }, { "data": "--peer-router-asns=65000,65000 ``` Alternatively, each node can be configured with one or more node specific BGP peers. 
Information regarding node specific BGP peer is read from node API object annotations: `kube-router.io/peer.ips` `kube-router.io/peer.asns` For example, users can annotate node object with below commands: ```shell kubectl annotate node <kube-node> \"kube-router.io/peer.ips=192.168.1.99,192.168.1.100\" kubectl annotate node <kube-node> \"kube-router.io/peer.asns=65000,65000\" ``` For traffic shaping purposes, you may want to prepend the AS path announced to peers. This can be accomplished on a per-node basis with annotations: `kube-router.io/path-prepend.as` `kube-router.io/path-prepend.repeat-n` If you wanted to prepend all routes from a particular node with the AS 65000 five times, you would run the following commands: ```shell kubectl annotate node <kube-node> \"kube-router.io/path-prepend.as=65000\" kubectl annotate node <kube-node> \"kube-router.io/path-prepend.repeat-n=5\" ``` In some setups it might be desirable to set a local IP address used for connecting external BGP peers. This can be accomplished on nodes with annotations: `kube-router.io/peer.localips` If set, this must be a list with a local IP address for each peer, or left empty to use nodeIP. Example: ```shell kubectl annotate node <kube-node> \"kube-router.io/peer.localips=10.1.1.1,10.1.1.2\" ``` This will instruct kube-router to use IP `10.1.1.1` for first BGP peer as a local address, and use `10.1.1.2`for the second. The examples above have assumed there is no password authentication with BGP peer routers. If you need to use a password for peering, you can use the `--peer-router-passwords` command-line option, the `kube-router.io/peer.passwords` node annotation, or the `--peer-router-passwords-file` command-line option. To ensure passwords are easily parsed, but not easily read by human eyes, kube-router requires that they are encoded as base64. On a Linux or MacOS system you can encode your passwords on the command line: ```shell $ printf \"SecurePassword\" | base64 U2VjdXJlUGFzc3dvcmQ= ``` In this CLI flag example the first router (192.168.1.99) uses a password, while the second (192.168.1.100) does not. ```sh --peer-router-ips=\"192.168.1.99,192.168.1.100\" --peer-router-asns=\"65000,65000\" --peer-router-passwords=\"U2VjdXJlUGFzc3dvcmQK,\" ``` Note the comma indicating the end of the first password. Here's the same example but configured as node annotations: ```shell kubectl annotate node <kube-node> \"kube-router.io/peer.ips=192.168.1.99,192.168.1.100\" kubectl annotate node <kube-node> \"kube-router.io/peer.asns=65000,65000\" kubectl annotate node <kube-node> \"kube-router.io/peer.passwords=U2VjdXJlUGFzc3dvcmQK,\" ``` Finally, to include peer passwords as a file you would run kube-router with the following option: ```shell --peer-router-ips=\"192.168.1.99,192.168.1.100\" --peer-router-asns=\"65000,65000\" --peer-router-passwords-file=\"/etc/kube-router/bgp-passwords.conf\" ``` The password file, closely follows the syntax of the command-line and node annotation options. Here, the first peer IP (192.168.1.99) would be configured with a password, while the second would not. ```sh U2VjdXJlUGFzc3dvcmQK, ``` Note, complex parsing is not done on this file, please do not include any content other than the passwords on a single line in this file. Global peers support the addition of BGP communities via node annotations. Node annotations can be formulated either as: a single 32-bit integer two 16-bit integers separated by a colon (`:`) common BGP community names (e.g. `no-export`, `internet`, `no-peer`, etc.) 
(see: ) In the following example we add the `NO_EXPORT` BGP community to two of our nodes via annotation using all three forms of the annotation: ```shell kubectl annotate node <kube-node> \"kube-router.io/node.bgp.communities=4294967041\" kubectl annotate node <kube-node>" }, { "data": "kubectl annotate node <kube-node> \"kube-router.io/node.bgp.communities=no-export\" ``` kube-router, by default, accepts all routes advertised by its neighbors. If the bgp session with one neighbor dies, GoBGP deletes all routes received by it. If one of the received routes is needed for this node to function properly (eg: custom static route), it could stop working. In the following example we add custom prefixes that'll be set via a custom import policy reject rule annotation, protecting the node from losing required routes: ```shell kubectl annotate node <kube-node> \"kube-router.io/node.bgp.customimportreject=10.0.0.0/16, 192.168.1.0/24\" ``` By default, the GoBGP server binds on the node IP address. However, in some cases nodes with multiple IP addresses desire to bind GoBGP to multiple local addresses. Local IP addresses on which GoGBP should listen on a node can be configured with annotation `kube-router.io/bgp-local-addresses`. Here is sample example to make GoBGP server to listen on multiple IP address: ```shell kubectl annotate node ip-172-20-46-87.us-west-2.compute.internal \"kube-router.io/bgp-local-addresses=172.20.56.25,192.168.1.99\" ``` By default, kube-router populates the GoBGP RIB with node IP as next hop for the advertised pod CIDRs and service VIPs. While this works for most cases, overriding the next hop for the advertised routes is necessary when node has multiple interfaces over which external peers are reached. Next hops need to be defined as the interface over which external peer can be reached. Setting `--override-nexthop` to true leverages the BGP next-hop-self functionality implemented in GoBGP. The next hop will automatically be selected appropriately when advertising routes, irrespective of the next hop in the RIB. A common scenario exists where each node in the cluster is connected to two upstream routers that are in two different subnets. For example, one router is connected to a public network subnet and the other router is connected to a private network subnet. Additionally, nodes may be split across different subnets (e.g. different racks) each of which has their own routers. In this scenario, `--override-nexthop` can be used to correctly peer with each upstream router, ensuring that the BGP next-hop attribute is correctly set to the node's IP address that faces the upstream router. The `--enable-overlay` option can be set to allow overlay/underlay tunneling across the different subnets to achieve an interconnected pod network. This configuration would have the following effects: via one of themany means that kube-router makes available Overriding Next Hop Enabling overlays in either full mode or with nodes in different subnets The warning here is that when using `--override-nexthop` in the above scenario, it may cause kube-router to advertise an IP address other than the node IP which is what kube-router connects the tunnel to when the `--enable-overlay` option is given. If this happens it may cause some network flows to become un-routable. 
Specifically, people need to take care when combining `--override-nexthop` and `--enable-overlay` and make sure that they understand their network, the flows they desire, how the kube-router logic works, and the possible side effects that are created from their configuration. Please refer to for the risk and impact discussion." } ]
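As a rough sketch of the options discussed above (addresses and ASNs are hypothetical, and the full flag set of a real deployment is omitted), a node peering with an upstream router in a different subnet might combine global peering, next-hop override, and the overlay:

```sh
kube-router --run-router=true \
    --cluster-asn=64512 \
    --peer-router-ips=10.0.0.1 \
    --peer-router-asns=65000 \
    --override-nexthop=true \
    --enable-overlay=true
```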
{ "category": "Runtime", "file_name": "bgp.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Restore API Type\" layout: docs The `Restore` API type is used as a request for the Velero server to perform a Restore. Once created, the Velero Server immediately starts the Restore process. Restore belongs to the API group version `velero.io/v1`. Here is a sample `Restore` object with each of the fields documented: ```yaml apiVersion: velero.io/v1 kind: Restore metadata: name: a-very-special-backup-0000111122223333 namespace: velero spec: backupName: a-very-special-backup includedNamespaces: '*' excludedNamespaces: some-namespace includedResources: '*' excludedResources: storageclasses.storage.k8s.io restoreStatus: includedResources: workflows excludedResources: [] includeClusterResources: null labelSelector: matchLabels: app: velero component: server orLabelSelectors: matchLabels: app: velero matchLabels: app: data-protection namespaceMapping: namespace-backup-from: namespace-to-restore-to restorePVs: true scheduleName: my-scheduled-backup-name existingResourcePolicy: none hooks: resources: name: restore-hook-1 includedNamespaces: ns1 excludedNamespaces: ns3 includedResources: pods excludedResources: [] labelSelector: matchLabels: app: velero component: server postHooks: init: initContainers: name: restore-hook-init1 image: alpine:latest volumeMounts: mountPath: /restores/pvc1-vm name: pvc1-vm command: /bin/ash -c echo -n \"FOOBARBAZ\" >> /restores/pvc1-vm/foobarbaz name: restore-hook-init2 image: alpine:latest volumeMounts: mountPath: /restores/pvc2-vm name: pvc2-vm command: /bin/ash -c echo -n \"DEADFEED\" >> /restores/pvc2-vm/deadfeed exec: container: foo command: /bin/bash -c \"psql < /backup/backup.sql\" waitTimeout: 5m execTimeout: 1m onError: Continue status: phase: \"\" validationErrors: null warnings: 2 errors: 0 failureReason: ```" } ]
{ "category": "Runtime", "file_name": "restore.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This document defines governance policies for the Rook project. Steering committee members demonstrate a strong commitment to the project with views in the interest of the broader Rook project. Responsibilities include: Own the overall direction of the Rook project Provide guidance for the project maintainers and oversee the process for adding new maintainers Participate in the when necessary Actively participate in the regularly scheduled steering committee meetings Regularly attend the recurring The current list of steering committee members is published and updated in . Each storage provider has at most a single member on the steering committee. Storage providers declared as stable in Rook should have a representative on the steering committee. Storage providers that are on track to be declared stable soon may have a representative on the steering committee. Steering committee members are likely members of the Rook maintainers group and are contributing consistently to Rook. If the member (or proposed member) is not a maintainer, they must have demonstrated consistent vision and input for the good of the project with the big picture in mind. No company may hold the majority seats on the steering committee. If this happens due to changing companies, a member of the committee from that company must be replaced. If you meet these requirements, express interest to the steering committee directly that your organization is interested in joining the steering committee. If a steering committee member is no longer interested or cannot perform the duties listed above, they should volunteer to be moved to emeritus status. In extreme cases this can also occur by a vote of the steering committee members per the voting process below. Maintainers have the most experience with the Rook project and are expected to have the knowledge and insight to lead the project's growth and improvement. Responsibilities include: Represent their organization and storage provider within the Rook community Strong commitment to the project Participate in design and technical discussions Contribute non-trivial pull requests Perform code reviews on other's pull requests Regularly triage GitHub issues. The areas of specialization listed in can be used to help with routing an issue/question to the right person. Make sure that ongoing PRs are moving forward at the right pace or closing them Monitor Rook email aliases Monitor Rook Slack (delayed response is perfectly acceptable), particularly for the area of your storage provider Regularly attend the recurring Periodically attend the recurring steering committee meetings to provide input In general continue to be willing to spend at least 25% of ones time working on Rook (~1.25 business days per week) The current list of maintainers is published and updated in . Reviewers have similar responsibilities as maintainers, with the differences listed in the roles of the developer guide. Rules for adding and removing reviewers will follow the same guidelines as adding and removing maintainers as described below. To become a maintainer you need to demonstrate the following: Consistently be seen as a leader in the Rook community by fulfilling the Maintainer responsibilities listed above to some degree. 
Domain expertise for at least one of the Rook storage providers Be extremely proficient with Kubernetes and Golang Consistently demonstrate: Ability to write good solid code Ability to collaborate with the team Understanding of how the team works (policies, processes for testing and code review," }, { "data": "Understanding of the project's code base and coding style Beyond your contributions to the project, consider: If your storage provider or organization already have a Rook maintainer, more maintainers may not be needed. A valid reason is \"blast radius\" for a large storage provider or organization Becoming a maintainer generally means that you are going to be spending substantial time (>25%) on Rook for the foreseeable future. If you are meeting these requirements, express interest to the directly that your organization is interested in adding a maintainer. We may ask you to resolve some issues from our backlog. As you gain experience with the code base and our standards, we will ask you to perform code reviews for incoming PRs (i.e., all maintainers are expected to shoulder a proportional share of community reviews). After a period of approximately several months of working together and making sure we see eye to eye, the steering committee will confer and decide whether to grant maintainer status or not. We make no guarantees on the length of time this will take, but several months is the approximate goal. If a maintainer is no longer interested or cannot perform the maintainer duties listed above, they should volunteer to be moved to emeritus status. In extreme cases this can also occur by a vote of the maintainers per the voting process below. Maintainers will be added to the Rook GitHub organization (if they are not already) and added to the GitHub Maintainers team. The full change approval process is described in the . All new steering committee members and maintainers must be nominated by someone (anyone) opening a pull request that adds the nominated persons name to the appropriate files in the appropriate roles. Similarly, to remove a steering committee member or maintainer, a pull request should be opened that removes their name from the appropriate files. The steering committee will approve this update in the standard voting and conflict resolution process. Note that new nominations do not go through the standard pull request approval described in the . Only the steering committee team can approve updates of members to the steering committee or maintainer roles. In general, we prefer that technical issues and maintainer membership are amicably worked out between the persons involved. If a dispute cannot be decided independently, the steering committee can be called in to decide an issue. If the steering committee members themselves cannot decide an issue, the issue will be resolved by voting. The voting process is a simple majority in which each steering committee member gets a single vote, except as noted below. Maintainers do not have a vote in conflict resolution, although steering committee members should consider their input. For formal votes, a specific statement of what is being voted on should be added to the relevant GitHub issue or PR. Steering committee members should indicate their yes/no vote on that issue or PR, and after a suitable period of time (goal is by 5 business days), the votes will be tallied and the outcome noted. 
If any steering committee members are unreachable during the voting period, postponing the completion of the voting process should be considered. Additions and removals of steering committee members or maintainers require a 2/3 majority, while other decisions and changes require only a simple majority." } ]
{ "category": "Runtime", "file_name": "GOVERNANCE.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "gVisor is workloads. This post showcases running the [Stable Diffusion] generative model from [Stability AI] to generate images using a GPU from within gVisor. Both the and the [PyTorch] code used by Stable Diffusion were run entirely within gVisor while being able to leverage the NVIDIA GPU. <span class=\"attribution\">Sandboxing a GPU. Generated with Stable Diffusion v1.5.<br/>This picture gets a lot deeper once you realize that GPUs are made out of sand.</span> -- As of this writing (2023-06), is not generalized. Only some PyTorch workloads have been tested on NVIDIA T4, L4, A100, and H100 GPUs, using the specific driver versions `525.60.13` and `525.105.17`. Contributions are welcome to expand this set to support other GPUs and driver versions! Additionally, while gVisor does its best to sandbox the workload, interacting with the GPU inherently requires running code on GPU hardware, where isolation is enforced by the GPU driver and hardware itself rather than gVisor. More to come soon on the value of the protection gVisor provides for GPU workloads. In a few months, gVisor's GPU support will have broadened and become easier-to-use, such that it will not be constrained to the specific sets of versions used here. In the meantime, this blog stands as an example of what's possible today with gVisor's GPU support. {:width=\"100%\"} <span class=\"attribution\">A collection of astronaut helmets in various styles.<br/>Other than the helmet in the center, each helmet was generated using Stable Diffusion v1.5.</span> The recent explosion of machine learning models has led to a large number of new open-source projects. Much like it is good practice to be careful about running new software downloaded from the Internet, it is good practice to run new open-source projects in a sandbox. For projects like the , which automatically download various models, components, and from external repositories as the user enables them in the web UI, this principle applies all the more. Additionally, within the machine learning space, tooling for packaging and distributing models are still nascent. While some models (including Stable Diffusion) are packaged using the more secure [safetensors] format, the majority of models available online today are distributed using the [Pickle format], which can execute arbitrary Python code upon deserialization. As such, even when using trustworthy software, using Pickle-formatted models may still be risky (Edited 2024-04-04: ). gVisor provides a layer of protection around this process which helps protect the host machine. Third, machine learning applications are typically not I/O heavy, which means they tend not to experience a significant performance overhead. The process of uploading code to the GPU is not a significant number of system calls, and most communication to/from the GPU happens over shared memory, where gVisor imposes no overhead. Therefore, the question is not so much \"why should I run this GPU workload in gVisor?\" but rather \"why not?\". <span class=\"attribution\">Cool astronauts don't look at explosions. Generated using Stable Diffusion v1.5.</span> Lastly, running GPU workloads in gVisor is pretty" }, { "data": "We use a Debian virtual machine on GCE. The machine needs to have a GPU and to have sufficient RAM and disk space to handle Stable Diffusion and its large model files. The following command creates a VM with 4 vCPUs, 15GiB of RAM, 64GB of disk space, and an NVIDIA T4 GPU, running Debian 11 (bullseye). 
Since this is just an experiment, the VM is set to self-destruct after 6 hours. ```shell $ gcloud compute instances create stable-diffusion-testing \\ --zone=us-central1-a \\ --machine-type=n1-standard-4 \\ --max-run-duration=6h \\ --instance-termination-action=DELETE \\ --maintenance-policy TERMINATE \\ --accelerator=count=1,type=nvidia-tesla-t4 \\ --create-disk=auto-delete=yes,boot=yes,device-name=stable-diffusion-testing,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230509,mode=rw,size=64 $ gcloud compute ssh --zone=us-central1-a stable-diffusion-testing ``` All further commands in this post are performed while SSH'd into the VM. We first need to install the specific NVIDIA driver version that gVisor is currently compatible with. ```shell $ sudo apt-get update && sudo apt-get -y upgrade $ sudo apt-get install -y build-essential linux-headers-$(uname -r) $ DRIVER_VERSION=525.60.13 $ curl -fSsl -O \"https://us.download.nvidia.com/tesla/$DRIVERVERSION/NVIDIA-Linux-x8664-$DRIVER_VERSION.run\" $ sudo sh NVIDIA-Linux-x8664-$DRIVERVERSION.run ``` <!-- The above in a single live, for convenience: DRIVERVERSION=525.60.13; sudo apt-get update && sudo apt-get -y upgrade && sudo apt-get install -y build-essential linux-headers-$(uname -r) && curl -fSsl -O \"https://us.download.nvidia.com/tesla/$DRIVERVERSION/NVIDIA-Linux-x8664-$DRIVERVERSION.run\" && sudo sh NVIDIA-Linux-x8664-$DRIVERVERSION.run --> Next, we install Docker, per . ```shell $ sudo apt-get install -y ca-certificates curl gnupg $ sudo install -m 0755 -d /etc/apt/keyrings $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor --batch --yes -o /etc/apt/keyrings/docker.gpg $ sudo chmod a+r /etc/apt/keyrings/docker.gpg $ echo \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo \"$VERSION_CODENAME\") stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null $ sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli ``` <!-- The above in a single live, for convenience: sudo apt-get install -y ca-certificates curl gnupg && sudo install -m 0755 -d /etc/apt/keyrings && curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor --batch --yes -o /etc/apt/keyrings/docker.gpg && sudo chmod a+r /etc/apt/keyrings/docker.gpg && echo \"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian $(. /etc/os-release && echo \"$VERSION_CODENAME\") stable\" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null && sudo apt-get update && sudo apt-get install -y docker-ce docker-ce-cli --> We will also need the [NVIDIA container toolkit], which enables use of GPUs with Docker. Per its : ```shell $ distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list $ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit ``` Of course, we also need to itself. 
```shell $ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg $ curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg $ echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main\" | sudo tee" }, { "data": "> /dev/null $ sudo apt-get update && sudo apt-get install -y runsc As gVisor does not yet enable GPU support by default, we need to set the flags that will enable it: $ sudo runsc install -- --nvproxy=true --nvproxy-docker=true $ sudo systemctl restart docker ``` Now, let's make sure everything works by running commands that involve more and more of what we just set up. ```shell Check that the NVIDIA drivers are installed, with the right version, and with a supported GPU attached $ sudo nvidia-smi -L GPU 0: Tesla T4 (UUID: GPU-6a96a2af-2271-5627-34c5-91dcb4f408aa) $ sudo cat /proc/driver/nvidia/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 525.60.13 Wed Nov 30 06:39:21 UTC 2022 Check that Docker works. $ sudo docker version [...] Server: Docker Engine - Community Engine: Version: 24.0.2 [...] Check that gVisor works. $ sudo docker run --rm --runtime=runsc debian:latest dmesg | head -1 [ 0.000000] Starting gVisor... Check that Docker GPU support (without gVisor) works. $ sudo docker run --rm --gpus=all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi -L GPU 0: Tesla T4 (UUID: GPU-6a96a2af-2271-5627-34c5-91dcb4f408aa) Check that gVisor works with the GPU. $ sudo docker run --rm --runtime=runsc --gpus=all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi -L GPU 0: Tesla T4 (UUID: GPU-6a96a2af-2271-5627-34c5-91dcb4f408aa) ``` We're all set! Now we can actually get Stable Diffusion running. We used the following `Dockerfile` to run Stable Diffusion and its web UI within a GPU-enabled Docker container. ```dockerfile FROM python:3.10 Set of dependencies that are needed to make this work. RUN apt-get update && apt-get install -y git wget build-essential \\ nghttp2 libnghttp2-dev libssl-dev ffmpeg libsm6 libxext6 Clone the project at the revision used for this test. RUN git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git && \\ cd /stable-diffusion-webui && \\ git checkout baf6946e06249c5af9851c60171692c44ef633e0 We don't want the build step to start the server. RUN sed -i '/start()/d' /stable-diffusion-webui/launch.py Install some pip packages. Note that this command will run as part of the Docker build process, which is not sandboxed by gVisor. RUN cd /stable-diffusion-webui && COMMANDLINE_ARGS=--skip-torch-cuda-test python launch.py WORKDIR /stable-diffusion-webui This causes the web UI to use the Gradio service to create a public URL. Do not use this if you plan on leaving the container running long-term. ENV COMMANDLINE_ARGS=--share Start the webui app. CMD [\"python\", \"webui.py\"] ``` We build the image and create a container with it using the `docker` command-line. ```shell $ cat > Dockerfile (... Paste the above contents...) ^D $ sudo docker build --tag=sdui . ``` Finally, we can start the Stable Diffusion web UI. Note that it will take a long time to start, as it has to download all the models from the Internet. To keep this post simple, we didn't set up any kind of volume that would enable data persistence, so it will do this every time the container starts. 
```shell $ sudo docker run --runtime=runsc --gpus=all --name=sdui --detach sdui Follow the logs: $ sudo docker logs -f sdui [...] Calculating sha256 for /stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL: http://127.0.0.1:7860 Running on public URL: https://4446d982b4129a66d7.gradio.live This share link expires in 72 hours. [...] ``` We're all set! Now we can browse to the Gradio URL shown in the logs and start generating pictures, all within the secure confines of gVisor. {:width=\"100%\"} <span class=\"attribution\">Stable Diffusion Web UI screenshot. Inner image generated with Stable Diffusion v1.5.</span> Happy sandboxing! <span class=\"attribution\">Happy sandboxing! Generated with Stable Diffusion v1.5.</span>" } ]
{ "category": "Runtime", "file_name": "2023-06-20-gpu-pytorch-stable-diffusion.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "(projects-confine)= You can use projects to confine the activities of different users or clients. See {ref}`projects-confined` for more information. How to confine a project to a specific user depends on the authentication method you choose. You can confine access to specific projects by restricting the TLS client certificate that is used to connect to the Incus server. See {ref}`authentication-tls-certs` for detailed information. To confine the access from the time the client certificate is added, you must either use token authentication or add the client certificate to the server directly. Use the following command to add a restricted client certificate: ````{tabs} ```{group-tab} Token authentication incus config trust add --projects <project_name> --restricted ``` ```{group-tab} Add client certificate incus config trust add-certificate <certificatefile> --projects <projectname> --restricted ``` ```` The client can then add the server as a remote in the usual way ( or ) and can only access the project or projects that have been specified. To confine access for an existing certificate, use the following command: incus config trust edit <fingerprint> Make sure that `restricted` is set to `true` and specify the projects that the certificate should give access to under `projects`. ```{note} You can specify the `--project` flag when adding a remote. This configuration pre-selects the specified project. However, it does not confine the client to this project. ``` Incus can be configured to dynamically create projects for all users in a specific user group. This is usually achieved by having some users be a member of the `incus` group but not the `incus-admin` group. Make sure that all user accounts that you want to be able to use Incus are a member of this group. Once a member of the group issues an Incus command, Incus creates a confined project for this user and switches to this project. If Incus has not been {ref}`initialized <initialize>` at this point, it is automatically initialized (with the default settings). If you want to customize the project settings, for example, to impose limits or restrictions, you can do so after the project has been created. To modify the project configuration, you must have full access to Incus, which means you must be part of the `incus-admin` group and not only the group that you configured as the Incus user group." } ]
{ "category": "Runtime", "file_name": "projects_confine.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "X86 machine: | Configuration | Information | | - | - | | OS | openEuler 22.03-LTS | | Kernel | linux 5.10.0-60.18.0.50.oe2203.x86_64 | | CPU | 104 coresIntel(R) Xeon(R) Gold 6278C CPU @ 2.60GHz | | Memory | 754 GB | ARM machine: | Configuration | Information | | - | -- | | OS | openEuler 22.03-LTS | | Kernel | linux 5.10.0-60.18.0.50.oe2203.aarch64 | | CPU | 64 cores | | Memory | 196 GB | | Name | Version | | | | | iSulad | Version: 2.0.12 , Git commit: 9025c4b831b3c8240297f52352dec64368aa4f08 | | docker | Version: 18.09.0, Git commit: aa1eee8 | | podman | version 0.10.1 | Power by base operators of client | operator (ms) | Docker | Podman | iSulad | vs Docker | vs Podman | | - | | | | | | | create | 29 | 49 | 19 | -34.48% | -61.22% | | start | 193 | 158 | 51 | -73.58% | -67.72% | | stop | 21 | 25 | 14 | -33.33% | -44.00% | | rm | 22 | 101 | 14 | -36.36% | -86.14% | | run | 184 | 164 | 54 | -70.65% | -67.07% | base operators of client | operator (ms) | Docker | Podman | iSulad | vs Docker | vs Podman | | - | | | | | | | create | 334 | 380 | 101 | -69.76% | -73.42% | | start | 1087 | 636 | 103 | -90.52% | -83.81% | | stop | 49 | 108 | 38 | -22.45% | -64.81% | | rm | 92 | 573 | 39 | -57.61% | -93.19% | | run | 1059 | 761 | 192 | -81.87% | -74.77% | base operator of client | operator (ms) | Docker | Podman | iSulad | vs Docker | vs Podman | | - | | - | | | | | 100 * create | 32307 | 1078391 | 8558 | -73.51% | -99.21% | | 100 * start | 610723 | 472437 | 42204 | -93.06% | -91.07% | | 100 * stop | 16951 | 25663 | 6438 | -62.02% | -74.91% | | 100 * rm | 17962 | 377677 | 6299 | -64.93% | -98.33% | | 100 * run | 316828 | 466688 | 43269 | -86.34% | -90.73% | base operator of client | operator (ms) | Docker | Podman | iSulad | vs Docker | vs Podman | | - | - | - | | | | | 100 * create | 681423 | 3365343 | 67568 | -90.08% | -97.99% | | 100 * start | 3012528 | 2347719 | 98737 | -96.72% | -95.79% | | 100 * stop | 26973 | 358485 | 17423 | -35.41% | -95.14% | | 100 * rm | 60899 | 3469354 | 17742 | -70.87% |" }, { "data": "| | 100 * run | 2626248 | 3083552 | 129860 | -95.06% | -95.79% | ```bash $ cat ptcr.yml log_lever : 3 image_name : rnd-dockerhub.huawei.com/official/busybox-aarch64 mixed_cmd : 0 measure_count : serially : 10 parallerlly : 0 runtime_names : isula docker podman runtime_endpoint: start_cmd : /bin/sh -c while true; do echo hello world; sleep 1; done ``` ```bash $ cat ptcr.yml log_lever : 3 image_name : rnd-dockerhub.huawei.com/official/busybox-aarch64 mixed_cmd : 0 measure_count : serially : 0 parallerlly : 100 runtime_names : isula docker podman runtime_endpoint: start_cmd : /bin/sh -c while true; do echo hello world; sleep 1; done ``` ```bash $ ptcr -c ptcr.yml Thu Mar 31 10:29:53 2022 unit: msec TargetName:isula Type: searially action |count |total spent |average spent Create |10 |998 |101 Start |10 |1039 |103 Stop |10 |378 |38 Remove |10 |398 |39 Run |10 |1915 |192 TargetName:docker Type: searially action |count |total spent |average spent Create |10 |3336 |334 Start |10 |10617 |1087 Stop |10 |494 |49 Remove |10 |935 |92 Run |10 |10596 |1059 TargetName:podman Type: searially action |count |total spent |average spent Create |10 |3791 |380 Start |10 |6332 |636 Stop |10 |1100 |108 Remove |10 |5749 |573 Run |10 |7536 |761 ``` ```bash $ ptcr -c ptcr.yml Thu Mar 31 11:02:49 2022 unit: msec TargetName:isula Type: parallerlly action |count |total spent |average spent Create |100 |67568 |677 Start |100 |98737 |982 Stop |100 |17423 |173 Remove |100 |17742 |177 Run |100 |129860 |1299 
TargetName:docker Type: parallerlly action |count |total spent |average spent Create |100 |681423 |6816 Start |100 |3012528 |30122 Stop |100 |26973 |267 Remove |100 |60899 |611 Run |100 |2626248 |26356 TargetName:podman Type: parallerlly action |count |total spent |average spent Create |100 |3365343 |33777 Start |100 |2347719 |23483 Stop |100 |358485 |3591 Remove |100 |3469354 |34805 Run |100 |3083552 |30855 ``` ```bash $ ptcr -c ptcr.yml Thu Mar 31 14:47:48 2022 unit: msec TargetName:isula Type: searially action |count |total spent |average spent Create |10 |194 |19 Start |10 |509 |51 Stop |10 |143 |14 Remove |10 |148 |14 Run |10 |549 |54 TargetName:docker Type: searially action |count |total spent |average spent Create |10 |290 |29 Start |10 |1963 |193 Stop |10 |209 |21 Remove |10 |226 |22 Run |10 |1850 |184 TargetName:podman Type: searially action |count |total spent |average spent Create |10 |494 |49 Start |10 |1590 |158 Stop |10 |254 |25 Remove |10 |1020 |101 Run |10 |1648 |164 ``` ```bash $ ptcr -c ptcr.yml Thu Mar 31 15:09:12 2022 unit: msec TargetName:isula Type: parallerlly action |count |total spent |average spent Create |100 |8558 |85 Start |100 |42204 |422 Stop |100 |6438 |64 Remove |100 |6299 |63 Run |100 |43269 |432 TargetName:docker Type: parallerlly action |count |total spent |average spent Create |100 |32307 |323 Start |100 |610723 |6107 Stop |100 |16951 |169 Remove |100 |17962 |180 Run |100 |316828 |3170 TargetName:podman Type: parallerlly action |count |total spent |average spent Create |100 |1078391 |10836 Start |100 |472437 |4726 Stop |100 |25663 |256 Remove |100 |377677 |3783 Run |100 |466688 |4670 ```" } ]
{ "category": "Runtime", "file_name": "performance_test.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 2 sidebar_label: \"Expand Volumes\" HwameiStor supports `CSI Volume Expansion`, by which altering the size of `PVC` can dynamically expand the volume online. The below example will expand PVC `data-sts-mysql-local-0` from 1GiB to 2GiB. Check the current size of the `PVC/PV`. ```console $ kubectl get pvc data-sts-mysql-local-0 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-sts-mysql-local-0 Bound pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 1Gi RWO hwameistor-storage-lvm-hdd 85m $ kubectl get pv pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 1Gi RWO Delete Bound default/data-sts-mysql-local-0 hwameistor-storage-lvm-hdd 85m ``` Verify if the `StorageClass` has the parameter `allowVolumeExpansion: true`. ```console $ kubectl get pvc data-sts-mysql-local-0 -o jsonpath='{.spec.storageClassName}' hwameistor-storage-lvm-hdd $ kubectl get sc hwameistor-storage-lvm-hdd -o jsonpath='{.allowVolumeExpansion}' true ``` ```console $ kubectl edit pvc data-sts-mysql-local-0 ... spec: resources: requests: storage: 2Gi ... ``` The larger the volume, the longer it takes to expand the volume. You may observe the process from `PVC` events. ```console $ kubectl describe pvc data-sts-mysql-local-0 Events: Type Reason Age From Message - - - Warning ExternalExpanding 34s volume_expand Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC. Warning VolumeResizeFailed 33s external-resizer lvm.hwameistor.io resize volume \"pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8\" by resizer \"lvm.hwameistor.io\" failed: rpc error: code = Unknown desc = volume expansion not completed yet Normal Resizing 32s (x2 over 33s) external-resizer lvm.hwameistor.io External resizer is resizing volume pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 Normal FileSystemResizeRequired 32s external-resizer lvm.hwameistor.io Require file system resize of volume on node Normal FileSystemResizeSuccessful 11s kubelet MountVolume.NodeExpandVolume succeeded for volume \"pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8\" k8s-worker-3 ``` ```console $ kubectl get pvc data-sts-mysql-local-0 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-sts-mysql-local-0 Bound pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 2Gi RWO hwameistor-storage-lvm-hdd 96m $ kubectl get pv pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b9fc8651-97b8-414c-8bcf-c8d2708c4ee8 2Gi RWO Delete Bound default/data-sts-mysql-local-0 hwameistor-storage-lvm-hdd 96m ```" } ]
{ "category": "Runtime", "file_name": "expand.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "Install the Kata Containers components with the following commands: ```bash $ sudo -E dnf -y install kata-containers ``` Decide which container manager to use and select the corresponding link that follows:" } ]
{ "category": "Runtime", "file_name": "fedora-installation-guide.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "Description -- The Gluster Test Framework, is a suite of scripts used for regression testing of Gluster. It runs well on RHEL, CentOS, and Fedora, and is run on a request basis against the patches submitted to Gluster . The Gluster Test Framework is part of the main Gluster code base, living under the \"tests\" subdirectory: https://github.com/gluster/glusterfs WARNING Running the Gluster Test Framework deletes /var/lib/glusterd/\\*. DO NOT run it on a server with any data. Preparation steps for Ubuntu 14.04 LTS -- \\# apt-get install dbench git libacl1-dev mock nfs-common nfs-kernel-server libtest-harness-perl libyajl-dev xfsprogs psmisc attr acl lvm2 rpm \\# apt-get install python-webob python-paste python-sphinx \\# apt-get install autoconf automake bison dos2unix flex libfuse-dev libaio-dev libibverbs-dev librdmacm-dev libtool libxml2-dev libxml2-utils liblvm2-dev make libssl-dev pkg-config libpython-dev python-eventlet python-netifaces python-simplejson python-pyxattr libreadline-dev tar 4) Install cmockery2 from github (https://github.com/lpabon/cmockery2) and compile and make install as in Readme 5) sudogroupaddmock sudouseradd-gmockmock 6) mkdir /var/run/gluster Note: redhat-rpm-config package is not found in ubuntu Preparation steps for CentOS 7 (only) Install EPEL: $sudoyuminstall-yhttp://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-1.noarch.rpm Install the CentOS 7.x dependencies: $sudoyuminstall-y--enablerepo=epelcmockery2-develdbenchgitlibacl-develmocknfs-utilsperl-Test-Harnessyajlxfsprogspsmisc $sudoyuminstall-y--enablerepo=epelpython-webob1.0python-paste-deploy1.5python-sphinx10redhat-rpm-config ==\\> Despite below missing packages it worked for me Nopackagepython-webob1.0available. Nopackagepython-paste-deploy1.5available. Nopackagepython-sphinx10available. $sudoyuminstall-y--enablerepo=epelautoconfautomakebisondos2unixflexfuse-devellibaio-devellibibverbs-devel\\ librdmacm-devellibtoollibxml2-devellvm2-develmakeopenssl-develpkgconfig\\ python-develpython-eventletpython-netifacespython-paste-deploy\\ python-simplejsonpython-sphinxpython-webobpyxattrreadline-develrpm-build\\ tar Create the mock user $sudouseradd-gmockmock Preparation steps for CentOS 6.3+ (only) Install EPEL: $sudoyuminstall-yhttp://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm Install the CentOS 6.x dependencies: $sudoyuminstall-y--enablerepo=epelcmockery2-develdbenchgitlibacl-develmocknfs-utilsperl-Test-Harnessyajlxfsprogs $sudoyuminstall-y--enablerepo=epelpython-webob1.0python-paste-deploy1.5python-sphinx10redhat-rpm-config $sudoyuminstall-y--enablerepo=epelautoconfautomakebisondos2unixflexfuse-devellibaio-devellibibverbs-devel\\ librdmacm-devellibtoollibxml2-devellvm2-develmakeopenssl-develpkgconfig\\ python-develpython-eventletpython-netifacespython-paste-deploy\\ python-simplejsonpython-sphinxpython-webobpyxattrreadline-develrpm-build\\ tar Create the mock user $sudouseradd-gmockmock Preparation steps for RHEL 6.3+ (only) -- Ensure you have the \"Scalable Filesystem Support\" group installed This provides the xfsprogs package, which is required by the test framework. 
Install EPEL: $sudoyuminstall-yhttp://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm Install the CentOS 6.x dependencies: $sudoyuminstall-y--enablerepo=epelcmockery2-develdbenchgitlibacl-develmocknfs-utilsyajlperl-Test-Harness $sudoyuminstall-y--enablerepo=rhel-6-server-optional-rpmspython-webob1.0python-paste-deploy1.5python-sphinx10redhat-rpm-config $sudoyuminstall-y--disablerepo=rhs--enablerepo=optional-rpmsautoconf\\ automakebisondos2unixflexfuse-devellibaio-devellibibverbs-devel\\ librdmacm-devellibtoollibxml2-devellvm2-develmakeopenssl-develpkgconfig\\ python-develpython-eventletpython-netifacespython-paste-deploy\\ python-simplejsonpython-sphinxpython-webobpyxattrreadline-develrpm-build\\ tar Create the mock user $sudouseradd-gmockmock Preparation steps for Fedora 16-19 (only) -- Still in development Install the Fedora dependencies: $sudoyuminstall-yattrcmockery2-develdbenchgitmocknfs-utilsperl-Test-Harnesspsmiscxfsprogs $sudoyuminstall-ypython-webob1.0python-paste-deploy1.5python-sphinx10redhat-rpm-config $sudoyuminstall-yautoconfautomakebisondos2unixflexfuse-devellibaio-devellibibverbs-devel\\ librdmacm-devellibtoollibxml2-devellvm2-develmakeopenssl-develpkgconfig\\ python-develpython-eventletpython-netifacespython-paste-deploy\\ python-simplejsonpython-sphinxpython-webobpyxattrreadline-develrpm-build\\ tar Create the mock user $sudouseradd-gmockmock Common steps Ensure DNS for your server is working The Gluster Test Framework fails miserably if the full domain name for your server doesn't resolve back to itself. If you don't have a working DNS infrastructure in place, adding an entry for your server to its /etc/hosts file will work. Install the version of Gluster you are testing Either install an existing set of rpms: $sudoyuminstall[yourglusterrpmshere] Or compile your own ones (fairly easy): https://docs.gluster.org/en/latest/Developer-guide/compiling-rpms/ Clone the GlusterFS git repository $gitclonegit://git.gluster.org/glusterfs $cdglusterfs Ensure mock can access the directory Some tests run as the user" }, { "data": "If the mock user can't access the tests subdirectory directory, these tests fail. (rpm.t is one such test) This is a known gotcha when the git repo is cloned to your home directory. Home directories generally don't have world readable permissions. You can fix this by adjusting your home directory permissions, or placing the git repo somewhere else (with access for the mock user). Running the tests -- The tests need to run as root, so they can mount volumes and manage gluster processes as needed. It's also best to run them directly as the root user, instead of through sudo. Strange things sporadicly happen (for me) when using the full test framework through sudo, that haven't happened (yet) when running directly as root. Hangs in dbench particularly, which are part of at least one test. The test framework takes just over 45 minutes to run in a VM here (4 cpu's assigned, 8GB ram, SSD storage). It may take significantly more or less time for you, depending on the hardware and software you're using. Showing debug information To display verbose information while the tests are running, set the DEBUG environment variable to 1 prior to running the tests. Log files Verbose output from the rpm.t test goes into \"rpmbuild-mock.log\", located in the same directory the test is run from. 
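For completeness, a hedged sketch of actually invoking the framework — the runner script name (`run-tests.sh` at the top of the clone) and the per-test `prove` invocation are assumptions based on the repository layout and the perl-Test-Harness dependency listed above, so adjust them to your checkout:

```bash
# as root, from the top of the glusterfs clone, with verbose output enabled
cd glusterfs
DEBUG=1 ./run-tests.sh

# or run a single test through the TAP harness (prove comes from perl-Test-Harness)
prove -v tests/basic/volume.t
```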
Reporting bugs -- If you hit a bug when running the test framework, please create a bug report for it on github so it gets fixed: https://github.com/gluster/glusterfs/issues/new Creating your own tests -- The test scripts are written in bash, with their filenames ending in .t instead of .sh. When creating your own test scripts, create them in an appropriate subdirectory under \"tests\" (eg \"bugs\" or \"features\") and use descriptive names like \"bug-XXXXXXX-checking-feature-X.t\" Also include the \"include.rc\" file, which defines the test types and host/brick/volume defaults: .$(dirname$0)/../include.rc There are 5 test types available at present, but feel free to add more if you need something that doesn't yet exist. The test types are explained in more detail below. Also essential is the \"cleanup\" command, which removes any existing Gluster configuration (without backing it up), and also kills any running gluster processes. There is a basic test template you can copy, named bug-000000.t in the bugs subdirectory: $cpbugs/bug-000000.tsomedir/descriptive-name.t Example of usage in basic/volume.t Example of usage in basic/rpm.t Example of usage in basic/volume.t Example of usage in basic/volume-status.t Defined in include.rc, but seems to be unused?" } ]
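As a rough illustration of the shape such a test script takes — the helper and variable names used here (`TEST`, `$CLI`, `$H0`, `$B0`, `$V0`) are assumptions based on the framework's conventions and the host/brick/volume defaults mentioned for include.rc:

```bash
#!/bin/bash
# descriptive-name.t -- minimal sketch of a regression test
. $(dirname $0)/../include.rc

cleanup;

# TEST asserts that the wrapped command succeeds
TEST glusterd
TEST $CLI volume create $V0 $H0:$B0/${V0}1
TEST $CLI volume start $V0

cleanup;
```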
{ "category": "Runtime", "file_name": "Using-Gluster-Test-Framework.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "StratoVirt provides two kinds of machine, which are microvm and standard VM. The boot process of these two machines are as follows. ```shell arch=`uname -m` if [ ${arch} = \"x86_64\" ]; then con=ttyS0 machine=\"q35\" elif [ ${arch} = \"aarch64\" ]; then con=ttyAMA0 machine=\"virt\" else echo \"${arch} architecture not supported.\" exit 1 fi ``` The microvm machine type of StratoVirt supports PE or bzImage format kernel images on x86_64 platforms, and supports PE format kernel images on aarch64 platforms. Kernel image can be built with following steps: Firstly, get the openEuler kernel source code with: ```shell $ git clone -b kernel-5.10 --depth=1 https://gitee.com/openeuler/kernel $ cd kernel ``` If you use our openEuler 21.03, you can also acquire kernel source with yum: ```shell $ sudo yum install kernel-source $ cd /usr/src/linux-5.10.0-0.0.0.7.oe1.$(uname -m)/ ``` Configure your linux kernel. You can use and copy it to `kernel` path as `.config`. You can also modify config options by: ```shell $ make menuconfig ``` Build and transform kernel image to PE format. ```shell $ make -j$(nproc) vmlinux && objcopy -O binary vmlinux vmlinux.bin ``` If you want to compile bzImage format kernel in x86_64. ```shell $ make -j$(nproc) bzImage ``` Rootfs image is a file system image. An EXT4-format image with `/sbin/init` can be mounted at boot time in StratoVirt. You can check . ```shell /usr/bin/stratovirt \\ -machine microvm \\ -kernel /path/to/kernel \\ -smp 1 \\ -m 1024m \\ -append \"console=${con} pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=/path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-device,drive=rootfs,id=rootfs \\ -qmp unix:/path/to/socket,server,nowait \\ -serial stdio ``` Standard VMs can boot in two modes. The first mode is kernel + rootfs.The other is to use the raw image that has been preinstalled with the guest OS. The preceding two boot modes both require standard boot firmware. So we first describe how to obtain the standard boot firmware. Standard boot needs firmware. Stratovirt only supports booting from UEFI (Unified Extensible Firmware Interface) on x86_64 and aarch64 platform. EDK II is an open-source project that implements UEFI specification. We use EDK II as the firmware to boot VM, and therefore we have to get the corresponding EDK II binary. There are two ways to get the EDK II binary, either by installing directly by yum or compiling from source code. The specific steps are as follows. Notes that EDK II binary contains two files, one for executable code storage and the other for boot data storage. On x86_64 platform, run ```shell $ sudo yum install -y edk2-ovmf ``` On aarch64 platform, run ```shell $ sudo yum install -y edk2-aarch64 ``` After installing edk2, on x8664 platform, `OVMFCODE.fd` and `OVMF_VARS.fd` are located in `/usr/share/edk2/ovmf` directory. On aarch64 platform, `QEMU_EFI-pflash.raw` and `vars-template-pflash.raw` are located in `/usr/share/edk2/aarch64` directory. 
```shell yum install git nasm acpica-tools -y git clone https://github.com/tianocore/edk2.git cd edk2 git checkout edk2-stable202102 git submodule update --init arch=`uname -m` if [ ${arch} = \"x86_64\" ]; then echo \"ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc\" >> Conf/target.txt echo \"TARGET_ARCH = X64\" >> Conf/target.txt elif [ ${arch} = \"aarch64\" ]; then echo \"ACTIVE_PLATFORM = ArmVirtPkg/ArmVirtQemu.dsc\" >> Conf/target.txt echo \"TARGET_ARCH = AARCH64\" >> Conf/target.txt else echo \"${arch} architecture not supported.\" exit 1 fi echo \"TOOLCHAINTAG = GCC5\" >> Conf/target.txt echo \"BUILDRULECONF = Conf/build_rule.txt\" >> Conf/target.txt echo \"TARGET = RELEASE\" >> Conf/target.txt make -C BaseTools . ./edksetup.sh build if [ ${arch} = \"x86_64\" ]; then cp ./Build/OvmfX64/RELEASEGCC5/FV/OVMFCODE.fd /home/ cp ./Build/OvmfX64/RELEASEGCC5/FV/OVMFVARS.fd /home/ elif [ ${arch} = \"aarch64\" ]; then dd if=/dev/zero of=/home/STRATOVIRT_EFI.raw bs=1M count=64 dd of=/home/STRATOVIRTEFI.raw if=./Build/ArmVirtQemu-AARCH64/RELEASEGCC5/FV/QEMU_EFI.fd conv=notrunc dd if=/dev/zero" }, { "data": "bs=1M count=64 fi ``` After compiling edk2, on x8664 platform, `OVMFCODE.fd` and `OVMF_VARS.fd` locate underneath `/home` directory. On aarch64 platform, `STRATOVIRT_EFI.raw` and `STRATOVIRT_VAR.raw` locates underneath `/home` directory. The standard_ machine in StratoVirt supports bzImage format kernel image on x86_64 platform; and supports PE format kernel image on aarch64 platform. Kernel image can be built with: Firstly, get the openEuler kernel source code with: ```shell $ git clone -b kernel-5.10 --depth=1 https://gitee.com/openeuler/kernel $ cd kernel ``` Configure your linux kernel. You should use [our recommended standard_vm config] (./kernelconfig/standardvm) and copy it to `kernel` path as `.config`. Build kernel image ```shell $ make -j$(nproc) vmlinux && objcopy -O binary vmlinux vmlinux.bin $ make -j$(nproc) bzImage ``` In addition to manually building the kernel image, you can also download the from the openEuler official website. The building of rootfs for standard VM is exactly the same with microvm. You can check for more detailed information. You can download the installed from the OpenEuler official website. After downloading the file, run the qemu-img command to convert the file. Next, take the qcow2 image of openeuler-21.03 as an example to give the specific commands: ```shell $ xz -d openEuler-21.03-x86_64.qcow2.xz $ qemu-img convert -f qcow2 -O raw openEuler-21.03-x8664.qcow2 openEuler-21.03-x8664.raw ``` Now the available raw image is obtained. It can directly boot from kernel. In this mode, UEFI and ACPI will not be used. And VM will skip the UEFI, directly start the kernel to reduce boot up time. Run the following commands to direct boot VM from kernel: ```shell /usr/bin/stratovirt \\ -machine virt \\ -kernel /path/to/kernel \\ -smp 1 \\ -m 2G \\ -append \"console=${con} reboot=k panic=1 root=/dev/vda rw\" \\ -drive file=/path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-pci,drive=rootfs,id=blk1,bus=pcie.0,addr=0x2 \\ -qmp unix:/path/to/socket,server,nowait \\ -serial stdio ``` Note: This mode currently only supports arm architecture. Note that standard need two PFlash devices which will use two firmware files from EDK II binary. If you don't need to store boot information, data storage file can be omitted whose unit is 1. But code storage file with unit 0 is necessary. 
Run the following commands to boot with the kernel and rootfs: ```shell /usr/bin/stratovirt \\ -machine ${machine} \\ -kernel /path/to/kernel \\ -smp 1 \\ -m 2G \\ -append \"console=${con} reboot=k panic=1 root=/dev/vda rw\" \\ -drive file=/path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-pci,drive=rootfs,id=blk1,bus=pcie.0,addr=0x2 \\ -drive file=/path/to/OVMF_CODE.fd,if=pflash,unit=0,readonly=true \\ -drive file=/path/to/OVMF_VARS.fd,if=pflash,unit=1 \\ -qmp unix:/path/to/socket,server,nowait \\ -serial stdio ``` The command for booting with the raw image is as follows: ```shell /usr/bin/stratovirt \\ -machine ${machine} \\ -smp 1 \\ -m 2G \\ -drive file=/path/to/rawimage,id=rawimage,readonly=off,direct=off \\ -device virtio-blk-pci,drive=raw_image,id=blk1,bus=pcie.0,addr=0x2 \\ -drive file=/path/to/OVMF_CODE.fd,if=pflash,unit=0,readonly=true \\ -drive file=/path/to/OVMF_VARS.fd,if=pflash,unit=1 \\ -qmp unix:/path/to/socket,server,nowait \\ -serial stdio ``` Below is a simple way to make a EXT4 rootfs image: Prepare a properly-sized file(e.g. 1G): ```shell $ dd if=/dev/zero of=./rootfs.ext4 bs=1G count=20 ``` Create an empty EXT4 file system on this file: ```shell $ mkfs.ext4 ./rootfs.ext4 ``` Mount the file image: ```shell $ mkdir -p /mnt/rootfs $ sudo mount ./rootfs.ext4 /mnt/rootfs && cd /mnt/rootfs ``` Get the : ```shell $ arch=`uname -m` $ wget http://dl-cdn.alpinelinux.org/alpine/v3.13/releases/$arch/alpine-minirootfs-3.13.0-$arch.tar.gz -O alpine-minirootfs.tar.gz $ tar -zxvf alpine-minirootfs.tar.gz $ rm alpine-minirootfs.tar.gz ``` Make a simple `/sbin/init` for EXT4 file image. ```shell $ rm sbin/init && touch sbin/init && cat > sbin/init <<EOF mount -t devtmpfs dev /dev mount -t proc proc /proc mount -t sysfs sysfs /sys ip link set up dev lo exec /sbin/getty -n -l /bin/sh 115200 /dev/ttyS0 poweroff -f EOF $ sudo chmod +x sbin/init ``` Notice: alpine is an example. You can use any open rootfs filesystem with init/systemd as rootfs image. Unmount rootfs image: ```shell $ cd ~ && umount /mnt/rootfs ```" } ]
{ "category": "Runtime", "file_name": "boot.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "This document describes how to use `containerd-shim-runsc-v1` with the containerd runtime handler support on `containerd`. This is a similar setup as [GKE Sandbox], other than the . Note: If you are using Kubernetes and set up your cluster using `kubeadm` you may run into issues. See the for details. runsc and containerd-shim-runsc-v1: See the . containerd: See the for information on how to install containerd. Minimal version supported: 1.3.9 or 1.4.3. Update `/etc/containerd/config.toml`. Make sure `containerd-shim-runsc-v1` is in `${PATH}` or in the same directory as `containerd` binary. ```shell cat <<EOF | sudo tee /etc/containerd/config.toml version = 2 [plugins.\"io.containerd.runtime.v1.linux\"] shim_debug = true [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runsc] runtime_type = \"io.containerd.runsc.v1\" EOF ``` Restart `containerd`: ```shell sudo systemctl restart containerd ``` You can run containers in gVisor via containerd's CRI. Download and install the `crictl` binary: ```shell { wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.13.0/crictl-v1.13.0-linux-amd64.tar.gz tar xf crictl-v1.13.0-linux-amd64.tar.gz sudo mv crictl /usr/local/bin } ``` Write the `crictl` configuration file: ```shell cat <<EOF | sudo tee /etc/crictl.yaml runtime-endpoint: unix:///run/containerd/containerd.sock EOF ``` Pull the nginx image: ```shell sudo crictl pull nginx ``` Create the sandbox creation request: ```shell cat <<EOF | tee sandbox.json { \"metadata\": { \"name\": \"nginx-sandbox\", \"namespace\": \"default\", \"attempt\": 1, \"uid\": \"hdishd83djaidwnduwk28bcsb\" }, \"linux\": { }, \"log_directory\": \"/tmp\" } EOF ``` Create the pod in gVisor: ```shell SANDBOX_ID=$(sudo crictl runp --runtime runsc sandbox.json) ``` Create the nginx container creation request: ```shell cat <<EOF | tee container.json { \"metadata\": { \"name\": \"nginx\" }, \"image\":{ \"image\": \"nginx\" }, \"log_path\":\"nginx.0.log\", \"linux\": { } } EOF ``` Create the nginx container: ```shell CONTAINERID=$(sudo crictl create ${SANDBOXID} container.json sandbox.json) ``` Start the nginx container: ```shell sudo crictl start ${CONTAINER_ID} ``` Inspect the created pod: ```shell sudo crictl inspectp ${SANDBOX_ID} ``` Inspect the nginx container: ```shell sudo crictl inspect ${CONTAINER_ID} ``` Verify that nginx is running in gVisor: ```shell sudo crictl exec ${CONTAINER_ID} dmesg | grep -i gvisor ``` Install the RuntimeClass for gVisor: ```shell cat <<EOF | kubectl apply -f - apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: gvisor handler: runsc EOF ``` Create a Pod with the gVisor RuntimeClass: ```shell cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx-gvisor spec: runtimeClassName: gvisor containers: name: nginx image: nginx EOF ``` Verify that the Pod is running: ```shell kubectl get pod nginx-gvisor -o wide ``` This setup is already done for you on [GKE Sandbox]. It is an easy way to get started with gVisor. Before taking this deployment to production, review the ." } ]
{ "category": "Runtime", "file_name": "quick_start.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Targeted for v0.9 Provisioning OSDs today is done directly by Rook. This needs to be simplified and improved by building on the functionality provided by the `ceph-volume` tool that is included in the ceph image. As Rook is implemented today, the provisioning has a lot of complexity around: Partitioning of devices for bluestore Partitioning and configuration of a `metadata` device where the WAL and DB are placed on a different device from the data Support for both directories and devices Support for bluestore and filestore Since this is mostly handled by `ceph-volume` now, Rook should replace its own provisioning code and rely on `ceph-volume`. `ceph-volume` is a CLI tool included in the `ceph/ceph` image that will be used to configure and run Ceph OSDs. `ceph-volume` will replace the OSD provisioning mentioned previously in the legacy design. At a high level this flow remains unchanged from the flow in the . No new jobs or pods need to be launched from what we have today. The sequence of events in the OSD provisioning will be the following. The cluster CRD specifies what nodes/devices to configure with OSDs The operator starts a provisioning job on each node where OSDs are to be configured The provisioning job: Detects what devices should be configured Calls `ceph-volume lvm batch` to prepare the OSDs on the node. A single call is made with all of the devices unless more specific settings are included for LVM and partitions. Calls `ceph-volume lvm list` to retrieve the results of the OSD configuration. Store the results in a configmap for the operator to take the next step. The operator starts a deployment for each OSD that was provisioned. `rook` is the entrypoint for the container. The configmap with the osd configuration is loaded with info such as ID, FSID, bluestore/filestore, etc `ceph-volume lvm activate` is called to activate the osd, which mounts the config directory such as `/var/lib/ceph/osd-0`, using a tempfs mount. The OSD options such as `--bluestore`, `--filestore`, `OSDID`, and `OSDFSID` are passed to the command as necessary. The OSD daemon is started with `ceph-osd` When `ceph-osd` exits, `rook` will exit and the pod will be restarted by K8s. `ceph-volume` enables rook to expose several new features: Multiple OSDs for a single device, which is ideal for NVME devices. Configure OSDs on LVM, either consuming the existing LVM or automatically configuring LVM on the raw devices. Encrypt the OSD data with dmcrypt The Cluster CRD will be updated with the following settings to enable these features. All of these settings can be specified globally if under the `storage` element as in this example. The `config` element can also be specified under individual nodes or" }, { "data": "```yaml storage: config: encryptedDevice: \"true\" osdsPerDevice: 1 crushDeviceClass: ssd ``` If more flexibility is needed that consuming raw devices, LVM or partition names can also be used for specific nodes. Properties are shown for both bluestore and filestore OSDs. ```yaml storage: nodes: name: node2 logicalDevices: db: db_lv1 wal: wal_lv1 data: data_lv1 dbVolumeGroup: db_vg walVolumeGroup: wal_vg dataVolumeGroup: data_vg volume: my_lv1 volumeGroup: my_vg data: my_lv2 dataVolumeGroup: my_vg data: data_lv3 dataVolumeGroup: data_vg journal: journal_lv3 journalVolumeGroup: journal_vg devices: name: sdd name: sdf1 name: nvme01 config: osdsPerDevice: 5 ``` The above options for LVM and partitions look very tedious. Questions: Is it useful at this level of complexity? 
Is there a simpler way users would configure LVM? Do users need all this flexibility? This looks like too many options to maintain. Rook will need to continue supporting clusters that are running different types of OSDs. All of the v0.8 OSDs must continue running after Rook is upgraded to v0.9 and beyond, whether they were filestore or bluestore running on directories or devices. Since `ceph-volume` only supports devices that have not been previously configured by Rook: Rook will continue to provision OSDs directly when a `directory` is specified in the CRD Support for creating new OSDs on directories will be deprecated. While directories might still be used for test scenarios, it's not a mainline scenario. With the legacy design, directories were commonly used on LVM, but LVM is now directly supported. In v0.9, support for directories will remain, but documentation will encourage users to provision devices. For existing devices configured by Rook, `ceph-volume` will be skipped and the OSDs will be started as previously New devices will be provisioned with `ceph-volume` Rook relies on very recent developments in `ceph-volume` that are not yet available in luminous or mimic releases. For example, rook needs to run the command: ``` ceph-volume lvm batch --prepare <devices> ``` The `batch` command and the flag `--prepare` have been added recently. While the latest `ceph-volume` changes will soon be merged to luminous and mimic, Rook needs to know if it is running an image that contains the required functionality. To detect if `ceph-volume` supports the required options, Rook will run the command with all the flags that are required. To avoid side effects when testing for the version of `ceph-volume`, no devices are passed to the `batch` command. ``` ceph-volume lvm batch --prepare ``` If the flags are supported, `ceph-volume` has an exit code of `0`. If the flags are not supported, `ceph-volume` has an exit code of `2`. Since Rook orchestrates different versions of Ceph, Rook (at least initially) will need to support running images that may not have the features necessary from `ceph-volume`. When a supported version of `ceph-volume` is not detected, Rook will execute the legacy code to provision devices." } ]
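A minimal shell sketch of the detection logic described above (Rook implements this in its operator code; this is only an illustration of the probe and the exit codes it relies on):

```
# probe the flags without passing any devices, so nothing is provisioned
ceph-volume lvm batch --prepare > /dev/null 2>&1
rc=$?

if [ "$rc" -eq 0 ]; then
  echo "'batch --prepare' is supported: use the ceph-volume provisioning path"
elif [ "$rc" -eq 2 ]; then
  echo "'batch --prepare' is not supported: fall back to legacy provisioning"
fi
```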
{ "category": "Runtime", "file_name": "ceph-volume-provisioning.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Cobra supports native Fish completions generated from the root `cobra.Command`. You can use the `command.GenFishCompletion()` or `command.GenFishCompletionFile()` functions. You must provide these functions with a parameter indicating if the completions should be annotated with a description; Cobra will provide the description automatically based on usage information. You can choose to make this option configurable by your users. Custom completions implemented using the `ValidArgsFunction` and `RegisterFlagCompletionFunc()` are supported automatically but the ones implemented in Bash scripting are not." } ]
{ "category": "Runtime", "file_name": "fish_completions.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Amazon AWS S3 This guide describes the instructions to configure as Alluxio's under storage system. Amazon AWS S3, or Amazon Simple Storage Service, is an object storage service offering industry-leading scalability, data availability, security, and performance. For more information about Amazon AWS S3, please read its {:target=\"_blank\"}. If you haven't already, please see before you get started. In preparation for using Amazon AWS S3 with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in that container, either by creating a new directory or using an existing one. </td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3ACCESSKEY_ID>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3SECRETKEY>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> </table> To use Amazon AWS S3 as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify an existing S3 bucket and directory as the underfs address by modifying `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=s3://<S3BUCKET>/<S3DIRECTORY> ``` Note that if you want to mount the whole s3 bucket, add a trailing slash after the bucket name (e.g. `s3://S3_BUCKET/`). Specify the AWS credentials for S3 access by setting `s3a.accessKeyId` and `s3a.secretKey` in `alluxio-site.properties`. ```properties s3a.accessKeyId=<S3ACCESSKEY_ID> s3a.secretKey=<S3SECRETKEY> ``` For other methods of setting AWS credentials, see the credentials section in . Once you have configured Alluxio to Amazon AWS S3, try to see that everything works. Configure S3 region when accessing S3 buckets to improve performance. Otherwise, global S3 bucket access will be enabled which introduces extra requests. S3 region can be set in `conf/alluxio-site.properties` ```properties alluxio.underfs.s3.region=us-west-1 ``` You can specify credentials in different ways, from highest to lowest priority: `s3a.accessKeyId` and `s3a.secretKey` specified as mount options `s3a.accessKeyId` and `s3a.secretKey` specified as Java system properties `s3a.accessKeyId` and `s3a.secretKey` in `alluxio-site.properties` Environment Variables `AWSACCESSKEYID` or `AWSACCESS_KEY` (either is acceptable) and `AWSSECRETACCESSKEY` or `AWSSECRET_KEY` (either is acceptable) on the Alluxio servers Profile file containing credentials at `~/.aws/credentials` AWS Instance profile credentials, if you are using an EC2 instance When using an AWS Instance profile as the credentials' provider: Create an with access to the mounted bucket Create an as a container for the defined IAM Role Launch an EC2 instance using the created profile Note that the IAM role will need access to both the files in the bucket as well as the bucket itself in order to determine the bucket's owner. 
Automatically assigning an owner to the bucket can be avoided by setting the property `alluxio.underfs.s3.inherit.acl=false`. See for more details. You may encrypt your data stored in S3. The encryption is only valid for data at rest in S3 and will be transferred in decrypted form when read by clients. Note, enabling this will also enable HTTPS to comply with requirements for reading/writing objects. Enable this feature by configuring `conf/alluxio-site.properties`: ```properties" }, { "data": "``` By default, a request directed at the bucket named \"mybucket\" will be sent to the host name \"mybucket.s3.amazonaws.com\". You can enable DNS-Buckets to use path style data access, for example: \"http://s3.amazonaws.com/mybucket\" by setting the following configuration: ```properties alluxio.underfs.s3.disable.dns.buckets=true ``` To communicate with S3 through a proxy, modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.s3.proxy.host=<PROXY_HOST> alluxio.underfs.s3.proxy.port=<PROXY_PORT> ``` `<PROXYHOST>` and `<PROXYPORT>` should be replaced by the host and port of your proxy. To use an S3 service provider other than \"s3.amazonaws.com\", modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.s3.endpoint=<S3_ENDPOINT> alluxio.underfs.s3.endpoint.region=<S3ENDPOINTREGION> ``` Replace `<S3_ENDPOINT>` with the hostname and port of your S3 service, e.g., `http://localhost:9000`. Only use this parameter if you are using a provider other than `s3.amazonaws.com`. Both the endpoint and region value need to be updated to use non-home region. ```properties alluxio.underfs.s3.endpoint=<S3_ENDPOINT> alluxio.underfs.s3.endpoint.region=<S3ENDPOINTREGION> ``` All OCI object storage regions need to use `PathStyleAccess` ```properties alluxio.underfs.s3.disable.dns.buckets=true alluxio.underfs.s3.inherit.acl=false ``` Some S3 service providers only support v2 signatures. For these S3 providers, you can enforce using the v2 signatures by setting the `alluxio.underfs.s3.signer.algorithm` to `S3SignerType`. S3 is an object store and because of this feature, the whole file is sent from client to worker, stored in the local disk temporary directory, and uploaded in the `close()` method by default. To enable S3 streaming upload, you need to modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.s3.streaming.upload.enabled=true ``` The default upload process is safer but has the following issues: Slow upload time. The file has to be sent to Alluxio worker first and then Alluxio worker is responsible for uploading the file to S3. The two processes are sequential. The temporary directory must have the capacity to store the whole file. Slow `close()`. The execution time of `close()` method is proportional to the file size and inversely proportional to the bandwidth. That is O(FILE_SIZE/BANDWIDTH). Slow `close()` is unexpected and has already been a bottleneck in the Alluxio Fuse integration. Alluxio Fuse method which calls `close()` is asynchronous and thus if we write a big file through Alluxio Fuse to S3, the Fuse write operation will be returned much earlier than the file has been written to S3. The S3 streaming upload feature addresses the above issues and is based on the . The S3 streaming upload has the following advantages: Shorter upload time. Alluxio worker uploads buffered data while receiving new data. The total upload time will be at least as fast as the default method. Smaller capacity requirement. 
Our data is buffered and uploaded according to partitions (`alluxio.underfs.s3.streaming.upload.partition.size` which is 64MB by default). When a partition is successfully uploaded, this partition will be deleted. Faster `close()`. We begin uploading data when data buffered reaches the partition size instead of uploading the whole file in `close()`. If a S3 streaming upload is interrupted, there may be intermediate partitions uploaded to S3 and S3 will charge for those data. To reduce the charges, users can modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.cleanup.enabled=true ``` Intermediate multipart uploads in all non-readonly S3 mount points older than the clean age (configured by `alluxio.underfs.s3.intermediate.upload.clean.age`) will be cleaned when a leading master starts or a cleanup interval (configured by `alluxio.underfs.cleanup.interval`) is reached. The default upload method uploads one file completely from start to end in one" }, { "data": "We use multipart-upload method to upload one file by multiple parts, every part will be uploaded in one thread. It won't generate any temporary files while uploading. It will consume more memory but faster than streaming upload mode. To enable S3 multipart upload, you need to modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.s3.multipart.upload.enabled=true ``` There are other parameters you can specify in `conf/alluxio-site.properties` to make the process faster and better. ```properties alluxio.underfs.object.store.multipart.upload.timeout ``` ```properties alluxio.underfs.s3.multipart.upload.partition.size ``` When using Alluxio to access S3 with a great number of clients per Alluxio server, these parameters can be tuned so that Alluxio uses a configuration optimized for the S3 backend. If the S3 connection is slow, a larger timeout is useful: ```properties alluxio.underfs.s3.socket.timeout=500sec alluxio.underfs.s3.request.timeout=5min ``` If we expect a great number of concurrent metadata operations: ```properties alluxio.underfs.s3.admin.threads.max=80 ``` If the total number of metadata + data operations is huge: ```properties alluxio.underfs.s3.threads.max=160 ``` For a worker, the number of concurrent writes to S3. For a master, the number of threads to concurrently rename files within a directory. ```properties alluxio.underfs.s3.upload.threads.max=80 ``` Thread-pool size to submit delete and rename operations to S3 on master: ```properties alluxio.underfs.object.store.service.threads=80 ``` is very different from the traditional POSIX permission model. For instance, S3 ACL does not support groups or directory-level settings. Alluxio makes the best effort to inherit permission information including file owner, group and permission mode from S3 ACL information. The S3 credentials set in Alluxio configuration corresponds to an AWS user. If this user does not have the required permissions to access an S3 bucket or object, a 403 permission denied error will be returned. If you see a 403 error in Alluxio server log when accessing an S3 service, you should double-check You are using the correct AWS credentials. See . Your AWS user has permissions to access the buckets and objects mounted to Alluxio. Read more for 403 error. Alluxio file system sets the file owner based on the AWS account configured in Alluxio to connect to S3. Since there is no group in S3 ACL, the owner is reused as the group. 
By default, Alluxio extracts the display name of this AWS account as the file owner. In case this display name is not available, this AWS user's will be used. This canonical user ID is typically a long string (like `79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be`), thus often inconvenient to read and use in practice. Optionally, the property `alluxio.underfs.s3.owner.id.to.username.mapping` can be used to specify a preset mapping from canonical user IDs to Alluxio usernames, in the format \"id1=user1;id2=user2\". For example, edit `alluxio-site.properties` to include ```properties alluxio.underfs.s3.owner.id.to.username.mapping=\\ 79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be=john ``` This configuration helps Alluxio recognize all objects owned by this AWS account as owned by the user `john` in Alluxio namespace. To find out the AWS S3 canonical ID of your account, check the console `https://console.aws.amazon.com/iam/home?#/security_credentials`, expand the \"Account Identifiers\" tab and refer to \"Canonical User ID\". `chown`, `chgrp`, and `chmod` of Alluxio directories and files do NOT propagate to the underlying S3 buckets nor objects. If issues are encountered when running against your S3 backend, enable additional logging to track HTTP traffic. Modify `conf/log4j.properties` to add the following properties: ```properties log4j.logger.com.amazonaws=WARN log4j.logger.com.amazonaws.request=DEBUG log4j.logger.org.apache.http.wire=DEBUG ``` See for more details." } ]
{ "category": "Runtime", "file_name": "S3.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Supported Kubernetes Versions\" layout: docs In general, Velero works on Kubernetes version 1.7 or later (when Custom Resource Definitions were introduced). Restic support requires Kubernetes version 1.10 or later, or an earlier version with the mount propagation feature enabled. See . Velero supports a variety of storage providers for different backup and snapshot operations. Velero has a plugin system which allows anyone to add compatibility for additional backup and volume storage platforms without modifying the Velero codebase. | Provider | Owner | Contact | ||-|| | | Velero Team | , | | | Velero Team | , | | | Velero Team | , | Velero uses to connect to the S3 API. Some third-party storage providers also support the S3 API, and users have reported the following providers work with Velero: Note that these providers are not regularly tested by the Velero team. * Ceph RADOS v12.2.7 * Quobyte * Some storage providers, like Quobyte, may need a different . | Provider | Owner | Contact | |-|--|| | | Velero Team | , | | | Velero Team | , | | | Velero Team | , | | | Velero Team | , | | | Portworx | , | | | StackPointCloud | | | | OpenEBS | , | | | AlibabaCloud | | | | HPE | , | To write a plugin for a new backup or volume storage system, take a look at the . After you publish your plugin, open a PR that adds your plugin to the appropriate list." } ]
{ "category": "Runtime", "file_name": "support-matrix.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "A Longhorn volume can be backed by replicas on some nodes in the cluster and accessed by a pod running on any node in the cluster. In the current implementation of Longhorn, the pod which uses Longhorn volume could be on a node that doesn't contain any replica of the volume. In some cases, it is desired to have a local replica on the same node as the consuming pod. In this document, we refer to the property of having a local replica as having `data locality` This enhancement gives the users option to have a local replica on the same node as the engine which means on the same node as the consuming pod. https://github.com/longhorn/longhorn/issues/1045 Provide users an option to try to migrate a replica to the same node as the consuming pod. Another approach to achieve data locality is trying to influence Kubernetes scheduling decision so that pods get scheduled onto the nodes which contain volume's replicas. However, this is not a goal in this LEP. See https://github.com/longhorn/longhorn/issues/1045 for more discussion about this approach. We give user 2 options for data locality setting: `disabled` and `best-effort`. In `disabled` mode, there may be a local replica of the volume on the same node as the consuming pod or there may not be. Longhorn doesn't do anything. In `best-effort` mode, if a volume is attached to a node that has no replica, the Volume Controller will start rebuilding the replica on the node after the volume is attached. Once the rebuilding process is done, it will remove one of the other replicas to keep the replica count as specified. Sometimes, having `data locality` is critical. For example, when the network is bad or the node is temporarily disconnected, having local replica will keep the consuming pod running. Another case is that sometimes the application workload can do replication itself (e.g. database) and it wants to have a volume of 1 replica for each pod. Without the `data locality` feature, multiple replicas may end up on the same node which destroys the replication intention of the workload. See more in In the current implementation of Longhorn, the users cannot ensure that pod will have a local replica. After the enhancement implemented, users can have options to choose among `disabled` (default setting) or `best-effort` A user has three hyper-converged nodes and default settings with: `default-replica-count: 2`. He wants to ensure a pod always runs with at least one local replica would reduce the amount of network traffic needed to keep the data in sync. There does not appear to be an obvious way for him to schedule the pod using affinities. A user runs a database application that can do replication itself. The database app creates multiple pods and each pod uses a Longhorn volume with `replica-count = 1`. The database application knows how to schedule pods into different nodes so that they achieve" }, { "data": "The problem is that replicas of multiple volumes could land on the same node which destroys the HA capability. With the `data locality` feature we can ensure that replicas are on the same nodes with the consuming pods and therefore they are on different nodes. Users create a new volume using Longhorn UI with `dataLocality` set to `best-effort`. If users attach the volume a node which doesn't contain any replica, they will see that Longhorn migrate a local replica to the node. Users create a storageclass with dataLocality: best-effort set Users launch a statefulset with the storageclass. 
Users will find that there is always a replica on the node where the pod resides on Users update `dataLocality` to `disable`, detach the volume, and attach it to a node which doesn't have any replica Users will see that Longhorn does not create a local replica on the new node. There are 2 API changes: When creating a new volume, the body of the request sent to `/v1/volumes` has a new field `dataLocality` set to either `disabled` or `best-effort`. Implement a new API for users to update `dataLocality` setting for individual volume. The new API could be `/v1/volumes/<VOLUMENAME>?action=updateDataLocality`. This API expects the request's body to have the form `{dataLocality:<DATALOCALITY_MODE>}`. There are 2 modes for `dataLocality`: `disabled` is the default mode. There may be a local replica of the volume on the same node as the consuming pod or there may not be. Longhorn doesn't do anything. `best-effort` mode instructs Longhorn to try to keep a local replica on the same node as the consuming pod. If Longhorn cannot keep the local replica (due to not having enough disk space, incompatible disk tags, etc...), Longhorn does not stop the volume. There are 3 settings the user can change for `data locality`: Global default setting inside Longhorn UI settings. The global setting should only function as a default value, like replica count. It doesn't change any existing volume's setting specify `dataLocality` mode for individual volume upon creation using UI specify `dataLocality` mode as a parameter on Storage Class. Implementation steps: Add a global setting `DefaultDataLocality` Add the new field `DataLocality` to `VolumeSpec` Modify the volume creation API so that it extracts, verifies, and sets the `dataLocality` mode for the new volume. If the volume creation request doesn't have `dataLocality` field inside its body, we use the `DefaultDataLocality` for the new volume. Modify the `CreateVolume` function inside the CSI package so that it extracts, verifies, and sets the `dataLocality` mode for the new volume. This makes sure that Kubernetes can use CSI to create Longhorn volume with a specified `datLocality` mode. Inside `volume controller`'s sync logic, we add a new function `ReconcileLocalReplica`. When a volume enters the `volume controller`'s sync logic, function `ReconcileLocalReplica` checks the `dataLocality` mode of the volume. If the `dataLocality` is `disabled`, it will do nothing and return. If the `dataLocality` is `best-effort`, `ReconcileLocalReplica` checks whether there is a local replica on the same node as the" }, { "data": "If there is no local replica, we create an in-memory replica struct. We don't create a replica in DS using createReplica() directly because we may need to delete the new replica if it fails to ScheduleReplicaToNode. This prevents UI from repeatedly show creating/deleting the new replica. Then we try to schedule the replica struct onto the consuming pod's node. If the scheduling fails, we don't do anything. The replica struct will be collected by Go's garbage collector. If the scheduling success, we save the replica struct to the data store. This will trigger replica rebuilding on the consuming pod's node. If there already exists a local replica on the consuming pod's node, we check to see if there are more healthy replica than specified on the volume's spec. If there are more healthy replicas than specified on the volume's spec, we remove a replica on the other nodes. 
We prefer to delete replicas on the same disk, then replicas on the same node, then replicas on the same zone. UI modification: On volume creation, add an input field for `dataLocality` On volume detail page: On the right volume info panel, add a <div> to display `selectedVolume.dataLocality` On the right volume panel, in the Health row, add an icon for data locality status. Specifically, if `dataLocality=best-effort` but there is not a local replica then display a warning icon. Similar to the replica node redundancy warning In the volume's actions dropdown, add a new action to update `dataLocality` In Rancher UI, add a parameter `dataLocality` when create storage class using Longhorn provisioner. Create a cluster of 9 worker nodes and install Longhorn. Having more nodes helps us to be more confident because the chance of randomly scheduling a replica onto the same node as the engine is small. Create volume `testvol` with `Number of Replicas = 2` and `dataLocality` is `best-effort` Attach `testvol` to a node that doesn't contain any replica. Verify that Longhorn schedules a local replica to the same node as the consuming pod. After finishing rebuilding the local replica. Longhorn removes a replica on other nodes to keep the number of replicas is 2. Create another volume, `testvol2` with `Number of Replicas = 2` and `dataLocality` is `disabled` Attach `testvol2` to a node that doesn't contain any replica. Verify that Longhorn doesn't move replica Leave the `DefaultDataLocality` setting as `disabled` in Longhorn UI. Create another volume, `testvol3` with `Number of Replicas = 2` and `dataLocality` is empty Attach `testvol3` to a node that doesn't contain any replica. Verify that the `dataLocality` of `testvol3` is `disabled` and that Longhorn doesn't move replica. Set the `DefaultDataLocality` setting to `best-effort` in Longhorn UI. Create another volume, `testvol4` with `Number of Replicas = 2` and `dataLocality` is empty Attach `testvol4` to a node that doesn't contain any replica. Verify that the `dataLocality` of `testvol4` is `best-effort`. Verify that Longhorn schedules a local replica to the same node as the consuming" }, { "data": "After finishing rebuilding the local replica. Longhorn removes a replica on other nodes to keep the number of replicas is 2. Change `dataLocality` to `best-effort` for `testvol2` Verify that Longhorn schedules a local replica to the same node as the consuming pod. After finishing rebuilding the local replica. Longhorn removes a replica on other nodes to keep the number of replicas which is 2. Change `dataLocality` to `disabled` for `testvol2` Go to Longhorn UI, increase the `number of replicas` to 3. Wait until the new replica finishes rebuilding. Delete the local replica on the same node as the consuming pod. Verify that Longhorn doesn't move replica Create `disabled-longhorn` storage class with from this yaml file: ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: disabled-longhorn provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"1\" dataLocality: \"disabled\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" ``` create a deployment of 1 pod using PVC dynamically created by `disabled-longhorn` storage class. The consuming pod is likely scheduled onto a different node than the replica. 
If this happens, verify that Longhorn doesn't move replica Create `best-effort-longhorn` storage class with from this yaml file: ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: best-effort-longhorn provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"1\" dataLocality: \"best-effort\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" ``` create a shell deployment of 1 pod using the PVC dynamically created by `best-effort-longhorn` storage class. The consuming pod is likely scheduled onto a different node than the replica. If this happens, verify that Longhorn schedules a local replica to the same node as the consuming pod. After finishing rebuilding the local replica, Longhorn removes a replica on other nodes to keep the number of replicas which is 1. verify that the volume CRD has `dataLocality` is `best-effort` Create `unspecified-longhorn` storage class with from this yaml file: ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: unspecified-longhorn provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"1\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" ``` create a shell deployment of 1 pod using PVC dynamically created by `unspecified-longhorn` storage class. The consuming pod is likely scheduled onto a different node than the replica. If this happens, depend on `DefaultDataLocality` setting in Longhorn UI, verify that Longhorn does/doesn't migrate a local replica to the same node as the consuming pod. The volumes created in old Longhorn versions don't have the field `dataLocality`. We treat those volumes the same as having `dataLocality` set to `disabled` Verify that Longhorn doesn't migrate replicas for those volumes. No special upgrade strategy is required. We are adding the new field, `dataLocality`, to volume CRD's spec. Then we use this field to check whether we need to migrate a replica to the same node as the consuming pod. When users upgrade Longhorn to this new version, it is possible that some volumes don't have this field. This is not a problem because we only migrate replica when `dataLocality` is `best-effort`. So, the empty `dataLocality` field is fine." } ]
{ "category": "Runtime", "file_name": "20200819-keep-a-local-replica-to-engine.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<p align=\"center\"><img alt=\"sysbox\" src=\"../figures/k8s-in-docker.png\" width=\"800x\" /></p> Starting with release v0.2.0, Sysbox has preliminary support for running Kubernetes (K8s) inside system containers. This is known as Kubernetes-in-Docker or \"KinD\". There are several for running Kubernetes-in-Docker. The has step-by-step examples. Sysbox is capable of creating containers that can run K8s seamlessly, using simple Docker images, no special configurations, and strongly isolated containers (i.e,. using the Linux user-namespace). You can deploy the cluster using simple Docker commands or using a higher level tool (e.g., Nestybox's \"kindbox\" tool). With Sysbox, you have full control of the container images used for K8s nodes. You can use different images for different cluster nodes if you wish, and you can easily preload inner pod images into the K8s nodes. Some sample use cases for Kubernetes-in-Docker are: Testing and CI/CD: Use it local testing or in a CI/CD pipeline. Infrastructure-as-code: The K8s cluster is itself containerized, bringing the power of containers from applications down to infrastructure. Increased host utilization: Run multiple K8s clusters on a single host, with strong isolation and without resorting to heavier VMs. Deploying a K8s cluster is as simple as using Docker + Sysbox to deploy one or more system containers, each with Systemd, Docker, and Kubeadm, and running `kubeadm init` for the master node and `kubeadm join` on the worker node(s). See in the Quick Start Guide for step-by-step instructions. In addition, you can also use a higher level tool such as to deploy the K8s cluster. Kindbox is a simple bash script wrapper around Docker commands. See in the Quick Start Guide for step-by-step instructions. A key feature of Sysbox is that it allows you to easily create system container images that come preloaded with inner container images. You can use this to create K8s node images that include inner pod images. This can significantly speed up deployment of the K8s cluster, since K8s node need not download those inner pod images at runtime. There are two ways to do this: Using `docker build` (see for an example). Using `docker commit` (see for an example). You can use this to embed your own pod images into the K8s-node image too. Sysbox's support for running Kubernetes-in-Docker is preliminary at this stage. This is because Kubernetes is a complex and large piece of software, and not all K8s functionality works inside system containers yet. However, many widely used K8s features work, so it's already quite useful. Below is a list of K8s features that work and those that don't. Anything not shown in the lists means we've not tested it yet (i.e., it may or may not work). Cluster deployment (single master, multi-worker). Cluster on Docker's default bridge network. Cluster on Docker's user-defined bridge network. Deploying multiple K8s clusters on a single host (each on it's own Docker user-defined bridge network). Kubeadm Kubectl Helm K8s deployments, replicas, auto-scale, rolling updates, daemonSets, configMaps, secrets, etc. K8s CNIs: Flannel, WeaveNet (Sysbox-EE), Calico (Sysbox-EE). K8s services (ClusterIP, NodePort). K8s service mesh (Istio). K8s ingress controller (Traefik). K8s volumes (emptyDir, hostPath, persistent). Kube-proxy (iptables mode only). Kube-proxy ipvs mode. K8s NFS volumes" } ]
{ "category": "Runtime", "file_name": "kind.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "Learn about the ways to get started with Piraeus Datastore by deploying Piraeus Operator and provisioning your first volume. . In this tutorial we will be using `kubectl` with the built-in `kustomize` feature to deploy Piraeus Operator. All resources needed to run Piraeus Operator are included in a single Kustomization. Install Piraeus Operator by running: ```bash $ kubectl apply --server-side -k \"https://github.com/piraeusdatastore/piraeus-operator//config/default?ref=v2.5.1\" namespace/piraeus-datastore configured ... ``` The Piraeus Operator will be installed in a new namespace `piraeus-datastore`. After a short wait the operator will be ready. The following command waits until the Operator is ready: ``` $ kubectl wait pod --for=condition=Ready -n piraeus-datastore -l app.kubernetes.io/component=piraeus-operator pod/piraeus-operator-controller-manager-dd898f48c-bhbtv condition met ``` Now, we will deploy Piraeus Datastore using a new resource managed by Piraeus Operator. We create a `LinstorCluster`, which creates all the necessary resources (Deployments, Pods, and so on...) for our Datastore: ``` $ kubectl apply -f - <<EOF apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: {} EOF ``` Again, all workloads will be deployed to the `piraeus-datastore` namespace. After a short wait the Datastore will ready: ``` $ kubectl wait pod --for=condition=Ready -n piraeus-datastore -l app.kubernetes.io/name=piraeus-datastore pod/linstor-controller-65cbbc74db-9vm9n condition met pod/linstor-csi-controller-5ccb7d84cd-tvd9h condition met pod/linstor-csi-node-2lkpd condition met pod/linstor-csi-node-hbcvv condition met pod/linstor-csi-node-hmrd7 condition met pod/n1.example.com condition met pod/n2.example.com condition met pod/n3.example.com condition met pod/piraeus-operator-controller-manager-dd898f48c-bhbtv condition met ``` We can now inspect the state of the deployed LINSTOR Cluster using the `linstor` client: ``` $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor node list +-+ | Node | NodeType | Addresses | State | |===================================================================| | n1.example.com | SATELLITE | 10.116.72.166:3366 (PLAIN) | Online | | n2.example.com | SATELLITE | 10.127.183.190:3366 (PLAIN) | Online | | n3.example.com | SATELLITE | 10.125.97.33:3366 (PLAIN) | Online | +-+ ``` We have not yet configured any storage location for our volumes. This can be accomplished by creating a new `LinstorSatelliteConfiguration` resource. We will create a storage pool of type `fileThinPool` on each node. We chose `fileThinPool` as it does not require further configuration on the host. ``` $ kubectl apply -f - <<EOF apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: storage-pool spec: storagePools: name: pool1 fileThinPool: directory: /var/lib/piraeus-datastore/pool1 EOF ``` This will cause some Pods to be recreated. While this occurs `linstor node list` will temporarily show offline nodes: ``` $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor node list +--+ | Node | NodeType | Addresses | State | |====================================================================| | n1.example.com | SATELLITE | 10.116.72.166:3366 (PLAIN) | OFFLINE | | n2.example.com | SATELLITE | 10.127.183.190:3366 (PLAIN) | OFFLINE | | n3.example.com | SATELLITE | 10.125.97.33:3366 (PLAIN) | OFFLINE | +--+ ``` Waiting a bit longer, the nodes will be `Online` again. 
Once the nodes are connected again, we can verify that the storage pools were configured: ``` $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor storage-pool list ++ | StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName | |=========================================================================================================================================================| | DfltDisklessStorPool |" }, { "data": "| DISKLESS | | | | False | Ok | | | DfltDisklessStorPool | n2.example.com | DISKLESS | | | | False | Ok | | | DfltDisklessStorPool | n3.example.com | DISKLESS | | | | False | Ok | | | pool1 | n1.example.com | FILE_THIN | /var/lib/piraeus-datastore/pool1 | 24.54 GiB | 49.30 GiB | True | Ok | | | pool1 | n2.example.com | FILE_THIN | /var/lib/piraeus-datastore/pool1 | 23.03 GiB | 49.30 GiB | True | Ok | | | pool1 | n3.example.com | FILE_THIN | /var/lib/piraeus-datastore/pool1 | 26.54 GiB | 49.30 GiB | True | Ok | | ++ ``` We now have successfully deployed and configured Piraeus Datastore, and are ready to create our first in Kubernetes. First, we will set up a new for our volumes. In the `StorageClass`, we specify the storage pool from above: ``` $ kubectl apply -f - <<EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: piraeus-storage provisioner: linstor.csi.linbit.com allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer parameters: linstor.csi.linbit.com/storagePool: pool1 EOF ``` Next, we will create a , requesting 1G of storage from our newly created `StorageClass`. ``` $ kubectl apply -f - <<EOF apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-volume spec: storageClassName: piraeus-storage resources: requests: storage: 1Gi accessModes: ReadWriteOnce EOF ``` When we check the created PersistentVolumeClaim, we can see that it remains in `Pending` state. ``` $ kubectl get persistentvolumeclaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-volume Pending piraeus-storage 14s ``` We first need to create a \"consumer\", which in this case is just a `Pod`. For our consumer, we will create a Deployment for a simple web server, serving files from our volume. ``` $ kubectl apply -f - <<EOF apiVersion: apps/v1 kind: Deployment metadata: name: web-server spec: selector: matchLabels: app.kubernetes.io/name: web-server template: metadata: labels: app.kubernetes.io/name: web-server spec: containers: name: web-server image: nginx volumeMounts: mountPath: /usr/share/nginx/html name: data volumes: name: data persistentVolumeClaim: claimName: data-volume EOF ``` After a short wait, the Pod is `Running`, and our `PersistentVolumeClaim` is now `Bound`: ``` $ kubectl wait pod --for=condition=Ready -l app.kubernetes.io/name=web-server pod/web-server-84867b5449-hgdzx condition met $ kubectl get persistentvolumeclaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-volume Bound pvc-9e1149e7-33db-47a7-8fc6-172514422143 1Gi RWO piraeus-storage 1m ``` Checking the running container, we see that the volume is mounted where we expected it: ``` $ kubectl exec deploy/web-server -- df -h /usr/share/nginx/html Filesystem Size Used Avail Use% Mounted on /dev/drbd1000 973M 24K 906M 1% /usr/share/nginx/html ``` Taking a look with the `linstor` client, we can see that the volume is listed in LINSTOR and marked as `InUse` by the Pod. 
``` $ kubectl -n piraeus-datastore exec deploy/linstor-controller -- linstor resource list-volumes +-+ | Node | Resource | StoragePool | VolNr | MinorNr | DeviceName | Allocated | InUse | State | |===========================================================================================================================================| | n1.example.com | pvc-9e1149e7-33db-47a7-8fc6-172514422143 | pool1 | 0 | 1000 | /dev/drbd1000 | 16.91 MiB | InUse | UpToDate | +-+ ``` We have now successfully set up Piraeus Datastore and used it to provision a Persistent Volume in a Kubernetes cluster." } ]
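Since the `StorageClass` above sets `allowVolumeExpansion: true`, the claim created in this tutorial can also be grown in place later. The sketch below assumes the volume is already bound; the new size of 2Gi is only an example.

```bash
# Grow the existing claim; the CSI driver handles the underlying resize.
kubectl patch persistentvolumeclaim data-volume \
  --patch '{"spec": {"resources": {"requests": {"storage": "2Gi"}}}}'
kubectl get persistentvolumeclaim data-volume
```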
{ "category": "Runtime", "file_name": "get-started.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "English | With a multitude of public cloud providers available, such as Alibaba Cloud, Huawei Cloud, Tencent Cloud, AWS, and more, it can be challenging to use mainstream open-source CNI plugins to operate on these platforms using Underlay networks. Instead, one has to rely on proprietary CNI plugins provided by each cloud vendor, leading to a lack of standardized Underlay solutions for public clouds. This page introduces , an Underlay networking solution designed to work seamlessly in any public cloud environment. A unified CNI solution offers easier management across multiple clouds, particularly in hybrid cloud scenarios. Spiderpool's node topology function can bind IP pools to the available IPs of each network card on each node, and also achieve the validity of MAC addresses. Spiderpool can run on the Alibaba Cloud environment based on IPVlan Underlay CNI, and ensures that the east-west and north-south traffic of the cluster are normal. Its implementation principle is as follows: When using Underlay networks in a public cloud environment, each network interface of a cloud server can only be assigned a limited number of IP addresses. To enable communication when an application runs on a specific cloud server, it needs to obtain the valid IP addresses allocated to different network interfaces within the VPC network. To address this IP allocation requirement, Spiderpool introduces a CRD named `SpiderIPPool`. By configuring the nodeName and multusName fields in `SpiderIPPool`, it enables node topology functionality. Spiderpool leverages the affinity between the IP pool and nodes, as well as the affinity between the IP pool and IPvlan Multus, facilitating the utilization and management of available IP addresses on the nodes. This ensures that applications are assigned valid IP addresses, enabling seamless communication within the VPC network, including communication between Pods and also between Pods and cloud servers. In a public cloud VPC network, network security controls and packet forwarding principles dictate that when network data packets contain MAC and IP addresses unknown to the VPC network, correct forwarding becomes unattainable. This issue arises in scenarios where Macvlan or OVS based Underlay CNI plugins generate new MAC addresses for Pod NICs, resulting in communication failures among Pods. To address this challenge, Spiderpool offers a solution in conjunction with . The IPVlan CNI operates at the L3 of the network, eliminating the reliance on L2 broadcasts and avoiding the generation of new MAC addresses. Instead, it maintains consistency with the parent interface. By incorporating IPVlan, the legitimacy of MAC addresses in a public cloud environment can be effectively resolved. The system kernel version must be greater than 4.2 when using IPVlan as the cluster's CNI. is installed. Prepare an Alibaba Cloud environment with virtual machines that have 2 network interfaces. Assign a set of auxiliary private IP addresses to each network interface, as shown in the picture: > - An instance (virtual machine) is the smallest unit that can provide computing services for your business. Different instance specifications vary in the number of network cards that can be created and the number of auxiliary IPs that can be assigned to each network card. For more information on specific business and usage scenarios, refer to Alibaba Cloud to select the corresponding specification to create an instance. > - If you have IPv6 requirements, you can refer to Alibaba Cloud . 
Utilize the configured VMs to build a Kubernetes cluster. The available IP addresses for the nodes and the network topology of the cluster are depicted below: Install Spiderpool via helm: ```bash helm repo add spiderpool" }, { "data": "helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set ipam.enableStatefulSet=false --set multus.multusCNI.defaultCniCRName=\"ipvlan-eth0\" ``` If IPVlan is not installed in your cluster, you can specify the Helm parameter `--set plugins.installCNI=true` to install IPVlan in your cluster. If you are using a cloud server from a Chinese mainland cloud provider, you can enhance image pulling speed by specifying the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io`. Spiderpool allows for fixed IP addresses for application replicas with a controller type of `StatefulSet`. However, in the underlay network scenario of public clouds, cloud instances are limited to using specific IP addresses. When StatefulSet replicas migrate to different nodes, the original fixed IP becomes invalid and unavailable on the new node, causing network unavailability for the new Pods. To address this issue, set `ipam.enableStatefulSet` to `false` to disable this feature. Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. To simplify the creation of JSON-formatted Multus CNI configurations, Spiderpool offers the SpiderMultusConfig CR to automatically manage Multus NetworkAttachmentDefinition CRs. Here is an example of creating an IPvlan SpiderMultusConfig configuration: ```shell IPVLANMASTERINTERFACE0=\"eth0\" IPVLANMULTUSNAME0=\"ipvlan-$IPVLANMASTERINTERFACE0\" IPVLANMASTERINTERFACE1=\"eth1\" IPVLANMULTUSNAME1=\"ipvlan-$IPVLANMASTERINTERFACE1\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${IPVLANMULTUSNAME0} namespace: kube-system spec: cniType: ipvlan enableCoordinator: true ipvlan: master: ${IPVLANMASTERINTERFACE0} apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ${IPVLANMULTUSNAME1} namespace: kube-system spec: cniType: ipvlan enableCoordinator: true ipvlan: master: ${IPVLANMASTERINTERFACE1} EOF ``` This case uses the given configuration to create two IPvlan SpiderMultusConfig instances. These instances will automatically generate corresponding Multus NetworkAttachmentDefinition CRs for the host's `eth0` and `eth1` network interfaces. ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE ipvlan-eth0 10m ipvlan-eth1 10m ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system NAME AGE ipvlan-eth0 10m ipvlan-eth1 10m ``` The Spiderpool's CRD, `SpiderIPPool`, introduces the following fields: `nodeName`, `multusName`, and `ips`: `nodeName`: when `nodeName` is not empty, Pods are scheduled on a specific node and attempt to acquire an IP address from the corresponding SpiderIPPool. 
If the Pod's node matches the specified `nodeName`, it successfully obtains an IP. Otherwise, it cannot obtain an IP from that SpiderIPPool. When `nodeName` is empty, Spiderpool does not impose any allocation restrictions on the Pod. `multusName`Spiderpool integrates with Multus CNI to cope with cases involving multiple network interface cards. When `multusName` is not empty, SpiderIPPool utilizes the corresponding Multus CR instance to configure the network for the Pod. If the Multus CR specified by `multusName` does not exist, Spiderpool cannot assign a Multus CR to the Pod. When `multusName` is empty, Spiderpool does not impose any restrictions on the Multus CR used by the Pod. `spec.ips`this field must not be empty. Due to Alibaba Cloud's limitations on available IP addresses for nodes, the specified range of values must fall within the auxiliary private IP range of the host associated with the specified `nodeName`. You can obtain this information from the Elastic Network Interface page in the Alibaba Cloud console. Based on the provided information, use the following YAML configuration to create a SpiderIPPool for each network interface (eth0 and eth1) on every node. These SpiderIPPools will assign IP addresses to Pods running on different nodes. ```shell ~# cat <<EOF | kubectl apply -f - apiVersion:" }, { "data": "kind: SpiderIPPool metadata: name: master-172 spec: default: true ips: 172.31.199.185-172.31.199.189 subnet: 172.31.192.0/20 gateway: 172.31.207.253 nodeName: master multusName: kube-system/ipvlan-eth0 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: master-192 spec: default: true ips: 192.168.0.156-192.168.0.160 subnet: 192.168.0.0/24 gateway: 192.168.0.253 nodeName: master multusName: kube-system/ipvlan-eth1 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: worker-172 spec: default: true ips: 172.31.199.190-172.31.199.194 subnet: 172.31.192.0/20 gateway: 172.31.207.253 nodeName: worker multusName: kube-system/ipvlan-eth0 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: worker-192 spec: default: true ips: 192.168.0.161-192.168.0.165 subnet: 192.168.0.0/24 gateway: 192.168.0.253 nodeName: worker multusName: kube-system/ipvlan-eth1 EOF ``` In the following example YAML, there are 2 sets of DaemonSet applications and 1 service with a type of ClusterIP: `v1.multus-cni.io/default-network`: specify the subnet that each application will use. In the example, the applications are assigned different subnets. 
```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: test-app-1 name: test-app-1 namespace: default spec: selector: matchLabels: app: test-app-1 template: metadata: labels: app: test-app-1 annotations: v1.multus-cni.io/default-network: kube-system/ipvlan-eth0 spec: containers: image: busybox command: [\"sleep\", \"3600\"] imagePullPolicy: IfNotPresent name: test-app-1 ports: name: http containerPort: 80 protocol: TCP apiVersion: apps/v1 kind: DaemonSet metadata: labels: app: test-app-2 name: test-app-2 namespace: default spec: selector: matchLabels: app: test-app-2 template: metadata: labels: app: test-app-2 annotations: v1.multus-cni.io/default-network: kube-system/ipvlan-eth1 spec: containers: image: nginx imagePullPolicy: IfNotPresent name: test-app-2 ports: name: http containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: test-svc labels: app: test-app-2 spec: type: ClusterIP ports: port: 80 protocol: TCP targetPort: 80 selector: app: test-app-2 EOF ``` Check the status of the running Pods: ```bash ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-1-ddlx7 1/1 Running 0 16s 172.31.199.187 master <none> <none> test-app-1-jpfkj 1/1 Running 0 16s 172.31.199.193 worker <none> <none> test-app-2-qbhwx 1/1 Running 0 12s 192.168.0.160 master <none> <none> test-app-2-r6gwx 1/1 Running 0 12s 192.168.0.161 worker <none> <none> ``` Spiderpool automatically assigns IP addresses to the applications, ensuring that the assigned IPs are within the expected IP pool. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT master-172 4 172.31.192.0/20 1 5 true master-192 4 192.168.0.0/24 1 5 true worker-172 4 172.31.192.0/20 1 5 true worker-192 4 192.168.0.0/24 1 5 true ``` Test communication between Pods and their hosts: ```bash ~# kubectl get nodes -owide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master Ready control-plane 2d12h v1.27.3 172.31.199.183 <none> CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 worker Ready <none> 2d12h v1.27.3 172.31.199.184 <none> CentOS Linux 7 (Core) 6.4.0-1.el7.elrepo.x86_64 containerd://1.7.1 ~# kubectl exec -ti test-app-1-ddlx7 -- ping 172.31.199.183 -c 2 PING 172.31.199.183 (172.31.199.183): 56 data bytes 64 bytes from 172.31.199.183: seq=0 ttl=64 time=0.088 ms 64 bytes from 172.31.199.183: seq=1 ttl=64 time=0.054 ms 172.31.199.183 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.054/0.071/0.088 ms ``` Test communication between Pods across different nodes and subnets: ```shell ~# kubectl exec -ti test-app-1-ddlx7 -- ping 172.31.199.193 -c 2 PING 172.31.199.193 (172.31.199.193): 56 data bytes 64 bytes from 172.31.199.193: seq=0 ttl=64 time=0.460 ms 64 bytes from 172.31.199.193: seq=1 ttl=64 time=0.210 ms 172.31.199.193 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.210/0.335/0.460 ms ~# kubectl exec -ti test-app-1-ddlx7 -- ping 192.168.0.161 -c 2 PING 192.168.0.161 (192.168.0.161): 56 data bytes 64 bytes from 192.168.0.161: seq=0 ttl=64 time=0.408 ms 64 bytes from 192.168.0.161: seq=1 ttl=64 time=0.194 ms 192.168.0.161 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.194/0.301/0.408 ms ``` Test communication between Pods and ClusterIP services: ```bash ~# kubectl get svc test-svc 
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test-svc ClusterIP" }, { "data": "<none> 80/TCP 26s ~# kubectl exec -ti test-app-2-qbhwx -- curl 10.233.23.194 -I HTTP/1.1 200 OK Server: nginx/1.10.1 Date: Fri, 21 Jul 2023 06:45:56 GMT Content-Type: text/html Content-Length: 4086 Last-Modified: Fri, 21 Jul 2023 06:38:41 GMT Connection: keep-alive ETag: \"64ba27f1-ff6\" Accept-Ranges: bytes ``` Alibaba Cloud's NAT Gateway provides an ingress and egress gateway for public or private network traffic within a VPC environment. By utilizing NAT Gateway, the cluster can have egress connectivity. Please refer to for creating a NAT Gateway as depicted in the picture: Test egress traffic from Pods ```bash ~# kubectl exec -ti test-app-2-qbhwx -- curl www.baidu.com -I HTTP/1.1 200 OK Accept-Ranges: bytes Cache-Control: private, no-cache, no-store, proxy-revalidate, no-transform Connection: keep-alive Content-Length: 277 Content-Type: text/html Date: Fri, 21 Jul 2023 08:42:17 GMT Etag: \"575e1f60-115\" Last-Modified: Mon, 13 Jun 2016 02:50:08 GMT Pragma: no-cache Server: bfe/1.0.8.18 ``` If you want to access the traffic egress of Pods in the cluster through IPv6 addresses, you need to activate public network bandwidth for the IPv6 address assigned to the Pod through the IPv6 gateway and convert the private IPv6 to a public IPv6 address. The configuration is as follows. Test Pod egress traffic over IPv6: ```bash ~# kubectl exec -ti test-app-2-qbhwx -- ping -6 aliyun.com -c 2 PING aliyun.com (2401:b180:1:60::6): 56 data bytes 64 bytes from 2401:b180:1:60::6: seq=0 ttl=96 time=6.058 ms 64 bytes from 2401:b180:1:60::6: seq=1 ttl=96 time=6.079 ms aliyun.com ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 6.058/6.068/6.079 ms ``` Cloud Controller Manager (CCM) is an Alibaba Cloud's component that enables integration between Kubernetes and Alibaba Cloud services. We will use CCM along with Alibaba Cloud infrastructure to facilitate load balancer traffic ingress access. Follow the steps below and refer to for deploying CCM. Configure `providerID` on Cluster Nodes On each node in the cluster, run the following command to obtain the `providerID` for each node. <http://100.100.100.200/latest/meta-data> is the API entry point provided by Alibaba Cloud CLI for retrieving instance metadata. You don't need to modify it in the provided example. For more information, please refer to . ```bash ~# META_EP=http://100.100.100.200/latest/meta-data ~# providerid=`curl -s $METAEP/region-id`.`curl -s $META_EP/instance-id` ~# echo $provider_id cn-hangzhou.i-bp17345hor9* ``` On the `master` node of the cluster, use the `kubectl patch` command to add the `providerID` for each node in the cluster. This step is necessary to ensure the proper functioning of the CCM Pod on each corresponding node. Failure to run this step will result in the CCM Pod being unable to run correctly. ```bash ~# kubectl get nodes ~# kubectl patch node <NODENAME> -p '{\"spec\":{\"providerID\": \"<providerid>\"}}' # Replace <NODENAME> and <providerid> with corresponding values. ``` Create an Alibaba Cloud RAM user and grant authorization. A RAM user is an entity within Alibaba Cloud's Resource Access Management (RAM) that represents individuals or applications requiring access to Alibaba Cloud resources. Refer to to create a RAM user and assign the necessary permissions for accessing resources. 
To ensure that the RAM user used in the subsequent steps has sufficient privileges, grant the `AdministratorAccess` and `AliyunSLBFullAccess` permissions to the RAM user, following the instructions provided here. Obtain the AccessKey & AccessKeySecret for the RAM user. Log in to the RAM User account and go to to retrieve the corresponding AccessKey & AccessKeySecret for the RAM User. Create the Cloud ConfigMap for CCM. Use the following method to write the AccessKey & AccessKeySecret obtained in step 3 as environment variables. ```bash ~# export" }, { "data": "~# export ACCESSKEYSECRET=HAeS ``` Run the following command to create cloud-config: ```bash accessKeyIDBase64=`echo -n \"$ACCESSKEYID\" |base64 -w 0` accessKeySecretBase64=`echo -n \"$ACCESSKEYSECRET\"|base64 -w 0` cat <<EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: cloud-config namespace: kube-system data: cloud-config.conf: |- { \"Global\": { \"accessKeyID\": \"$accessKeyIDBase64\", \"accessKeySecret\": \"$accessKeySecretBase64\" } } EOF ``` Retrieve the YAML file and install CCM by running the command `kubectl apply -f cloud-controller-manager.yaml`. The version of CCM being installed here is v2.5.0. Use the following command to obtain the cloud-controller-manager.yaml file and replace `<<cluster_cidr>>` with the actual cluster CIDR; You can view the cluster CIDR of the cluster through the `kubectl cluster-info dump | grep -m1 cluster-cidr` command. ```bash ~# wget https://raw.githubusercontent.com/spidernet-io/spiderpool/main/docs/example/alicloud-ccm/cloud-controller-manager.yaml ~# kubectl apply -f cloud-controller-manager.yaml ``` Verify if CCM is installed. ```bash ~# kubectl get po -n kube-system | grep cloud-controller-manager NAME READY STATUS RESTARTS AGE cloud-controller-manager-72vzr 1/1 Running 0 27s cloud-controller-manager-k7jpn 1/1 Running 0 27s ``` The following YAML will create two sets of services, one for TCP (layer 4 load balancing) and one for HTTP (layer 7 load balancing), with `spec.type` set to `LoadBalancer`. `service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port`: this annotation provided by CCM allows you to customize the exposed ports for layer 7 load balancing. For more information, refer to . `.spec.externalTrafficPolicy`: indicates whether the service prefers to route external traffic to local or cluster-wide endpoints. It has two options: Cluster (default) and Local. Setting `.spec.externalTrafficPolicy` to `Local` preserves the client source IP. However, when a self-built public cloud cluster uses the platform's Loadbalancer component for nodePort forwarding in this mode, access will be blocked. In response to this problem, Spiderpool provides the coordinator plug-in, which uses iptables to mark the data packets to confirm that the reply packets of data entering from veth0 are still forwarded from veth0, thus solving the problem of nodeport being unable to access in this mode. 
```bash ~# cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Service metadata: name: tcp-service namespace: default spec: externalTrafficPolicy: Local ports: name: tcp port: 999 protocol: TCP targetPort: 80 selector: app: test-app-2 type: LoadBalancer apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-protocol-port: \"http:80\" name: http-service namespace: default spec: externalTrafficPolicy: Local ports: port: 80 protocol: TCP targetPort: 80 selector: app: test-app-2 type: LoadBalancer EOF ``` After the creation is complete, you can view the following: ```bash ~# kubectl get svc |grep service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-service LoadBalancer 10.233.1.108 121.41.165.119 80:30698/TCP 11s tcp-service LoadBalancer 10.233.4.245 47.98.137.75 999:32635/TCP 15s ``` CCM will automatically create layer 4 and layer 7 load balancers at its IaaS services. You can easily access and manage them through the Alibaba Cloud console, as shown below: On a public machine, access the load balancer's public IP + port to test the traffic ingress: ```bash $ curl 47.98.137.75:999 -I HTTP/1.1 200 OK Server: nginx/1.25.1 Date: Sun, 30 Jul 2023 09:12:46 GMT Content-Type: text/html Content-Length: 615 Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT Connection: keep-alive ETag: \"6488865a-267\" Accept-Ranges: bytes $ curl 121.41.165.119:80 -I HTTP/1.1 200 OK Date: Sun, 30 Jul 2023 09:13:17 GMT Content-Type: text/html Content-Length: 615 Connection: keep-alive Last-Modified: Tue, 13 Jun 2023 15:08:10 GMT ETag: \"6488865a-267\" Accept-Ranges: bytes ``` Alibaba Cloud's CCM implements ingress access for load balancing traffic, and it does not support setting the `spec.ipFamilies` of the backend service to IPv6. ```bash ~# kubectl describe svc lb-ipv6 ... Events: Type Reason Age From Message - - - Warning SyncLoadBalancerFailed 3m5s (x37 over 159m) nlb-controller Error syncing load balancer [nlb-rddqbe6gnp9jil4i15]: Message: code: 400, The operation is not allowed because of ServerGroupNotSupportIpv6. ``` Spiderpool is successfully running in an Alibaba Cloud cluster, ensuring normal east-west and north-south traffic." } ]
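Adding capacity later follows the same pattern: one `SpiderIPPool` per network interface, scoped to the new node. The sketch below uses a hypothetical third node `worker2` with placeholder auxiliary IPs; the ranges must match the IPs actually assigned to that instance's ENIs in the Alibaba Cloud console.

```bash
cat <<EOF | kubectl apply -f -
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: worker2-172
spec:
  default: true
  ips:
    - 172.31.199.195-172.31.199.199   # placeholder range
  subnet: 172.31.192.0/20
  gateway: 172.31.207.253
  nodeName:
    - worker2                          # hypothetical node
  multusName:
    - kube-system/ipvlan-eth0
EOF
```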
{ "category": "Runtime", "file_name": "get-started-alibaba.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Velero v1.1 backing up and restoring apps on vSphere slug: Velero-v1-1-on-vSphere image: /img/posts/vsphere-logo.jpg excerpt: A How-To guide to run Velero on vSphere. author_name: Cormac Hogan author_avatar: /img/contributors/cormac-pic.png categories: ['kubernetes'] tags: ['Velero', 'Cormac Hogan', 'how-to'] Velero version 1.1 provides support to backup Kubernetes applications deployed on vSphere. This post will provide detailed information on how to install and configure Velero to backup and restore a stateless application (`nginx`) that is running in Kubernetes on vSphere. At this time there is no vSphere plugin for snapshotting stateful applications on vSphere during a Velero backup. In this case, we rely on a third party program called `restic`. However this post does not include an example of how to backup a stateful application. That is available in another tutorial which can be found . Download and extract Velero v1.1 Deploy and Configure a Minio Object store Install Velero using the `velero install` command, ensuring that both `restic` support and a Minio `publicUrl` are included Run a test backup/restore of a stateless application that has been deployed on upstream Kubernetes A demonstration on how to do backup/restore of a stateful application (i.e. PVs) The assumption is that the Kubernetes nodes in your cluster have internet access in order to pull the Velero images. This guide does not show how to add images using a local repository The . Download and extract it to the desktop where you wish to manage your Velero backups, then copy or move the `velero` binary to somewhere in your $PATH. Velero sends data and metadata about the Kubernetes objects being backed up to an S3 Object Store. If you do not have an S3 Object Store available, Velero provides the manifest file to create a Minio S3 Object Store on your Kubernetes cluster. This means that all Velero backups can be kept on-premises. Note: Stateful backups of applications deployed in Kubernetes on vSphere that use the `restic` plugin for backing up Persistent Volumes send the backup data to the same S3 Object Store. There are a few different steps required to successfully deploy the Minio S3 Object Store. A simple credentials file containing the login/password (id/key) for the local on-premises Minio S3 Object Store must be created. ```bash $ cat credentials-velero [default] awsaccesskey_id = minio awssecretaccess_key = minio123 ``` While this step is optional, it is useful for two reasons. The first is that it gives you a way to access the Minio portal through a browser and examine the backups. The second is that it enables you to specify a `publicUrl` for Minio, which in turn means that you can access backup and restore logs from the Minio S3 Object Store. To expose the Minio Service on a NodePort, a modification of the `examples/minio/00-minio-deployment.yaml` manifest is necessary. The only change is to the type: field, from ClusterIP to NodePort: ```bash spec: type: NodePort ``` After making the changes above, simply run the following command to create the Minio Object Store. ```bash $ kubectl apply -f examples/minio/00-minio-deployment.yaml namespace/velero created deployment.apps/minio created service/minio created" }, { "data": "created ``` Retrieve both the Kubernetes node on which the Minio Pod is running, and the port that the Minio Service has been exposed on. With this information, you can verify that Minio is working. 
```bash $ kubectl get pods -n velero NAME READY STATUS RESTARTS AGE minio-66dc75bb8d-95xpp 1/1 Running 0 25s minio-setup-zpnfl 0/1 Completed 0 25s ``` ```bash $ kubectl describe pod minio-66dc75bb8d-95xpp -n velero | grep -i Node: Node: 140ab5aa-0159-4612-b68c-df39dbea2245/192.168.192.5 ``` ```bash $ kubectl get svc -n velero NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE minio NodePort 10.100.200.82 <none> 9000:32109/TCP 5s ``` In the above outputs, the node on which the Minio Object Storage is deployed has IP address `192.168.192.5`. The NodePort that the Minio Service is exposed is `32109`. If we now direct a browser to that `Node:port` combination, we should see the Minio Object Store web interface. You can use the credentials provided in the `credentials-velero` file earlier to login. To install Velero, the `velero install` command is used. There are a few options that need to be included. Since there is no vSphere plugin at this time, we rely on a third party plugin called `restic` to make backups of the Persistent Volume contents when Kubernetes is running on vSphere. The command line must include the option to use `restic`. As we also mentioned, we have setup a `publicUrl` for Minio, so we should also include this in our command line. Here is a sample command based on a default installation on Velero for Kubernetes running on vSphere, ensuring that the `credentials-velero` secret file created earlier resides in the same directory where the command is run: ```bash $ velero install --provider aws --bucket velero \\ --secret-file ./credentials-velero \\ --use-volume-snapshots=false \\ --use-restic \\ --backup-location-config \\ region=minio,s3ForcePathStyle=\"true\",s3Url=http://minio.velero.svc:9000,publicUrl=http://192.168.192.5:32109 ``` Once the command is running, you should observe various output related to the creation of necessary Velero objects in Kubernetes. Everything going well, the output should complete with the following message: ```bash Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status. ``` Yes, that is a small sailboat in the output (Velero is Spanish for sailboat). Velero provides a sample `nginx` application for backup testing. This nginx deployment assumes the presence of a LoadBalancer for its Service. If you do not have a Load Balancer as part of your Container Network Interface (CNI), there are some easily configuration ones available to get your started. One example is MetalLb, available . Note: This application is stateless. It does not create any Persistent Volumes, thus the restic driver is not utilizied as part of this example. To test whether restic is working correctly, you will need to backup a stateful application that is using Persistent Volumes. 
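As an optional sanity check before deploying the test application, you can confirm that the Velero server registered the Minio bucket as its backup storage location and that the deployment came up cleanly:

```bash
velero backup-location get
kubectl logs deployment/velero -n velero | tail -n 20
```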
To deploy the sample nginx application, run the following command: ```bash $ kubectl apply -f examples/nginx-app/base.yaml namespace/nginx-example created" }, { "data": "created service/my-nginx created ``` Check that the deployment was successful using the following commands: ```bash $ kubectl get ns NAME STATUS AGE cassandra Active 23h default Active 5d3h kube-public Active 5d3h kube-system Active 5d3h nginx-example Active 4s velero Active 9m40s wavefront-collector Active 24h ``` ```bash $ kubectl get deployments --namespace=nginx-example NAME READY UP-TO-DATE AVAILABLE AGE nginx-deployment 2/2 2 2 20s ``` ```bash $ kubectl get svc --namespace=nginx-example NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-nginx LoadBalancer 10.100.200.147 100.64.0.1,192.168.191.70 80:30942/TCP 32s ``` In this example, a Load Balancer has provided the `nginx` service with an external IP address of 192.168.191.70. If I point a browser to that IP address, I get an nginx landing page identical to that shown below. We're now ready to do a backup and restore of the `nginx` application. In this example, we are going to stipulate at the `velero backup` command line that it should only backup applications that match `app=nginx`. Thus, we do not backup everything in the Kubernetes cluster, only the `nginx` application specific items. ```bash $ velero backup create nginx-backup --selector app=nginx Backup request \"nginx-backup\" submitted successfully. Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details. $ velero backup get NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR nginx-backup Completed 2019-08-07 16:13:44 +0100 IST 29d default app=nginx ``` You can now login to the Minio Object Storage via a browser and verify that the backup actually exists. You should see the name of the backup under the `velero/backups` folder: Lets now go ahead and remove the `nginx` namespace, then do a restore of the application from our backup. Later we will demonstrate how we can restore our `nginx` application. ```bash $ kubectl delete ns nginx-example namespace \"nginx-example\" deleted ``` This command should also have removed the `nginx` deployment and service. Restores are also done from the command line using the `velero restore` command. You simply need to specify which backup you wish to restore. ```bash $ velero backup get NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR nginx-backup Completed 2019-08-07 16:13:44 +0100 IST 29d default app=nginx ``` ```bash $ velero restore create nginx-restore --from-backup nginx-backup Restore request \"nginx-restore\" submitted successfully. Run `velero restore describe nginx-restore` or `velero restore logs nginx-restore` for more details. ``` The following command can be used to examine the restore in detail, and check to see if it has successfully completed. ```bash $ velero restore describe nginx-restore Name: nginx-restore Namespace: velero Labels: <none> Annotations: <none> Phase: Completed Backup: nginx-backup Namespaces: Included: * Excluded: <none> Resources: Included: * Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io Cluster-scoped: auto Namespace mappings: <none> Label selector: <none> Restore PVs: auto ``` You can see that the restore has now completed. Check to see if the namespace, DaemonSet and service has been restored using the `kubectl` commands shown previously. 
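For reference, the "kubectl commands shown previously" are simply the ones used to verify the original deployment, now run against the restored objects:

```bash
kubectl get ns nginx-example
kubectl get deployments --namespace=nginx-example
kubectl get svc --namespace=nginx-example
```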
One item to note is that the `nginx` service may be restored with a new IP address from the LoadBalancer. This is normal. Now let's see if we can successfully reach our `nginx` web server on that IP address. Yes we can! Looks like the restore was successful. Backups and Restores are now working on Kubernetes deployed on vSphere using Velero v1.1. As always, we welcome feedback and participation in the development of Velero. You can find us on , and follow us on Twitter at ." } ]
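Beyond one-off backups, the same selector can be driven by a schedule so the nginx application stays protected continuously; the cron expression and retention below are example values, not ones prescribed by this walkthrough.

```bash
# Daily backup at 01:00 with a 30-day retention (example values).
velero schedule create nginx-daily --schedule="0 1 * * *" --selector app=nginx --ttl 720h
velero schedule get
```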
{ "category": "Runtime", "file_name": "2019-10-08-Velero-v1-1-on-vSphere.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "AK (Access Key) and SK (Secret Key) are secure credentials used for authentication and access control, commonly used in cloud service providers and other online platforms. AK is the user's access key, similar to a username, used to identify the user's identity. SK is the associated key with AK, similar to a password, used to verify the user's identity and generate digital signatures to ensure data security. Proper management and usage of AK and SK are crucial in the security domain. Here are some best practices for AK and SK security: Confidentiality: AK and SK should be securely stored and only provided to trusted entities (such as applications or services) when necessary. They should not be stored in publicly accessible locations, such as version control systems, public code repositories, or public documents. Regular Rotation: Regularly rotating AK and SK is a good security practice to mitigate the risk of key misuse. It is recommended to change keys periodically, such as every few months or as per security policies and standards requirements. Permission Control: Setting appropriate permissions and access controls for each AK and SK is vital. Only grant the minimum required privileges to perform specific tasks, and regularly review and update permission settings to ensure the principle of least privilege is always maintained. Encrypted Transmission: When using AK and SK for authentication, secure transmission protocols (such as HTTPS) should be employed to protect the credentials during the transmission process, preventing interception or tampering by man-in-the-middle attacks. Strong Password Policy: Selecting strong passwords and following password policies is an important measure to protect SK. Passwords should have sufficient complexity, including a combination of uppercase and lowercase letters, numbers, and special characters, while avoiding easily guessable or common passwords. In conclusion, ensuring the security of AK and SK is crucial for safeguarding the security of systems and data. By strictly adhering to security best practices and properly managing and using AK and SK, security risks can be mitigated, and the effectiveness of authentication and access control can be ensured. Requirements Add authentication to the Master node or other important interfaces in the access process to prevent unauthenticated clients and internal nodes from joining the cluster. Verification is required when important information is pulled. Principles Authnode is a secure node that provides a universal authentication and authorization framework for CubeFS. In addition, Authnode serves as a centralized key storage for both symmetric and asymmetric keys. Authnode adopts and customizes the ticket-based Kerberos authentication concept. Specifically, when a client node (Master, MetaNode, DataNode, or client node) accesses the service, it needs to present a shared key for authentication to Authnode. If the authentication is successful, Authnode issues a time-limited ticket specifically for that service. For authorization purposes, functionality is embedded in the ticket to indicate who can do what on which resources. 
<div style=\"text-align:center;\"> <img src=\"../pic/cfs-security-practice-authnode.png\" alt=\"Image\" style=\"width:800px; height:auto;\"> </div> Startup Configuration steps in CubeFS (refer to docker/run_docker4auth.sh) The super administrator uses the key of authnode and can generate exclusive keys for each module through the createKey" }, { "data": "For example, the client is Ckey and the master is Skey. At the same time, the corresponding relationship between module ID and Key will be stored in AuthNode; Configure ID and Key for each module and start the service. Expand the scope of Master interface verification The Master interface requires a significant amount of operational management at both the volume and cluster levels. It is essential to ensure the security of cluster management. To achieve this, you can enable the \"authenticate\" configuration on the Master. This will require interface operations to undergo a secondary verification of the correctness at Authnode. Permissions are divided into admin privileges and regular user privileges. Admin privileges include management privileges of regular users and operational privileges at the volume level, which are displayed in the system through the \"owner\" field. Regular user privileges are based on admin authorization and have fixed access paths (such as the root directory) and operational permissions (read, write). Users can only operate on files or directories within their own mounted directory and its subdirectories. From the perspective of volumes, users are classified into two types: read-only users and read-write users. Read-only users can only read files or directories and cannot modify them. Additionally, it is possible to restrict users to access only specific subdirectories. After mounting, CubeFS supports permission checks based on Linux user, group, and other permission restrictions. For example, given the following file permissions: -rwxr-xr-x 2 service service f1, only the \"service\" user can modify the file, while other non-root users can only read it. User managmenthttps://cubefs.io/docs/master/tools/cfs-cli/user.html The creation request goes through the governance platform for approval. After creation, the usage of the owner (admin) can be tightened, with the ability to delete volumes. The admin account for the volume is created and managed by the governance platform, automatically generated based on naming rules. Sharing a single admin account is otherwise insecure, so a regular account is provided to the business. For example: When integrating with middleware, a regular account is authorized and provided to the business. The governance platform retains the owner information, and any deletion requests need to be submitted through the governance platform for approval. <div style=\"text-align:center;\"> <img src=\"../pic/cfs-security-practice-ldap.png\" alt=\"Image\" style=\"width:800px; height:auto;\"> </div> User Authentication: LDAP can be used to authenticate user identities. Usernames and passwords can be stored in an LDAP directory, and when a user attempts to authenticate, the system can communicate with the LDAP server via the LDAP protocol to verify if the credentials provided by the user match those stored in the directory. Single Sign-On (SSO): LDAP can serve as an authentication backend for a Single Sign-On system. SSO allows users to log in to multiple associated applications using a single set of credentials, eliminating the need for separate authentication for each application. 
By using LDAP as the authentication backend, user credentials can be managed centrally, enabling a seamless login experience across multiple applications. User Authorization and Permission Management: The LDAP directory can store user organizational structure, roles, and permission" }, { "data": "An LDAP-based authentication and authorization system can grant users appropriate permissions and access based on their identity and organizational structure. This ensures that only authorized users can access sensitive data and resources, enhancing system security. Centralized User Management: LDAP provides a centralized user management platform. Organizations can store and manage user information, including usernames, passwords, email addresses, phone numbers, etc., in an LDAP directory. Through LDAP, administrators can easily add, modify, or delete user information without the need for individual operations in each application, improving management efficiency and consistency. AK and SK should be properly safeguarded and only provided to trusted entities (such as applications or services) when necessary. AK and SK should not be stored in publicly accessible locations, such as version control systems, public code repositories, or public documents. To prevent the leakage of AK, SK, and other sensitive information, it is important to enhance the security and availability of keys. LDAP-based authentication can be implemented using IP and user-based verification. LDAP can also be utilized for controlling and managing permissions. The master IP is not directly exposed, and a domain name approach is used through a gateway. The advantages are as follows: Easy master node replacement: Using a domain name approach avoids the need for client configuration changes. Under normal circumstances, it can be cumbersome to update business configurations and restart them. Ensuring interface security: The necessary master interfaces, such as partition information required by programs, can be exposed through the domain name. However, management interfaces are not exposed externally, protecting administrative privileges. Serving for monitoring, alerting, and success rate statistics: The domain name approach enables monitoring, alerting, and success rate statistics to be performed effectively. Adding a caching layer: Implementing a caching layer becomes feasible with the domain name approach. In the multi-tenant operation mode of large-scale clusters, resource utilization can be improved and costs can be reduced, but it also brings challenges to system stability. For example, when a large number of requests for a volume under a tenant suddenly increase, the traffic impacts the entire storage system: Different volumes' data and metadata partitions may be placed on the same machine or even the same disk. A large read/write request for a certain volume will affect the access of other volumes on the same node and disk. When the bandwidth is full, it will affect internal communication between cluster nodes, further affecting the judgment of the cluster management node on the status of data nodes, triggering behaviors such as data balancing, and further exacerbating the shaking of the entire cluster. Impacting the upstream switch of the cluster will not only affect the storage system itself but also affect other non-storage systems under the same switch. Therefore, traffic QoS is an important means of ensuring stable operation of the system. 
Design document```https://mp.weixin.qq.com/s/ytBvK3MazOzm3uDtzRBwaw``` User document```https://cubefs.io/zh/docs/master/maintenance/admin-api/master/volume.html#%E6%B5%81%E6%8E%A7``` The stability of the Master node is crucial for the entire" }, { "data": "To prevent accidents (such as excessive retries in case of failures) or malicious attacks, it is necessary to implement QPS (Queries Per Second) rate limiting management for the Master's interfaces QPS rate limiting sets a limit on the number of requests the Master can accept per second. For interfaces without rate limiting set, no restrictions are applied. For interfaces with rate limiting configured, there is an option to set a timeout for rate limiting waits, preventing the occurrence of cascading failures. User documenthttps://cubefs.io/zh/docs/master/user-guide/qos.html To ensure the security of data and request processes in CubeFS, all requests are either signed and authenticated or anonymously authorized. CubeFS's signature mechanism protects the security of user data in the following dimensions: Requester Identity Authentication: Requesters must authenticate their identity through signature calculations using Access Key (AK) and Secret Key (SK). The server can identify the requester's identity through the AK. Data Integrity: To prevent data tampering during transmission, the request elements are included in the signature calculation. Upon receiving a request, the server performs a signature calculation using the received request elements. If the request elements have been tampered with, the signature comparison between the client and server will not match. ObjectNode supports three compatible S3 protocol signature algorithms: V2, V4, and STS. It also supports three sub-signature methods: Header, Query, and Form, ensuring the data security of user requests throughout the process. Permission control is an authorization strategy based on users and resources, which regulates access to user actions. Common application scenarios include controlling user and API combinations, controlling client IP addresses, request referers, and implementing internal and external network isolation access control. CubeFS provides the following permission control strategies primarily for accessing storage buckets and objects: Bucket Policy and ACL. In CubeFS's permission control, the first validation performed is the Bucket Policy. Only when a bucket does not have a set policy or the policy does not match the corresponding permissions, will ACL validation be performed. Object Lock enables the storage of objects in a Write Once, Read Many (WORM) mode. Object Lock allows users to comply with regulatory requirements for WORM storage and provides additional protection to prevent objects from being modified or deleted. An interface is provided to users for setting (canceling) and retrieving the Bucket object lock configuration. Once the Bucket object lock is configured by the user, all newly uploaded objects will adhere to this configuration, while existing objects remain unaffected. Users are provided with the functionality to retrieve the retention period and retention mode of objects through the HeadObject and GetObject methods. During the object lock protection period, objects cannot be deleted or overwritten. 
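Because ObjectNode is S3-compatible and validates V2/V4 signatures, any standard S3 client can exercise the signed-request path described above. The endpoint, bucket name, and credentials below are placeholders for your own deployment.

```bash
export AWS_ACCESS_KEY_ID=<your-ak>
export AWS_SECRET_ACCESS_KEY=<your-sk>
aws s3api list-objects --bucket example-bucket \
  --endpoint-url http://<objectnode-host>:<port>
```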
To ensure the availability of CubeFS services and mitigate the impact of abnormal user traffic, ObjectNode supports flow control policies at the concurrency, QPS (Queries Per Second), and bandwidth dimensions for different users under the S3API level. Strengthening client-side control, we provide IP-based blocking policies for business and operations managers. Once an IP address belonging to a client mounting requests or an already mounted client is added to the blacklist, it will no longer be able to access the" }, { "data": "For example: Preventing unauthorized access: ACL IP blacklisting can be used to block access from known malicious IP addresses, thus protecting network resources from unauthorized access or attacks. Blocking malicious traffic: It helps to block malicious traffic from these sources, such as distributed denial of service (DDoS) attacks, malicious scanning, or web crawling. Usage Methods ``` ./cfs-cli acl Manage cluster volumes acl black list Usage: cfs-cli acl [command] Aliases: acl, acl Available Commands: add add volume ip check check volume ip del del volume ip list list volume ip list ``` The master, datanode, metanode, authnode, objectnode, lcnode, and other server-side components have a range of listening ports. It is recommended to only open necessary ports and close unnecessary ones. This can be achieved through firewall rules or system configurations. For example, if you only need to remotely manage the system via SSH, you can close other unnecessary remote management ports such as Telnet. The operation audit trail of the mount point is stored in a locally specified directory on the client, making it convenient to integrate with third-party log collection platforms. The local audit log feature can be enabled or disabled in the client configuration. Clients can also receive commands via HTTP to actively enable or disable the log feature without the need for remounting. The client audit logs are recorded locally, and when a log file exceeds 200MB, it is rolled over. The outdated logs after rolling over are deleted after 7 days. In other words, the audit logs are retained for 7 days by default, and outdated log files are scanned and deleted every hour. Starting from version 3.3.1, client-side auditing is enabled by default. However, instead of storing audit logs on the client, they are now stored on the server. Server-side auditing can be configured to enable or disable it, with the default setting being disabled. At the volume level, a prohibition period can be set for a file or directory after its creation, during which it is not allowed to be deleted. Users can use the CLI tool to set the deletion lock period and check if the parameter value passed is greater than 0. If it is greater than 0, the deletion lock period of the volume is updated with that value. The client periodically fetches volume information from the master and caches it. The deletion lock feature is only enabled on the client after the volume information is successfully fetched. Before executing a deletion, the client checks if the deletion lock period in the cached volume information is non-zero, indicating that the deletion lock feature is enabled. In this case, further checks are performed. If the deletion lock period set on the current volume is less than the difference between the current time and the creation time of the file or directory, the deletion can proceed with the subsequent logic. By default, the deletion lock feature is disabled for volumes. 
To enable this feature, it can be turned on during the volume creation or update process. ``` volume create: volume create [VOLUME NAME] --deleteLockTime=[VALUE] volume update: volume update [VOLUME NAME] --deleteLockTime=[VALUE] ```" } ]
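For illustration, a hedged sketch of how the two CLI features above might be combined in practice; the volume name, IP address, and the unit of `deleteLockTime` (assumed to be hours here) are assumptions, so check `cfs-cli --help` on your deployment before relying on them.

```bash
# Block a suspicious client IP from mounting or accessing volume "vol-demo" (names are hypothetical).
./cfs-cli acl add vol-demo 192.168.10.23
./cfs-cli acl list vol-demo          # verify the blacklist entry

# Enable the deletion lock on an existing volume; the value's unit is assumed to be hours.
./cfs-cli volume update vol-demo --deleteLockTime=24

# Disable it again by setting the lock period back to 0.
./cfs-cli volume update vol-demo --deleteLockTime=0
```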
{ "category": "Runtime", "file_name": "security_practice.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> BPF filesystem mount ``` -h, --help help for fs ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Show bpf filesystem mount details" } ]
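A brief, hedged usage sketch; the `show` subcommand is the one referenced above, but verify it exists in your cilium-dbg build:

```bash
# Print where the BPF filesystem is mounted on this node.
cilium-dbg bpf fs show

# Cross-check against the kernel's own view of the mount table.
mount -t bpf
```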
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_fs.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Global community meeting for technical topics, covering all engineering areas (requirements, design/architecture, issues, releases, status, plans, and events). Meeting time and details can be found at: http://bit.ly/opensdstechmeeting , OpenSDS" } ]
{ "category": "Runtime", "file_name": "COMMUNITY.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "Fix route table 220 being generated infinitely, as mentioned in ; Use iptables-wrapper in the images of fabedge-agent, fabedge-connector, and fabedge-cloud-agent; Improve the startup process of fabedge-cloud-agent." } ]
{ "category": "Runtime", "file_name": "CHANGELOG-0.8.1.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
[ { "data": "The confd configuration file is written in and loaded from `/etc/confd/confd.toml` by default. You can specify the config file via the `-config-file` command line flag. Note: You can use confd without a configuration file. See . Optional: `backend` (string) - The backend to use. (\"etcd\") `client_cakeys` (string) - The client CA key file. `client_cert` (string) - The client cert file. `client_key` (string) - The client key file. `confdir` (string) - The path to confd configs. (\"/etc/confd\") `interval` (int) - The backend polling interval in seconds. (600) `log-level` (string) - level which confd should log messages (\"info\") `nodes` (array of strings) - List of backend nodes. ([\"http://127.0.0.1:4001\"]) `noop` (bool) - Enable noop mode. Process all template resources; skip target update. `prefix` (string) - The string to prefix to keys. (\"/\") `scheme` (string) - The backend URI scheme. (\"http\" or \"https\") `srv_domain` (string) - The name of the resource record. `srv_record` (string) - The SRV record to search for backends nodes. `sync-only` (bool) - sync without checkcmd and reloadcmd. `watch` (bool) - Enable watch support. Example: ```TOML backend = \"etcd\" client_cert = \"/etc/confd/ssl/client.crt\" client_key = \"/etc/confd/ssl/client.key\" confdir = \"/etc/confd\" log-level = \"debug\" interval = 600 nodes = [ \"http://127.0.0.1:4001\", ] noop = false prefix = \"/production\" scheme = \"https\" srv_domain = \"etcd.example.com\" ```" } ]
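As a hedged illustration of consuming this file, the sketch below assumes the standard upstream confd flags (`-onetime` in particular); flag support in Calico's confd fork may differ:

```bash
# Validate the configuration with a single synchronous run against the etcd backend.
confd -config-file /etc/confd/confd.toml -onetime -log-level debug

# Then run it as a long-lived process using the interval/watch settings from the file.
confd -config-file /etc/confd/confd.toml
```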
{ "category": "Runtime", "file_name": "configuration-guide.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "To enable Harvester to utilize Kubevirts live migration support, we need to allow for volume live migration, so that a Kubevirt triggered migration will lead to a volume migration from the old node to the new node. https://github.com/longhorn/longhorn/issues/2127 https://github.com/rancher/harvester/issues/384 https://github.com/longhorn/longhorn/issues/87 Support Harvester VM live migration Using multiple engines for faster volume failover for other scenarios than live migration We want to add volume migration support so that we can use the VM live migration support of Kubevirt via Harvester. By limiting this feature to that specific use case we can use the csi drivers attach / detach flow to implement migration interactions. To do this, we need to be able to start a second engine for a volume on a different node that uses matching replicas of the first engine. We only support this for a volume while it is used with `volumeMode=BLOCK`, since we don't support concurrent writes and having kubernetes mount a filesystem even in read only mode can potentially lead to a modification of the filesystem (metadata, access time, journal replay, etc). Previously the only way to support live migration in Harvester was using a Longhorn RWX volume that meant dealing with NFS and it's problems, instead we want to add support for live migration for a traditional Longhorn volume this was previously implemented for the old RancherVM. After this enhancement Longhorn will support a special `migratable` flag that allows for a Longhorn volume to be live migrated from one node to another. The assumption here is that the initial consumer will never write again to the block device once the new consumer takes over. To test one needs to create a storage class with `migratable: \"true\"` set as a parameter. Afterwards an RWX PVC is necessary since migratable volumes need to be able to be attached to multiple nodes. ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: longhorn-migratable provisioner: driver.longhorn.io allowVolumeExpansion: true parameters: numberOfReplicas: \"3\" staleReplicaTimeout: \"2880\" # 48 hours in minutes fromBackup: \"\" migratable: \"true\" ``` We use CirOS as our test image for the live migration. The login account is `cirros` and the password is `gocubsgo`. To test with Harvester one can use the below example yamls as a quick start. Deploy the below yaml so that Harvester will download the CirrOS image into the local Minio store. NOTE: The CirrOS servers don't support the range request which Kubevirt importer uses, which is why we let harvester download the image first. ```yaml apiVersion: harvester.cattle.io/v1alpha1 kind: VirtualMachineImage metadata: name: image-jxpnq namespace: default spec: displayName: cirros-0.4.0-x86_64-disk.img url: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img ``` Afterwards deploy the" }, { "data": "to create a live migratable virtual machine. 
```yaml apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: labels: harvester.cattle.io/creator: harvester name: cirros-rwx-blk spec: dataVolumeTemplates: apiVersion: cdi.kubevirt.io/v1alpha1 kind: DataVolume metadata: annotations: cdi.kubevirt.io/storage.import.requiresScratch: \"true\" name: cirros-rwx-blk spec: pvc: accessModes: ReadWriteMany resources: requests: storage: 8Gi storageClassName: longhorn-migratable volumeMode: Block source: http: certConfigMap: importer-ca-none url: http://minio.harvester-system:9000/vm-images/image-jxpnq # locally downloaded cirros image running: true template: metadata: annotations: harvester.cattle.io/diskNames: '[\"cirros-rwx-blk\"]' harvester.cattle.io/imageId: default/image-jxpnq labels: harvester.cattle.io/creator: harvester harvester.cattle.io/vmName: cirros-rwx-blk spec: domain: cpu: cores: 1 sockets: 1 threads: 1 devices: disks: disk: bus: virtio name: disk-0 inputs: [] interfaces: masquerade: {} model: virtio name: default machine: type: q35 resources: requests: memory: 128M hostname: cirros-rwx-blk networks: name: default pod: {} volumes: dataVolume: name: cirros-rwx-blk name: disk-0 ``` Once the `cirros-rwx` virtual machine is up and running deploy the `cirros-rwx-migration.yaml` to initiate a virtual machine live migration. ```yaml apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachineInstanceMigration metadata: name: cirros-rwx-blk spec: vmiName: cirros-rwx-blk ``` volume detach call now expects a `detachInput { hostId: \"\" }` if `hostId==\"\"` it will be treated as detach from all nodes same behavior as before. csi driver now calls volume attach/detach for all volume types: RWO, RWX (NFS), RWX (Migratable). the api volume-manager now determines, whether attach/detach is necessary and valid instead of the csi driver. If a volume is already attached (to the requested node) we will return the current volume. If a volume is mode RWO, it will be attached to the requested node, unless it's attached already to a different node. If a volume is mode RWX (NFS), it will only be attached when requested in maintenance mode. Since in other cases the volume is controlled by the share-manager. If a volume is mode RWX (Migratable), will initially be attached to the requested node unless already attached, at which point a migration will be initiated to the new node. If a volume is already detached (from all, from the requested node) we will return the current volume. If a volume is mode RWO, It will be detached from the requested node. If a volume is mode RWX (NFS), it will only be detached if it's currently attached in maintenance mode. Since in other cases the volume is controlled by the share-manager. If a volume is mode RWX (Migratable) It will be detached from the requested node. if a migration is in progress then depending on the requested node to detach different migration actions will happen. A migration confirmation will be triggered if the detach request is for the first node. A migration rollback will be triggered if the detach request is for the second" }, { "data": "The live migration intention is triggered and evaluated via the attach/detach calls. The expectation is that Kubernetes will bring up a new pod that requests attachment of the already attached volume. This will initiate the migration start, after this there are two things that can happen. Either Kubernetes will terminate the new pod which is equivalent to a migration rollback, or the old pod will be terminated which is equivalent to a migration complete operation. 
Users launch a new VM with a new migratable Longhorn volume -> A migratable volume is created then attached to node1. Similar to regular attachment, Longhorn will set `v.spec.nodeID` to `node1` here. Users launch the 2nd VM (pod) with the same Longhorn volume -> Kubernetes requests that the volume (already attached) be attached to node2. Then Longhorn receives the attach call and set `v.spec.migrationNodeID` to `node2` with `v.spec.nodeID = node1`. Longhorn volume-controller brings up the new engine on node2, with inactive matching replicas (same as live engine upgrade) Longhorn CSI driver polls for the existence of the second engine on node2 before acknowledging attachment success. Once the migration is started (running engines on both nodes), the following detach decides whether migration is completed successfully, or a migration rollback is desired: If succeeded: Kubevirt will remove the original pod on `node1`, this will lead to requesting detachment from node1, which will lead to longhorn setting `v.spec.nodeID` to `node2` and unsetting `v.spec.migrationNodeID` If failed: Kubevirt will terminate the new pod on `node2`, this will lead to requesting detachment from node2, which will lead to longhorn keeping `v.spec.nodeID` to `node1` and unsetting `v.spec.migrationNodeID` Longhorn volume controller then cleans up the second engine and switches the active replicas to be the current engine ones. In summary: ``` n1 | vm1 has the volume attached (v.spec.nodeID = n1) n2 | vm2 requests attachment [migrationStart] -> (v.spec.migrationNodeID = n2) volume-controller brings up new engine on n2, with inactive matching replicas (same as live engine upgrade) csi driver polls for existence of second engine on n2 before acknowledging attach The following detach decides whether a migration is completed successfully, or a migration rollback is desired. n1 | vm1 requests detach of n1 [migrationComplete] -> (v.spec.nodeID = n2, v.spec.migrationNodeID = \"\") n2 | vm2 requests detach of n2 [migrationRollback] -> (v.spec.NodeID = n1, v.spec.migrationNodeID = \"\") The volume controller then cleans up the second engine and switches the active replicas to be the current engine ones. ``` E2E test for migration successful E2E test for migration rollback Requires using a storage class with `migratable: \"true\"` parameter for the harvester volumes as well as an RWX PVC to allow live migration in Kubernetes/Kubevirt." } ]
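To observe the flow described above from the outside, a rough sketch of kubectl commands one might run while the migration is in progress; resource names match the example manifests, and column output will vary by Longhorn/KubeVirt version:

```bash
# Watch the KubeVirt migration object created from cirros-rwx-migration.yaml.
kubectl get virtualmachineinstancemigration cirros-rwx-blk -w

# Inspect the Longhorn volume to see nodeID/migrationNodeID being set and cleared.
kubectl -n longhorn-system get volumes.longhorn.io -o yaml | grep -E 'nodeID|migrationNodeID'

# Two engines should be visible for the volume while the migration is active.
kubectl -n longhorn-system get engines.longhorn.io
```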
{ "category": "Runtime", "file_name": "20210216-volume-live-migration.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This document describes the process for improving developer documentation by creating PRs at OpenEBS. Developer documentation is available under the folder: The developer documentation includes anything that will help the community like: Architecture and Design documentation Technical or Research Notes FAQ Process Documentation Generic / Miscellaneous Notes We are also looking for an explainer video or presentation from the community that helps new developers in understanding the OpenEBS Architecture, it's use cases, and installation procedure locally. At a very high level, the process to contributing and improving the code is pretty simple: Submit an issue describing your proposed change Create your development branch Commit your changes Submit your Pull Request The following sections describe some guidelines that can come in handy with the above process. Followed by the guidelines, is a with frequently used git commands. Some general guidelines when submitting issues for developer documentation: If the proposed change requires an update to the existing page, please provide a link to the page in the issue. If you want to add a new page, then go ahead and open an issue describing the requirement for the new page. You can also help with some existing issues under this category available at Fork the repository and if you have forked it already, rebase with master branch to fetch latest changes to your local system. Create a new development branch in your forked repository with the following naming convention: \"task description-#issue\" Example: This change is being developed with the branch named: OpenEBS-DevDoc-PR-Workflow-#213 Reference the issue number along with a brief description in your commits Set your commit.template to the `COMMIT_TEMPLATE` given in the `.github` directory. `git config --local commit.template $GOPATH/src/github.com/openebs/openebs/.github` Rebase your development branch Submit the PR from the development branch to the openebs/openebs:master Incorporate review comments, if any, in the development branch. Once the PR is accepted, close the branch. After the PR is merged the development branch in the forked repository can be deleted. If you need any help with git, refer to this and go back to the guide to proceed." } ]
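Condensed into shell form, the fork-branch-commit workflow described above might look like the following; the fork URL is a placeholder and the branch name reuses the example from the text:

```bash
# Clone your fork and track upstream so you can rebase against the latest master.
git clone https://github.com/<your-user>/openebs.git && cd openebs
git remote add upstream https://github.com/openebs/openebs.git
git fetch upstream && git rebase upstream/master

# Create the development branch using the "task description-#issue" convention.
git checkout -b OpenEBS-DevDoc-PR-Workflow-#213

# Point git at the repository's commit template (path as given in the guide).
git config --local commit.template $GOPATH/src/github.com/openebs/openebs/.github

# Commit with a reference to the issue, then push the branch to your fork and open the PR.
git commit -m "Improve developer doc PR workflow (#213)"
git push origin OpenEBS-DevDoc-PR-Workflow-#213
```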
{ "category": "Runtime", "file_name": "CONTRIBUTING-TO-DEVELOPER-DOC.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Retrieve information about an identity ``` cilium-dbg identity get [flags] ``` ``` -h, --help help for get --label strings Label to lookup -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage security identities" } ]
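A short usage sketch based on the flags above; the label values and identity ID are illustrative:

```bash
# Look up the numeric security identity associated with a set of labels.
cilium-dbg identity get --label k8s:io.kubernetes.pod.namespace=default --label k8s:app=frontend

# Fetch a specific identity by ID and emit JSON for scripting.
cilium-dbg identity get 30414 -o json
```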
{ "category": "Runtime", "file_name": "cilium-dbg_identity_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "name: Document about: Create or update document title: \"[DOC] \" labels: kind/doc assignees: '' <!--A clear and concise description of what the document is.--> <!--Add any other context or screenshots about the document request here.-->" } ]
{ "category": "Runtime", "file_name": "doc.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "NOTE: These instructions apply only to Calico versions v3.21 or greater. For older releases, refer to the instructions in the corresponding `release-vX.Y` branch. Generally, the release environment is managed through Semaphore and will already meet these requirements. However, if you must run the process locally then the following requirements must be met. To publish Calico, you need the following permissions: Write access to the projectcalico/calico GitHub repository. You can create a for GitHub and export it as the `GITHUB_TOKEN` env var (for example by adding it to your `.profile`). Push access to the Calico DockerHub repositories. Assuming you've been granted access by an admin: ``` docker login ``` Push access to the Calico quay.io repositories. Assuming you've been granted access by an admin: ``` docker login quay.io ``` Push access to the gcr.io/projectcalico-org repositories. Note: Some of the repos do not yet support credential helpers, you must use one of the token-based logins. For example, assuming you've been granted access, this will configure a short-lived auth token: ``` gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://gcr.io ``` You must be a member of the Project Calico team on Launchpad, and have uploaded a GPG identity to your account, for which you have the secret key. You must be able to access binaries.projectcalico.org. To publish the helm release to the repo, youll need an AWS helm profile: Add this to your ~/.aws/config ``` [profile helm] rolearn = arn:aws:iam::<productionaccount_id>:role/CalicoDevHelmAdmin mfaserial = arn:aws:iam::<tigera-devaccount_id>:mfa/myusername source_profile = default region = us-east-2 ``` Your user will need permission for assuming the helm admin role in the production account. You'll also need several GB of disk space. Some of the release scripts also require tools to be installed in your dev environment: the GitHub `hub` tool. Finally, the release process assumes that your repos are checked out with name `origin` for the git remote for the main Calico repo. To verify that the code and GitHub are in the right state for releasing the chosen version. either by merging PRs, or kicking them out of the Milestone. relevant to the release. for this release. . for the target release branch. Check the status of each of these items daily in the week leading up to the release. When starting development on a new minor release, the first step is to create a release" }, { "data": "For patch releases, this section can be skipped and you can go directly to Create a new branch off of the latest master and publish it, along with a dev tag for the next release. ``` git checkout master && git pull origin master ``` ``` make create-release-branch ``` Checkout the newly created branch. ``` git checkout release-vX.Y ``` Update manifests to use the new release branch instead of master. Update versions in the following files: charts/calico/values.yaml charts/tigera-operator/values.yaml Then, run manifest generation ``` make generate ``` Commit your changes ``` Update manifests for release-vX.Y ``` Then, push your changes to the branch. ``` git push origin release-vX.Y ``` On netlify create a new site using the `release-vX.Y` branch (You should at least have write access to this repo for site creation) Rename the randomly generated site name to follow the same naming convention as other releases (Ex: `calico-vX-Y`). Ensure that the site is generated properly by visiting site URL (Ex. 
https://calico-vX-Y.netlify.app/archive/vX.Y/). Cherry-pick the proxy rules commit created earlier to the latest production branch, as well as `master`. This will make the candidate site docs available at `projectcalico.docs.tigera.io/archive/vX.Y/` (Note: the trailing slash) Once a new branch is cut, we need to ensure a new milestone exists to represent the next release that will be cut from the master branch. Go to the Create a new release of the form `Calico vX.Y+1.0`. Leave the existing `Calico vX.Y.0` milestone open. Create a new branch based off of `release-vX.Y`. ``` git checkout release-vX.Y && git pull origin release-vX.Y ``` ``` git checkout -b build-vX.Y.Z ``` Update version information in the following files: `charts/calico/values.yaml`: Calico version used in manifest generation. `charts/tigera-operator/values.yaml`: Versions of operator and calicoctl used in the helm chart and manifests. Update manifests (and other auto-generated code) by running the following command in the repository root. ``` make generate ``` Follow the steps in to generate candidate release notes. Then, add the newly created release note file to git. ``` git add release-notes/<VERSION>-release-notes.md ``` Commit your changes. For example: ``` git commit -m \"Updates for vX.Y.Z\" ``` Push the branch to `github.com/projectcalico/calico` and create a pull request. Get it reviewed and ensure it passes CI before moving to the next step. To build and publish the release artifacts, find the desired commit , verify that all tests for that commit have passed, and press the `Publish official release` manual promotion button. Wait for this job to complete before moving on to the next step. Follow . Check out the release tag in the `projectcalico/calico` repository. ``` git fetch origin --tags && git checkout vX.Y.Z ``` In your environment, set `HOST` to the GCP name for" }, { "data": "`GCLOUDARGS` to the `--zone` and `--project` args needed to access that host, and `SECRETKEY` to the secret key for a GPG identity that you have uploaded to your Launchpad account. Establish GCP credentials so that gcloud with `HOST` and `GCLOUD_ARGS` can access binaries.projectcalico.org. Build OpenStack packages from the checked out commit. ``` make -C hack/release/packaging release-publish VERSION=vX.Y.Z ``` Merge the PR branch created in step 4.a - `build-vX.Y.Z` and delete the branch from the repository. Go to the Open a new milestone of the form `Calico vX.Y.Z` for the next patch release in the series if it does not yet exist. Close out the milestone for the release that was just published, moving any remaining open issues and PRs to the newly created milestone. Run the post-release checks. The release validation checks will run - they check for the presence of all the required binaries tarballs, tags, etc. ``` make VERSION=... FLANNELVERSION=... OPERATORVERSION=... postrelease-checks ``` Check the output of the tests - if any test failed, dig in and understand why. Kick off some e2e tests to test the contents of the release. Release notes for a Calico release contain notable changes across Calico repositories. To write release notes for a given version, perform the following steps. Check the merged pull requests in the milestone and make sure each has a release note if it needs one. Use this URL to query for PRs, replacing `vX.Y.Z` with your desired version. 
``` https://github.com/issues?utf8=%E2%9C%93&q=user%3Aprojectcalico+milestone%3A%22Calico+vX.Y.Z%22+ ``` Each PR that wants a release note must meet the following conditions to have its release note considered: It is in the correct `Calico vX.Y.Z` GitHub milestone It has the `release-note-required` label It has one or more release notes included in the description (Optional). Run the following command to collect all release notes for the given version. ``` make release-notes ``` A file called `release-notes/<VERSION>-release-notes.md` will be created with the raw release note content. > NOTE: If you receive a ratelimit error, you can specify a `GITHUB_TOKEN` in the above command to > increase the number of allowed API calls. . Edit the generated file. The release notes should be edited to highlight a few major enhancements and their value to the user. Bug fixes and other changes should be summarized in a bulleted list at the end of the release notes. Any limitations or incompatible changes in behavior should be explicitly noted. Consistent release note formatting is important. Here are some examples for reference: - Add the generated file to git. ``` git add release-notes/ ```" } ]
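For the rate-limit note above, a small sketch of how the token might be supplied; the token file location is a placeholder:

```bash
# Export a GitHub personal access token so release-note collection is not rate limited.
export GITHUB_TOKEN="$(cat ~/.config/github-token)"   # hypothetical token location
make release-notes

# Review the generated file, then stage it.
git add release-notes/
```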
{ "category": "Runtime", "file_name": "RELEASING.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting Dan Buch at [email protected]. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "[TOC] Note: gVisor supports x86\\_64 and ARM64, and requires Linux 4.14.77+ (). To download and install the latest release manually follow these steps: ```bash ( set -e ARCH=$(uname -m) URL=https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH} wget ${URL}/runsc ${URL}/runsc.sha512 \\ ${URL}/containerd-shim-runsc-v1 ${URL}/containerd-shim-runsc-v1.sha512 sha512sum -c runsc.sha512 \\ -c containerd-shim-runsc-v1.sha512 rm -f *.sha512 chmod a+rx runsc containerd-shim-runsc-v1 sudo mv runsc containerd-shim-runsc-v1 /usr/local/bin ) ``` To install gVisor as a Docker runtime, run the following commands: ```shell $ /usr/local/bin/runsc install $ sudo systemctl reload docker $ docker run --rm --runtime=runsc hello-world ``` For more details about using gVisor with Docker, see . Please read the before running such a setup for production purposes. Note: It is important to copy `runsc` to a location that is readable and executable to all users, since `runsc` executes itself as user `nobody` to avoid unnecessary privileges. The `/usr/local/bin` directory is a good place to put the `runsc` binary. First, appropriate dependencies must be installed to allow `apt` to install packages via https: ```bash sudo apt-get update && \\ sudo apt-get install -y \\ apt-transport-https \\ ca-certificates \\ curl \\ gnupg ``` Next, configure the key used to sign archives and the repository. NOTE: The key was updated on 2021-07-13 to replace the expired key. If you get errors about the key being expired, run the `curl` command below again. ```bash curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg echo \"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main\" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null ``` Now the runsc package can be installed: ```bash sudo apt-get update && sudo apt-get install -y runsc ``` If you have Docker installed, it will be automatically configured. The `runsc` binaries and repositories are available in multiple versions and release channels. You should pick the version you'd like to install. For experimentation, the nightly release is recommended. For production use, the latest release is recommended. After selecting an appropriate release channel from the options below, proceed to the preferred installation mechanism: manual or from an `apt`" }, { "data": "Note: Older releases are still available but may not have an `${ARCH}` component in the URL. These release were available for `x86_64` only. Binaries are available for every commit on the `master` branch, and are available at the following URL: `https://storage.googleapis.com/gvisor/releases/master/latest/${ARCH}` You can use this link with the steps described in . For `apt` installation, use the `master` to configure the repository: ```bash sudo add-apt-repository \"deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases master main\" ``` Nightly releases are built most nights from the master branch, and are available at the following URL: `https://storage.googleapis.com/gvisor/releases/nightly/latest/${ARCH}` You can use this link with the steps described in . Specific nightly releases can be found at: `https://storage.googleapis.com/gvisor/releases/nightly/${yyyy-mm-dd}/${ARCH}` Note that a release may not be available for every day. 
For `apt` installation, use the `nightly` to configure the repository: ```bash sudo add-apt-repository \"deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases nightly main\" ``` The latest official release is available at the following URL: `https://storage.googleapis.com/gvisor/releases/release/latest/${ARCH}` You can use this link with the steps described in . For `apt` installation, use the `release` to configure the repository: ```bash sudo add-apt-repository \"deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases release main\" ``` Specific releases are the latest for a given date. Specific releases should be available for any date that has a point release. A given release is available at the following URL: `https://storage.googleapis.com/gvisor/releases/release/${yyyymmdd}/${ARCH}` You can use this link with the steps described in . See the page for information about specific releases. For `apt` installation of a specific release, which may include point updates, use the date of the release for repository, e.g. `${yyyymmdd}`. ```bash sudo add-apt-repository \"deb [arch=amd64,arm64] https://storage.googleapis.com/gvisor/releases yyyymmdd main\" ``` Note: only newer releases may be available as `apt` repositories. Point releases correspond to tagged in the Github repository. A given point release is available at the following URL: `https://storage.googleapis.com/gvisor/releases/release/${yyyymmdd}.${rc}/${ARCH}` You can use this link with the steps described in . Note that `apt` installation of a specific point release is not supported. After installation, try out `runsc` by following the , , or ." } ]
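After installing through any of these channels, a quick sanity check might look like the following sketch; the `dmesg` trick relies on gVisor's sandboxed kernel reporting its own boot messages, and the exact output varies by release:

```bash
# Confirm the binary is on PATH and report its build.
runsc --version

# Check that Docker knows about the runsc runtime.
docker info --format '{{json .Runtimes}}' | grep -o runsc

# Inside a gVisor sandbox, dmesg shows the Sentry's boot messages instead of the host kernel's.
docker run --rm --runtime=runsc alpine dmesg | head
```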
{ "category": "Runtime", "file_name": "install.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "This runbook will cover triaging API changes and ways to implement them appropriately. Deprecated - We will consider a deprecated API element (endpoint and/or parts of an endpoint) to be an element which still provides users with access to its backing functionality and can be used, but will soon be removed completely along with said functionality in an upcoming version. Mandatory endpoint - We will consider an endpoint mandatory if Firecracker cannot operate normally without performing a request to it. Optional endpoint - We will consider an endpoint optional if Firecracker can operate normally without performing a request to it and the functionality behind it is not essential. Mandatory header/field - We will consider a header/field mandatory in an HTTP message if the request will fail without specifying said header/field. Optional header/field - We will consider a header/field optional in an HTTP message if the request will succeeds without specifying said header/field. For the purposes of this document, there are 2 main categories for API changes, namely breaking and non-breaking. A breaking change in the API is a change that makes the API incompatible with the previous version (backwards incompatible). In an effort to avoid a breaking change, we may take the route of deprecation and incrementing the minor version in an effort to preserve backwards compatibility, but breaking changes will always ultimately result in incrementing the major version. Here is a non-exhaustive list of such changes: Adding a new mandatory endpoint/HTTP method. Removing an endpoint/method. Adding a mandatory request header/field. Removing a request header/field. Adding a mandatory response field. Removing a response header/field. A change in the API is not a breaking change if the version resulting from it is compatible with the previous one (backwards compatible). The outcome of a non-breaking change should always include incrementing the minor version but must not lead to incrementing the major version by itself. Here is a non-exhaustive list of such changes: Deprecating an endpoint/method/field. Adding a new optional endpoint/method. Adding an optional request header/field. Adding a response header. Adding additional valid inputs for fields in API requests. Making mandatory headers/fields optional. Making mandatory endpoints optional. Changing the URI of an endpoint. Changing the metrics output format. API changes result in version increases. As Firecrackers support policy is based on , we will look at API changes from this point of view. Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes; MINOR version when you add functionality in a backwards compatible manner; PATCH version when you make backwards compatible bug fixes. *All deprecated endpoints are supported until at least the next major version release, where they may be" }, { "data": "We will go through multiple types of API changes and provide ways to ensure we dont break our backwards compatibility promise to our customers. The list is split into categories of components changed. Entire endpoints Adding an optional endpoint with new functionality - Increment minor version. Adding a command line parameter - Increment minor version. Removing an endpoint - Deprecate endpoint and increment minor version Remove endpoint when incrementing major version. Adding a mandatory endpoint - Increment major version. Request Adding an optional header/field - Increment minor version. 
Renaming a header/field - Accept both names and deprecate the old one Remove old name when incrementing major version. Removing a header/field - Make said header/field optional Remove header/field when incrementing major version. Changing the URI of an endpoint - Redirect the old endpoint to the new one and deprecate the old one Remove old endpoint when incrementing major version. Adding a mandatory header/field - Increment major version. Response Adding a header/field - Create a new, separate endpoint with the changes and deprecate the old one Remove old endpoint when incrementing major version. Removing a header/field - Create a new, separate endpoint with the changes and deprecate the old one Remove old endpoint when incrementing major version. Command line parameter Renaming a command line parameter - Accept both names and deprecate the old one Remove old name when incrementing major version. Changing expected value taken by a command line parameter - Accept both names and deprecate the old one Remove old name when incrementing major version. In case the outlined solution for your case is not feasible (e.g. because of security concerns), break the glass and increment the major version. As outlined in the diagram above, sometimes we have to deprecate endpoints partially or entirely. In this section we will go through different situations where we have to deprecate something and ways of avoiding common pitfalls when doing so. Some paths in the flowchart above lead to deprecation. Based on the initial conditions, there are 2 major cases where we need to deprecate an endpoint: Changing an existing endpoint Often happens because directly changing the endpoint would be a breaking change. We usually create a clone of the old endpoint we want to deprecate and make the necessary changes to it. We usually expose both endpoints in the next minor version while marking the old one as deprecated. The old endpoint retains its previous name. When naming the new endpoint: for HTTP endpoints we follow a per-endpoint versioning scheme; in cases where we cant find a fitting name for the new endpoint, the simplest way forward is to take the old URI and append `/v2` to it. for command line endpoints, we can usually find a different name for the new" }, { "data": "Deprecating an endpoint without adding a replacement to it Often happens when we want to phase out a certain feature or functionality, but doing so immediately would be a breaking change. We just mark the endpoint as deprecated. Make sure that any changes you make in the code are also reflected in the swagger specification. Some tips: There is nothing in the swagger file that shows whether an endpoint is mandatory or optional, its all code logic. Mandatory fields in a request or response body are marked with `required: true` in the swagger definition. All other fields are optional. If you need to redirect an endpoint, you have to clone the old one under the new URI in the swagger specification. When marking: an HTTP endpoint as deprecated: Add a comment for the parsing function of the endpoint stating that it is deprecated. Log a `warn!` message stating that the user accessed a deprecated endpoint. Increment the `deprecatedHttpApi` metric. Include the `Deprecated` header in the response. a header field in an HTTP endpoint as deprecated: Add a comment in the parsing function where we check the presence of the header stating that it is deprecated. If the header is present, log a `warn!` message stating that the user used a deprecated field. 
Increment the `deprecatedHttpApi` metric. Include the Deprecated header in the response. a command line parameter as deprecated: Mention it is deprecated in the help message of the parameter in the argument parser. Add it in the `warndeprecatedparameters` function where we log it and increment the `deprecatedCmdLineApi` metric. When doing a major release, the API can have breaking changes. This is the _only time_ where we can safely remove deprecated elements of the API. To remove a deprecated element of the API: Remove the associated functionality from the codebase (usually in `vmm` or `mmds`); Remove the parsing logic in `api_server`; Remove any unit and integration tests associated with this element. In this guide we set out to remove the `vsock_id` field in `PUT`s on `/vsock`. This was implemented in and we will go step by step through the changes in order to understand the process of changing something in the Firecracker API. We go through the flowchart; we want to remove a field in the body of a HTTP request. So we follow the flowchart like this: Change an existing endpoint Request Remove header or field Make it optional Deprecate Increment minor version. Now that we know we need to make the field optional and deprecate it, its time for the code changes (reference implementation in ). We go to the function in `api_server/src/requests` which is responsible for parsing this request, which is `parseputvsock` in this case, and do the" }, { "data": "We find the associated `vmmconfig` struct which `serdejson` uses for deserialization, in this case `VsockDeviceConfig`. In the struct referenced above, we make the parameter optional by encapsulating it in an `Option` with `#[serde(default)]` and `#[serde(skipserializingif = \"Option::is_none\")]` so that we dont break existing implementations, but we follow the new, desired usage of the endpoint. After deserializing the body of the request into the struct, we check for the existence of the field we want to deprecate, in this case by calling `vsockcfg.vsockid.is_some()`. If the field is there, we must mark this request as being deprecated, so we craft a deprecation message (`\"PUT /vsock: vsock_id field is deprecated.\"`) and increment the deprecated HTTP API metric (`METRICS.deprecatedapi.deprecatedhttpapicalls.inc()`). We create a new `ParsedRequest` where, if we marked the request as deprecated, we append the deprecation message into its `parsing_info` structure, in this case by calling `parsedreq.parsinginfo().appenddeprecationmessage(msg)`. Dont forget to comment your code! Comments should reflect what is deprecated and clearly describe the code paths where you handle the deprecation case. Add a unit test where you test your new code paths. Fix all other failing unit tests. Update the swagger file to reflect the change, in this case by removing the `vsock_id` field from the required parameter list in the `Vsock` definition and adding a description to it stating that it is deprecated since the current version. Update any relevant documentation. We update the python integration tests to reflect the change (reference implementation in ). We refactor the relevant `tests/integrationtests/functional/testapi.py` test to use the artifact model instead of the fixture one. If the test already uses the artifact model, you can skip this step. We make sure to run the test with the current build, as well as with future Firecracker versions by specifying the unreleased version in the `min_version` parameter of `artifacts.firecrackers()`. 
We do this in order to ensure that, when we create patch releases on older branches, we test the API with future binaries to enforce backwards compatibility. Disclaimer: This test will fail when running with the binary artifact fetched from S3 until you update the binary there with your current build. You should only do this once your PR has all necessary approves and this test is the last thing keeping it from getting merged. We check that, when the deprecated field is present in the request, the `Deprecation` header is also present in the response by asserting `response.headers['deprecation']`. We do not also check that the header is not present when the field is not present because, in a future version, some other field may be deprecated in the same request and would return the header anyway, resulting in a fail in our test when it shouldnt. Fix all other failing integration tests." } ]
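To see the deprecation signalling from this walkthrough end to end, a hedged curl sketch against a running Firecracker API socket; the socket path, guest CID, and UDS path are illustrative:

```bash
# PUT /vsock including the deprecated vsock_id field; the response should carry a
# "Deprecation" header and the deprecated HTTP API metric should increment.
curl --unix-socket /tmp/firecracker.socket -i \
  -X PUT 'http://localhost/vsock' \
  -H 'Content-Type: application/json' \
  -d '{ "vsock_id": "vsock0", "guest_cid": 3, "uds_path": "/tmp/v.sock" }'

# The same request without vsock_id is the non-deprecated form.
curl --unix-socket /tmp/firecracker.socket -i \
  -X PUT 'http://localhost/vsock' \
  -H 'Content-Type: application/json' \
  -d '{ "guest_cid": 3, "uds_path": "/tmp/v.sock" }'
```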
{ "category": "Runtime", "file_name": "api-change-runbook.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Name | Type | Description | Notes | - | - | - DestinationUrl | Pointer to string | | [optional] `func NewVmSnapshotConfig() *VmSnapshotConfig` NewVmSnapshotConfig instantiates a new VmSnapshotConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmSnapshotConfigWithDefaults() *VmSnapshotConfig` NewVmSnapshotConfigWithDefaults instantiates a new VmSnapshotConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmSnapshotConfig) GetDestinationUrl() string` GetDestinationUrl returns the DestinationUrl field if non-nil, zero value otherwise. `func (o *VmSnapshotConfig) GetDestinationUrlOk() (*string, bool)` GetDestinationUrlOk returns a tuple with the DestinationUrl field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmSnapshotConfig) SetDestinationUrl(v string)` SetDestinationUrl sets DestinationUrl field to given value. `func (o *VmSnapshotConfig) HasDestinationUrl() bool` HasDestinationUrl returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "VmSnapshotConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: JuiceFS vs. GlusterFS slug: /comparison/juicefsvsglusterfs description: This document compares the design and features of GlusterFS and JuiceFS, helping you make an informed decision for selecting a storage solution. is an open-source software-defined distributed storage solution. It can support data storage of PiB levels within a single cluster. is an open-source, high-performance distributed file system designed for the cloud. It delivers massive, elastic, and high-performance storage at low cost. This document compares the key attributes of JuiceFS and GlusterFS in a table and then explores them in detail, offering insights to aid your team in the technology selection process. You can easily see their main differences in the table below and delve into specific topics you're interested in within this article. The table below provides a quick overview of the differences between GlusterFS and JuiceFS: | Comparison basis | GlusterFS | JuiceFS | | : | : | : | | Metadata | Purely distributed | Independent database | | Data storage | Self-managed | Relies on object storage | | Large file handling | Doesn't split files | Splits large files | | Redundancy protection | Replication, erasure coding | Relies on object storage | | Data compression | Partial support | Supported | | Data encryption | Partial support | Supported | | POSIX compatibility | Full | Full | | NFS protocol | Not directly supported | Not directly supported | | CIFS protocol | Not directly supported | Not directly supported | | S3 protocol | Supported (but not updated) | Supported | | HDFS compatibility | Supported (but not updated) | Supported | | CSI Driver | Supported | Supported | | POSIX ACLs | Supported | Supported | | Cross-cluster replication | Supported | Relies on external service | | Directory quotas | Supported | Supported | | Snapshots | Supported | Not supported (but supports cloning) | | Trash | Supported | Supported | | Primary maintainer | Red Hat, Inc | Juicedata, Inc | | Development language | C | Go | | Open source license | GPLV2 and LGPLV3+ | Apache License 2.0 | GlusterFS employs a fully distributed architecture without centralized nodes. A GlusterFS cluster consists of the server and the client. The server side manages and stores data, often referred to as the Trusted Storage Pool. This pool comprises a set of server nodes, each running two types of processes: glusterd: One per node, which manages and distributes configuration. glusterfsd: One per (storage unit), which handles data requests and interfaces with the underlying file system. All files on each brick can be considered a subset of GlusterFS. File content accessed directly through the brick or via GlusterFS clients is typically" }, { "data": "If GlusterFS experiences an exception, users can partially recover original data by integrating content from multiple bricks. Additionally, for fault tolerance during deployment, data is often redundantly protected. In GlusterFS, multiple bricks form a redundancy group, protecting data through replication or erasure coding. When a node experiences a failure, recovery can only be performed within the redundancy group, which may result in longer recovery times. When scaling a GlusterFS cluster, the scaling is typically performed on a redundancy group basis. The client side, which mounts GlusterFS, presents a unified namespace to applications. The architecture diagram is as follows (source: ): JuiceFS adopts an architecture that separates its data and metadata storage. 
File data is split and stored in object storage systems like Amazon S3, while metadata is stored in a user-selected database like Redis or MySQL. By sharing the same database and object storage, JuiceFS achieves a strongly consistent distributed file system with features like full POSIX compatibility and high performance. For details about JuiceFS architecture, see its . GlusterFS: Metadata in GlusterFS is purely distributed, lacking a centralized metadata service. Clients use file name hashing to determine the associated brick. When requests require access across multiple bricks, for example, `mv` and `ls`, the client is responsible for coordination. While this design is simple, it can lead to performance bottlenecks as the system scales. For instance, listing a large directory might require accessing multiple bricks, and any latency in one brick can slow down the entire request. Additionally, ensuring metadata consistency when performing cross-brick modifications in the event of failures can be challenging, and severe failures may lead to split-brain scenarios, requiring to achieve a consistent version. JuiceFS: JuiceFS metadata is stored in an independent database, which is called the metadata engine. Clients transform file metadata operations into transactions within this database, leveraging its transactional capabilities to ensure operation atomicity. This design simplifies JuiceFS implementation but places higher demands on the metadata engine. JuiceFS currently supports three categories of transactional databases. For details, see the . GlusterFS stores data by integrating multiple server nodes' bricks (typically built on local file systems like XFS). Therefore, it provides certain data management features, including distribution management, redundancy protection, fault switching, and silent error detection. JuiceFS, on the other hand, does not use physical disks directly but manages data through integration with various object storage systems. Most of its features rely on the capabilities of its object storage. In distributed systems, splitting large files into smaller chunks and storing them on different nodes is a common optimization technique. This often leads to higher concurrency and bandwidth when applications access such files. GlusterFS does not split large files (although it used to support Striped Volumes for large files, this feature is no longer" }, { "data": "JuiceFS splits files into 64 MiB chunks by default, and each chunk is further divided into 4 MiB blocks based on the write pattern. For details, see . GlusterFS supports both replication (Replicated Volume) and erasure coding (Dispersed Volume). JuiceFS relies on the redundancy capabilities of the underlying object storage it uses. GlusterFS: Supports only transport-layer compression. Files are compressed by clients, transmitted to the server, and decompressed by the bricks. Does not implement storage-layer compression but depends on the underlying file system used by the bricks, such as . JuiceFS supports both transport-layer and storage-layer compression. Data compression and decompression are performed on the client side. GlusterFS: Supports only , relying on SSL/TLS. Previously supported , but it is no longer supported. JuiceFS supports both . Data encryption and decryption are performed on the client side. Both and offer POSIX compatibility. GlusterFS previously had embedded support for NFSv3 but now it is . Instead, it is suggested to export the mount point using an NFS server. 
JuiceFS does not provide direct support for NFS and requires mounting followed by . GlusterFS embeds support for Windows, Linux Samba clients, and macOS CLI access (excluding macOS Finder). However, it is recommended to . JuiceFS does not offer direct support for CIFS and requires mounting followed by . GlusterFS supports S3 through the project, but the project hasn't seen recent updates since November 2017. JuiceFS . GlusterFS offers HDFS compatibility through the project, but the project hasn't seen recent updates since May 2015. JuiceFS provides . GlusterFS but the latest version was released in November 2018, and the repository is marked as DEPRECATED. JuiceFS supports CSI Driver. For details, see the . In Linux, file access permissions are typically controlled by three entities: the file owner, the group owner, and others. However, when more complex requirements arise, such as the need to assign specific permissions to a particular user within the others category, this standard mechanism does not work. POSIX Access Control Lists (ACLs) offer enhanced permission management capabilities, allowing you to assign permissions to any user or user group as needed. GlusterFS , including access ACLs and default ACLs. JuiceFS does not support POSIX ACLs. Cross-cluster replication indicates replicating data between two independent clusters, often used for geographically distributed disaster recovery. GlusterFS but requires both sides to use the same version of Gluster cluster. JuiceFS depends on the capabilities of the metadata engine and the object storage, allowing one-way replication. Both and support directory quotas, including capacity and/or file count limits. GlusterFS supports and requires all bricks to be deployed on LVM thinly provisioned volumes. JuiceFS does not support snapshots but offers directory-level cloning. GlusterFS , which is disabled by default. JuiceFS , which is enabled by default." } ]
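As a rough, hedged illustration of the architectural difference summarized above — bricks grouped into redundancy groups versus a metadata engine plus object storage — compare how a minimal setup is brought up on each side (host names, bucket, and database URL are placeholders):

```bash
# GlusterFS: aggregate bricks from three servers into a replica-3 volume, then mount it.
gluster volume create gv0 replica 3 \
  server1:/bricks/brick1/gv0 server2:/bricks/brick1/gv0 server3:/bricks/brick1/gv0
gluster volume start gv0
mount -t glusterfs server1:/gv0 /mnt/gv0

# JuiceFS: metadata goes to a database, data goes to object storage, then mount the file system.
juicefs format --storage s3 --bucket https://mybucket.s3.example.com \
  redis://127.0.0.1:6379/1 myjfs
juicefs mount -d redis://127.0.0.1:6379/1 /mnt/jfs
```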
{ "category": "Runtime", "file_name": "juicefs_vs_glusterfs.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Display cilium connectivity to other nodes ``` cilium-health status [flags] ``` ``` -h, --help help for status -o, --output string json| yaml| jsonpath='{}' --probe Synchronously probe connectivity status --succinct Print the result succinctly (one node per line) --verbose Print more information in results ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Cilium Health Client" } ]
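Typical invocations based on the flags listed above:

```bash
# Summarize node-to-node connectivity, one line per node.
cilium-health status --succinct

# Force a fresh synchronous probe and print the detailed results.
cilium-health status --probe --verbose
```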
{ "category": "Runtime", "file_name": "cilium-health_status.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark client config get\" layout: docs Get client configuration file values Get client configuration file values ``` ark client config get [KEY 1] [KEY 2] [...] [flags] ``` ``` -h, --help help for get ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Get and set client configuration file values" } ]
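A usage sketch; the `namespace` key mirrors the `-n/--namespace` option above, and the set of supported keys may differ between Ark versions:

```bash
# Print all client configuration values from the config file.
ark client config get

# Print a single key.
ark client config get namespace
```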
{ "category": "Runtime", "file_name": "ark_client_config_get.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English | Spiderpool consists of the following components: Spiderpool-controller A set of deployments that interact with the API Server, managing multiple CRD resources such as SpiderIPPool, SpiderSubnet, SpiderMultusConfig, etc. It implements validation, creation, and status updates for these CRDs. Additionally, it responds to requests from Spiderpool-agent Pods, performing functions like allocation, release, reclamation, and managing automatic IP pools. Spiderpool-agent A set of daemonsets running on each node, assisting in the installation of plugins such as Multus, Coordinator, IPAM, and CNI on each node. It responds to CNI requests for IP allocation during Pod creation and interacts with Spiderpool-controller to handle Pod IP allocation and release. It also interacts with Coordinator, assisting the Spiderpool plugin in implementing IP allocation and helping the coordinator plugin with configuration synchronization. CNI plugins include: Spiderpool IPAM plugin: a main CNI used to handle IP allocation. refer to coordinator plugin: as a chain plugin, it performs various functions such as routing coordination for multiple network interfaces, checking for IP conflicts, ensuring host connectivity, and fixing MAC addresses. refer to ifacer plugin: as a chain plugin, it automates the creation of bond and VLAN virtual interfaces that serve as parent interfaces for plugins like macvlan and ipvlan. refer to : a scheduler for other CNI plugins. CNI plugins: include , , , , , , , , etc. SR-IOV related components: : Facilitates the installation and configuration of sriov-cni. For more details, refer to . RDMA components: : Used to discover shared RDMA devices on the host and report them to Kubelet for use by the RDMA CNI. : It implements network isolation for RDMA device. : Facilitates the installation and configuration of sriov-cni. : It implements ipoib cni for infiniband scenario. In overlay networks, Spiderpool uses Multus to add an overlay NIC (such as or ) and multiple underlay NICs (such as Macvlan CNI or SR-IOV CNI) for each Pod. This offers several benefits: Rich IPAM features for underlay CNIs, including shared/fixed IPs, multi-NIC IP allocation, and dual-stack support. Route coordination for multiple underlay CNI NICs and an overlay NIC for Pods, ensuring the consistent request and reply data paths for smooth communication. Use the overlay NIC as the default one with route coordination and enable local host connectivity to enable clusterIP access, local health checks of applications, and forwarding overlay network traffic through overlay networks while forwarding underlay network traffic through underlay networks. The integration of Multus CNI and Spiderpool IPAM enables the collaboration of an overlay CNI and multiple underlay CNIs. For example, in clusters with nodes of varying network capabilities, Pods on bare-metal nodes can access both overlay and underlay NICs. Meanwhile, Pods on virtual machine nodes only serving east-west services are connected to the Overlay NIC. This approach provides several benefits: Applications providing east-west services can be restricted to being allocated only the overlay NIC while those providing north-south services can simultaneously access overlay and underlay NICs. This results in reduced Underlay IP resource usage, lower manual maintenance costs, and preserved pod connectivity within the cluster. 
Fully integrate resources from virtual machines and bare-metal" }, { "data": "In underlay networks, Spiderpool can work with underlay CNIs such as and to provide the following benefits: Rich IPAM capabilities for underlay CNIs, including shared/fixed IPs, multi-NIC IP allocation, and dual-stack support One or more underlay NICs for Pods with coordinating routes between multiple NICs to ensure smooth communication with consistent request and reply data paths Enhanced connectivity between open-source underlay CNIs and hosts using additional veth network interfaces and route control. This enables clusterIP access, local health checks of applications, and much more How can you deploy containers using a single underlay CNI, when a cluster has multiple underlying setups? Some nodes in the cluster are virtual machines like VMware that don't enable promiscuous mode, while others are bare metal and connected to traditional switch networks. What CNI solution should be deployed on each type of node? Some bare metal nodes only have one SR-IOV high-speed NIC that provides 64 VFs. How can more pods run on such a node? Some bare metal nodes have an SR-IOV high-speed NIC capable of running low-latency applications, while others have only ordinary network cards for running regular applications. What CNI solution should be deployed on each type of node? By simultaneously deploying multiple underlay CNIs through Multus CNI configuration and Spiderpool's IPAM abilities, resources from various infrastructure nodes across the cluster can be integrated to solve these problems. For example, as shown in the above diagram, different nodes with varying networking capabilities in a cluster can use various underlay CNIs, such as SR-IOV CNI for nodes with SR-IOV network cards, Macvlan CNI for nodes with ordinary network cards, and ipvlan CNI for nodes with restricted network access (e.g., VMware virtual machines with limited layer 2 network forwarding). It is hard to implement underlay CNI in public cloud, OpenStack, VMware. It requires the vendor underlay CNI on specific environments, as these environments typically have the following limitations: The IAAS network infrastructure implements MAC restrictions for packets. On the one hand, security checks are conducted on the source MAC to ensure that the source MAC address is the same as the MAC address of VM network interface. On the other hand, restrictions have been placed on the destination MAC, which only supports packet forwarding by the MAC address of VM network interfaces. The MAC address of the Pod in the common CNI plugin is newly generated, which leads to Pod communication failure. The IAAS network infrastructure implements IP restrictions on packets. Only when the destination and source IP of the packet are assigned to VM, packet could be forwarded rightly. The common CNI plugin assigns IP addresses to Pods that do not comply with IAAS settings, which leads to Pod communication failure. Spiderpool provides IP pool based on node topology, aligning with IP allocation settings of VMs. In conjunction with ipvlan CNI, it provides underlay CNI solutions for various public cloud environments. RDMA (Remote Direct Memory Access) allows network cards to directly interact with memory, reducing CPU overhead and alleviating the burden on the kernel protocol stack. This technology offloads the network protocol stack to the network card, resulting in effective reduction of network transmission latency and increased throughput. 
Currently, RDMA finds extensive applications in fields such as AI computing and storage. Macvlan, IPvlan, and SR-IOV CNIs enable transparent RDMA network card passthrough to Pods within the Kubernetes platform. Spiderpool enhances these CNIs by providing additional capabilities including IPAM, host connectivity, clusterIP access, as well as simplifying the installation process and usage steps of dependent components in the community." } ]
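To make the IPAM workflow described above concrete, the sketch below creates an IP pool and a Pod that requests it. The API version, field names and addresses are illustrative and should be verified against the Spiderpool release in use:
```shell
kubectl apply -f - <<EOF
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderIPPool
metadata:
  name: demo-underlay-pool          # illustrative pool name
spec:
  subnet: 172.18.40.0/24
  ips:
    - 172.18.40.40-172.18.40.49
EOF

kubectl run demo --image=nginx \
  --annotations='ipam.spidernet.io/ippool={"ipv4":["demo-underlay-pool"]}'
```
Spiderpool-agent resolves the annotation at CNI ADD time and hands the Pod one address from the pool, which is what enables the shared/fixed IP behaviour described earlier.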
{ "category": "Runtime", "file_name": "arch.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Tunnel endpoint map ``` -h, --help help for tunnel ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - List tunnel endpoint entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_tunnel.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document describes the versions supported by the CubeFS project. Service versioning and supported versions CubeFS versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, following Semantic Versioning terminology. New minor versions may add additional features to the API. The CubeFS project maintains release branches for the current version and previous release. For example, when v3.3. is the current version, v3.1. is supported. When v3.4. is released, v3.1. goes out of support. The project Maintainers own this decision." } ]
{ "category": "Runtime", "file_name": "plan.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List BPF datapath traffic metrics ``` cilium-dbg bpf metrics list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - BPF datapath traffic metrics" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_metrics_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Root filesystem overlay is now the default in runsc. This improves performance for filesystem-heavy workloads by overlaying the container root filesystem with a tmpfs filesystem. Learn more about this feature in the following blog that was on . -- gVisor uses a trusted filesystem proxy process (gofer) to access the filesystem on behalf of the sandbox. The sandbox process is considered untrusted in gVisors . As a result, it is not given direct access to the container filesystem and do not allow filesystem syscalls. In gVisor, the container rootfs and are configured to be served by a gofer. {:width=\"100%\"} When the container needs to perform a filesystem operation, it makes an RPC to the gofer which makes host system calls and services the RPC. This is quite expensive due to: RPC cost: This is the cost of communicating with the gofer process, including process scheduling, message serialization and system calls. To ameliorate this, gVisor recently developed a purpose-built protocol called which is much more efficient than its predecessor. gVisor is also with giving the sandbox direct access to the container filesystem in a secure manner. This would essentially nullify RPC costs as it avoids the gofer being in the critical path of filesystem operations. Syscall cost: This is the cost of making the host syscall which actually accesses/modifies the container filesystem. Syscalls are expensive, because they perform context switches into the kernel and back into userspace. To help with this, gVisor heavily caches the filesystem tree in memory. So operations like on cached files are serviced quickly. But other operations like or still need to make host syscalls. In Docker and Kubernetes, the containers root filesystem (rootfs) is based on the filesystem packaged with the image. The images filesystem is immutable. Any change a container makes to the rootfs is stored separately and is destroyed with the container. This way, the images filesystem can be shared efficiently with all containers running the same image. This is different from bind mounts, which allow containers to access the bound host filesystem tree. Changes to bind mounts are always propagated to the host and persist after the container exits. Docker and Kubernetes both use the by default to configure container rootfs. Overlayfs mounts are composed of one upper layer and multiple lower layers. The overlay filesystem presents a merged view of all these filesystem layers at its mount location and ensures that lower layers are read-only while all changes are held in the upper layer. The lower layer(s) constitute the image layer and the upper layer is the container" }, { "data": "When the container is destroyed, the upper layer mount is destroyed as well, discarding the root filesystem changes the container may have made. Dockers has a good explanation. Lets consider an example where the image has files `foo` and `baz`. The container overwrites `foo` and creates a new file `bar`. The diagram below shows how the root filesystem used to be configured in gVisor earlier. We used to go through the gofer and access/mutate the overlaid directory on the host. It also shows the state of the host overlay filesystem. {:width=\"100%\"} Given that the upper layer is destroyed with the container and that it is expensive to access/mutate a host filesystem from the sandbox, why keep the upper layer on the host at all? Instead we can move the upper layer into the sandbox. 
The idea is to overlay the rootfs using a sandbox-internal overlay mount. We can use a tmpfs upper (container) layer and a read-only lower layer served by the gofer client. Any changes to rootfs would be held in tmpfs (in-memory). Accessing/mutating the upper layer would not require any gofer RPCs or syscalls to the host. This really speeds up filesystem operations on the upper layer, which contains newly created or copied-up files and directories. Using the same example as above, the following diagram shows what the rootfs configuration would look like using a sandbox-internal overlay. {:width=\"100%\"} The tmpfs mount by default will use the sandbox processs memory to back all the file data in the mount. This can cause sandbox memory usage to blow up and exhaust the containers memory limits, so its important to store all file data from tmpfs upper layer on disk. We need to have a tmpfs-backing filestore on the host filesystem. Using the example from above, this filestore on the host will store file data for `foo` and `bar`. This would essentially flatten all regular files in tmpfs into one host file. The sandbox can the filestore into its address space. This allows it to access and mutate the filestore very efficiently, without incurring gofer RPCs or syscalls overheads. In Kubernetes, you can set . The upper layer of the rootfs overlay (writeable container layer) on the host . The kubelet enforces this limit by the entire , `stat(2)`-ing all files and their `stat.stblocks*blocksize`. If we move the upper layer into the sandbox, then the host upper layer is empty and the kubelet will not be able to enforce these limits. To address this issue, we , which create the filestore in the host upper" }, { "data": "This way, when the kubelet scans the host upper layer, the filestore will be detected and its `stat.st_blocks` should be representative of the total file usage in the sandbox-internal upper layer. It is also important to hide this filestore from the containerized application to avoid confusing it. We do so by in the sandbox-internal upper layer, which blocks this file from appearing in the merged directory. The following diagram shows what rootfs configuration would finally look like today in gVisor. {:width=\"100%\"} Lets look at some filesystem-intensive workloads to see how rootfs overlay impacts performance. These benchmarks were run on a gLinux desktop with . provides a . This program performs a large number of filesystem operations concurrently, creating and modifying a large filesystem tree of all sorts of files. We ran this program on the container's root filesystem. The exact usage was: &nbsp;&nbsp;&nbsp;&nbsp;`sh -c \"mkdir /test && time fsstress -d /test -n 500 -p 20 -s 1680153482 -X -l 10\"` You can use the -v flag (verbose mode) to see what filesystem operations are being performed. The results were astounding! Rootfs overlay reduced the time to run this fsstress program from 262.79 seconds to 3.18 seconds! However, note that such microbenchmarks are not representative of real-world applications and we should not extrapolate these results to real-world performance. Build jobs are very filesystem intensive workloads. They read a lot of source files, compile and write out binaries and object files. Lets consider building the with . Bazel performs a lot of filesystem operations in rootfs; in bazels cache located at `~/.cache/bazel/`. 
This is representative of the real-world because many other applications also use the container root filesystem as scratch space due to the handy property that it disappears on container exit. To make this more realistic, the abseil-cpp repo was attached to the container using a bind mount, which does not have an overlay. When measuring performance, we care about reducing the sandboxing overhead and bringing gVisor performance as close as possible to unsandboxed performance. Sandboxing overhead can be calculated using the formula overhead = (s-n)/n where `s` is the amount of time taken to run a workload inside gVisor sandbox and `n` is the time taken to run the same workload natively (unsandboxed). The following graph shows that rootfs overlay halved the sandboxing overhead for abseil build! {:width=\"100%\"} Rootfs overlay in gVisor substantially improves performance for many filesystem-intensive workloads, so that developers no longer have to make large tradeoffs between performance and security. We recently made this optimization in runsc. This is part of our ongoing efforts to improve gVisor performance. You can use gVisor in GKE with GKE Sandbox. Happy sandboxing!" } ]
{ "category": "Runtime", "file_name": "2023-05-08-rootfs-overlay.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Fix inter-Node ClusterIP Service access when AntreaProxy is disabled. (, [@tnqn]) Fix duplicate group ID allocation in AntreaProxy when using a combination of IPv4 and IPv6 Services in dual-stack clusters; this was causing Service connectivity issues. (, [@hongliangl]) Fix intra-Node ClusterIP Service access when both the AntreaProxy and Egress features are enabled. (, [@tnqn]) Fix invalid clean-up of the HNS Endpoint during Pod deletion, when Docker is used as the container runtime. (, [@wenyingd]) [Windows] Fix race condition on Windows when retrieving the local HNS Network created by Antrea for containers. (, [@tnqn]) [Windows] Fix invalid conversion function between internal and versioned types for controlplane API, which was causing JSON marshalling errors. (, [@tnqn]) Fix implementation of the v1beta1 version of the legacy \"controlplane.antrea.tanzu.vmware.com\" API: the API was incorrectly using some v1beta2 types and it was missing some field selectors. (, [@tnqn]) It was discovered that the AntreaProxy implementation has an upper-bound for the number of Endpoints it can support for each Service: we increase this upper-bound from ~500 to 800, log a warning for Services with a number of Endpoints greater than 800, and arbitrarily drop some Endpoints so we can still provide load-balancing for the Service. (, [@hongliangl]) Fix Antrea-native policy with multiple AppliedTo selectors: some rules were never realized by the Agents as they thought they had only received partial information from the Controller. (, [@tnqn]) Fix re-installation of the OpenFlow groups when the OVS daemons are restarted to ensure that AntreaProxy keeps functioning. (, [@antoninbas]) Fix IPFIX flow records exported by the Antrea Agent. (, [@zyiou]) If a connection spanned multiple export cycles, it wasn't handled properly and no record was sent for the connection If a connection spanned a single export cycle, a single record was sent but \"delta counters\" were set to 0 which caused flow visualization to omit the flow in dashboards Fix incorrect stats reporting for ingress rules of some NetworkPolicies: some types of traffic were bypassing the OVS table keeping track of statistics once the connection was established, causing packet and byte stats to be incorrect. (, [@ceclinux]) Fix the retry logic when enabling the OVS bridge local interface on Windows Nodes. (, [@antoninbas]) [Windows] The AntreaPolicy feature is graduated from Alpha to Beta and is therefore enabled by default. Add [Egress] feature to configure SNAT policies for Pod-to-external traffic. 
[Alpha - Feature Gate: `Egress`] A new Egress CRD is introduced to define SNAT policies (, [@jianjuns]) Update the datapath to implement Egress: on Windows Nodes, everything is implemented in OVS, while on Linux Nodes, OVS marks packets and sends them to the host network namespace, where iptables handles SNAT ( , [@jianjuns], [@tnqn]) A new EgressGroup control plane API is introduced: the Controller computes group membership for each policy and sends this information to the Agents (, [@tnqn]) Implement the EgressGroup control plane API in the Agent (, [@tnqn] [@ceclinux]) Document the Egress feature and its datapath implementation ( , [@jianjuns] [@tnqn]) Add support for the \"Reject\" action in Antrea-native policies as an alternative to \"Drop\" (which silently drops" }, { "data": "(, [@GraysonWu]) For rejected TCP connections, the Agent will send a TCP RST packet For UDP and SCTP, the Agent will send an ICMP message with Type 3 (Destination Unreachable) and Code 10 (Host administratively prohibited) Add support for nesting in the , [@Dyanngg]) Add ability to specify multiple IPBlocks when defining a ClusterGroup. (, [@Dyanngg]) Support for IPv6 (IPv6-only and dual-stack clusters) in the FlowAggregator and in the reference ELK stack. ( , [@dreamtalen]) Add support for arm/v7 and arm64 to the main Antrea Docker image for Linux (antrea/antrea-ubuntu) instead of using a separate image. (, [@antoninbas]) Add support for live-traffic tracing in Traceflow: rather than injecting a Traceflow packet, we can monitor real traffic and update the Traceflow Status when a matching packet is observed. ( , [@jianjuns]) The captured packet is reported as part of the Traceflow request Status Live-traffic tracing supports a \"Dropped-Only\" filter which will only capture packets dropped by the datapath Introduce a new optional to automatically label all Namespaces and Services with their name (`antrea.io/metadata.name: <resourceName>`); this allows NetworkPolicies and ClusterGroup to easily select these resources by name. (, [@abhiraut] [@Dyanngg]) Add support for rule-level statistics for Antrea-native policies, when the NetworkPolicyStats feature is enabled: rules are identified by their name, which can be user-provided or auto-generated. (, [@ceclinux]) Add TCP connection state information to the IPFIX records sent by the FlowExporter, and improve handling of \"dying\" connections. (, [@zyiou]) Add information about the flow type (intra-Node, inter-Node, Pod-to-external) to the IPFIX records sent by the FlowExporter. (, [@dreamtalen]) Add support for dumping OVS flows related to a Service with the \"antctl get of\" command. (, [@jianjuns]) Randomly generate a cluster UUID in the Antrea Controller and make it persistent by storing it to a ConfigMap (\"antrea-cluster-identity\"). (, [@antoninbas]) Add support for IPv6 to \"antctl traceflow\". (, [@luolanzone]) Rename all Antrea API groups from `.antrea.tanzu.vmware.com` to `.antrea.io`. (, [@hongliangl]) All legacy groups will be supported until December 2021 See the , [@antoninbas]) Change the export mechanism for the FlowExporter in the Antrea Agent: instead of exporting all flows periodically with a fixed interval, we introduce an \"active timeout\" and an \"idle timeout\", and flow information is exported differently based on flow activity. (, [@srikartati]) Add rate-limiting in the Agent for PacketIn messages sent by the OVS datapath: this can help limit the CPU usage when too many messages are sent by OVS. 
(, [@GraysonWu]) Output partial result when a Traceflow request initiated by antctl fails or times out, as it can still provide useful information. (, [@jianjuns]) Ensure that \"antctl version\" always outputs the client version, even when antctl cannot connect to the Antrea apiserver. (, [@antoninbas]) Extract the group member calculation for the NetworkPolicy implementation in the Controller to its own module, so it can be reused for different features which need to calculate groups of endpoints based on a given selection criteria; performance (CPU and memory usage) is also improved. (, [@tnqn]) Optimize the computation of unions of sets when processing NetworkPolicies in the Controller. (, [@tnqn]) Optimize the computation of symmetric differences of sets in the Agent (NodePortLocal) and in the Controller (NetworkPolicy" }, { "data": "(, [@tnqn]) Move mutable ConfigMap resources out of the deployment YAML and create them programmatically instead; this facilitates integration with other projects such as kapp. (, [@hty690]) Improve error logs when the Antrea Agent's connection to the Controller times out, and introduce a dedicated health check in the Agent to report the connection status. (, [@hty690]) Support user-provided signed OVS binaries in Windows installation script. (, [@lzhecheng]) [Windows] When NodePortLocal is enabled on a Pod, do not allocate new ports on the host for Pod containers with HostPort enabled. (, [@annakhm]) Use \"distroless\" Docker image for the FlowAggregator to reduce its size. ( , [@hanlins] [@dreamtalen]) Improve reference Kibana dashboards for flow visualization and update the documentation for flow visualization with more up-to-date Kibana screenshots. (, [@zyiou]) Reject unsupported positional arguments in antctl commands. (, [@hty690]) Reduce log verbosity for PacketIn messages received by the Agent. (, [@jianjuns]) Improve Windows documentation to cover running Antrea as a Windows service, which is required when using containerd as the container runtime. (, [@lzhecheng] [@jayunit100]) [Windows] Update the documentation for hardware offload support. (, [@Mmduh-483]) Document IPv6 support for Traceflow. (, [@gran-vmv]) Remove old references to Ubuntu 18.04 from the documentation. (, [@shadowlan]) Fix audit logging on Windows Nodes: the log directory was not configured properly, causing Agent initialization to fail on Windows when the AntreaPolicy feature was enabled. (, [@antoninbas]) [Windows] When selecting the Pods corresponding to a Service for which NodePortLocal has been enabled, Pods should be filtered by Namespace. (, [@chauhanshubham]) Correctly handle Service Type changes for NodePortLocal, and update Pod annotations accordingly. (, [@chauhanshubham]) Use correct output format for CNI Add in networkPolicyOnly mode: this was not an issue with Docker but was causing failures with containerd. (, [@antoninbas] [@dantingl]) Fix audit logging of IPv6 traffic for Antrea-native policies: IPv6 packets were ignored by the Agent instead of being parsed and logged to file. (, [@antoninbas]) Fix the Traceflow implementation when the destination IP is an external IP or the local gateway's IP. (, [@antoninbas]) Fix a crash in the Agent when the FlowExporter initialization fails; instead of a crash it should try again the next time flow data needs to be exported. (, [@srikartati]) Add missing flows in OVS for IPv6 Traceflow support preventing Traceflow packets from bypassing conntrack. (, [@jianjuns]) Fix Status updates for ClusterNetworkPolicies. 
(, [@Dyanngg]) Clean up stale IP addresses on Antrea host gateway interface. (, [@antoninbas]) If a Node leaves and later rejoins a cluster, a new Pod CIDR may be allocated to the Node for each supported IP family and the gateway receives a new IP address (first address in the CIDR) If the previous addresses are not removed from the gateway, we observe connectivity issues across Nodes Update libOpenflow to avoid crash in Antrea Agent for certain Traceflow requests. (, [@antoninbas]) Fix the deletion of stale port forwarding iptables rules installed for NodePortLocal, occurring when the Antrea Agent restarts. (, [@monotosh-avi]) Fix output formatting for the \"antctl trace-packet\" command: the result was displayed as a Go struct variable and newline characters were not rendered, making it hard to read. (, [@jianjuns])" } ]
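As an illustration of the Egress API introduced in this release, here is a minimal sketch; the apiVersion and field names should be checked against the CRD reference for the Antrea version actually deployed:
```shell
kubectl apply -f - <<EOF
apiVersion: crd.antrea.io/v1alpha2
kind: Egress
metadata:
  name: egress-web                # illustrative
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: web
  egressIP: 10.10.0.8             # SNAT address used for Pod-to-external traffic
EOF
```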
{ "category": "Runtime", "file_name": "CHANGELOG-1.0.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "rkt features native support for fetching and running Docker container images. To reference a Docker image, use the `docker://` prefix when fetching or running images. Note that Docker images do not support signature verification, and hence it's necessary to use the `--insecure-options=image` flag. As a simple example, let's run the latest container image from the default Docker registry: ``` rkt: fetching image from docker://redis rkt: warning: image signature verification has been disabled Downloading layer: 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 ... Downloading layer: f2fb89b0a711a7178528c7785d247ba3572924353b0d5e23e9b28f0518253b22 4:C 19 Apr 06:09:02.372 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf 4:M 19 Apr 06:09:02.373 # You requested maxclients of 10000 requiring at least 10032 max file descriptors. 4:M 19 Apr 06:09:02.373 # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted. 4:M 19 Apr 06:09:02.373 # Current maximum open files is 8192. maxclients has been reduced to 8160 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'. . .-`` ''-. .-`` `. `. ''-._ Redis 3.0.0 (00000000/0) 64 bit .-`` .-```. ```\\/ ., ''-._ ( ' , .-` | `, ) Running in standalone mode |`-.`-...-` ...-.``-.|'` _.-'| Port: 6379 | `-. `. / _.-' | PID: 4 `-. `-. `-./ .-' .-' |`-.`-. `-..-' .-'.-'| | `-.`-. .-'.-' | http://redis.io `-. `-.`-..-'.-' .-' |`-.`-. `-..-' .-'.-'| | `-.`-. .-'.-' | `-. `-.`-..-'.-' .-' `-. `-..-' .-' `-. .-' `-..-' 4:M 19 Apr 06:09:02.374 # Server started, Redis version 3.0.0 4:M 19 Apr 06:09:02.375 # WARNING overcommitmemory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommitmemory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. 4:M 19 Apr 06:09:02.375 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128. 4:M 19 Apr 06:09:02.375 * The server is now ready to accept connections on port 6379 ``` This behaves similarly to the Docker client: if no specific registry is named, the is used by default. As with Docker, alternative registries can be used by specifying the registry as part of the image reference. For example, the following command will fetch an Docker image hosted on : ``` rkt: fetching image from docker://quay.io/zanui/nginx rkt: warning: image signature verification has been disabled Downloading layer: 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158 ... Downloading layer: 340951f1240f3dc1189ae32cfa5af35df2dc640e0c92f2397b7a72e174c1a158 sha512-c6d6efd98f506380ff128e473ca239ed ``` The hash printed in the final line represents the image ID of the converted ACI. After the image has been retrieved, it can be run by referencing this hash: ``` ```" } ]
{ "category": "Runtime", "file_name": "running-docker-images.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "The Docker CLI is a very popular developer tool. However, it is not easy to replace Docker's underlying OCI runtime (`runc`) with the WasmEdge-enabled `crun`. In this section, we will discuss two ways to run WasmEdge applications in Docker." } ]
{ "category": "Runtime", "file_name": "docker.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "Architecture of the library === ```mermaid graph RL Program --> ProgramSpec --> ELF btf.Spec --> ELF Map --> MapSpec --> ELF Links --> Map & Program ProgramSpec -.-> btf.Spec MapSpec -.-> btf.Spec subgraph Collection Program & Map end subgraph CollectionSpec ProgramSpec & MapSpec & btf.Spec end ``` ELF BPF is usually produced by using Clang to compile a subset of C. Clang outputs an ELF file which contains program byte code (aka BPF), but also metadata for maps used by the program. The metadata follows the conventions set by libbpf shipped with the kernel. Certain ELF sections have special meaning and contain structures defined by libbpf. Newer versions of clang emit additional metadata in . The library aims to be compatible with libbpf so that moving from a C toolchain to a Go one creates little friction. To that end, the is tested against the Linux selftests and avoids introducing custom behaviour if possible. The output of the ELF reader is a `CollectionSpec` which encodes all of the information contained in the ELF in a form that is easy to work with in Go. The returned `CollectionSpec` should be deterministic: reading the same ELF file on different systems must produce the same output. As a corollary, any changes that depend on the runtime environment like the current kernel version must happen when creating . Specifications `CollectionSpec` is a very simple container for `ProgramSpec`, `MapSpec` and `btf.Spec`. Avoid adding functionality to it if possible. `ProgramSpec` and `MapSpec` are blueprints for in-kernel objects and contain everything necessary to execute the relevant `bpf(2)` syscalls. They refer to `btf.Spec` for type information such as `Map` key and value types. The package provides an assembler that can be used to generate `ProgramSpec` on the fly. Objects `Program` and `Map` are the result of loading specifications into the kernel. Features that depend on knowledge of the current system (e.g kernel version) are implemented at this point. Sometimes loading a spec will fail because the kernel is too old, or a feature is not enabled. There are multiple ways the library deals with that: Fallback: older kernels don't allow naming programs and maps. The library automatically detects support for names, and omits them during load if necessary. This works since name is primarily a debug aid. Sentinel error: sometimes it's possible to detect that a feature isn't available. In that case the library will return an error wrapping `ErrNotSupported`. This is also useful to skip tests that can't run on the current kernel. Once program and map objects are loaded they expose the kernel's low-level API, e.g. `NextKey`. Often this API is awkward to use in Go, so there are safer wrappers on top of the low-level API, like `MapIterator`. The low-level API is useful when our higher-level API doesn't support a particular use case. Links Programs can be attached to many different points in the kernel and newer BPF hooks tend to use bpf_link to do so. Older hooks unfortunately use a combination of syscalls, netlink messages, etc. Adding support for a new link type should not pull in large dependencies like netlink, so XDP programs or tracepoints are out of scope. Each bpflinktype has one corresponding Go type, e.g. `link.tracing` corresponds to BPFLINKTRACING. In general, these types should be unexported as long as they don't export methods outside of the Link interface. Each Go type may have multiple exported constructors. 
For example `AttachTracing` and `AttachLSM` create a tracing link, but are distinct functions since they may require different arguments." } ]
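For reference, the ELF input described at the top of this document is typically produced with a Clang invocation along these lines (file names are illustrative):
```shell
clang -O2 -g -target bpf -c bpf_prog.c -o bpf_prog.o   # -g emits the BTF needed for typed maps and programs
llvm-objdump -h bpf_prog.o                             # inspect the sections the ELF reader parses
```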
{ "category": "Runtime", "file_name": "architecture.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "title: Developer Environment You can choose any Kubernetes install of your choice. The test framework only depends on `kubectl` being configured. To install `kubectl`, please see the . The developers of Rook are working on Minikube and thus it is the recommended way to quickly get Rook up and running. Minikube should not be used for production but the Rook authors consider it great for development. While other tools such as k3d/kind are great, users have faced issues deploying Rook. Always use a virtual machine when testing Rook. Never use your host system where local devices may mistakenly be consumed. To install Minikube follow the [official guide](https://minikube.sigs.k8s.io/docs/start/). It is recommended to use the kvm2 driver when running on a Linux machine and the hyperkit driver when running on a MacOS. Both allow to create and attach additional disks to the virtual machine. This is required for the Ceph OSD to consume one drive. We don't recommend any other drivers for Rook. You will need a Minikube version 1.23 or higher. Starting the cluster on Minikube is as simple as running: ```console minikube start --disk-size=40g --extra-disks=1 --driver kvm2 minikube start --disk-size=40g --extra-disks=1 --driver hyperkit minikube start --disk-size=40g --extra-disks 1 --driver qemu ``` It is recommended to install a Docker client on your host system too. Depending on your operating system follow the . Stopping the cluster and destroying the Minikube virtual machine can be done with: ```console minikube delete ``` Use to install Helm and set up Rook charts defined under `_output/charts` (generated by build): To install and set up Helm charts for Rook run `tests/scripts/helm.sh up`. To clean up `tests/scripts/helm.sh clean`. !!! note These helper scripts depend on some artifacts under the `_output/` directory generated during build time. These scripts should be run from the project root. !!! note If Helm is not available in your `PATH`, Helm will be downloaded to a temporary directory (`/tmp/rook-tests-scripts-helm`) and used from that directory. Developers can test quickly their changes by building and using the local Rook image on their minikube cluster. 1) Set the local Docker environment to use minikube: ```console eval $(minikube docker-env -p minikube) ``` 2) Build your local Rook image. The following command will generate a Rook image labeled in the format `local/ceph-<arch>`. ```console cd <yourrooksrc_directory> make BUILD_REGISTRY=local ``` 3) Tag the generated image as `rook/ceph:master` so operator will pick it. ```console docker tag \"local/ceph-$(go env GOARCH)\" 'rook/ceph:master' ``` 4) Create a Rook cluster in minikube, or if the Rook cluster is already configured, apply the new operator image by restarting the operator. To accelerate the development process, users have the option to employ the script located at `tests/scripts/create-dev-cluster.sh`. This script is designed to rapidly set up a new minikube environment, apply the CRDs and the common file, and then utilize the `cluster-test.yaml` script to create the Rook cluster. Once setup, users can use the different `*-test.yaml` files from the `deploy/examples/` directory to configure their clusters. This script supports the possibility of creating multiple rook clusters running on the same machine by using the option `-p <profile-name>`." } ]
{ "category": "Runtime", "file_name": "development-environment.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Display local cilium agent status ``` cilium-health get [flags] ``` ``` -h, --help help for get -o, --output string json| yaml| jsonpath='{}' ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Cilium Health Client" } ]
{ "category": "Runtime", "file_name": "cilium-health_get.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "sidebar_position: 1 slug: /productiondeploymentrecommendations description: This article is intended as a reference for users who are about to deploy JuiceFS to a production environment and provides a series of environment configuration recommendations. This document provides deployment recommendations for JuiceFS Community Edition in production environments. It focuses on monitoring metric collection, automatic metadata backup, trash configuration, background tasks of clients, client log rolling, and command-line auto-completion to ensure the stability and reliability of the file system. It is necessary to collect monitoring metrics from JuiceFS clients and visualize them using Grafana. This allows for real-time monitoring of file system performance and health status. For detailed instructions, see this . :::tip Automatic metadata backup is a feature that has been added since JuiceFS v1.0.0. ::: Metadata is critical to the JuiceFS file system, and any loss or corruption of metadata may affect a large number of files or even the entire file system. Therefore, metadata must be backed up regularly. This feature is enabled by default and the backup interval is 1 hour. The backed-up metadata is compressed and stored in the corresponding object storage, separate from file system data. Backups are performed by JuiceFS clients, which may increase CPU and memory usage during the process. By default, one client is randomly selected for backup operations. It is important to note that this feature is disabled when the number of files reaches one million. To re-enable it, set a larger backup interval (the `--backup-meta` option). The interval is configured independently for each client. You can use `--backup-meta 0` to disable automatic backup. :::note The time required for metadata backup depends on the specific metadata engine. Different metadata engines have different performance. ::: For detailed information on automatic metadata backups, see this . Alternatively, you can back up metadata manually. In addition, follow the operational and maintenance recommendations of the metadata engine you are using to back up your data regularly. :::tip The Trash feature has been available since JuiceFS v1.0.0. ::: Trash is enabled by default. The retention time for deleted files defaults to 1 day to mitigate the risk of accidental data loss. However, enabling Trash may have side" }, { "data": "If the application needs to frequently delete files or overwrite them, it will cause the object storage usage to be much larger than the file system. This is because the JuiceFS client retain deleted files and overwritten blocks on the object storage for a certain period. Therefore, it is highly recommended to evaluate workload requirements before deploying JuiceFS in a production environment to configure Trash appropriately. You can configure the retention time as follows (`--trash-days 0` disables Trash): For new file systems: set via the `--trash-days <value>` option of `juicefs format` For existing file systems: modify with the `--trash-days <value>` option of `juicefs config` For more information on Trash, see this . The JuiceFS file system maintains background tasks through clients, which can automatically execute cleaning tasks such as deleting pending files and objects, purging expired files and fragments from Trash, and terminating long-stalled client sessions. All clients of the same JuiceFS volume share a set of background tasks during runtime. 
Each task is executed at regular intervals, with the client chosen randomly. Background tasks include: Cleaning up files and objects to be deleted Clearing out-of-date files and fragments in Trash Cleaning up stale client sessions Automatic backup of metadata Since these tasks take up some resources when executed, you can set the `--no-bgjob` option to disable them for clients with heavy workload. :::note Make sure that at least one JuiceFS client can execute background tasks. ::: When running a JuiceFS mount point in the background, the client outputs logs to a local file by default. The path to the local log file is slightly different depending on the user running the process: For the root user, the path is `/var/log/juicefs.log`. For others, the path is `$HOME/.juicefs/juicefs.log`. The local log file is not rotated by default and needs to be configured manually in production to prevent excessive disk space usage. The following is a configuration example for log rotation: ```text title=\"/etc/logrotate.d/juicefs\" /var/log/juicefs.log { daily rotate 7 compress delaycompress missingok notifempty copytruncate } ``` You can check the correctness of the configuration file with the `logrotate -d` command: ```shell logrotate -d /etc/logrotate.d/juicefs ``` For details about the logrotate configuration, see this . JuiceFS provides command line auto-completion scripts for Bash and Zsh to facilitate the use of `juicefs` commands. For details, see this for details." } ]
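Putting the options above together (the Redis metadata URL below is only a placeholder for whichever metadata engine the volume actually uses):
```shell
juicefs config redis://127.0.0.1:6379/1 --trash-days 7              # volume level: keep deleted files for 7 days
juicefs mount -d --backup-meta 3h redis://127.0.0.1:6379/1 /jfs     # client level: back up metadata every 3 hours
# on a heavily loaded node, add --no-bgjob to the mount command instead
```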
{ "category": "Runtime", "file_name": "production_deployment_recommendations.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "If you are a beginner and expect opensds project as the gate to open source world, this tutorial is one of the best choices for you. Just follow the guidance and you will find the pleasure to becoming a contributor. Before making modifications of opensds project, you need to make sure that this project have been forked to your own repository. It means that there will be parallel development between opensds repo and your own repo, so be careful to avoid the inconsistency between these two repos. If you want to download the code to the local machine, ```git``` is the best way: ``` git clone https://yourrepourl/opensds.git ``` To reduce the conflicts between your remote repo and opensds repo, we SUGGEST you configure opensds as the upstream repo: ``` git remote add upstream https://github.com/sodafoundation/api.git git fetch upstream ``` To avoid inconsistency between multiple branches, we SUGGEST checking out to a new branch: ``` git checkout -b newbranchname upstream/development git pull ``` Then you can change the code arbitrarily. After updating the code, you should push the update in the formal way: ``` git add . git status (Check the update status) git commit -m \"Your commit title\" git commit --amend (Add the concrete description of your commit) git push origin newbranchname ``` In the last step, your need to pull a compare request between your new branch and opensds development branch. After finishing the pull request, the travis CI will be automatically set up for building test. The tutorial is done, enjoy your contributing work!" } ]
{ "category": "Runtime", "file_name": "Tutorials-for-Beginners.md", "project_name": "Soda Foundation", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Data Caching The purpose of this documentation is to introduce users to the concepts behind Alluxio storage and the operations that can be performed within Alluxio storage space. Alluxio helps unify users' data across a variety of platforms while also helping to increase overall I/O throughput. Alluxio accomplishes this by splitting storage into two distinct categories. UFS (Under File Storage, also referred to as under storage) This type of storage is represented as space which is not managed by Alluxio. UFS storage may come from an external file system, including HDFS or S3. Alluxio may connect to one or more of these UFSs and expose them within a single namespace. Typically, UFS storage is aimed at storing large amounts of data persistently for extended periods of time. Alluxio storage Alluxio manages the local storage, including memory, of Alluxio workers to act as a distributed buffer cache. This fast data layer between user applications and the various under storages results in vastly improved I/O performance. Alluxio storage is mainly aimed at storing hot, transient data and is not focused on long term persistence. The amount and type of storage for each Alluxio worker node to manage is determined by user configuration. Even if data is not currently within Alluxio storage, files within a connected UFS are still visible to Alluxio clients. The data is copied into Alluxio storage when a client attempts to read a file that is only available from a UFS. Alluxio storage improves performance by storing data in memory co-located with compute nodes. Data in Alluxio storage can be replicated to make \"hot\" data more readily available for I/O operations to consume in parallel. Replicas of data within Alluxio are independent of the replicas that may exist within a UFS. The number of data replicas within Alluxio storage is determined dynamically by cluster activity. Due to the fact that Alluxio relies on the under file storage for a majority of data storage, Alluxio does not need to keep copies of data that are not being used. Alluxio also supports tiered storage configurations such as memory, SSD, and HDD tiers which can make the storage system media aware. This enables decreased fetching latencies similar to how L1/L2 CPU caches operate. The major shift in the landscape of Alluxio's metadata and cache management in Project Dora is that there is no longer a single master node in charge of all the file system metadata and cache information. Instead, the \"workers\", or simply \"Dora cache nodes,\" now handle both the metadata and the data of the files. Clients simply send requests to Dora cache nodes, and each Dora cache node will serve both the metadata and the data of the requested files. Since you can have a large number of Dora cache nodes in service, client traffic does not have to go to the single master node, but rather is distributed among the Dora cache nodes, therefore, greatly reducing the burden of the master node. Without a single master node dictating clients which workers to go to fetch the data, clients need an alternative way to select its destination. Dora employs a simple yet effective algorithm called to deterministically compute a target" }, { "data": "Consistent hashing ensures that given a list of all available Dora cache nodes, for a particular requested file, any client will independently choose the same node to request the file from. 
If the hash algorithm used is over the nodes, then the requests will be uniformly distributed among the nodes (modulo the distribution of the requests). Consistent hashing allows the Dora cluster to scale linearly with the size of the dataset, as all the metadata and data are partitioned onto multiple nodes, without any of them being the single point of failure. A Dora cache node can sometimes run into serious trouble and stop serving requests. To make sure the clients' requests get served normally even if a node is faulty, there's a fallback mechanism supporting clients falling back from the faulty node to another one, and eventually if all possible options are exhausted, falling back to the UFS. For a given file, the target node chosen by the consistent hashing algorithm is the primary node for handling the requests regarding this file. Consistent hashing allows a client to compute a secondary node following the primary node's failure, and redirects the requests to it. Like the primary node, the secondary node computed by different clients independently is exactly the same, ensuring that the fallback will happen to the same node. This fallback process can happen a few more times (configurable by the user), until the cost of retrying multiple nodes becomes unacceptable, when the client can fall back to the UFS directly. Dora cache nodes cache the metadata of the files they are in charge of. The metadata is fetched from the UFS directly the first time when a file is requested by a client, and cached by the Dora node. It is then used to respond to metadata requests afterwards. Currently, the Dora architecture is geared towards immutable and read-only use cases only. This assumes the metadata of the files in the UFS do not change over time, so that the cached metadata do not have to be invalidated. In the future, we'd like to explore certain use cases where invalidation of metadata is needed but is relatively rare. {:target=\"_blank\"} (1:06) {:target=\"_blank\"} (2:44) {:target=\"_blank\"} (1:18) Alluxio supports finer-grained page-level (typically, 1 MB) caching storage on Alluxio workers, as an alternative option to the existing block-based (defaults to 64 MB) tiered caching storage. This paging storage supports general workloads including reading and writing, with customizable cache eviction policies similar to in tiered block store. To switch to the paging storage: ```properties alluxio.worker.block.store.type=PAGE ``` You can specify multiple directories to be used as cache storage for the paging store. For example, to use two SSDs (mounted at `/mnt/ssd1` and `/mnt/ssd2`): ```properties alluxio.worker.page.store.dirs=/mnt/ssd1,/mnt/ssd2 ``` You can set a limit to the maximum amount of storage for cache for each of the directories. For example, to allocate 100 GB of space on each SSD: ```properties alluxio.worker.page.store.sizes=100GB,100GB ``` Note that the ordering of the sizes must match the ordering of the dirs. It is highly recommended to allocate all directories to be the same size, since the allocator will distribute data evenly. To specify the page size: ```properties alluxio.worker.page.store.page.size=1MB ``` A larger page size might improve sequential read performance, but it may take up more cache space. We recommend to use the default value (1MB) for Presto workload (reading Parquet or Orc files). 
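Collected in one place, the worker-side settings discussed above might look like this in conf/alluxio-site.properties (paths and sizes are illustrative):
```shell
cat >> conf/alluxio-site.properties <<EOF
alluxio.worker.block.store.type=PAGE
alluxio.worker.page.store.dirs=/mnt/ssd1,/mnt/ssd2
alluxio.worker.page.store.sizes=100GB,100GB
alluxio.worker.page.store.page.size=1MB
EOF
```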
To enable the asynchronous writes for paging store: ```properties alluxio.worker.page.store.async.write.enabled=true ``` You might find this property helpful if you notice performance degradation when there are a lot of cache misses." } ]
{ "category": "Runtime", "file_name": "Data-Caching.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Create user [USER ID]. ``` bash cfs-cli user create [USER ID] [flags] ``` ```bash Flags: --access-key string # Specify the access key for the user to use the object storage function. --secret-key string # Specify the secret key for the user to use the object storage function. --password string # Specify the user password. --user-type string # Specify the user type, optional values are normal or admin (default is normal). -y, --yes # Skip all questions and set the answer to \"yes\". ``` Delete user [USER ID]. ``` bash cfs-cli user delete [USER ID] [flags] ``` ```bash Flags: -y, --yes # Skip all questions and set the answer to \"yes\". ``` Get information of user [USER ID]. ```bash cfs-cli user info [USER ID] ``` Get a list of all current users. ```bash cfs-cli user list ``` Update the permission [PERM] of user [USER ID] for volume [VOLUME]. [PERM] can be \"READONLY/RO\", \"READWRITE/RW\", or \"NONE\". ```bash cfs-cli user perm [USER ID] [VOLUME] [PERM] ``` Update the information of user [USER ID]. ```bash cfs-cli user update [USER ID] [flags] ``` ```bash Flags: --access-key string # The updated access key value. --secret-key string # The updated secret key value. --user-type string # The updated user type, optional values are normal or admin. -y, --yes # Skip all questions and set the answer to \"yes\". ```" } ]
{ "category": "Runtime", "file_name": "user.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "... ... By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license. For more information on following Developer Certificate of Origin and signing off your commits, please check . [ ] If a specific issue led to this PR, this PR closes the issue. [ ] The description of changes is clear and encompassing. [ ] Any required documentation changes (code and docs) are included in this PR. . [ ] User-facing changes are mentioned in `CHANGELOG.md`. [ ] All added/changed functionality is tested. [ ] New `TODO`s link to an issue. [ ] Commits meet . ." } ]
{ "category": "Runtime", "file_name": "pull_request_template.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display BPF program events The monitor displays notifications and events emitted by the BPF programs attached to endpoints and devices. This includes: Dropped packet notifications Captured packet traces Policy verdict notifications Debugging information ``` cilium-dbg monitor [flags] ``` ``` --from []uint16 Filter by source endpoint id -h, --help help for monitor --hex Do not dissect, print payload in HEX -j, --json Enable json output. Shadows -v flag --monitor-socket string Configure monitor socket path -n, --numeric Display all security identities as numeric values --related-to []uint16 Filter by either source or destination endpoint id --to []uint16 Filter by destination endpoint id -t, --type []string Filter by event types [agent capture debug drop l7 policy-verdict recorder trace trace-sock] -v, --verbose bools[=false] Enable verbose output (-v, -vv) (default []) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_monitor.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document describes functional restrictions and limitations of Sysbox and the containers created by it. Sysbox enables containers to run applications or system software such as systemd, Docker, Kubernetes, K3s, etc., seamlessly & securely (e.g., no privileged containers, no complex setups). While our goal is for Sysbox containers to run any software that runs on bare-metal or VMs, this is still a work-in-progress. Thus, there are some limitations at this time. The table below describes these. | Limitation | Description | Affected Software | Planned Fix | | -- | | -- | :--: | | mknod | Fails with \"operation not permitted\". | Software that creates devices such as /dev/tun, /dev/tap, /dev/fuse, etc. | WIP | | binfmt-misc | Fails with \"permission denied\". | Software that uses /proc/sys/fs/binfmt_misc inside the container (e.g., buildx+QEMU for multi-arch builds). | WIP | | Nested user-namespace | `unshare -U --mount-proc` fails with \"invalid argument\". | Software that uses the Linux user-namespace (e.g., Docker + userns-remap). Note that the Sysbox container is rootless already, so this implies nesting Linux user-namespaces. | Yes | | Host device access | Host devices exposed to the container (e.g., `docker run --devices ...`) show up with \"nobody:nogroup\" ownership. Thus, access to them will fail with \"permission denied\" unless the device grants read/write permissions to \"others\". | Software that needs access to hardware accelerators. | Yes | | rpc-pipefs | Mounting rpc-pipefs fails with \"permission denied\". | Running an NFS server inside the Sysbox container. | Yes | | insmod | Fails with \"operation not permitted\". | Can't load kernel modules from inside containers. | TBD | NOTES: \"WIP\" means the fix is being worked-on right now. \"TBD\" means a decision is yet to be made. If you find other software that fails inside the Sysbox container, please open a GitHub issue so we can add it to the list and work on a" }, { "data": "This section describes restrictions when using Docker + Sysbox. These restrictions are in place because they reduce or break container-to-host isolation, which is one of the key features of Sysbox. Note that some of these options (e.g., --privileged) are typically needed when running complex workloads in containers. With Sysbox, this is no longer needed. | Limitation | Description | Comment | | - | | - | | docker --privileged | Does not work with Sysbox | Breaks container-to-host isolation. | | docker --userns=host | Does not work with Sysbox | Breaks container-to-host isolation. | | docker --pid=host | Does not work with Sysbox | Breaks container-to-host isolation. | | docker --net=host | Does not work with Sysbox | Breaks container-to-host isolation. | This section describes restrictions when using Kubernetes + Sysbox. Some of these restrictions are in place because they reduce or break container-to-host isolation, which is one of the key features of Sysbox. Note that some of these options (e.g., privileged: true) are typically needed when running complex workloads in pods. With Sysbox, this is no longer needed. | Limitation | Description | Comment | | - | | - | | privileged: true | Not supported in pod security context | Breaks container-to-host isolation. | | hostNetwork: true | Not supported in pod security context | Breaks container-to-host isolation. | | hostIPC: true | Not supported in pod security context | Breaks container-to-host isolation. 
| | hostPID: true | Not supported in pod security context | Breaks container-to-host isolation. | | Limitation | Description | Planned Fix | | - | | - | | Sysbox must run as root | Sysbox needs root privileges on the host to perform the advanced OS virtualization it provides (e.g., procfs/sysfs emulation, syscall trapping, etc.) | TBD | | Container Checkpoint/Restore | Not yet supported | Yes | | Sysbox Nesting | Running Sysbox inside a Sysbox container is not supported | TBD |" } ]
{ "category": "Runtime", "file_name": "limitations.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "layout: global title: CephObjectStorage This guide describes how to configure Alluxio with {:target=\"_blank\"} as the under storage system. Ceph Object Storage is a distributed, open-source storage system designed for storing and retrieving large amounts of unstructured data. It provides a scalable and highly available storage solution that can be deployed on commodity hardware. Alluxio supports two different clients APIs to connect to Ceph Object Storage using {:target=\"blank\"}. For more information, please read its {:target=\"blank\"}. If you haven't already, please see before you get started. In preparation for using Ceph Object Storage with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPH_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPH_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the bucket, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3ACCESSKEY_ID>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3SECRETKEY>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<RGW_HOSTNAME>`</td> <td markdown=\"span\">The host for the Ceph Object Gateway instance, which can be an IP address or a hostname. Read {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<RGW_PORT>`</td> <td markdown=\"span\">The port the instance listens for requests and if not specified, Ceph Object Gateway runs external FastCGI. Read {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<INHERIT_ACL>`</td> <td markdown=\"span\"></td> </tr> </table> To use Ceph Object Storage as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Modify `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=s3://<CEPHBUCKET>/<CEPHDIRECTORY> s3a.accessKeyId=<S3ACCESSKEY_ID> s3a.secretKey=<S3SECRETKEY> alluxio.underfs.s3.endpoint=http://<RGW-HOSTNAME>:<RGW-PORT> alluxio.underfs.s3.disable.dns.buckets=true alluxio.underfs.s3.inherit.acl=<INHERIT_ACL> ``` If using a Ceph release such as hammer (or older) specify `alluxio.underfs.s3.signer.algorithm=S3SignerType` to use v2 S3 signatures. To use GET Bucket (List Objects) Version 1 specify `alluxio.underfs.s3.list.objects.v1=true`. Once you have configured Alluxio to Ceph Object Storage, try to see that everything works. If Alluxio security is enabled, Alluxio enforces the access control inherited from underlying Ceph Object Storage. Depending on the interace used, refer to or" } ]
{ "category": "Runtime", "file_name": "CephObjectStorage.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "| Maintainer | GitHub ID | Organization | Email | | | | -- | -- | | Michael Yuan | @juntao | Second State | <[email protected]> | | Hung-Ying Tai(hydai) | @hydai | Second State | <[email protected]> | | Yi-Ying He | @q82419 | Second State | <[email protected]> | | Shen-Ta Hsieh(BestSteve) | @ibmibmibm | Second State | <[email protected]> | | Committers | GitHub ID | Organization | Email | | | | -- | -- | | dm4 | @dm4 | Second State | <[email protected]> | | yi | @0yi0 | Second State | <[email protected]> | | Sam | @apepkuss | Second State | <[email protected]> | | danny | @dannypsnl | Second State | <[email protected]> | | Shreyas Atre | @SAtacker | SRA VJTI | <[email protected]> | | Reviewers | GitHub ID | Organization | Email | | | | -- | -- | | | @gusye1234 | University of Science and Technology of China | <[email protected]> | | Tricster | @MediosZ | Southeast University | <[email protected]> | | Wenshuo Yang | @sonder-joker | Bytedance | <[email protected]> | | csh | @L-jasmine | Second State | <[email protected]> | | Amun | @hangedfish | Giant Network Group Co., Ltd. | <[email protected]> | | yb | @yanghaku | Nanjing University | <[email protected]> | | WenYuan Huang | @michael1017 | Purdue University | <[email protected]> | | | @am009 | Huazhong University of Science and Technology | <[email protected]> |" } ]
{ "category": "Runtime", "file_name": "OWNER.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 6 sidebar_label: \"LVM\" The full name of LVM is Logical Volume Manager. It adds a logical layer between the disk partition and the file system, provides an abstract disk volume for the file system to shield the underlying disk partition layout, and establishes a file system on the disk volume. With LVM, you can dynamically resize file systems without repartitioning the disk, and file systems managed with LVM can span disks. When a new disk is added to the server, the administrator does not have to move the original files to the new disk, but directly extends the file system across the disk through LVM. That is, by encapsulating the underlying physical hard disk, and then presenting it to the upper-level application in the form of a logical volume. LVM encapsulates the underlying hard disk. When we operate the underlying physical hard disk, we no longer operate on the partition, but perform the underlying disk management operation on it through something called a logical volume. Physical media (PM): LVM storage media can be partitions, disks, RAID arrays, or SAN disks. Physical volume (PV): Physical volume is the basic storage logical block of LVM, but compared with basic physical storage media (such as partitions, disks, etc.), it contains management parameters related to LVM. A physical volume can be partitioned by a disk, or it can be the disk itself. Disks must be initialized as LVM physical volumes to be used with LVM. Volume groups (VG): It can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones. Logical volumes (LV): It can be resized online by concatenating extents onto them or truncating extents from them. Physical extents (PE): The smallest storage unit that can be allocated in the physical volume. The size of PE can be specified, and the default is 4MB. Logical extents (LE): The smallest storage unit that can be allocated in an logical volume. In the same volume group, the size of LE is the same as that of PE, and there is a one-to-one" }, { "data": "Use volume groups to make multiple hard drives look like one big hard drive Using logical volumes, partitions can span multiple hard disk spaces sdb1 sdb2 sdc1 sdd2 sdf Using logical volumes, you can dynamically resize it if the storage space is insufficient When resizing a logical volume, you need not to consider the location of the logical volume on a hard disk, and you need not to worry about no contiguous space available LV and VG can be created, deleted, and resized online, and the file system on LVM also needs to be resized You can create snapshots, which can be used to back up file systems RAID + LVM combined: LVM is a software method of volume management, while RAID is a method of disk management. For important data, RAID is used to protect physical disks from failures and services are not interrupted, and LVM is used to achieve a good volume management and better use of disk resources. Format a physical disk as PVs, that is, the space is divided into PEs. A PV contains multiple PEs. Add different PVs to the same VG, that is, the PEs of different PVs all enter the PE pool of the VG. A VG contains multiple PVs. Create logical volumes in the VG. This creation process is based on PE, so the PEs that make up the LV may come from different physical disks. LV is created based on PE. Directly format the LV and mount it for use. The scaling in / out of an LV is actually to increase or decrease the number of PEs that make up the LV without losing the original data. 
Format the LV and mount it for use. First, determine if there is available space for expansion, because space is allocated from the VG, and LVs cannot be expanded across VGs. If the VG has no free capacity, you need to expand the VG first. Perform the following steps: ```bash $ vgs VG #PV #LV #SN Attr VSize VFree vg-sdb1 1 8 1 wz--n- <16.00g <5.39g $ lvextend -L +100M -r /dev/vg-sdb1/lv-sdb1 # grow /dev/vg-sdb1/lv-sdb1 by 100M; -r also resizes the filesystem ``` If there is not sufficient space in the VG and you need to add a new disk, run the following commands in sequence: ```bash pvcreate /dev/sdc vgextend vg-sdb1 /dev/sdc ``` The LVM mechanism provides the function of snapshotting LVs to obtain a state-consistent backup of the file system. LVM adopts Copy-On-Write (COW) technology, so a backup can be taken without stopping the service or setting the logical volume as read-only. Using the LVM snapshot function enables consistent backups without affecting the availability of the server. The copy-on-write adopted by LVM means that when creating an LVM snapshot, only the metadata of the original volume is copied. In other words, no physical replication of the data occurs at snapshot creation time; only the metadata is copied, not the physical data, so creating the snapshot is almost instantaneous. When a write operation is performed on the original volume, the snapshot tracks the changes to the blocks in the original volume: the data that is about to be changed on the original volume is copied into the space reserved by the snapshot before the change." } ]
{ "category": "Runtime", "file_name": "lvm.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "For kata containers, rootfs is used in the read-only way. EROFS can noticeably decrease metadata overhead. `mkfs.erofs` can generate compressed and uncompressed EROFS images. For uncompressed images, no files are compressed. However, it is optional to inline the data blocks at the end of the file with the metadata. For compressed images, each file will be compressed using the lz4 or lz4hc algorithm, and it will be confirmed whether it can save space. Use No compression of the file if compression does not save space. | | EROFS | EXT4 | XFS | |--|-| | | | Image Size [MB] | 106(uncompressed) | 256 | 126 | On newer `Ubuntu/Debian` systems, it can be installed directly using the `apt` command, and on `Fedora` it can be installed directly using the `dnf` command. ```shell $ apt install erofs-utils $ dnf install erofs-utils ``` If you need to enable the `Lz4` compression feature, `Lz4 1.8.0+` is required, and `Lz4 1.9.3+` is strongly recommended. For some old lz4 versions (lz4-1.8.0~1.8.3), if lz4-static is not installed, the lz4hc algorithm will not be supported. lz4-static can be installed with apt install lz4-static.x86_64. However, these versions have some bugs in compression, and it is not recommended to use these versions directly. If you use `lz4 1.9.0+`, you can directly use the following command to compile. ```shell $ ./autogen.sh $ ./configure $ make ``` The compiled `mkfs.erofs` program will be saved in the `mkfs` directory. Afterwards, the generated tools can be installed to a system directory using make install (requires root privileges). ```shell $ export distro=\"ubuntu\" $ export FS_TYPE=\"erofs\" $ export ROOTFS_DIR=\"realpath kata-containers/tools/osbuilder/rootfs-builder/rootfs\" $ sudo rm -rf \"${ROOTFS_DIR}\" $ pushd kata-containers/tools/osbuilder/rootfs-builder $ script -fec 'sudo -E SECCOMP=no ./rootfs.sh \"${distro}\"' $ popd ``` Note: You should only do this step if you are testing with the latest version of the agent. ```shell $ sudo install -o root -g root -m 0550 -t \"${ROOTFSDIR}/usr/bin\" \"${ROOTFSDIR}/../../../../src/agent/target/x86_64-unknown-linux-musl/release/kata-agent\" $ sudo install -o root -g root -m 0440 \"${ROOTFSDIR}/../../../../src/agent/kata-agent.service\" \"${ROOTFSDIR}/usr/lib/systemd/system/\" $ sudo install -o root -g root -m 0440 \"${ROOTFSDIR}/../../../../src/agent/kata-containers.target\" \"${ROOTFSDIR}/usr/lib/systemd/system/\" ``` ```shell $ pushd kata-containers/tools/osbuilder/image-builder $ script -fec 'sudo -E ./imagebuilder.sh \"${ROOTFSDIR}\"' $ popd ``` ```shell $ pushd kata-containers/tools/osbuilder/image-builder $ commit=\"$(git log --format=%h -1 HEAD)\" $ date=\"$(date +%Y-%m-%d-%T.%N%z)\" $ rootfs=\"erofs\" $ image=\"kata-containers-${rootfs}-${date}-${commit}\" $ sudo install -o root -g root -m 0640 -D kata-containers.img \"/usr/share/kata-containers/${image}\" $ (cd /usr/share/kata-containers && sudo ln -sf \"$image\" kata-containers.img) $ popd ``` ```shell $ sudo sed -i -e 's/^# \\(rootfs_type\\).=.*$/\\1 = erofs/g' /etc/kata-containers/configuration.toml ```" } ]
{ "category": "Runtime", "file_name": "how-to-use-erofs-build-rootfs.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Print version information ``` cilium-dbg version [flags] ``` ``` -h, --help help for version -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_version.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for fish Generate the autocompletion script for the fish shell. To load completions in your current shell session: cilium-health completion fish | source To load completions for every new session, execute once: cilium-health completion fish > ~/.config/fish/completions/cilium-health.fish You will need to start a new shell for this setup to take effect. ``` cilium-health completion fish [flags] ``` ``` -h, --help help for fish --no-descriptions disable completion descriptions ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-health_completion_fish.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: Using Fast Datapath menu_order: 60 search_type: Documentation The most important thing to know about fast datapath is that you don't need to configure anything before using this feature. If you are using Weave Net 1.2 or greater, fast datapath (`fastdp`) is automatically enabled. When Weave Net cannot use the fast data path between two hosts, it falls back to a slower packet forwarding approach called `sleeve`. Selecting the fastest forwarding approach is automatic, and is determined on a connection-by-connection basis. For example, a Weave network spanning two data centers might use fast data path within the data centers, but not for the more constrained network link between them. See for a more in-depth discussion of this feature. You can disable fastdp by enabling the `WEAVENOFASTDP` environment variable at `weave launch`: $ WEAVENOFASTDP=true weave launch Fast datapath implements encryption using IPsec which is configured with IP transformation framework (XFRM) provided by the Linux kernel. Each encrypted dataplane packet is encapsulated into , thus in some networks a firewall rule for allowing ESP traffic needs to be installed. E.g. Google Cloud Platform denies ESP packets by default. See for more details for the fastdp encryption. Weave Net automatically uses the fastest datapath for every connection unless it encounters a situation that prevents it from working. To ensure that Weave Net can use the fast datapath: Avoid Network Address Translation (NAT) devices Open UDP port 6784 (This is the port used by the Weave routers) Ensure that `WEAVE_MTU` fits with the `MTU` of the intermediate network (see below) The use of fast datapath is an automated connection-by-connection decision made by Weave Net, and because of this, you may end up with a mixture of connection tunnel types. If fast datapath cannot be used for a connection, Weave Net falls back to the `sleeve` \"user space\" packet path. Once a Weave network is set up, you can query the connections using the `weave status connections` command: $ weave status connections <-192.168.122.25:43889 established fastdp a6:66:4f:a5:8a:11(ubuntu1204) Where fastdp indicates that fast datapath is being used on a connection. If fastdp is not shown, the field displays `sleeve` indicating Weave Net's fall-back encapsulation method: $ weave status connections <- 192.168.122.25:54782 established sleeve 8a:50:4c:23:11:ae(ubuntu1204) The Maximum Transmission Unit, or MTU, is the technical term for the limit on how big a single packet can be on the network. Weave Net defaults to 1376 bytes, but you can set a smaller size if your underlying network has a tighter limit, or set a larger size for better performance. The underlying network must be able to deliver packets of the size specified plus overheads of around 84-87 bytes (the final MTU should be divisible by four), or else Weave Net will fall back to Sleeve for that connection. This requirement applies to every path between peers. To specify a different MTU, before launching Weave Net set the environment variable `WEAVE_MTU`. For example, for a typical \"jumbo frame\" configuration: $ WEAVE_MTU=8916 weave launch host2 host3 See Also *" } ]
{ "category": "Runtime", "file_name": "fastdp.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "slug: /comparison/juicefsvsalluxio description: This article compares the main features of Alluxio and JuiceFS. Alluxio (/lksio/) is a data access layer in the big data and machine learning ecosystem. Initially as the research project \"Tachyon,\" it was created at the University of California, Berkeley's as creator's Ph.D. thesis in 2013. Alluxio was open sourced in 2014. The following table compares the main features of Alluxio and JuiceFS. | Features | Alluxio | JuiceFS | | -- | - | - | | Storage format | Object | Block | | Cache granularity | 64 MiB | 4 MiB | | Multi-tier cache | | | | Hadoop-compatible | | | | S3-compatible | | | | Kubernetes CSI Driver | | | | Hadoop data locality | | | | Fully POSIX-compatible | | | | Atomic metadata operation | | | | Consistency | | | | Data compression | | | | Data encryption | | | | Zero-effort operation | | | | Language | Java | Go | | Open source license | Apache License 2.0 | Apache License 2.0 | | Open source date | 2014 | 2021.1 | JuiceFS has its own storage format, where files are divided into blocks, and they can be optionally encrypted and compressed before being uploaded to the object storage. For more details, see . In contrast, Alluxio stores files as objects into UFS and does not split them into blocks like JuiceFS does. JuiceFS has a smaller of 4 MiB, which results in a finer granularity compared to Alluxio's 64 MiB. The smaller block size of JuiceFS is more beneficial for workloads involving random reads (e.g., Parquet and ORC), as it improves cache management efficiency. JuiceFS is , supporting not only Hadoop 2.x and Hadoop 3.x, but also various components in the Hadoop ecosystem. JuiceFS provides for easy integration with Kubernetes environments. While Alluxio also offers , it seems to have limited activity and lacks official support from Alluxio. JuiceFS is . A pjdfstest from shows that Alluxio did not pass the POSIX compatibility" }, { "data": "For example, Alluxio does not support symbolic links, truncate, fallocate, append, xattr, mkfifo, mknod and utimes. Besides the things covered by pjdfstest, JuiceFS also provides close-to-open consistency, atomic metadata operations, mmap, fallocate with punch hole, xattr, BSD locks (flock), and POSIX record locks (fcntl). In Alluxio, a metadata operation involves two steps: modifying the state of the Alluxio master and sending a request to the UFS. This process is not atomic, and the state is unpredictable during execution or in case of failures. Additionally, Alluxio relies on UFS to implement metadata operations. For example, rename file operations will become copy and delete operations. Thanks to , most metadata operations in JuiceFS are atomic, for example, file renaming, file deletion, and directory renaming. You do not have to worry about the consistency and performance. Alluxio loads metadata from the UFS as needed. It lacks information about UFS at startup. By default, Alluxio expects all modifications on UFS to be completed through Alluxio. If changes are made directly on UFS, you need to sync metadata between Alluxio and UFS either manually or periodically. As we have mentioned in section, the two-step metadata operation may result in inconsistency. JuiceFS provides strong consistency for both metadata and data. The metadata service of JuiceFS is the single source of truth, not a mirror of UFS. The metadata service does not rely on object storage to obtain metadata, and object storage is just treated as unlimited block storage. 
This ensures there are no inconsistencies between JuiceFS and object storage. JuiceFS supports data compression using or for all your data, while Alluxio does not offer this feature. JuiceFS supports data encryption both in transit and at rest. Alluxio community edition lacks this feature, while it is available in the . Alluxio's architecture can be divided into three components: master, worker and client. A typical cluster consists of a single leading master, standby masters, a job master, standby job masters, workers, and job workers. You need to maintain all these masters and workers by yourself. JuiceFS uses Redis or as the metadata engine. You could easily use the service managed by a public cloud provider as JuiceFS' metadata engine, without any operational overhead." } ]
{ "category": "Runtime", "file_name": "juicefs_vs_alluxio.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Given a list of pod UUIDs, rkt stop will shut them down, for the shipped stage1 images, this means: default systemd-nspawn stage1: the apps in the pod receive a TERM signal and, after a timeout, a KILL signal. kvm stage1: the virtual machine is shut down with `systemctl halt`. rkt fly stage1: the app receives a TERM signal. The `--force` flag will stop a pod forcibly, that is: default systemd-nspawn stage1: the container is killed. kvm stage1: the qemu or lkvm process receives a KILL signal. rkt fly stage1: the app receives a KILL signal. ``` \"387fc8eb-eabd-4e77-b080-d8c0001eb50c\" \"cbbf5c01-dd52-4ccc-a1e0-cfd8f1e88418\" \"93e516b0-e84b-40cf-a45b-531b14dfcce2\" ``` The `--uuid-file` flag may be used to pass a text file with UUID to `stop` command. This can be paired with `--uuid-file-save` flag to stop pods by name: ``` rkt run --uuid-file-save=/run/rkt-uuids/mypod ... rkt stop --uuid-file=/run/rkt-uuids/mypod ``` If you started rkt as a systemd service, you can stop the pod with `systemctl stop`. If you started rkt interactively: For a stage1 with systemd-nspawn, you can stop the pod by pressing `^]` three times within 5 seconds. If you're using systemd on the host, you can also use `machinectl` with the `poweroff` or `terminate` subcommand. For a stage1 with kvm, you can stop the pod by pressing Ctrl+A and then x." } ]
{ "category": "Runtime", "file_name": "stop.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "name: Bug report about: Create a report to help us improve. title: \"[BUG] \" labels: '' assignees: '' Yes/No All fs objects: Total space: Free space: RAM used: last metadata save duration:" } ]
{ "category": "Runtime", "file_name": "bug-report.md", "project_name": "MooseFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide is useful if you intend to contribute on containerd. Thanks for your effort. Every contribution is very appreciated. This doc includes: To build the `containerd` daemon, and the `ctr` simple test client, the following build system dependencies are required: Go 1.13.x or above except 1.14.x Protoc 3.x compiler and headers (download at the ) Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency. First you need to setup your Go development environment. You can follow this guideline and at the end you have `go` command in your `PATH`. You need `git` to checkout the source code: ```sh git clone https://github.com/containerd/containerd ``` For proper results, install the `protoc` release into `/usr/local` on your build system. For example, the following commands will download and install the 3.11.4 release for a 64-bit Linux host: ``` $ wget -c https://github.com/google/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip $ sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local ``` `containerd` uses it means that you need to satisfy these dependencies in your system: CentOS/Fedora: `yum install btrfs-progs-devel` Debian/Ubuntu: `apt-get install btrfs-progs libbtrfs-dev` Debian(before Buster)/Ubuntu(before 19.10): `apt-get install btrfs-tools` At this point you are ready to build `containerd` yourself! `runc` is the default container runtime used by `containerd` and is required to run containerd. While it is okay to download a runc binary and install that on the system, sometimes it is necessary to build runc directly when working with container runtime development. You can skip this step if you already have the correct version of `runc` installed. `runc` requires `libseccomp`. You may need to install the missing dependencies: CentOS/Fedora: `yum install libseccomp libseccomp-devel` Debian/Ubuntu: `apt-get install libseccomp libseccomp-dev` For the quick and dirty installation, you can use the following: ``` git clone https://github.com/opencontainers/runc cd runc make sudo make install ``` Make sure to follow the guidelines for versioning in for the best results. `containerd` uses `make` to create a repeatable build flow. It means that you can run: ``` cd containerd make ``` This is going to build all the project binaries in the `./bin/` directory. You can move them in your global path, `/usr/local/bin` with: ```sudo sudo make install ``` When making any changes to the gRPC API, you can use the installed `protoc` compiler to regenerate the API generated code packages with: ```sudo make generate ``` Note: Several build tags are currently available: `no_btrfs`: A build tag disables building the btrfs snapshot driver. `no_cri`: A build tag disables building Kubernetes support into containerd. See for build tags of CRI plugin. `no_devmapper`: A build tag disables building the device mapper snapshot" }, { "data": "For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the binaries Makefile target will disable the btrfs driver within the containerd Go build. Vendoring of external imports uses the . You need to use `go mod` command to modify the dependencies. After modifition, you should run `go mod tidy` and `go mod vendor` to ensure the `go.mod`, `go.sum` files and `vendor` directory are up to date. Changes to these files should become a single commit for a PR which relies on vendored updates. 
Please refer to for the currently supported version of `runc` that is used by containerd. You can build static binaries by providing a few variables to `make`: ```sudo make EXTRA_FLAGS=\"-buildmode pie\" \\ EXTRA_LDFLAGS='-linkmode external -extldflags \"-fno-PIC -static\"' \\ BUILDTAGS=\"netgo osusergo static_build\" ``` Note: static build is discouraged static containerd binary does not support loading shared object plugins (`*.so`) The following instructions assume you are at the parent directory of containerd source directory. You can build `containerd` via a Linux-based Docker container. You can build an image from this `Dockerfile`: ``` FROM golang RUN apt-get update && \\ apt-get install -y libbtrfs-dev ``` Let's suppose that you built an image called `containerd/build`. From the containerd source root directory you can run the following command: ```sh docker run -it \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts `containerd` repository You are now ready to : ```sh make && make install ``` To have complete core container runtime, you will need both `containerd` and `runc`. It is possible to build both of these via Docker container. You can use `git` to checkout `runc`: ```sh git clone https://github.com/opencontainers/runc ``` We can build an image from this `Dockerfile`: ```sh FROM golang RUN apt-get update && \\ apt-get install -y libbtrfs-dev libseccomp-dev ``` In our Docker container we will build `runc` build, which includes , , and support. Seccomp support in runc requires `libseccomp-dev` as a dependency (AppArmor and SELinux support do not require external libraries at build time). Refer to in the docs directory to for details about building runc, and to learn about supported versions of `runc` as used by containerd. Let's suppose you build an image called `containerd/build` from the above Dockerfile. You can run the following command: ```sh docker run -it --privileged \\ -v /var/lib/containerd \\ -v ${PWD}/runc:/go/src/github.com/opencontainers/runc \\ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \\ -e GOPATH=/go \\ -w /go/src/github.com/containerd/containerd containerd/build sh ``` This mounts both `runc` and `containerd` repositories in our Docker container. From within our Docker container let's build `containerd`: ```sh cd /go/src/github.com/containerd/containerd make && make install ``` These binaries can be found in the `./bin` directory in your host. `make install` will move the binaries in your `$PATH`. Next, let's build `runc`: ```sh cd" }, { "data": "make && make install ``` For further details about building runc, refer to in the docs directory. When working with `ctr`, the simple test client we just built, don't forget to start the daemon! ```sh containerd --config config.toml ``` During the automated CI the unit tests and integration tests are run as part of the PR validation. As a developer you can run these tests locally by using any of the following `Makefile` targets: `make test`: run all non-integration tests that do not require `root` privileges `make root-test`: run all non-integration tests which require `root` `make integration`: run all tests, including integration tests and those which require `root`. `TESTFLAGSPARALLEL` can be used to control parallelism. For example, `TESTFLAGSPARALLEL=1 make integration` will lead a non-parallel execution. The default value of `TESTFLAGS_PARALLEL` is 8. 
To execute a specific test or set of tests you can use the `go test` capabilities without using the `Makefile` targets. The following examples show how to specify a test name and also how to use the flag directly against `go test` to run root-requiring tests. ```sh go test -v -run \"<TEST_NAME>\" . go test -v -run . -test.root ``` Example output from directly running `go test` to execute the `TestContainerList` test: ```sh sudo go test -v -run \"TestContainerList\" . -test.root INFO[0000] running tests against containerd revision=f2ae8a020a985a8d9862c9eb5ab66902c2888361 version=v1.0.0-beta.2-49-gf2ae8a0 === RUN TestContainerList PASS: TestContainerList (0.00s) PASS ok github.com/containerd/containerd 4.778s ``` In addition to `go test`-based testing executed via the `Makefile` targets, the `containerd-stress` tool is available and built with the `all` or `binaries` targets and installed during `make install`. With this tool you can stress a running containerd daemon for a specified period of time, selecting a concurrency level to generate stress against the daemon. The following command is an example of having five workers running for two hours against a default containerd gRPC socket address: ```sh containerd-stress -c 5 -t 120 ``` For more information on this tool's options please run `containerd-stress --help`. is an external tool which can be used to drive load against a container runtime, specifying a particular set of lifecycle operations to run with a specified amount of concurrency. Bucketbench is more focused on generating performance details than simply inducing load against containerd. Bucketbench differs from the `containerd-stress` tool in a few ways: Bucketbench has support for testing the Docker engine, the `runc` binary, and containerd 0.2.x (via `ctr`) and 1.0 (via the client library) branches. Bucketbench is driven via configuration file that allows specifying a list of lifecycle operations to execute. This can be used to generate detailed statistics per-command (e.g. start, stop, pause, delete). Bucketbench generates detailed reports and timing data at the end of the configured test run. More details on how to install and run `bucketbench` are available at the ." } ]
{ "category": "Runtime", "file_name": "BUILDING.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "On x86_64, Firecracker makes certain modifications to the guest's CPUID regardless of whether a CPU template is used. This is referred to as `CPUID normalization`. If a CPU template is used the normalization is performed after the CPU template is applied. That means that if the CPU template configures CPUID bits used in the normalization process, they will be" }, { "data": "See also: | Description | Leaf | Subleaf | Register | Bits | | | :--: | :--: | :--: | :: | | Pass through vendor ID from host | 0x0 | - | EBX, ECX, EDX | all | | Set CLFLUSH line size | 0x1 | - | EBX | 15:8 | | Set maximum number of addressable IDs for logical processors in the physical package | 0x1 | - | EBX | 23:16 | | Set initial APIC ID | 0x1 | - | EBX | 31:24 | | Disable PDCM (Perfmon and Debug Capability) | 0x1 | - | ECX | 15 | | Enable TSC_DEADLINE | 0x1 | - | ECX | 24 | | Enable HYPERVISOR | 0x1 | - | ECX | 31 | | Set HTT value if the microVM's CPU count is greater than 1 | 0x1 | - | EDX | 28 | | Insert leaf 0xb, subleaf 0x1 filled with `0` if it is not already present | 0xb | 0x1 | all | all | | Update extended topology enumeration | 0xb | all | EAX | 4:0 | | Update extended topology enumeration | 0xb | all | EBX | 15:0 | | Update extended topology enumeration | 0xb | all | ECX | 15:8 | | Pass through L1 cache and TLB information from host | 0x80000005 | - | all | all | | Pass through L2 cache and TLB and L3 cache information from host | 0x80000006 | - | all | all | | Description | Leaf | Subleaf | Register | Bits | | -- | :--: | :--: | :-: | :: | | Update deterministic cache parameters | 0x4 | all | EAX | 31:14 | | Disable Intel Turbo Boost technology | 0x6 | - | EAX | 1 | | Disable frequency selection | 0x6 | - | ECX | 3 | | Set FDPEXCPTNONLY bit | 0x7 | 0x0 | EBX | 6 | | Set \"Deprecates FPU CS and FPU DS values\" bit | 0x7 | 0x0 | EBX | 13 | | Disable performance monitoring | 0xa | - | EAX, EBX, ECX, EDX | all | | Update brand string to use a default format and real frequency | 0x80000002, 0x80000003, 0x80000004 | - | EAX, EBX, ECX, EDX | all | | Description | Leaf | Subleaf | Register | Bits | | - | :--: | :--: | :-: | :: | | Set IA32ARCHCAPABILITIES MSR as not present | 0x7 | - | EDX | 29 | | Update largest extended function entry to 0x8000001f | 0x80000000 | - | EAX | 31:0 | | Set topology extension bit | 0x80000001 | - | ECX | 22 | | Update brand string with a default AMD value | 0x80000002, 0x80000003, 0x80000004 | - | EAX, EBX, ECX, EDX | all | | Update number of physical threads | 0x80000008 | - | ECX | 7:0 | | Update APIC ID size | 0x80000008 | - | ECX | 15:12 | | Update cache topology information | 0x8000001d | all | all | all | | Update extended APIC ID | 0x8000001e | - | EAX, EBX, ECX | all |" } ]
{ "category": "Runtime", "file_name": "cpuid-normalization.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "A non-exhaustive list of containerd adopters is provided below. _Docker/Moby engine_ - Containerd began life prior to its CNCF adoption as a lower-layer runtime manager for `runc` processes below the Docker engine. Continuing today, containerd has extremely broad production usage as a component of the stack. Note that this includes any use of the open source ; including the Balena project listed below. faasd in an Open Source project for serverless functions. It takes the same OpenFaaS components that usually run on Kubernetes and instead launches containers directly on a single host using CNI for networking. It's ideal for edge and for deploying functions without having to think about managing and maintaining Kubernetes. __ - offers containerd as the CRI runtime for v1.11 and higher versions. __ - IBM's on-premises cloud offering has containerd as a \"tech preview\" CRI runtime for the Kubernetes offered within this product for the past two releases, and plans to fully migrate to containerd in a future release. __ - Container-Optimized OS is a Linux Operating System from Google that is optimized for running containers. COS has used containerd as container runtime when containerd was part of Docker's core container runtime. __ - containerd has been offered in GKE since version 1.14 and has been the default runtime since version 1.19. It is also the only supported runtime for GKE Autopilot from the launch. __ - uses containerd + Firecracker (noted below) as the runtime and isolation technology for containers run in the Fargate platform. Fargate is a serverless, container-native compute offering from Amazon Web Services. __ - EKS optionally offers containerd as a CRI runtime starting with Kubernetes version 1.21. In Kubernetes 1.22 the default CRI runtime will be containerd. __ - Bottlerocket is a Linux distribution from Amazon Web Services purpose-built for containers using containerd as the core system runtime. _Cloud Foundry_ - The for CF has been using OCI runC directly with additional code from CF managing the container image and filesystem interactions, but have recently migrated to use containerd as a replacement for the extra code they had written around runC. _Alibaba's PouchContainer_ - The Alibaba project uses containerd as its runtime for a cloud native offering that has unique isolation and image distribution capabilities. _Rancher's k3s project_ - Rancher Labs is a lightweight Kubernetes distribution; in their words: \"Easy to install, half the memory, all in a binary less than 40mb.\" k8s uses containerd as the embedded runtime for this popular lightweight Kubernetes variant. _Rancher's Rio project_ - Rancher Labs project uses containerd as the runtime for a combined Kubernetes, Istio, and container \"Cloud Native Container Distribution\"" }, { "data": "_Balena_ - Resin's container engine, based on moby/moby but for edge, embedded, and IoT use cases, uses the containerd and runc stack in the same way that the Docker engine uses containerd. _LinuxKit_ - the Moby project's for building secure, minimal Linux OS images in a container-native model uses containerd as the core runtime for system and service containers. _BuildKit_ - The Moby project's can use either runC or containerd as build execution backends for building container images. BuildKit support has also been built into the Docker engine in recent releases, making BuildKit provide the backend to the `docker build` command. 
__ - Microsoft's managed Kubernetes offering uses containerd for Linux nodes running v1.19 and greater, and Windows nodes running 1.20 and greater. _Amazon Firecracker_ - The AWS has extended containerd with a new snapshotter and v2 shim to allow containerd to drive virtualized container processes via their VMM implementation. More details on their containerd integration are available in . _Kata Containers_ - The lightweight-virtualized container runtime project integrates with containerd via a custom v2 shim implementation that drives the Kata container runtime. _D2iQ Konvoy_ - D2iQ Inc product uses containerd as the container runtime for its Kubernetes distribution. _Inclavare Containers_ - is an innovation of container runtime with the novel approach for launching protected containers in hardware-assisted Trusted Execution Environment (TEE) technology, aka Enclave, which can prevent the untrusted entity, such as Cloud Service Provider (CSP), from accessing the sensitive and confidential assets in use. _VMware TKG_ - VMware's Multicloud Kubernetes offering uses containerd as the default CRI runtime. _VMware TCE_ - VMware's fully-featured, easy to manage, Kubernetes platform for learners and users. It is a freely available, community supported, and open source distribution of VMware Tanzu. It uses containerd as the default CRI runtime. __ - Talos Linux is Linux designed for Kubernetes secure, immutable, and minimal. Talos Linux is using containerd as the core system runtime and CRI implementation. _Deckhouse_ - from Flant allows you to manage Kubernetes clusters anywhere in a fully automatic and uniform fashion. It uses containerd as the default CRI runtime. __ - Actuated is a platform for running self-hosted CI in securely-isolated Firecracker VMs. Actuated uses containerd's image pulling facility to distribute and update the root filesystem for VMs for CI agents. _ - Syself Autopilot is a simplified Kubernetes platform based on Cluster API that can run on various providers. Syself Autopilot uses containerd as the default CRI runtime. _Other Projects_ - While the above list provides a cross-section of well known uses of containerd, the simplicity and clear API layer for containerd has inspired many smaller projects around providing simple container management platforms. Several examples of building higher layer functionality on top of the containerd base have come from various containerd community participants: Michael Crosby's project, Evan Hazlett's project, Paul Knopf's immutable Linux image builder project: ." } ]
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "name: Bug report about: Create a report to help us improve labels: bug <!-- Are you in the right place? For issues or feature requests, please create an issue in this repository. For general technical and non-technical questions, we are happy to help you on our . Did you already search the existing open issues for anything similar? --> Is this a bug report or feature request? Bug Report Deviation from expected behavior: Expected behavior: How to reproduce it (minimal and precise): <!-- Please let us know any circumstances for reproduction of your bug. --> File(s) to submit: Cluster CR (custom resource), typically called `cluster.yaml`, if necessary Logs to submit: Operator's logs, if necessary Crashing pod(s) logs, if necessary To get logs, use `kubectl -n <namespace> logs <pod name>` When pasting logs, always surround them with backticks or use the `insert code` button from the Github UI. Read . Cluster Status to submit: Output of kubectl commands, if necessary To get the health of the cluster, use `kubectl rook-ceph health` To get the status of the cluster, use `kubectl rook-ceph ceph status` For more details, see the Environment: OS (e.g. from /etc/os-release): Kernel (e.g. `uname -a`): Cloud provider or hardware configuration: Rook version (use `rook version` inside of a Rook Pod): Storage backend version (e.g. for ceph do `ceph -v`): Kubernetes version (use `kubectl version`): Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift): Storage backend status (e.g. for Ceph use `ceph health` in the ):" } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Name | Type | Description | Notes | - | - | - BootVcpus | int32 | | [default to 1] MaxVcpus | int32 | | [default to 1] Topology | Pointer to | | [optional] KvmHyperv | Pointer to bool | | [optional] [default to false] MaxPhysBits | Pointer to int32 | | [optional] Affinity | Pointer to | | [optional] Features | Pointer to | | [optional] `func NewCpusConfig(bootVcpus int32, maxVcpus int32, ) *CpusConfig` NewCpusConfig instantiates a new CpusConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewCpusConfigWithDefaults() *CpusConfig` NewCpusConfigWithDefaults instantiates a new CpusConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *CpusConfig) GetBootVcpus() int32` GetBootVcpus returns the BootVcpus field if non-nil, zero value otherwise. `func (o CpusConfig) GetBootVcpusOk() (int32, bool)` GetBootVcpusOk returns a tuple with the BootVcpus field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetBootVcpus(v int32)` SetBootVcpus sets BootVcpus field to given value. `func (o *CpusConfig) GetMaxVcpus() int32` GetMaxVcpus returns the MaxVcpus field if non-nil, zero value otherwise. `func (o CpusConfig) GetMaxVcpusOk() (int32, bool)` GetMaxVcpusOk returns a tuple with the MaxVcpus field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetMaxVcpus(v int32)` SetMaxVcpus sets MaxVcpus field to given value. `func (o *CpusConfig) GetTopology() CpuTopology` GetTopology returns the Topology field if non-nil, zero value otherwise. `func (o CpusConfig) GetTopologyOk() (CpuTopology, bool)` GetTopologyOk returns a tuple with the Topology field if it's non-nil, zero value otherwise and a boolean to check if the value has been" }, { "data": "`func (o *CpusConfig) SetTopology(v CpuTopology)` SetTopology sets Topology field to given value. `func (o *CpusConfig) HasTopology() bool` HasTopology returns a boolean if a field has been set. `func (o *CpusConfig) GetKvmHyperv() bool` GetKvmHyperv returns the KvmHyperv field if non-nil, zero value otherwise. `func (o CpusConfig) GetKvmHypervOk() (bool, bool)` GetKvmHypervOk returns a tuple with the KvmHyperv field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetKvmHyperv(v bool)` SetKvmHyperv sets KvmHyperv field to given value. `func (o *CpusConfig) HasKvmHyperv() bool` HasKvmHyperv returns a boolean if a field has been set. `func (o *CpusConfig) GetMaxPhysBits() int32` GetMaxPhysBits returns the MaxPhysBits field if non-nil, zero value otherwise. `func (o CpusConfig) GetMaxPhysBitsOk() (int32, bool)` GetMaxPhysBitsOk returns a tuple with the MaxPhysBits field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetMaxPhysBits(v int32)` SetMaxPhysBits sets MaxPhysBits field to given value. `func (o *CpusConfig) HasMaxPhysBits() bool` HasMaxPhysBits returns a boolean if a field has been set. `func (o *CpusConfig) GetAffinity() []CpuAffinity` GetAffinity returns the Affinity field if non-nil, zero value otherwise. 
`func (o CpusConfig) GetAffinityOk() ([]CpuAffinity, bool)` GetAffinityOk returns a tuple with the Affinity field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetAffinity(v []CpuAffinity)` SetAffinity sets Affinity field to given value. `func (o *CpusConfig) HasAffinity() bool` HasAffinity returns a boolean if a field has been set. `func (o *CpusConfig) GetFeatures() CpuFeatures` GetFeatures returns the Features field if non-nil, zero value otherwise. `func (o CpusConfig) GetFeaturesOk() (CpuFeatures, bool)` GetFeaturesOk returns a tuple with the Features field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpusConfig) SetFeatures(v CpuFeatures)` SetFeatures sets Features field to given value. `func (o *CpusConfig) HasFeatures() bool` HasFeatures returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "CpusConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "(cluster-recover)= It might happen that one or several members of your cluster go offline or become unreachable. In that case, no operations are possible on this member, and neither are operations that require a state change across all members. See {ref}`clustering-offline-members` and {ref}`cluster-automatic-evacuation` for more information. If you can bring the offline cluster members back or delete them from the cluster, operation resumes as normal. If this is not possible, there are a few ways to recover the cluster, depending on the scenario that caused the failure. See the following sections for details. ```{note} Run `incus admin cluster --help` for an overview of all available commands. ``` Every Incus cluster has a specific number of members (configured through {config:option}`server-cluster:cluster.max_voters`) that serve as voting members of the distributed database. If you permanently lose a majority of these cluster members (for example, you have a three-member cluster and you lose two members), the cluster loses quorum and becomes unavailable. However, if at least one database member survives, it is possible to recover the cluster. To do so, complete the following steps: Log on to any surviving member of your cluster and run the following command: sudo incus admin cluster list-database This command shows which cluster members have one of the database roles. Pick one of the listed database members that is still online as the new leader. Log on to the machine (if it differs from the one you are already logged on to). Make sure that the Incus daemon is not running on the machine. sudo systemctl stop incus.service incus.socket Log on to all other cluster members that are still online and stop the Incus daemon. On the server that you picked as the new leader, run the following command: sudo incus admin cluster recover-from-quorum-loss Start the Incus daemon again on all machines, starting with the new leader. sudo systemctl start incus.socket incus.service The database should now be back online. No information has been deleted from the database. All information about the cluster members that you have lost is still there, including the metadata about their instances. This can help you with further recovery steps if you need to re-create the lost instances. To permanently delete the cluster members that you have lost, force-remove them. See {ref}`cluster-manage-delete-members`. If some members of your cluster are no longer reachable, or if the cluster itself is unreachable due to a change in IP address or listening port number, you can reconfigure the" }, { "data": "To do so, edit the cluster configuration on each member of the cluster and change the IP addresses or listening port numbers as required. You cannot remove any members during this process. The cluster configuration must contain the description of the full cluster, so you must do the changes for all cluster members on all cluster members. You can edit the {ref}`clustering-member-roles` of the different members, but with the following limitations: A cluster member that does not have a `database*` role cannot become a voter, because it might lack a global database. At least two members must remain voters (except in the case of a two-member cluster, where one voter suffices), or there will be no quorum. Log on to each cluster member and complete the following steps: Stop the Incus daemon. 
sudo systemctl stop incus.service incus.socket Run the following command: sudo incus admin cluster edit Edit the YAML representation of the information that this cluster member has about the rest of the cluster: ```yaml members: id: 1 # Internal ID of the member (Read-only) name: server1 # Name of the cluster member (Read-only) address: 192.0.2.10:8443 # Last known address of the member (Writeable) role: voter # Last known role of the member (Writeable) id: 2 # Internal ID of the member (Read-only) name: server2 # Name of the cluster member (Read-only) address: 192.0.2.11:8443 # Last known address of the member (Writeable) role: stand-by # Last known role of the member (Writeable) id: 3 # Internal ID of the member (Read-only) name: server3 # Name of the cluster member (Read-only) address: 192.0.2.12:8443 # Last known address of the member (Writeable) role: spare # Last known role of the member (Writeable) ``` You can edit the addresses and the roles. After doing the changes on all cluster members, start the Incus daemon on all members again. sudo systemctl start incus.socket incus.service The cluster should now be fully available again with all members reporting in. No information has been deleted from the database. All information about the cluster members and their instances is still there. In some situations, you might need to manually alter the Raft membership configuration of the cluster because of some unexpected behavior. For example, if you have a cluster member that was removed uncleanly, it might not show up in but still be part of the Raft configuration. To see the Raft configuration, run the following command: incus admin sql local \"SELECT * FROM raft_nodes\" In that case, run the following command to remove the leftover node: incus admin cluster remove-raft-node <address>" } ]
{ "category": "Runtime", "file_name": "cluster_recover.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "The following table summarizes the supported Linux distros, the installation methods supported, and any other requirements: | Distro / Release | Package Install | K8s Install | Build from Source | Min Kernel | Shiftfs Required | Other | | | :-: | :: | :: | :--: | :--: | -- | | Ubuntu Bionic (18.04) | | | | 5.3+ | If kernel < 5.12 | | | Ubuntu Focal (20.04) | | | | 5.4+ | If kernel < 5.12 | | | Ubuntu Jammy (22.04) | | | | 5.15+ | No (but recommended if kernel < 5.19) | | | Debian Buster (10) | | WIP | | 5.5+ | If kernel < 5.12 | | | Debian Bullseye (11) | | WIP | | 5.5+ | If kernel < 5.12 | | | Fedora (34 to 37) | WIP | WIP | | 5.12+ | No | | | Rocky Linux 8 | WIP | WIP | | 5.12+ | No | | | Alma Linux (8, 9) | WIP | WIP | | 5.12+ | No | | | CentOS Stream | WIP | WIP | | 5.12+ | No | | | Amazon Linux 2 | WIP | WIP | | 5.12+ | No | | | RedHat Enterprise | WIP | WIP | | 5.12+ | No | Sysbox-EE only | | Flatcar | | | | 5.10+ | If kernel < 5.12 | Sysbox-EE only; see . | NOTES: \"Package install\" means a Sysbox package is available for that distro. See for more. \"K8s-install\" means you can deploy Sysbox on a Kubernetes worker node based on that distro. See for more. \"Build from source\" means you can build and install Sysbox from source on that distro. It's pretty easy, see . \"Kernel upgrade\" means a kernel upgrade may be required (Sysbox requires a fairly new kernel). See for more. \"WIP\" means \"work-in-progress\" (i.e., we expect to have this soon). These are the Linux distros we officially support (and test" }, { "data": "However, we expect Sysbox to work fine on other Linux distros too, particularly with kernel >= 5.12. See for the list of supported K8s distros, and for the list of supported K8s versions. See for a list of supported platform architectures (e.g., amd64, arm64). Shiftfs is a Linux kernel module that Sysbox uses to ensure host volumes mounted into the (rootless) container show up with proper user and group IDs (rather than `nobody:nogroup`); see the for more info. Sysbox's requirement for shiftfs is as follows: | Linux Kernel Version | Shiftfs Required by Sysbox | | -- | :: | | < 5.12 | Yes | | 5.12 to 5.18 | No (but recommended) | | >= 5.19 | No | In kernels 5.12 to 5.18, shiftfs is not required but having it causes Sysbox to setup the container's filesystem more efficiently, so it's recommended. Unfortunately, shiftfs is only available in Ubuntu, Debian, and Flatcar distros (and possibly derivatives of these). Therefore, if your host has kernel < 5.12 and you wish to use Sysbox, it must be one of these distros. Shiftfs is not available in other distros (e.g., Fedora, CentOS, RedHat, Amazon Linux 2, etc.) For this reason, in order to use Sysbox in these other distros, you must have kernel >= 5.12. In the Ubuntu's desktop and server versions, shiftfs comes pre-installed. In Ubuntu's cloud images or in Debian or Flatcar, shiftfs must be manually installed. See the for info on how to do this. If you have a relatively old Ubuntu 18.04 release (e.g. 18.04.3), you need to upgrade the kernel to >= 5.3. We recommend using Ubuntu's package to do the upgrade as follows: ```console $ sudo apt-get update && sudo apt install --install-recommends linux-generic-hwe-18.04 -y $ sudo shutdown -r now ``` This one is only required when running Debian Buster. ```console $ # Allow debian-backports utilization ... 
$ echo deb http://deb.debian.org/debian buster-backports main contrib non-free | sudo tee /etc/apt/sources.list.d/buster-backports.list $ sudo apt update $ sudo apt install -t buster-backports linux-image-amd64 $ sudo shutdown -r now ``` Refer to this for more details." } ]
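A quick way to tell which row of the compatibility table applies to a given host is to compare the running kernel against the 5.12 cutoff and, if needed, check whether the shiftfs module is available. This is only a convenience sketch using standard tools:

```bash
#!/usr/bin/env bash

kernel="$(uname -r | cut -d- -f1)"

# Kernels >= 5.12 do not need shiftfs for Sysbox.
if [ "$(printf '%s\n' "5.12" "${kernel}" | sort -V | head -n1)" = "5.12" ]; then
    echo "kernel ${kernel}: shiftfs not required"
else
    echo "kernel ${kernel}: shiftfs required"
    # The module only exists on Ubuntu, Debian, and Flatcar (and derivatives).
    if modinfo shiftfs >/dev/null 2>&1; then
        echo "shiftfs module found"
    else
        echo "shiftfs module missing"
    fi
fi
```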
{ "category": "Runtime", "file_name": "distro-compat.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated. The `semconv-generate` make target is used for this. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest` Run the `make semconv-generate ...` target from this repository. For example, ```sh export TAG=\"v1.21.0\" # Change to the release version you are generating. export OTELSEMCONVREPO=\"/absolute/path/to/opentelemetry/semantic-conventions\" docker pull otel/semconvgen:latest make semconv-generate # Uses the exported TAG and OTELSEMCONVREPO. ``` This should create a new sub-package of . Ensure things look correct before submitting a pull request to include the addition. You can run `make gorelease` that runs to ensure that there are no unwanted changes done in the public API. You can check/report problems with `gorelease` . First, decide which module sets will be released and update their versions in `versions.yaml`. Commit this change to a new branch. Update go.mod for submodules to depend on the new release which will happen in the next step. Run the `prerelease` make target. It creates a branch `prerelease<module set><new tag>` that will contain all release changes. ``` make prerelease MODSET=<module set> ``` Verify the changes. ``` git diff ...prerelease<module set><new tag> ``` This should have changed the version for all modules to be `<new tag>`. If these changes look correct, merge them into your pre-release branch: ```go git merge prerelease<module set><new tag> ``` Update the . Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand. To verify this, you can look directly at the commits since the `<last tag>`. ``` git --no-pager log --pretty=oneline \"<last tag>..HEAD\" ``` Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`). Update all the appropriate links at the" }, { "data": "Push the changes to upstream and create a Pull Request on GitHub. Be sure to include the curated changes from the in the description. Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit. *IMPORTANT*: It is critical you use the same tag that you used in the Pre-Release step! Failure to do so will leave things in a broken state. As long as you do not change `versions.yaml` between pre-release and this step, things should be fine. *IMPORTANT*: . It is critical you make sure the version you push upstream is correct. . For each module set that will be released, run the `add-tags` make target using the `<commit-hash>` of the commit on the main branch for the merged Pull Request. ``` make add-tags MODSET=<module set> COMMIT=<commit hash> ``` It should only be necessary to provide an explicit `COMMIT` value if the current `HEAD` of your working directory is not the correct commit. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`). Make sure you push all sub-modules as well. ``` git push upstream <new tag> git push upstream <submodules-path/new tag> ... ``` Finally create a Release for the new `<new tag>` on GitHub. The release body should include all the release notes from the Changelog for this release. After releasing verify that examples build outside of the repository. 
``` ./verify_examples.sh ``` The script copies examples into a different directory, removes any `replace` declarations in `go.mod`, and builds them. This ensures they build with the published release, not the local copy. Once verified be sure to that uses this release. Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/languages/go]. Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate. Bump the dependencies in the following Go services:" } ]
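As a convenience, the pre-release and tagging steps above can be strung together in a small shell wrapper. This is only a sketch of the documented commands; the module set name, commit hash, and tag are placeholders you would substitute with your own values.

```bash
#!/usr/bin/env bash
set -euo pipefail

MODSET="stable-v1"   # hypothetical module set name from versions.yaml
COMMIT="abc1234"     # commit hash of the merged version-bump pull request
NEW_TAG="v1.21.0"    # tag being released

make prerelease MODSET="${MODSET}"                    # create the pre-release branch
make add-tags MODSET="${MODSET}" COMMIT="${COMMIT}"   # tag the merged commit
git push upstream "${NEW_TAG}"                        # push the new tag (repeat for submodule tags)
```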
{ "category": "Runtime", "file_name": "RELEASING.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "The purpose of this document is to show how to configure user namespaces in CRI-O, as well as some of the options CRI-O supports for configuring user namespaces. To start, the host will have to have `/etc/subuid` and `/etc/subgid` files set correctly. By default, the assumes there will be entries in each of these files for the `containers` user. If one would like to have a different user's entries in `/etc/sub?id` files, then the field `remap-user` and `remap-group` can be configured in `/etc/containers/storage.conf` in the `[storage.options]` table. Let's assume we want the IDs of the users and groups to begin on the host at 100000, and be each given ranges of 65536. For most containers, this will be more than enough. The contents of both `/etc/subuid` and `/etc/subgid` should be: ```text containers:100000:65536 ``` To enable pods to be able to use the userns-mode annotation, the pod must be allowed to interpret the experimental annotation `io.kubernetes.cri-o.userns-mode`. In CRI-O versions greater than 1.23.0, this can be done by creating a custom workload. This can be done by creating a file with the following contents in /etc/crio/crio.conf.d/01-userns-workload.conf ```toml [crio.runtime.workloads.userns] activation_annotation = \"io.kubernetes.cri-o.userns-mode\" allowed_annotations = [\"io.kubernetes.cri-o.userns-mode\"] ``` This will allow any pod with the `io.kubernetes.cri-o.userns-mode` annotation to configure a user namespace. CRI-O opts for this approach to give administrators the ability to toggle the behavior on their nodes, just in case an administrator doesn't want their users to be able to create user namespace. An administrator can also set a different `activation_annotation` if they'd like a different annotation to allow pods to configure user namespaces. CRI-O has supported this experimental annotation since 1.20.0. Originally, it was supported by setting allowed annotations in the runtime class, not the workload. Setting allowed_annotations on runtimes have been deprecated, and newer installations should use workloads" }, { "data": "To create a runtime class that allows the user namespace annotation, the following file can be created: ```toml [crio.runtime.runtimes.userns] runtime_path = \"/usr/bin/runc\" runtime_root = \"/run/runc\" allowed_annotations = [\"io.kubernetes.cri-o.userns-mode\"] ``` `runtimepath` and `runtimeroot` can be configured differently, but must be specified. The name `userns` will be the one that must be specified in the pod's `runtimeClassName` field. See for more details. The remainder of this document will assume workloads are being used. Now that the pod is allowed to specify the annotation, it must actually be done in the pod spec. We will use the simplest example \"auto\" for this pod spec: ```yaml apiVersion: v1 kind: Pod metadata: name: mypod annotations: io.kubernetes.cri-o.userns-mode: \"auto\" ``` In this case, upon pod creation, the pod will have a user namespace automatically configured for it. With a user on the host that is greater than 100000 and with a size of 65536. The auto keyword tells CRI-O that a user namespace should be configured for the pod by CRI-O. This is a good option for users who are new to user namespaces, or don't have precise needs for the feature. 
When RunAsUser or RunAsGroup is specified for a container in a pod, and the user namespace mode is \"auto\", the user namespace is configured to have that user inside of the user namespace, while the user in the host user namespace falls within the range configured in `/etc/subuid`. For instance, if RunAsUser is set to `1234` for a pod that specifies auto along with the `/etc/subuid` configuration above, the pod user inside the pod sees itself as `1234`. However, from the perspective of the host, the pod user could actually be `101234`. This allows the container process to see itself as user 1234 for file access inside of the container, while actually being a much higher ID on the host." } ]
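To illustrate the mapping described above, here is a minimal pod sketch that combines the \"auto\" annotation with an explicit RunAsUser. The image name is a placeholder, and the exact host-side ID depends on the `/etc/subuid` range configured earlier.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
  annotations:
    io.kubernetes.cri-o.userns-mode: "auto"
spec:
  containers:
  - name: demo
    image: registry.example.com/demo:latest   # placeholder image
    securityContext:
      runAsUser: 1234   # seen as 1234 inside the pod, e.g. 101234 on the host
```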
{ "category": "Runtime", "file_name": "userns.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "title: Container Object Storage Interface (COSI) The Ceph COSI driver provisions buckets for object storage. This document instructs on enabling the driver and consuming a bucket from a sample application. !!! note The Ceph COSI driver is currently in experimental mode. COSI requires: A running Rook Deploy the COSI controller with these commands: ```bash kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-api kubectl apply -k github.com/kubernetes-sigs/container-object-storage-interface-controller ``` The Ceph COSI driver will be started when the CephCOSIDriver CR is created and when the first CephObjectStore is created. ```yaml apiVersion: ceph.rook.io/v1 kind: CephCOSIDriver metadata: name: ceph-cosi-driver namespace: rook-ceph spec: deploymentStrategy: \"Auto\" ``` ```console cd deploy/examples/cosi kubectl create -f cephcosidriver.yaml ``` The driver is created in the same namespace as Rook operator. The BucketClass and BucketAccessClass are CRDs defined by COSI. The BucketClass defines the bucket class for the bucket. The BucketAccessClass defines the access class for the bucket. Rook will automatically create a secret named with `rook-ceph-object-user-<store-name>-cosi` which contains credentials used by the COSI driver. This secret is referred by the BucketClass and BucketAccessClass as defined below: ```yaml kind: BucketClass apiVersion: objectstorage.k8s.io/v1alpha1 metadata: name: sample-bcc driverName: ceph.objectstorage.k8s.io deletionPolicy: Delete parameters: objectStoreUserSecretName: rook-ceph-object-user-my-store-cosi objectStoreUserSecretNamespace: rook-ceph kind: BucketAccessClass apiVersion: objectstorage.k8s.io/v1alpha1 metadata: name: sample-bac driverName: ceph.objectstorage.k8s.io authenticationType: KEY parameters: objectStoreUserSecretName: rook-ceph-object-user-my-store-cosi objectStoreUserSecretNamespace: rook-ceph ``` ```console kubectl create -f bucketclass.yaml -f bucketaccessclass.yaml ``` To create a bucket, use the BucketClass to pointing the required object store and then define BucketClaim request as below: ```yaml kind: BucketClaim apiVersion: objectstorage.k8s.io/v1alpha1 metadata: name: sample-bc namespace: default # any namespace can be used spec: bucketClassName: sample-bcc protocols: s3 ``` ```console kubectl create -f bucketclaim.yaml ``` Define access to the bucket by creating the BucketAccess resource: ```yaml kind: BucketAccess apiVersion: objectstorage.k8s.io/v1alpha1 metadata: name: sample-access namespace: default # any namespace can be used spec: bucketAccessClassName: sample-bac bucketClaimName: sample-bc protocol: s3 credentialsSecretName: sample-secret-name ``` ```console kubectl create -f bucketaccess.yaml ``` The secret will be created which contains the access details for the bucket in JSON format in the namespace of BucketAccess: ``` console kubectl get secret sample-secret-name -o jsonpath='{.data.BucketInfo}' | base64 -d ``` ```json { \"metadata\": { \"name\": \"bc-81733d1a-ac7a-4759-96f3-fbcc07c0cee9\", \"creationTimestamp\": null }, \"spec\": { \"bucketName\": \"sample-bcc1fc94b04-6011-45e0-a3d8-b6a093055783\", \"authenticationType\": \"KEY\", \"secretS3\": { \"endpoint\": \"http://rook-ceph-rgw-my-store.rook-ceph.svc:80\", \"region\": \"us-east\", \"accessKeyID\": \"LI2LES8QMR9GB5SZLB02\", \"accessSecretKey\": \"s0WAmcn8N1eIBgNV0mjCwZWQmJiCF4B0SAzbhYCL\" }, \"secretAzure\": null, \"protocols\": [ \"s3\" ] } } ``` To access the bucket from an application pod, mount the secret for accessing 
the bucket: ```yaml volumes: name: cosi-secrets secret: secretName: sample-secret-name spec: containers: name: sample-app volumeMounts: name: cosi-secrets mountPath: /data/cosi ``` The Secret will be mounted in the pod in the path: `/data/cosi/BucketInfo`. The app must parse the JSON object to load the bucket connection details." } ]
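The mounted `BucketInfo` file is plain JSON, so the application can pick up the S3 coordinates with a few lines of code. Here is a sketch in Go, with struct fields matching the JSON shown above (only the fields an app typically needs are declared):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// bucketInfo declares only the fields needed to build an S3 client.
type bucketInfo struct {
	Spec struct {
		BucketName string `json:"bucketName"`
		SecretS3   struct {
			Endpoint        string `json:"endpoint"`
			Region          string `json:"region"`
			AccessKeyID     string `json:"accessKeyID"`
			AccessSecretKey string `json:"accessSecretKey"`
		} `json:"secretS3"`
	} `json:"spec"`
}

func main() {
	raw, err := os.ReadFile("/data/cosi/BucketInfo")
	if err != nil {
		panic(err)
	}
	var info bucketInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		panic(err)
	}
	fmt.Println("bucket:", info.Spec.BucketName, "endpoint:", info.Spec.SecretS3.Endpoint)
}
```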
{ "category": "Runtime", "file_name": "cosi.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "(network-configure)= To configure an existing network, use either the and commands (to configure single settings) or the `incus network edit` command (to edit the full configuration). To configure settings for specific cluster members, add the `--target` flag. For example, the following command configures a DNS server for a physical network: ```bash incus network set UPLINK dns.nameservers=8.8.8.8 ``` The available configuration options differ depending on the network type. See {ref}`network-types` for links to the configuration options for each network type. There are separate commands to configure advanced networking features. See the following documentation: {doc}`/howto/network_acls` {doc}`/howto/network_forwards` {doc}`/howto/network_integrations` {doc}`/howto/networkloadbalancers` {doc}`/howto/network_zones` {doc}`/howto/networkovnpeers` (OVN only)" } ]
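For example, a few single-setting changes on a managed bridge network; the network and key names here are illustrative, so adjust them to your own setup.

```bash
# Set and read back a single key on a managed bridge.
incus network set incusbr0 ipv4.dhcp=false
incus network get incusbr0 ipv4.dhcp

# Remove the key again.
incus network unset incusbr0 ipv4.dhcp

# Edit the full configuration in $EDITOR.
incus network edit incusbr0
```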
{ "category": "Runtime", "file_name": "network_configure.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Intel Software Guard Extensions (SGX) is a set of instructions that increases the security of applications' code and data, giving them more protection from disclosure or modification. This document guides you to run containers with SGX enclaves with Kata Containers in Kubernetes. Intel SGX capable bare metal nodes Host kernel Linux 5.13 or later with SGX and SGX KVM enabled: ```sh $ grep SGX /boot/config-`uname -r` CONFIG_X86_SGX=y CONFIG_X86_SGX_KVM=y ``` Kubernetes cluster configured with: based Kata Containers installation and associated components including and dependencies Note: Kata Containers supports creating VM sandboxes with Intel SGX enabled using and VMMs only. For `containerd` check in `/etc/containerd/config.toml` that the list of `pod_annotations` passed to the `sandbox` are: `[\"io.katacontainers.*\", \"sgx.intel.com/epc\"]`. With the following sample job deployed using `kubectl apply -f`: Note: Change the `runtimeClassName` option accordingly, only `kata-clh` and `kata-qemu` support Intel SGX. ```yaml apiVersion: batch/v1 kind: Job metadata: name: oesgx-demo-job labels: jobgroup: oesgx-demo spec: template: metadata: labels: jobgroup: oesgx-demo spec: runtimeClassName: kata-clh initContainers: name: init-sgx image: busybox command: ['sh', '-c', 'mkdir /dev/sgx; ln -s /dev/sgx_enclave /dev/sgx/enclave; ln -s /dev/sgx_provision /dev/sgx/provision'] volumeMounts: mountPath: /dev name: dev-mount restartPolicy: Never containers: name: eosgx-demo-job-1 image: oeciteam/oe-helloworld:latest imagePullPolicy: IfNotPresent volumeMounts: mountPath: /dev name: dev-mount securityContext: readOnlyRootFilesystem: true capabilities: add: [\"IPC_LOCK\"] resources: limits: sgx.intel.com/epc: \"512Ki\" volumes: name: dev-mount hostPath: path: /dev ``` You'll see the enclave output: ```sh $ kubectl logs oesgx-demo-job-wh42g Hello world from the enclave Enclave called into host to print: Hello World! ``` The Kata VM's SGX Encrypted Page Cache (EPC) memory size is based on the sum of `sgx.intel.com/epc` resource requests within the pod. `init-sgx` can be removed from the YAML configuration file if the Kata rootfs is modified with the necessary udev rules. See the . Intel SGX DCAP attestation is known to work from Kata sandboxes but it comes with one limitation: If the Intel SGX `aesm` daemon runs on the bare metal node and DCAP `out-of-proc` attestation is used, containers within the Kata sandbox cannot get access to the host's `/var/run/aesmd/aesm.sock` because socket passthrough is not supported. An alternative is to deploy the `aesm` daemon as a side-car container. Projects like are also known to work. For GSC specifically, the Kata guest kernel needs to have `CONFIG_NUMA=y` enabled and at least one CPU online when running the GSC container. The Kata Containers guest kernel currently has `CONFIG_NUMA=y` enabled by default." } ]
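The containerd setting mentioned above lives under the CRI plugin's per-runtime table. Below is a sketch for a containerd 1.x configuration; the runtime handler name and `runtime_type` are assumptions based on a typical kata-deploy setup, so adjust them to match your installation.

```toml
# /etc/containerd/config.toml (fragment, containerd 1.x CRI plugin)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  runtime_type = "io.containerd.kata-qemu.v2"   # handler/type names may differ on your node
  # Annotations forwarded to the Kata sandbox; both entries are needed for SGX.
  pod_annotations = ["io.katacontainers.*", "sgx.intel.com/epc"]
```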
{ "category": "Runtime", "file_name": "using-Intel-SGX-and-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This version of NRI is supported through the included v010-adapter plugin. This project is a WIP for a new, CNI like, interface for managing resources on a node for Pods and Containers. The basic interface, concepts and plugin design of the Container Network Interface (CNI) is an elegant way to handle multiple implementations of the network stack for containers. This concept can be used for additional interfaces to customize a container's runtime environment. This proposal covers a new interface for resource management on a node with a structured API and plugin design for containers. The big selling point for CNI is that it has a structured interface for modifying the network namespace for a container. This is different from generic hooks as they lack a type safe API injected into the lifecycle of a container. The lifecycle point that CNI and NRI plugins will be injected into is the point between `Create` and `Start` of the container's init process. `Create->NRI->Start` Configuration is split into two parts. One is the payload that is specific to a plugin invocation while the second is the host level configuration and options that specify what plugins to run and provide additional configuration to a plugin. Plugin binary paths can be configured via the consumer but will default to `/opt/nri/bin`. Binaries are named with their type as the binary name, same as the CNI plugin naming scheme. The config's default location will be `/etc/nri/resource.d/*.conf`. ```json { \"version\": \"0.1\", \"plugins\": [ { \"type\": \"konfine\", \"conf\": { \"systemReserved\": [0, 1] } }, { \"type\": \"clearcfs\" } ] } ``` Input to a plugin is provided via `STDIN` as a `json` payload. ```json { \"version\": \"0.1\", \"state\": \"create\", \"id\": \"redis\", \"pid\": 1234, \"spec\": { \"resources\": {}, \"cgroupsPath\": \"default/redis\", \"namespaces\": { \"pid\": \"/proc/44/ns/pid\", \"mount\": \"/proc/44/ns/mnt\", \"net\": \"/proc/44/ns/net\" }, \"annotations\": { \"qos.class\": \"ls\" } } } ``` ```json { \"version\": \"0.1\", \"state\": \"create\", \"id\": \"redis\", \"pid\": 1234, \"cgroupsPath\": \"qos-ls/default/redis\" } ``` Invoke - provides invocations into different lifecycle changes of a container states: `setup|pause|resume|update|delete` A Go based API and client package will be created for both producers of plugins and consumers, commonly being the container runtime (containerd). nri is a containerd sub-project, licensed under the . As a containerd sub-project, you will find the: , , and information in our repository." } ]
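To show the shape of the binary contract, here is a sketch of a plugin that reads the invocation payload from STDIN and echoes a result on STDOUT. The struct fields are inferred from the sample payloads above and are not an official schema; the mutation mirrors the `qos-ls/default/redis` example.

```go
package main

import (
	"encoding/json"
	"os"
)

// request mirrors the sample input payload shown above (fields inferred).
type request struct {
	Version string `json:"version"`
	State   string `json:"state"`
	ID      string `json:"id"`
	Pid     int    `json:"pid"`
	Spec    struct {
		CgroupsPath string            `json:"cgroupsPath"`
		Annotations map[string]string `json:"annotations"`
	} `json:"spec"`
}

// result mirrors the sample output payload shown above.
type result struct {
	Version     string `json:"version"`
	State       string `json:"state"`
	ID          string `json:"id"`
	Pid         int    `json:"pid"`
	CgroupsPath string `json:"cgroupsPath"`
}

func main() {
	var req request
	if err := json.NewDecoder(os.Stdin).Decode(&req); err != nil {
		os.Exit(1) // malformed input: non-zero exit, nothing on STDOUT
	}

	out := result{Version: req.Version, State: req.State, ID: req.ID, Pid: req.Pid,
		CgroupsPath: req.Spec.CgroupsPath}

	// Example mutation: prefix the cgroup path based on a QoS annotation.
	if class, ok := req.Spec.Annotations["qos.class"]; ok {
		out.CgroupsPath = "qos-" + class + "/" + req.Spec.CgroupsPath
	}

	_ = json.NewEncoder(os.Stdout).Encode(out)
}
```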
{ "category": "Runtime", "file_name": "README-v0.1.0.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "Generating man pages from a cobra command is incredibly easy. An example is as follows: ```go package main import ( \"log\" \"github.com/spf13/cobra\" \"github.com/spf13/cobra/doc\" ) func main() { cmd := &cobra.Command{ Use: \"test\", Short: \"my test program\", } header := &doc.GenManHeader{ Title: \"MINE\", Section: \"3\", } err := doc.GenManTree(cmd, header, \"/tmp\") if err != nil { log.Fatal(err) } } ``` That will get you a man page at `/tmp/test.3`." } ]
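The same `doc` package can emit other formats from the same command tree. A short companion sketch that generates Markdown documentation instead (the output directory is arbitrary and must already exist):

```go
package main

import (
	"log"

	"github.com/spf13/cobra"
	"github.com/spf13/cobra/doc"
)

func main() {
	cmd := &cobra.Command{
		Use:   "test",
		Short: "my test program",
	}
	// Writes one Markdown file per command under /tmp/docs.
	if err := doc.GenMarkdownTree(cmd, "/tmp/docs"); err != nil {
		log.Fatal(err)
	}
}
```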
{ "category": "Runtime", "file_name": "man_docs.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Processors are a binary API that works off of content streams. The incoming content stream will be provided to the binary via `STDIN` and the stream processor is expected to output the processed stream on `STDOUT`. If errors are encountered, errors MUST be returned via `STDERR` with a non-zero exit status. Additional information can be provided to stream processors via a payload. Payloads are marshaled as `protobuf.Any` types and can wrap any type of serialized data structure. On Unix systems, the payload, if available, is provided on `fd 3` for the process. On Windows systems, the payload, if available, is provided via a named pipe with the pipe's path set as the value of the environment variable `STREAMPROCESSORPIPE`. To configure stream processors for containerd, entries in the config file need to be made. The `stream_processors` field is a map so that users can chain together multiple processors to mutate content streams. Processor Fields: Key - ID of the processor, used for passing a specific payload to the processor. `accepts` - Accepted media-types for the processor that it can handle. `returns` - The media-type that the processor returns. `path` - Path to the processor binary. `args` - Arguments passed to the processor binary. ```toml version = 2 [stream_processors] [stream_processors.\"io.containerd.processor.v1.pigz\"] accepts = [\"application/vnd.docker.image.rootfs.diff.tar.gzip\"] returns = \"application/vnd.oci.image.layer.v1.tar\" path = \"unpigz\" args = [\"-d\", \"-c\"] ```" } ]
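To make the binary contract concrete, here is a minimal sketch of a pass-through processor in Go. It copies the content stream from STDIN to STDOUT and, if a payload is present on file descriptor 3, reads and discards it; a real processor would unmarshal the `protobuf.Any` payload and actually transform the stream.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

func main() {
	// Optional payload arrives on fd 3 on Unix systems.
	if payload := os.NewFile(3, "payload"); payload != nil {
		io.Copy(io.Discard, payload) // a real processor would decode this
		payload.Close()
	}

	// Incoming content stream on STDIN, processed stream on STDOUT.
	if _, err := io.Copy(os.Stdout, os.Stdin); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // errors go to STDERR with a non-zero exit status
	}
}
```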
{ "category": "Runtime", "file_name": "stream_processors.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "rkt aims to be supported on several Linux distributions. In order to notice distro-specific issues, Continuous Integration should ideally run the tests on several Linux distributions. rkt tests can be intrusive and require full root privileges. Each test should be run on a fresh VM. VMs should not be reused for next tests. Tests run on Jenkins and . The script `tests/aws.sh` can generate a AMI of the specified Linux distribution with all the dependencies rkt needs. First, install and configure it with your AWS credentials. Then, create a key pair and a security group for rkt tests: ``` $ tests/aws.sh setup ``` Then generate an AMI of the specified Linux distribution: ``` $ tests/aws.sh fedora-22 $ tests/aws.sh fedora-23 $ tests/aws.sh fedora-24 $ tests/aws.sh fedora-rawhide $ tests/aws.sh ubuntu-1604 $ tests/aws.sh ubuntu-1510 $ tests/aws.sh debian $ tests/aws.sh centos ``` The generated AMIs can then be used to configure Jenkins. If new packages are needed they can be added to the corresponding cloudinit files in `test/cloudinit`." } ]
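If you want to refresh every image at once, the documented flavors can be looped over in a small wrapper. This assumes the AWS CLI credentials, key pair, and security group from the setup step are already in place.

```bash
#!/usr/bin/env bash
set -e

# One-time setup: key pair and security group for rkt tests.
tests/aws.sh setup

# Rebuild an AMI for each supported flavor.
for flavor in fedora-22 fedora-23 fedora-24 fedora-rawhide \
              ubuntu-1604 ubuntu-1510 debian centos; do
    tests/aws.sh "${flavor}"
done
```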
{ "category": "Runtime", "file_name": "test-on-several-distro.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Status: Accepted To increase the visibility of what a backup might contain, this document proposes storing metadata about backed up resources in object storage and adding a new section to the detailed backup description output to list them. Include a list of backed up resources as metadata in the bucket Enable users to get a view of what resources are included in a backup using the Velero CLI Expose the full manifests of the backed up resources As reported in , the information reported in a `velero backup describe <name> --details` command is fairly limited, and does not easily describe what resources a backup contains. In order to see what a backup might contain, a user would have to download the backup tarball and extract it. This makes it difficult to keep track of different backups in a cluster. After performing a backup, a new file will be created that contains the list of the resources that have been included in the backup. This file will be persisted in object storage alongside the backup contents and existing metadata. A section will be added to the output of `velero backup describe <name> --details` command to view this metadata. This metadata will be in JSON (or YAML) format so that it can be easily inspected from the bucket outside of Velero tooling, and will contain the API resource and group, namespaces and names of the resources: ``` apps/v1/Deployment: default/database default/wordpress v1/Service: default/database default/wordpress v1/Secret: default/database-root-password default/database-user-password v1/ConfigMap: default/database v1/PersistentVolume: my-pv ``` The filename for this metadata will be `<backup name>-resource-list.json.gz`. The top-level key is the string form of the `schema.GroupResource` type that we currently keep track of in the backup controller code path. The Backupper currently initialises a map to track the `backedUpItems` (https://github.com/heptio/velero/blob/1594bdc8d0132f548e18ffcc1db8c4cd2b042726/pkg/backup/backup.go#L269), this is passed down through GroupBackupper, ResourceBackupper and ItemBackupper where ItemBackupper records each backed up item. This property will be moved to the , allowing the BackupController to access it after a successful backup. `backedUpItems` currently uses the `schema.GroupResource` as a key for the" }, { "data": "In order to record the API group, version and kind for the resource, this key will be constructed from the object's `schema.GroupVersionKind` in the format `{group}/{version}/{kind}` (e.g. `apps/v1/Deployment`). The `backedUpItems` map is kept as a flat structure internally for quick lookup. When the backup is ready to upload, `backedUpItems` will be converted to a nested structure representing the metadata file above, grouped by `schema.GroupVersionKind`. After converting to the right format, it can be passed to the `persistBackup` function to persist the file in object storage. A new `DownloadTargetKind` \"BackupResourceList\" will be added to the DownloadRequest CR. The `GetDownloadURL` function in the `persistence` package will be updated to handle this new DownloadTargetKind to enable the Velero client to fetch the metadata from the bucket. This command will need to be updated to fetch the metadata from the bucket using the `Stream` method used in other commands. The file will be read in memory and displayed in the output of the command. Depending on the format the metadata is stored in, it may need processing to print in a more human-readable format. 
If we choose to store the metadata in YAML, it can likely be directly printed out. If the metadata file does not exist, this is an older backup and we cannot display the list of resources that were backed up. Instead of recording new metadata about what resources have been backed up, we could simply download the backup contents archive and walk through it to list the contents every time `velero backup describe <name> --details` is run. The advantage of this approach is that we don't need to change any backup procedures as we already have this content, and we will also be able to list resources for older backups. The disadvantages are: downloading the whole backup archive will be larger than just downloading a smaller file with metadata; and it reduces the metadata available in the bucket that users might want to inspect outside of Velero tooling (though this is not an explicit requirement)" } ]
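A minimal sketch of the conversion step described in the design, assuming the flat map is keyed by a {group}/{version}/{kind} string plus a namespaced name; the types here are hypothetical and the real names in the Velero codebase may differ.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// itemKey mirrors the backedUpItems tracking key (illustrative only).
type itemKey struct {
	resource  string // e.g. "apps/v1/Deployment"
	namespace string // empty for cluster-scoped resources
	name      string
}

// resourceList groups backed-up items by resource, as persisted in
// <backup name>-resource-list.json.gz.
func resourceList(backedUpItems map[itemKey]struct{}) map[string][]string {
	out := map[string][]string{}
	for k := range backedUpItems {
		entry := k.name
		if k.namespace != "" {
			entry = k.namespace + "/" + k.name
		}
		out[k.resource] = append(out[k.resource], entry)
	}
	for _, names := range out {
		sort.Strings(names)
	}
	return out
}

func main() {
	items := map[itemKey]struct{}{
		{"apps/v1/Deployment", "default", "wordpress"}: {},
		{"v1/PersistentVolume", "", "my-pv"}:           {},
	}
	b, _ := json.MarshalIndent(resourceList(items), "", "  ")
	fmt.Println(string(b))
}
```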
{ "category": "Runtime", "file_name": "backup-resource-list.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This page shows you how to deploy a sample site using . to install runsc with Docker. This document assumes that Docker and Docker Compose are installed and the runtime name chosen for gVisor is `runsc`. We'll start by creating the `docker-compose.yaml` file to specify our services. We will specify two services, a `wordpress` service for the WordPress Apache server, and a `db` service for MySQL. We will configure WordPress to connect to MySQL via the `db` service host name. Note: This example uses gVisor to sandbox the frontend web server, but not the MySQL database backend. In a production setup, due to imposed by gVisor, it is not recommended to run your database in a sandbox. The frontend is the critical component with the largest outside attack surface, where gVisor's security/performance trade-off makes the most sense. See the [Production guide] for more details. Note: Docker Compose uses its own network by default and allows services to communicate using their service name. Docker Compose does this by setting up a DNS server at IP address 127.0.0.11 and configuring containers to use it via . This IP is not addressable inside a gVisor sandbox so it's important that we set the DNS IP address to the alternative `8.8.8.8` and use a network that allows routing to it. See for more details. Note: The `runtime` field was removed from services in the 3.x version of the API in versions of docker-compose < 1.27.0. You will need to write your `docker-compose.yaml` file using the 2.x format or use docker-compose >= 1.27.0. See this for more details. ```yaml version: '2.3' services: db: image: mysql:5.7 volumes: db_data:/var/lib/mysql restart: always environment: MYSQL_ROOT_PASSWORD: somewordpress MYSQL_DATABASE: wordpress MYSQL_USER: wordpress MYSQL_PASSWORD: wordpress network_mode: \"bridge\" wordpress: depends_on: db links: db image: wordpress:latest ports: \"8080:80\" restart: always environment: WORDPRESS_DB_HOST: db:3306 WORDPRESS_DB_USER: wordpress WORDPRESS_DB_PASSWORD: wordpress WORDPRESS_DB_NAME: wordpress dns: 8.8.8.8 network_mode: \"bridge\" runtime: \"runsc\" volumes: db_data: {} ``` Once you have a `docker-compose.yaml` in the current directory you can start the containers: ```bash docker-compose up ``` Once the containers have started you can access WordPress at http://localhost:8080. Congrats! You now have a working WordPress site up and running using Docker Compose. Learn how to deploy . Before deploying this to production, see the [Production guide] for how to take full advantage of gVisor." } ]
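To confirm the frontend really is sandboxed, you can check which runtime Docker recorded for the WordPress container. The service name below matches the Compose file above; the generated container name may differ on your system.

```bash
# Show the runtime used for the wordpress service; expect "runsc".
docker inspect --format '{{.HostConfig.Runtime}}' "$(docker-compose ps -q wordpress)"
```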
{ "category": "Runtime", "file_name": "docker-compose.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "Versioning ========== current: The number of the current interface exported by the library. A current value of '1' means that you are calling interface number 1 as exported by this library. revision: The implementation number of the most recent interface exported by this library. In this case, a revision value of `0` means that this is the first implementation of the interface. If the next release of this library exports the same interface, but has a different implementation (perhaps some bugs have been fixed), the revision number will be higher, but the current number will be the same. In that case, when given a choice, the library with the highest revision will always be used by the runtime loader. age: The number of previous additional interfaces supported by this library. If age were '2', then this library can be linked into executables which were built with a release of this library that exported the current interface number, current, or any of the previous two interfaces. By definition age must be less than or equal to current. At the outset, only the first ever interface is implemented, so age can only be '0'. For every release of the library, the `-version-info` argument needs to be set correctly depending on any interface changes you have made. This is quite straightforward when you understand what the three numbers mean: If you have changed any of the sources for this library, the revision number must be incremented. This is a new revision of the current interface. If the interface has changed, then current must be incremented, and revision reset to '0'. This is the first revision of a new interface. If the new interface is a superset of the previous interface (that is, if the previous interface has not been broken by the changes in this new release), then age must be incremented. This release is backwards compatible with the previous release." } ]
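As a concrete illustration, this is how the rules are typically applied to a libtool `-version-info` triple in a build fragment. The library name is a placeholder; the comments walk through the three bump scenarios described above.

```make
# Hypothetical library; the triple is current:revision:age.
libfoo_la_LDFLAGS = -version-info 1:0:0

# Bug-fix only release (same interface, new implementation): 1:0:0 -> 1:1:0
# Backwards-compatible additions (new superset interface):   1:1:0 -> 2:0:1
# Incompatible change (interfaces removed or changed):       2:0:1 -> 3:0:0
```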
{ "category": "Runtime", "file_name": "versioning.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }