content (list) | tag (dict)
---|---
[
{
"data": "title: How the Weave Net Docker Network Plugins Work menu_order: 10 search_type: Documentation The Weave Net legacy plugin actually provides two network drivers to Docker - one named `weavemesh` that can operate without a cluster store and another one named `weave` that can only work with one (like Docker's overlay driver), while the V2 plugin provides one - `weaveworks/net-plugin:latest_release` operating only in swarm mode. Weave Net handles all co-ordination between hosts (referred to by Docker as a \"local scope\" driver) Supports a single network only. A network named `weave` is automatically created for you. Uses Weave Net's partition tolerant IPAM If you do create additional networks using the `weavemesh` driver, containers attached to them will be able to communicate with containers attached to `weave`. There is no isolation between those networks. This runs in what Docker calls \"global scope\", which requires an external cluster store Supports multiple networks that must be created using `docker network create --driver weave ...` Used with Docker's cluster-store-based IPAM There's no specific documentation from Docker on using a cluster store, but the first part of is a good place to start. Note: In the case of multiple networks using the `weave` driver, all containers are on the same virtual network but Docker allocates their addresses on different subnets so they cannot talk to each other directly. The plugin accepts the following options via `docker network create ... --opt`: `works.weave.multicast` -- tells weave to add a static IP route for multicast traffic onto its interface. Note: If you connect a container to multiple Weave networks, at most one of them can have the multicast route enabled. The `weave` network created when the plugin is first launched has the multicast option turned on, but for any networks you create it defaults to off. The driver runs within the plugin V2. Requires Docker to run in . Supports multiple networks. Used with Docker's IPAM. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "plugin-how-it-works.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(network-physical)= <!-- Include start physical intro --> The `physical` network type connects to an existing physical network, which can be a network interface or a bridge, and serves as an uplink network for OVN. <!-- Include end physical intro --> This network type allows to specify presets to use when connecting OVN networks to a parent interface or to allow an instance to use a physical interface as a NIC. In this case, the instance NICs can simply set the `network`option to the network they connect to without knowing any of the underlying configuration details. (network-physical-options)= The following configuration key namespaces are currently supported for the `physical` network type: `bgp` (BGP peer configuration) `dns` (DNS server and resolution configuration) `ipv4` (L3 IPv4 configuration) `ipv6` (L3 IPv6 configuration) `ovn` (OVN configuration) `user` (free-form key/value for user metadata) ```{note} {{noteipaddresses_CIDR}} ``` The following configuration options are available for the `physical` network type: Key | Type | Condition | Default | Description :-- | :-- | :-- | :-- | :-- `gvrp` | bool | - | `false` | Register VLAN using GARP VLAN Registration Protocol `mtu` | integer | - | - | The MTU of the new interface `parent` | string | - | - | Existing interface to use for network `vlan` | integer | - | - | The VLAN ID to attach to `bgp.peers.NAME.address` | string | BGP server | - | Peer address (IPv4 or IPv6) for use by `ovn` downstream networks `bgp.peers.NAME.asn` | integer | BGP server | - | Peer AS number for use by `ovn` downstream networks `bgp.peers.NAME.password` | string | BGP server | - (no password) | Peer session password (optional) for use by `ovn` downstream networks `bgp.peers.NAME.holdtime` | integer | BGP server | `180` | Peer session hold time (in seconds; optional) `dns.nameservers` | string | standard mode | - | List of DNS server IPs on `physical` network `ipv4.gateway` | string | standard mode | - | IPv4 address for the gateway and network (CIDR) `ipv4.ovn.ranges` | string | - | - | Comma-separated list of IPv4 ranges to use for child OVN network routers (FIRST-LAST format) `ipv4.routes` | string | IPv4 address | - | Comma-separated list of additional IPv4 CIDR subnets that can be used with child OVN networks `ipv4.routes.external` setting `ipv4.routes.anycast` | bool | IPv4 address | `false` | Allow the overlapping routes to be used on multiple networks/NIC at the same time `ipv6.gateway` | string | standard mode | - | IPv6 address for the gateway and network (CIDR) `ipv6.ovn.ranges` | string | - | - | Comma-separated list of IPv6 ranges to use for child OVN network routers (FIRST-LAST format) `ipv6.routes` | string | IPv6 address | - | Comma-separated list of additional IPv6 CIDR subnets that can be used with child OVN networks `ipv6.routes.external` setting `ipv6.routes.anycast` | bool | IPv6 address | `false` | Allow the overlapping routes to be used on multiple networks/NIC at the same time `ovn.ingress_mode` | string | standard mode | `l2proxy` | Sets the method how OVN NIC external IPs will be advertised on uplink network: `l2proxy` (proxy ARP/NDP) or `routed` `user.*` | string | - | - | User-provided free-form key/value pairs (network-physical-features)= The following features are supported for the `physical` network type: {ref}`network-bgp`"
}
] |
{
"category": "Runtime",
"file_name": "network_physical.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Using Automatic Discovery With the Weave Net Proxy menu_order: 40 search_type: Documentation Containers launched via the proxy use automatically if it is running when they are started - see the section for an in depth explanation of the behaviour and how to control it. Typically, the proxy passes on container names as-is to weaveDNS for registration. However, there are situations in which the final container name may be out of your control (for example, if you are using Docker orchestrators which append control/namespacing identifiers to the original container names). For those situations, the proxy provides the following flags: `--hostname-from-label<labelkey>` `--hostname-match <regexp>` `--hostname-replacement <replacement>` When launching a container, the hostname is initialized to the value of the container label using key `<labelkey>`. If no `<labelkey>` was provided, then the container name is used. Additionally, the hostname is matched against a regular expression `<regexp>` and based on that match, `<replacement>` is used to obtain the final hostname, and then handed over to weaveDNS for registration. For example, you can launch the proxy using all three flags, as follows: host1$ weave launch --hostname-from-label hostname-label --hostname-match '^aws-[0-9]+-(.*)$' --hostname-replacement 'my-app-$1' host1$ eval $(weave env) Note: regexp substitution groups must be pre-pended with a dollar sign (for example, `$1`). For further details on the regular expression syntax see . After launching the Weave Net proxy with these flags, running a container named `aws-12798186823-foo` without labels results in weaveDNS registering the hostname `my-app-foo` and not `aws-12798186823-foo`. host1$ docker run -ti --name=aws-12798186823-foo weaveworks/ubuntu ping my-app-foo PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data. 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.027 ms 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.067 ms Also, running a container named `foo` with the label `hostname-label=aws-12798186823-foo` leads to the same hostname registration. host1$ docker run -ti --name=foo --label=hostname-label=aws-12798186823-foo weaveworks/ubuntu ping my-app-foo PING my-app-foo.weave.local (10.32.0.2) 56(84) bytes of data. 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=1 ttl=64 time=0.031 ms 64 bytes from my-app-foo.weave.local (10.32.0.2): icmp_seq=2 ttl=64 time=0.042 ms This is because, as explained above, if providing `--hostname-from-label` to the proxy, the specified label takes precedence over the container's name. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "automatic-discovery-proxy.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Debugging virtual-kubelet ========================= Not implemented. virtual-kubelet uses to record traces. These traces include requests on the HTTP API as well as the reconciliation loop which reconciles virtual-kubelet pods with what's in the Kubernetes API server. The granularity of traces may depend on the service provider (e.g. `azure`, `aws`, etc) being used. Traces are collected and then exported to any configured exporter. Built-in exporters currently include: `jaeger` - , supports configuration through environment variables. `JAEGER_ENDPOINT` - Jaeger HTTP Thrift endpoint, e.g. `http://localhost:14268` `JAGERAGENTENDPOINT` - Jaeger agent address, e.g. `localhost:6831` `JAEGER_USER` `JAEGER_PASSWORD` `zpages` - . Currently supports configuration through environment variables, but this interface is not considered stable. ZPAGES_PORT - e.g. `localhost:8080` sets the address to setup the HTTP server to serve zpages on. Will be available at `http://<address>:<port>/debug/tracez` If consuming virtual-kubelet as a library you can configure your own tracing exporter. Traces propagated from other services must be propagated using Zipkin's B3 format. Other formats may be supported in the future. `--trace-exporter` - Sets the exporter to use. Multiple exporters can be specified. If this is unset, traces are not exported. `--trace-service-name` - Sets the name of the service, defaults to `virtual-kubelet` but can be anything. This value is passed to the exporter purely for display purposes. `--trace-tag` - Adds tags in a `<key>=<value>` form which is included with collected traces. Think of this like log tags but for traces. `--trace-sample-rate` - Sets the probability for traces to be recorded. Traces are considered an expensive operation so you may want to set this to a lower value. Range is a value of 0 to 100 where 0 is never trace and 100 is always trace."
}
] |
{
"category": "Runtime",
"file_name": "DEBUGGING.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This design proposal aims to make Velero Uploader configurable by introducing a structured approach for managing Uploader settings. we will define and standardize a data structure to facilitate future additions to Uploader configurations. This enhancement provides a template for extending Uploader-related options. And also includes examples of adding sub-options to the Uploader Configuration. Velero is widely used for backing up and restoring Kubernetes clusters. In various scenarios, optimizing the backup process is essential, future needs may arise for adding more configuration options related to the Uploader component especially when dealing with large datasets. Therefore, a standardized configuration template is required. Extensible Uploader Configuration: Provide an extensible approach to manage Uploader configurations, making it easy to add and modify configuration options related to the Velero uploader. User-friendliness: Ensure that the new Uploader configuration options are easy to understand and use for Velero users without introducing excessive complexity. Expanding to other Velero components: The primary focus of this design is Uploader configuration and does not include extending to other components or modules within Velero. Configuration changes for other components may require separate design and implementation. To achieve extensibility in Velero Uploader configurations, the following key components and changes are proposed: Two new data structures, `UploaderConfigForBackup` and `UploaderConfigForRestore`, will be defined to store Uploader configurations. These structures will include the configuration options related to backup and restore for Uploader: ```go type UploaderConfigForBackup struct { } type UploaderConfigForRestore struct { } ``` The Velero CLI will support an uploader configuration-related flag, allowing users to set the value when creating backups or restores. This value will be stored in the `UploaderConfig` field within the `Backup` CRD and `Restore` CRD: ```go type BackupSpec struct { // UploaderConfig specifies the configuration for the uploader. // +optional // +nullable UploaderConfig *UploaderConfigForBackup `json:\"uploaderConfig,omitempty\"` } type RestoreSpec struct { // UploaderConfig specifies the configuration for the restore. // +optional // +nullable UploaderConfig *UploaderConfigForRestore `json:\"uploaderConfig,omitempty\"` } ``` The configuration specified in `UploaderConfig` needs to be effective for backup and restore both by file system way and data-mover way. Therefore, the `UploaderConfig` field value from the `Backup` CRD should be propagated to `PodVolumeBackup` and `DataUpload` CRDs. We aim for the configurations in PodVolumeBackup to originate not only from UploaderConfig in Backup but also potentially from other sources such as the server or configmap. Simultaneously, to align with the configurations in DataUpload's `DataMoverConfig map[string]string`, we have defined an `UploaderSettings map[string]string` here to record the configurations in PodVolumeBackup. ```go type PodVolumeBackupSpec struct { // UploaderSettings are a map of key-value pairs that should be applied to the // uploader configuration. // +optional // +nullable UploaderSettings map[string]string `json:\"uploaderSettings,omitempty\"` } ``` `UploaderConfig` will be stored in DataUpload's `DataMoverConfig map[string]string` field. 
Also the `UploaderConfig` field value from the `Restore` CRD should be propagated to `PodVolumeRestore` and `DataDownload` CRDs: ```go type PodVolumeRestoreSpec struct { // UploaderSettings are a map of key-value pairs that should be applied to the // uploader"
},
{
"data": "// +optional // +nullable UploaderSettings map[string]string `json:\"uploaderSettings,omitempty\"` } ``` Also `UploaderConfig` will be stored in DataUpload's `DataMoverConfig map[string]string` field. We need to store and retrieve configurations in the PodVolumeBackup and DataUpload structs. This involves type conversion based on the configuration type, storing it in a map[string]string, or performing type conversion from this map for retrieval. PodVolumeRestore and DataDownload are also similar. Adding fields above in CRDs can accommodate any future additions to Uploader configurations by adding new fields to the `UploaderConfigForBackup` or `UploaderConfigForRestore` structures. This section focuses on enabling the configuration for the number of parallel file uploads during backups. below are the key steps that should be added to support this new feature. The Velero CLI will support a `--parallel-files-upload` flag, allowing users to set the `ParallelFilesUpload` value when creating backups. below the sub-option `ParallelFilesUpload` is added into UploaderConfig: ```go // UploaderConfigForBackup defines the configuration for the uploader when doing backup. type UploaderConfigForBackup struct { // ParallelFilesUpload is the number of files parallel uploads to perform when using the uploader. // +optional ParallelFilesUpload int `json:\"parallelFilesUpload,omitempty\"` } ``` Velero Uploader can set upload policies when calling Kopia APIs. In the Kopia codebase, the structure for upload policies is defined as follows: ```go // UploadPolicy describes the policy to apply when uploading snapshots. type UploadPolicy struct { ... MaxParallelFileReads *OptionalInt `json:\"maxParallelFileReads,omitempty\"` } ``` Velero can set the `MaxParallelFileReads` parameter for Kopia's upload policy as follows: ```go curPolicy := getDefaultPolicy() if parallelUpload > 0 { curPolicy.UploadPolicy.MaxParallelFileReads = newOptionalInt(parallelUpload) } ``` As Restic does not support parallel file upload, the configuration would not take effect, so we should output a warning when the user sets the `ParallelFilesUpload` value by using Restic to do a backup. ```go if parallelFilesUpload > 0 { log.Warnf(\"ParallelFilesUpload is set to %d, but Restic does not support parallel file uploads. Ignoring\", parallelFilesUpload) } ``` Roughly, the process is as follows: Users pass the ParallelFilesUpload parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Backup CR. When users perform file system backups, UploaderConfig is passed to the PodVolumeBackup CR. When users use the Data-mover for backups, it is passed to the DataUpload CR. The configuration will be stored in map[string]string type of field in CR. Each respective controller within the CRs calls the uploader, and the ParallelFilesUpload from map in CRs is passed to the uploader. When the uploader subsequently calls the Kopia API, it can use the ParallelFilesUpload to set the MaxParallelFileReads parameter, and if the uploader calls the Restic command it would output one warning log for Restic does not support this feature. In many system files, numerous zero bytes or empty blocks persist, occupying physical storage space. Sparse restore employs a more intelligent approach, including appropriately handling empty blocks, thereby achieving the correct system state. 
This write sparse files mechanism aims to enhance restore efficiency while maintaining restoration"
},
{
"data": "Below are the key steps that should be added to support this new feature. The Velero CLI will support a `--write-sparse-files` flag, allowing users to set the `WriteSparseFiles` value when creating restores with Restic or Kopia uploader. below the sub-option `WriteSparseFiles` is added into UploaderConfig: ```go // UploaderConfigForRestore defines the configuration for the restore. type UploaderConfigForRestore struct { // WriteSparseFiles is a flag to indicate whether write files sparsely or not. // +optional // +nullable WriteSparseFiles *bool `json:\"writeSparseFiles,omitempty\"` } ``` For Restic, it could be enabled by pass the flag `--sparse` in creating restore: ```bash restic restore create --sparse $snapshotID ``` For Kopia, it could be enabled this feature by the `WriteSparseFiles` field in the . ```go fsOutput := &restore.FilesystemOutput{ WriteSparseFiles: uploaderutil.GetWriteSparseFiles(uploaderCfg), } ``` Roughly, the process is as follows: Users pass the WriteSparseFiles parameter and its value through the Velero CLI. This parameter and its value are stored as a sub-option within UploaderConfig and then placed into the Restore CR. When users perform file system restores, UploaderConfig is passed to the PodVolumeRestore CR. When users use the Data-mover for restores, it is passed to the DataDownload CR. The configuration will be stored in map[string]string type of field in CR. Each respective controller within the CRs calls the uploader, and the WriteSparseFiles from map in CRs is passed to the uploader. When the uploader subsequently calls the Kopia API, it can use the WriteSparseFiles to set the WriteSparseFiles parameter, and if the uploader calls the Restic command it would append `--sparse` flag within the restore command. Setting the parallelism of restore operations can improve the efficiency and speed of the restore process, especially when dealing with large amounts of data. The Velero CLI will support a --parallel-files-download flag, allowing users to set the parallelism value when creating restores. when no value specified, the value of it would be the number of CPUs for the node that the node agent pod is running. ```bash velero restore create --parallel-files-download $num ``` below the sub-option parallel is added into UploaderConfig: ```go type UploaderConfigForRestore struct { // ParallelFilesDownload is the number of parallel for restore. // +optional ParallelFilesDownload int `json:\"parallelFilesDownload,omitempty\"` } ``` Velero Uploader can set restore policies when calling Kopia APIs. In the Kopia codebase, the structure for restore policies is defined as follows: ```go // first get concurrrency from uploader config restoreConcurrency, _ := uploaderutil.GetRestoreConcurrency(uploaderCfg) // set restore concurrency into restore options restoreOpt := restore.Options{ Parallel: restoreConcurrency, } // do restore with restore option restore.Entry(..., restoreOpt) ``` Configurable parallel restore is not supported by restic, so we would return one error if the option is configured. 
```go restoreConcurrency, err := uploaderutil.GetRestoreConcurrency(uploaderCfg) if err != nil { return extraFlags, errors.Wrap(err, \"failed to get uploader config\") } if restoreConcurrency > 0 { return extraFlags, errors.New(\"restic does not support parallel restore\") } ``` To enhance extensibility further, the option of storing `UploaderConfig` in a Kubernetes ConfigMap can be explored; this approach would allow the addition and modification of configuration options without the need to modify the CRD."
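Putting the restore-side options above together, a hedged usage example (the backup and restore names are placeholders):

```bash
# Restore with sparse-file writing enabled and 8 parallel downloads
# (the Kopia path honours both; Restic rejects the parallel option).
velero restore create demo-restore --from-backup demo-backup \
  --write-sparse-files --parallel-files-download 8

# Inspect the uploader configuration recorded on the Restore CR.
kubectl -n velero get restore demo-restore -o jsonpath='{.spec.uploaderConfig}'
```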
}
] |
{
"category": "Runtime",
"file_name": "velero-uploader-configuration.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List identities ``` cilium-dbg identity list [LABELS] [flags] ``` ``` --endpoints list identities of locally managed endpoints -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage security identities"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_identity_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Creating storageclass `kubectl apply -f storageclass.yaml` ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-carina-sc provisioner: carina.storage.io parameters: csi.storage.k8s.io/fstype: xfs carina.storage.io/disk-group-name: hdd reclaimPolicy: Delete allowVolumeExpansion: true volumeBindingMode: WaitForFirstConsumer mountOptions: ``` Creating PVC `kubectl apply -f pvc.yaml` ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: raw-block-pvc namespace: carina spec: accessModes: ReadWriteOnce volumeMode: Block resources: requests: storage: 13Gi storageClassName: csi-carina-sc ``` Checking the LV object. ```shell $ kubectl get lv NAME SIZE GROUP NODE STATUS pvc-319c5deb-f637-423b-8b52-42ecfcf0d3b7 7Gi carina-vg-hdd 10.20.9.154 Success ``` mount volume as block in pod`kubectl apply -f pod.yaml` ```yaml apiVersion: v1 kind: Pod metadata: name: carina-block-pod namespace: carina spec: containers: name: centos securityContext: capabilities: add: [\"SYS_RAWIO\"] image: centos:latest imagePullPolicy: \"IfNotPresent\" command: [\"/bin/sleep\", \"infinity\"] volumeDevices: name: data devicePath: /dev/xvda volumes: name: data persistentVolumeClaim: claimName: raw-block-pvc ```"
}
] |
{
"category": "Runtime",
"file_name": "pvc-device.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "sidebar_position: 7 sidebar_label: \"PV and PVC\" The PersistentVolume subsystem provides an API for users and administrators that abstracts details of how storage is provided from how it is consumed. To do this, we introduce two new API resources: PersistentVolume (PV) and PersistentVolumeClaim (PVC). A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes). While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems. Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than size and access modes, without exposing users to the details of how those volumes are implemented. For these needs, there is the StorageClass resource. It is used to mark storage resources and performance, and dynamically provision appropriate PV resources based on PVC demand. After the mechanism of StorageClass and dynamic provisioning developed for storage resources, the on-demand creation of volumes is realized, which is an important step in the automatic management process of shared storage. See also the official documentation provided by Kubernetes:"
}
] |
{
"category": "Runtime",
"file_name": "pv_pvc.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage policy related BPF maps ``` -h, --help help for policy ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Add/update policy entry - Delete a policy entry - Get contents of a policy BPF map - Dump all policy maps"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_policy.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The build process of OpenIO SDS depends on several third-party projects. When building only the SDK, OpenIO only depends on: cmake, make: involved in the build process. bison, flex: generates expression parsers. glib2, glib2-devel curl, libcurl, libcurl-devel json-c, json-c-devel : Now only necessary at the compile time, this is our ASN.1 codec forked from . The purpose of our fork is simply to provide codec for explicitely sized integers (int{8,16,32,64} instead of long int) and GLib-2.0 memory allocations. The forked version is required only when building code prior to version 6.0.0. Building the entire project will require the SDK dependencies, but also: python: Pure python code generator (no dependency), and python modules. python-distutils-extra: required for the installation process httpd, httpd-devel: server base for ECD service (and rawx for code prior to version 6.0.0) apr, apr-util-devel, apr-devel: internally used by rawx modules (prior to version 6.0.0) attr, libattr-devel: we use xattr a lot to stamp rawx chunks and repositories base directory. sqlite, sqlite-devel: base storage for META{0,1,2} services. zeromq3, zeromq3-devel: communication of events between services and forward agents. zookeeper-devel, libzookeeper\\_mt.so: building with distribution's zookeeper client is OK, but the package ships with a lot of dependencies, including the openjdk. We recommand to use the official Oracle/Sun JDK, and to build your own zookeeper client from the source to avoid a huge waste of space and bandwith. python-setuptools python-pbr beanstalkd: you need it to have the event-agent working libapache2-mod-wsgi-py3 (as named on Ubuntu), the WSGI module for apache2 In addition, there some dependencies at runtime (the up-to-date list is in ). You don't need to install them on the system, they will be installed by pip in your virtualenv (see ). python-eventlet python-werkzeug python-gunicorn python-redis python-requests python-simplejson python-cliff python-pyeclib python-futures The account service will require an up and running backend: FoundationDB Generating the documentation will require: epydoc: available in your python virtualenv The Makefile's generation is performed by . The master CMake directives files accepts several"
},
{
"data": "Each option has to be specified on the cmake's command line with the following format: ``` cmake -D${K}=${V} ${SRCDIR} ``` In addition to common cmake options, these specific options are also available: | Directive | Help | | | - | | LD\\_LIBDIR | Path suffix to the installation prefix, to define the default directory for libraries. E.g. \"lib\" or \"lib64\", depending on the architecture. | | STACK\\PROTECTOR | Trigger stack protection code. Only active when CMAKE\\BUILD\\_TYPE is set to \"Debug\" or \"RelWithDebInfo\" | | ALLOW\\_BACKTRACE | generate backtraces in errors. | | FORBID\\_DEPRECATED | define it to turn into errors the warnings for deprecated symbols from the GLib2. | | EXE\\_PREFIX | Defines a prefix to all CLI tool. By default, set to \"sds\". | | SOCKET\\_OPTIMIZED | define if to use socket3 and accept4 syscalls | | SOCKET\\DEFAULT\\LINGER\\ONOFF | (integer value) triggers the onoff value of the SO\\LINGER configuration. | | SOCKET\\DEFAULT\\LINGER\\DELAY | (integer value) set it to the delay in milliseconds, this will the delay part of the SO\\LINGER configuration. | | SOCKET\\DEFAULT\\QUICKACK | boolean | | SOCKET\\DEFAULT\\NODELAY | boolean | Also, some options exist to specify uncommon installation paths. Their format is ``${DEP}INCDIR`` or ``${DEP}LIBDIR``, and ``DEP`` might take the given values ``ASN1C``, ``ATTR``, ``CURL``, ``JSONC``, ``LEVELDB``, ``ZK``, ``ZLIB``, ``ZMQ`` We recommend that you specify the installation directory (especially if you are not root) at this step so you don't need to repeat it when calling ``make install``: ``` cmake -DCMAKEINSTALLPREFIX=$HOME/.local [OTHER CMAKE PARAMETERS] ${SRCDIR} ``` Now that ``cmake`` succeeded, it is time to build and install the binaries with ``make``. ``` make make test make install # or make DESTDIR=${install_dir} install ``` We suggest to install Python dependencies in a virtualenv instead of directly on the system. ``` python3 -m venv oiovenv source oiovenv/bin/activate ``` Then install the python module inside your virtualenv: ``` pip install -e ${SRCDIR} ${SRCDIR}/tools/patch-python-modules.sh ``` Then install FoundationDB with oio-install-fdb.sh ``` ./tools/oio-install-fdb.sh ``` A lot of variables are available, consider reading for more information."
}
] |
{
"category": "Runtime",
"file_name": "BUILD.md",
"project_name": "OpenIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The cluster VPN created by Kilo can also be used by peers as a gateway to access the Internet. In order configure a local machine to use the cluster VPN as a gateway to the Internet, first register the local machine as a peer of the cluster following the steps in the . Once the machine is registered, generate the configuration for the local peer: ```shell PEER=squat # name of the registered peer kgctl showconf peer $PEER > peer.ini ``` Next, the WireGuard configuration must be modified to enable routing traffic for any IP via a node in the cluster. To do so, open the WireGuard configuration in an editor, select a node in the cluster, and set the `AllowedIPs` field of that node's corresponding `peer` section to `0.0.0.0/0, ::/0`: ```shell $EDITOR peer.ini ``` The configuration should now look something like: ```ini [Peer] PublicKey = 2/xU029dz/WtvMZAbnSzmhicl8U1/Y3NYmunRr8EJ0Q= AllowedIPs = 0.0.0.0/0, ::/0 Endpoint = 108.61.142.123:51820 ``` The configuration can then be applied to the local WireGuard interface, e.g. `wg0`: ```shell IFACE=wg0 sudo wg setconf $IFACE peer.ini ``` Next, add routes for the public IPs of the WireGuard peers to ensure that the packets encapsulated by WireGuard are sent through a real interface: ```shell default=$(ip route list all | grep default | awk '{$1=\"\"; print $0}') for ip in $(sudo wg | grep endpoint | awk '{print $2}' | sed 's/\\(.\\+\\):[0-9]\\+/\\1/'); do sudo ip route add $ip $default done ``` Finally, the local machine can be configured to use the WireGuard interface as the device for the default route: ```shell sudo ip route delete default sudo ip route add default dev $IFACE ``` The local machine is now using the selected node as its Internet gateway and the connection can be verified. For example, try finding the local machine's external IP address: ```shell curl https://icanhazip.com ```"
}
] |
{
"category": "Runtime",
"file_name": "vpn-server.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Before digging deeper into the virtualization reference architecture, let's first look at the various GPUDirect use cases in the following table. Were distinguishing between two top-tier use cases where the devices are (1) passthrough and (2) virtualized, where a VM gets assigned a virtual function (VF) and not the physical function (PF). A combination of PF and VF would also be possible. | Device #1 (passthrough) | Device #2 (passthrough) | P2P Compatibility and Mode | | - | -- | -- | | GPU PF | GPU PF | GPUDirect P2P | | GPU PF | NIC PF | GPUDirect RDMA | | MIG-slice | MIG-slice | No GPUDirect P2P | | MIG-slice | NIC PF | GPUDirect RDMA | | PDevice #1 (virtualized) | Device #2 (virtualized) | P2P Compatibility and Mode | | Time-slice vGPU VF | Time-slice vGPU VF | No GPUDirect P2P but NVLINK P2P available | | Time-slice vGPU VF | NIC VF | GPUDirect RDMA | | MIG-slice vGPU | MIG-slice vGPU | No GPUDirect P2P | | MIG-slice vGPU | NIC VF | GPUDirect RDMA | In a virtualized environment we have several distinct features that may prevent Peer-to-peer (P2P) communication of two endpoints in a PCI Express topology. The IOMMU translates IO virtual addresses (IOVA) to physical addresses (PA). Each device behind an IOMMU has its own IOVA memory space, usually, no two devices share the same IOVA memory space but its up to the hypervisor or OS how it chooses to map devices to IOVA spaces. Any PCI Express DMA transactions will use IOVAs, which the IOMMU must translate. By default, all the traffic is routed to the root complex and not issued directly to the peer device. An IOMMU can be used to isolate and protect devices even if virtualization is not used; since devices can only access memory regions that are mapped for it, a DMA from one device to another is not possible. DPDK uses the IOMMU to have better isolation between devices, another benefit is that IOVA space can be represented as a contiguous memory even if the PA space is heavily scattered. In the case of virtualization, the IOMMU is responsible for isolating the device and memory between VMs for safe device assignment without compromising the host and other guest OSes. Without an IOMMU, any device can access the entire system and perform DMA transactions anywhere. The second feature is ACS (Access Control Services), which controls which devices are allowed to communicate with one another and thus avoids improper routing of packets `irrespectively` of whether IOMMU is enabled or not. When IOMMU is enabled, ACS is normally configured to force all PCI Express DMA to go through the root complex so IOMMU can translate it, impacting performance between peers with higher latency and reduced bandwidth. A way to avoid the performance hit is to enable Address Translation Services"
},
{
"data": "ATS-capable endpoints can prefetch IOVA -> PA translations from the IOMMU and then perform DMA transactions directly to another endpoint. Hypervisors enable this by enabling ATS in such endpoints, configuring ACS to enable Direct Translated P2P, and configuring the IOMMU to allow Address Translation requests. Another important factor is that the NVIDIA driver stack will use the PCI Express topology of the system it is running on to determine whether the hardware is capable of supporting P2P. The driver stack qualifies specific chipsets, and PCI Express switches for use with GPUDirect P2P. In virtual environments, the PCI Express topology is flattened and obfuscated to present a uniform environment to the software inside the VM, which breaks the GPUDirect P2P use case. On a bare metal machine, the driver stack groups GPUs into cliques that can perform GPUDirect P2P communication, excluding peer mappings where P2P communication is not possible, prominently if GPUs are attached to multiple CPU sockets. CPUs and local memory banks are referred to as NUMA nodes. In a two-socket server, each of the CPUs has a local memory bank for a total of two NUMA nodes. Some servers provide the ability to configure additional NUMA nodes per CPU, which means a CPU socket can have two NUMA nodes (some servers support four NUMA nodes per socket) with local memory banks and L3 NUMA domains for improved performance. One of the current solutions is that the hypervisor provides additional topology information that the driver stack can pick up and enable GPUDirect P2P between GPUs, even if the virtualized environment does not directly expose it. The PCI Express virtual P2P approval capability structure in the PCI configuration space is entirely emulated by the hypervisor of passthrough GPU devices. A clique ID is provided where GPUs with the same clique ID belong to a group of GPUs capable of P2P communication On vSphere, Azure, and other CPSs, the hypervisor lays down a `topologies.xml` which NCCL can pick up and deduce the right P2P level[^1]. NCCL is leveraging Infiniband (IB) and/or Unified Communication X (UCX) for communication, and GPUDirect P2P and GPUDirect RDMA should just work in this case. The only culprit is that software or applications that do not use the XML file to deduce the topology will fail and not enable GPUDirect ( ) To enable every part of the accelerator stack, we propose a virtualized reference architecture to enable GPUDirect P2P and GPUDirect RDMA for any hypervisor. The idea is split into two parts to enable the right PCI Express topology. The first part builds upon extending the PCI Express virtual P2P approval capability structure to every device that wants to do P2P in some way and groups devices by clique ID. The other part involves replicating a subset of the host topology so that applications running in the VM do not need to read additional information and enable the P2P capability like in the bare-metal use case described above. The driver stack can then deduce automatically if the topology presented in the VM is capable of P2P"
},
{
"data": "We will work with the following host topology for the following sections. It is a system with two converged DPUs, each having an `A100X` GPU and two `ConnectX-6` network ports connected to the downstream ports of a PCI Express switch. ```sh +-00.0-[d8-df]-00.0-[d9-df]--+-00.0-[da-db]--+-00.0 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx network | +-00.1 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx network | \\-00.2 Mellanox Tech MT42822 BlueField-2 SoC Management Interface \\-01.0-[dc-df]-00.0-[dd-df]-08.0-[de-df]-00.0 NVIDIA Corporation GA100 [A100X] +-00.0-[3b-42]-00.0-[3c-42]--+-00.0-[3d-3e]--+-00.0 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx network | +-00.1 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx network | \\-00.2 Mellanox Tech MT42822 BlueField-2 SoC Management Interface \\-01.0-[3f-42]-00.0-[40-42]-08.0-[41-42]-00.0 NVIDIA Corporation GA100 [A100X] ``` The green path highlighted above is the optimal and preferred path for efficient P2P communication. Most of the time, the PCI Express topology is flattened and obfuscated to ensure easy migration of the VM image between different physical hardware `topologies`. In Kata, we can configure the hypervisor to use PCI Express root ports to hotplug the VFIO devices one is passing through. A user can select how many PCI Express root ports to allocate depending on how many devices are passed through. A recent addition to Kata will detect the right amount of PCI Express devices that need hotplugging and bail out if the number of root ports is insufficient. In Kata, we do not automatically increase the number of root ports, we want the user to be in full control of the topology. ```toml hotplug_vfio = \"root-port\" pcierootport = 8 ``` VFIO devices are hotplugged on a PCIe-PCI bridge by default. Hotplug of PCI Express devices is only supported on PCI Express root or downstream ports. With this configuration set, if we start up a Kata container, we can inspect our topology and see the allocated PCI Express root ports and the hotplugged devices. ```sh $ lspci -tv -[0000:00]-+-00.0 Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller +-01.0 Red Hat, Inc. Virtio console +-02.0 Red Hat, Inc. Virtio SCSI +-03.0 Red Hat, Inc. Virtio RNG +-04.0-[01]-00.0 Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 +-05.0-[02]-00.0 Mellanox Technologies MT42822 BlueField-2 integrated ConnectX-6 +-06.0-[03]-00.0 NVIDIA Corporation Device 20b8 +-07.0-[04]-00.0 NVIDIA Corporation Device 20b8 +-08.0-[05]-- +-09.0-[06]-- +-0a.0-[07]-- +-0b.0-[08]-- +-0c.0 Red Hat, Inc. Virtio socket +-0d.0 Red Hat, Inc. Virtio file system +-1f.0 Intel Corporation 82801IB (ICH9) LPC Interface Controller +-1f.2 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller \\-1f.3 Intel Corporation 82801I (ICH9 Family) SMBus Controller ``` For devices with huge BARs (Base Address Registers) like the GPU (we need to configure the PCI Express root port properly and allocate enough memory for mapping), we have added a heuristic to Kata to deduce the right settings. Hence, the BARs can be mapped correctly. This functionality is added to which is part of Kata now. 
```sh $ sudo dmesg | grep BAR [ 0.179960] pci 0000:00:04.0: BAR 7: assigned [io 0x1000-0x1fff] [ 0.179962] pci 0000:00:05.0: BAR 7: assigned [io 0x2000-0x2fff] [ 0.179963] pci 0000:00:06.0: BAR 7: assigned [io 0x3000-0x3fff] [ 0.179964] pci 0000:00:07.0: BAR 7: assigned [io 0x4000-0x4fff] [ 0.179966] pci 0000:00:08.0: BAR 7: assigned [io 0x5000-0x5fff] [ 0.179967] pci"
},
{
"data": "BAR 7: assigned [io 0x6000-0x6fff] [ 0.179968] pci 0000:00:0a.0: BAR 7: assigned [io 0x7000-0x7fff] [ 0.179969] pci 0000:00:0b.0: BAR 7: assigned [io 0x8000-0x8fff] [ 2.115912] pci 0000:01:00.0: BAR 0: assigned [mem 0x13000000000-0x13001ffffff 64bit pref] [ 2.116203] pci 0000:01:00.0: BAR 2: assigned [mem 0x13002000000-0x130027fffff 64bit pref] [ 2.683132] pci 0000:02:00.0: BAR 0: assigned [mem 0x12000000000-0x12001ffffff 64bit pref] [ 2.683419] pci 0000:02:00.0: BAR 2: assigned [mem 0x12002000000-0x120027fffff 64bit pref] [ 2.959155] pci 0000:03:00.0: BAR 1: assigned [mem 0x11000000000-0x117ffffffff 64bit pref] [ 2.959345] pci 0000:03:00.0: BAR 3: assigned [mem 0x11800000000-0x11801ffffff 64bit pref] [ 2.959523] pci 0000:03:00.0: BAR 0: assigned [mem 0xf9000000-0xf9ffffff] [ 2.966119] pci 0000:04:00.0: BAR 1: assigned [mem 0x10000000000-0x107ffffffff 64bit pref] [ 2.966295] pci 0000:04:00.0: BAR 3: assigned [mem 0x10800000000-0x10801ffffff 64bit pref] [ 2.966472] pci 0000:04:00.0: BAR 0: assigned [mem 0xf7000000-0xf7ffffff] ``` The NVIDIA driver stack in this case would refuse to do P2P communication since (1) the topology is not what it expects, (2) we do not have a qualified chipset. Since our P2P devices are not connected to a PCI Express switch port, we need to provide additional information to support the P2P functionality. One way of providing such meta information would be to annotate the container; most of the settings in Kata's configuration file can be overridden via annotations, but this limits the flexibility, and a user would need to update all the containers that he wants to run with Kata. The goal is to make such things as transparent as possible, so we also introduced (Container Device Interface) to Kata. CDI is a[ specification](https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md) for container runtimes to support third-party devices. As written before, we can provide a clique ID for the devices that belong together and are capable of doing P2P. This information is provided to the hypervisor, which will set up things in the VM accordingly. Let's suppose the user wanted to do GPUDirect RDMA with the first GPU and the NIC that reside on the same DPU, one could provide the specification telling the hypervisor that they belong to the same clique. ```yaml cdiVersion: 0.4.0 kind: nvidia.com/gpu devices: name: gpu0 annotations: bdf: 41:00.0 clique-id: 0 containerEdits: deviceNodes: path: /dev/vfio/71\" cdiVersion: 0.4.0 kind: mellanox.com/nic devices: name: nic0 annotations: bdf: 3d:00.0 clique-id: 0 attach-pci: true containerEdits: deviceNodes: path: \"/dev/vfio/66\" ``` Since this setting is bound to the device and not the container we do not need to alter the container just allocate the right resource and GPUDirect RDMA would be set up correctly. Rather than exposing them separately, an idea would be to expose a GPUDirect RDMA device via NFD (Node Feature Discovery) that combines both of them; this way, we could make sure that the right pair is allocated and used more on Kubernetes deployment in the next section. The GPU driver stack is leveraging the PCI Express virtual P2P approval capability, but the NIC stack does not use this now. One of the action items is to enable MOFED to read the P2P approval capability and enable ATS and ACS settings as described"
},
{
"data": "This way, we could enable GPUDirect P2P and GPUDirect RDMA on any topology presented to the VM application. It is the responsibility of the administrator or infrastructure engineer to provide the right information either via annotations or a CDI specification. The other way to represent the PCI Express topology in the VM is to replicate a subset of the topology needed to support the P2P use case inside the VM. Similar to the configuration for the root ports, we can easily configure the usage of PCI Express switch ports to hotplug the devices. ```toml hotplug_vfio = \"switch-port\" pcieswitchport = 8 ``` Each device that is passed through is attached to a PCI Express downstream port as illustrated below. We can even replicate the hosts two DPUs `topologies` with added metadata through the CDI. Most of the time, a container only needs one pair of GPU and NIC for GPUDirect RDMA. This is more of a showcase of what we can do with the power of Kata and CDI. One could even think of adding groups of devices that support P2P, even from different CPU sockets or NUMA nodes, into one container; indeed, the first group is NUMA node 0 (red), and the second group is NUMA node 1 (green). Since they are grouped correctly, P2P would be enabled naturally inside a group, aka clique ID. ```sh $ lspci -tv -[0000:00]-+-00.0 Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller +-01.0 Red Hat, Inc. Virtio console +-02.0 Red Hat, Inc. Virtio SCSI +-03.0 Red Hat, Inc. Virtio RNG +-04.0-[01-04]-00.0-[02-04]--+-00.0-[03]-00.0 NVIDIA Corporation Device 20b8 | \\-01.0-[04]-00.0 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx +-05.0-[05-08]-00.0-[06-08]--+-00.0-[07]-00.0 Mellanox Tech MT42822 BlueField-2 integrated ConnectX-6 Dx | \\-01.0-[08]-00.0 NVIDIA Corporation Device 20b8 +-06.0 Red Hat, Inc. Virtio socket +-07.0 Red Hat, Inc. Virtio file system +-1f.0 Intel Corporation 82801IB (ICH9) LPC Interface Controller +-1f.2 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] \\-1f.3 Intel Corporation 82801I (ICH9 Family) SMBus Controller \\-1f.3 Intel Corporation 82801I (ICH9 Family) SMBus Controller ``` The configuration of using either the root port or switch port can be applied on a per Container or Pod basis, meaning we can switch PCI Express `topologies` on each run of an application. Every hypervisor will have resource limits in terms of how many PCI Express root ports, switch ports, or bridge ports can be created, especially with devices that need to reserve a 4K IO range per PCI specification. Each instance of root or switch port will consume 4K IO of very limited capacity, 64k is the maximum. Simple math brings us to the conclusion that we can have a maximum of 16 PCI Express root ports or 16 PCI Express switch ports in QEMU if devices with IO BARs are used in the PCI Express hierarchy. Additionally, one can have 32 slots on the PCI root bus and a maximum of 256 slots for the complete PCI(e) topology. Per default, QEMU will attach a multi-function device in the last slot on the PCI root bus, ```sh"
},
{
"data": "Intel Corporation 82801IB (ICH9) LPC Interface Controller +-1f.2 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] \\-1f.3 Intel Corporation 82801I (ICH9 Family) SMBus Controller ``` Kata will additionally add `virtio-xxx-pci` devices consuming (5 slots) plus a PCIe-PCI-bridge (1 slot) and a DRAM controller (1 slot), meaning per default, we have already eight slots used. This leaves us 24 slots for adding other devices to the root bus. The problem that arises here is one use-case from a customer that uses recent RTX GPUs with Kata. The user wanted to pass through eight of these GPUs into one container and ran into issues. The problem is that those cards often consist of four individual device nodes: GPU, Audio, and two USB controller devices (some cards have a USB-C output). These devices are grouped into one IOMMU group. Since one needs to pass through the complete IOMMU group into the VM, we need to allocate 32 PCI Express root ports or 32 PCI Express switch ports, which is technically impossible due to the resource limits outlined above. Since all the devices appear as PCI Express devices, we need to hotplug those into a root or switch port. The solution to this problem is leveraging CDI. For each device, add the information if it is going to be hotplugged as a PCI Express or PCI device, which results in either using a PCI Express root/switch port or an ordinary PCI bridge. PCI bridges are not affected by the limited IO range. This way, the GPU is attached as a PCI Express device to a root/switch port and the other three PCI devices to a PCI bridge, leaving enough resources to create the needed PCI Express root/switch ports. For example, were going to attach the GPUs to a PCI Express root port and the NICs to a PCI bridge. ```jsonld cdiVersion: 0.4.0 kind: mellanox.com/nic devices: name: nic0 annotations: bdf: 3d:00.0 clique-id: 0 attach-pci: true containerEdits: deviceNodes: path: \"/dev/vfio/66\" name: nic1 annotations: bdf: 3d:00.1 clique-id: 1 attach-pci: true containerEdits: deviceNodes: path: \"/dev/vfio/67 ``` The configuration is set to use eight root ports for the GPUs and attach the NICs to a PCI bridge which is connected to a PCI Express-PCI bridge which is the preferred way of introducing a PCI topology in a PCI Express machine. ```sh $ lspci -tv -[0000:00]-+-00.0 Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller +-01.0 Red Hat, Inc. Virtio console +-02.0 Red Hat, Inc. Virtio SCSI +-03.0 Red Hat, Inc. Virtio RNG +-04.0-[01]-00.0 NVIDIA Corporation Device 20b8 +-05.0-[02]-00.0 NVIDIA Corporation Device 20b8 +-06.0-[03]-- +-07.0-[04]-- +-08.0-[05]-- +-09.0-[06]-- +-0a.0-[07]-- +-0b.0-[08]-- +-0c.0-[09-0a]-00.0-[0a]--+-00.0 Mellanox Tech MT42822 BlueField-2 ConnectX-6 | \\-01.0 Mellanox Tech MT42822 BlueField-2 ConnectX-6 +-0d.0 Red Hat, Inc. Virtio socket +-0e.0 Red Hat, Inc. Virtio file system +-1f.0 Intel Corporation 82801IB (ICH9) LPC Interface Controller +-1f.2 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller \\-1f.3 Intel Corporation 82801I (ICH9 Family) SMBus Controller ``` The PCI devices will consume a slot of which we have 256 in the PCI(e) topology and leave scarce resources for the needed PCI Express devices."
}
] |
{
"category": "Runtime",
"file_name": "kata-vra.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"ark plugin remove\" layout: docs Remove a plugin Remove a plugin ``` ark plugin remove [NAME | IMAGE] [flags] ``` ``` -h, --help help for remove ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with plugins"
}
] |
{
"category": "Runtime",
"file_name": "ark_plugin_remove.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Run Velero on Azure\" layout: docs To configure Velero on Azure, you: Download an official release of Velero Create your Azure storage account and blob container Create Azure service principal for Velero Install the server If you do not have the `az` Azure CLI 2.0 installed locally, follow the to set it up. Run: ```bash az login ``` Ensure that the VMs for your agent pool allow Managed Disks. If I/O performance is critical, consider using Premium Managed Disks, which are SSD backed. Download the tarball for your client platform. _We strongly recommend that you use an of Velero. The tarballs for each release contain the `velero` command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!_ Extract the tarball: ```bash tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to ``` We'll refer to the directory you extracted to as the \"Velero directory\" in subsequent steps. Move the `velero` binary from the Velero directory to somewhere in your PATH. Velero requires a storage account and blob container in which to store backups. The storage account can be created in the same Resource Group as your Kubernetes cluster or separated into its own Resource Group. The example below shows the storage account created in a separate `Velero_Backups` Resource Group. The storage account needs to be created with a globally unique id since this is used for dns. In the sample script below, we're generating a random name using `uuidgen`, but you can come up with this name however you'd like, following the . The storage account is created with encryption at rest capabilities (Microsoft managed keys) and is configured to only allow access via https. Create a resource group for the backups storage account. Change the location as needed. ```bash AZUREBACKUPRESOURCEGROUP=VeleroBackups az group create -n $AZUREBACKUPRESOURCE_GROUP --location WestUS ``` Create the storage account. ```bash AZURESTORAGEACCOUNT_ID=\"velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\" az storage account create \\ --name $AZURESTORAGEACCOUNT_ID \\ --resource-group $AZUREBACKUPRESOURCE_GROUP \\ --sku Standard_GRS \\ --encryption-services blob \\ --https-only true \\ --kind BlobStorage \\ --access-tier Hot ``` Create the blob container named `velero`. Feel free to use a different name, preferably unique to a single Kubernetes cluster. See the for more"
},
{
"data": "```bash BLOB_CONTAINER=velero az storage container create -n $BLOBCONTAINER --public-access off --account-name $AZURESTORAGEACCOUNTID ``` Set the name of the Resource Group that contains your Kubernetes cluster's virtual machines/disks. WARNING: If you're using , `AZURERESOURCEGROUP` must be set to the name of the auto-generated resource group that is created when you provision your cluster in Azure, since this is the resource group that contains your cluster's virtual machines/disks. ```bash AZURERESOURCEGROUP=<NAMEOFRESOURCE_GROUP> ``` If you are unsure of the Resource Group name, run the following command to get a list that you can select from. Then set the `AZURERESOURCEGROUP` environment variable to the appropriate value. ```bash az group list --query '[].{ ResourceGroup: name, Location:location }' ``` Get your cluster's Resource Group name from the `ResourceGroup` value in the response, and use it to set `$AZURERESOURCEGROUP`. To integrate Velero with Azure, you must create a Velero-specific . Obtain your Azure Account Subscription ID and Tenant ID: ```bash AZURESUBSCRIPTIONID=`az account list --query '[?isDefault].id' -o tsv` AZURETENANTID=`az account list --query '[?isDefault].tenantId' -o tsv` ``` Create a service principal with `Contributor` role. This will have subscription-wide access, so protect this credential. If you'll be using Velero to backup multiple clusters with multiple blob containers, it may be desirable to create a unique username per cluster rather than the default `velero`. Create service principal and let the CLI generate a password for you. Make sure to capture the password. ```bash AZURECLIENTSECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` ``` After creating the service principal, obtain the client id. ```bash AZURECLIENTID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv` ``` Now you need to create a file that contains all the environment variables you just set. The command looks like the following: ``` cat << EOF > ./credentials-velero AZURESUBSCRIPTIONID=${AZURESUBSCRIPTIONID} AZURETENANTID=${AZURETENANTID} AZURECLIENTID=${AZURECLIENTID} AZURECLIENTSECRET=${AZURECLIENTSECRET} AZURERESOURCEGROUP=${AZURERESOURCEGROUP} EOF ``` Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called `velero`, and place a deployment named `velero` in it. ```bash velero install \\ --provider azure \\ --bucket $BLOB_CONTAINER \\ --secret-file ./credentials-velero \\ --backup-location-config resourceGroup=$AZUREBACKUPRESOURCEGROUP,storageAccount=$AZURESTORAGEACCOUNTID \\ --snapshot-location-config apiTimeout=<YOUR_TIMEOUT> ``` Additionally, you can specify `--use-restic` to enable restic support, and `--wait` to wait for the deployment to be ready. (Optional) Specify for the `--backup-location-config` flag. (Optional) Specify for the `--snapshot-location-config` flag. (Optional) Specify for the Velero/restic pods. For more complex installation needs, use either the Helm chart, or add `--dry-run -o yaml` options for generating the YAML representation for the installation."
}
] |
{
"category": "Runtime",
"file_name": "azure-config.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "[NFS-Ganesha] is a user space NFS server that is well integrated with [CephFS] and [RGW] backends. It can export Ceph's filesystem namespaces and Object gateway namespaces over NFSv4 protocol. Rook already orchestrates Ceph filesystem and Object store (or RGW) on Kubernetes (k8s). It can be extended to orchestrate NFS-Ganesha server daemons as highly available and scalable NFS gateway pods to the Ceph filesystem and Object Store. This will allow NFS client applications to use the Ceph filesystem and object store setup by rook. This feature mainly differs from the feature to add NFS as an another storage backend for rook (the general NFS solution) in the following ways: It will use the rook's Ceph operator and not a separate NFS operator to deploy the NFS server pods. The NFS server pods will be directly configured with CephFS or RGW backend setup by rook, and will not require CephFS or RGW to be mounted in the NFS server pod with a PVC. The NFS-Ganesha server settings will be exposed to Rook as a Custom Resource Definition (CRD). Creating the nfs-ganesha CRD will launch a cluster of NFS-Ganesha server pods that will be configured with no exports. The NFS client recovery data will be stored in a Ceph RADOS pool; and the servers will have stable IP addresses by using [k8s Service]. Export management will be done by updating a per-pod config file object in RADOS by external tools and issuing dbus commands to the server to reread the configuration. This allows the NFS-Ganesha server cluster to be scalable and highly available. A running rook Ceph filesystem or object store, whose namespaces will be exported by the NFS-Ganesha server cluster. e.g., ``` kubectl create -f deploy/examples/filesystem.yaml ``` An existing RADOS pool (e.g., CephFS's data pool) or a pool created with a [Ceph Pool CRD] to store NFS client recovery data. Number of active Ganesha servers in the cluster Placement of the Ganesha servers Resource limits (memory, CPU) of the Ganesha server pods Using [SSSD] (System Security Services Daemon) This can be used to connect to many ID mapping services, but only LDAP has been tested Run SSSD in a sidecar The sidecar uses a different container image than ceph, and users should be able to specify it Resources requests/limits should be configurable for the sidecar container Users must be able to specify the SSSD config file Users can add an SSSD config from any standard Kubernetes VolumeMount Older SSSD versions (like those available in CentOS 7) do not support loading config files from `/etc/sssd/conf.d/*`; they must use `/etc/sssd/sssd.conf`. Newer versions support either method. To make configuration as simple as possible to document for users, we only support the `/etc/sssd/sssd.conf` method. This may reduce some configurability, but it is much simpler to document for"
},
{
"data": "For an option that is already complex, the simplicity here is a value. We allow users to specify any VolumeSource. There are two caveats: The file be mountable as `sssd.conf` via a VolumeMount `subPath`, which is how Rook will mount the file into the SSSD sidecar. The file mode must be 0600. Users only need one SSSD conf Volume per CephNFS that has this option path enabled. Users must be able to provide additional files that are referenced in the the SSSD config file Below is an example NFS-Ganesha CRD, `nfs-ganesha.yaml` ```yaml apiVersion: ceph.rook.io/v1 kind: CephNFS metadata: name: mynfs namespace: rook-ceph spec: server: active: 3 placement: resources: priorityClassName: security: sssd: sidecar: image: registry.access.redhat.com/rhel7/sssd:latest sssdConfigFile: volumeSource: # any standard kubernetes volume source configMap: name: rook-ceph-nfs-organization-sssd-config defaultMode: 0600 # mode must be 0600 additionalFiles: subPath: some-dir volumeSource: configMap: name: rook-ceph-nfs-organization-sssd-ca-bundle defaultMode: 0600 # mode must be 0600 for CA certs resources: kerberos: principalName: nfs configFiles: volumeSource: configMap: name: rook-ceph-nfs-organization-krb-conf keytabFile: volumeSource: secret: secretName: rook-ceph-nfs-organization-keytab defaultMode: 0600 # required ``` When the nfs-ganesha.yaml is created the following will happen: Rook's Ceph operator sees the creation of the NFS-Ganesha CRD. The operator creates as many [k8s Deployments] as the number of active Ganesha servers mentioned in the CRD. Each deployment brings up a Ganesha server pod, a replicaset of size 1. The ganesha servers, each running in a separate pod, use a mostly-identical ganesha config (ganesha.conf) with no EXPORT definitions. The end of the file will have it do a %url include on a pod-specific RADOS object from which it reads the rest of its config. The operator creates a k8s service for each of the ganesha server pods to allow each of the them to have a stable IP address. The ganesha server pods constitute an active-active high availability NFS server cluster. If one of the active Ganesha server pods goes down, k8s brings up a replacement ganesha server pod with the same configuration and IP address. The NFS server cluster can be scaled up or down by updating the number of the active Ganesha servers in the CRD (using `kubectl edit` or modifying the original CRD and running `kubectl apply -f <CRD yaml file>`). After loading the basic ganesha config from inside the container, the node will read the rest of its config from an object in RADOS. This allows external tools to generate EXPORT definitions for ganesha. The object will be named \"conf-<metadata.name>.<index>\", where metadata.name is taken from the CRD and the index is internally generated. It will be stored in `rados.pool` and `rados.namespace` from the above CRD. An external consumer will fetch the ganesha server IPs by querying the k8s services of the Ganesha server pods. It should have network access to the Ganesha pods to manually mount the shares using a NFS"
},
{
"data": "Later, support will be added to allow user pods to easily consume the NFS shares via PVCs. The NFS shares exported by rook's ganesha server pods can be consumed by [OpenStack] cloud's user VMs. To do this, OpenStack's shared file system service, [Manila] will provision NFS shares backed by CephFS using rook. Manila's [CephFS driver] will create NFS-Ganesha CRDs to launch ganesha server pods. The driver will dynamically add or remove exports of the ganesha server pods based on OpenStack users' requests. The OpenStack user VMs will have network connectivity to the ganesha server pods, and manually mount the shares using NFS clients. NFS-Ganesha requires DBus. Run DBus as a sidecar container so that it can be restarted if the process fails. The `/run/dbus` directory must be shared between Ganesha and DBus. SSSD is able to provide user ID mapping to NFS-Ganesha. It can integrate with LDAP, Active Directory, and FreeIPA. Prototype information detailed on Rook blog: https://blog.rook.io/prototyping-an-nfs-connection-to-ldap-using-sssd-7c27f624f1a4 NFS-Ganesha (via libraries within its container) is the client to SSSD. As of Ceph v17.2.3, the Ceph container image does not have the `sssd-client` package installed which is required for supporting SSSD. It is available starting from Ceph v17.2.4. The following directories must be shared between SSSD and the NFS-Ganesha container: `/var/lib/sss/pipes`: this directory holds the sockets used to communicate between client and SSSD `/var/lib/sss/mc`: this is a memory-mapped \"L0\" cache shared between client and SSSD The following directories should not be shared between SSSD and other containers: `/var/lib/sss/db`: this is a memory-mapped \"L1\" cache that is intended to survive reboots `/run/dbus`: using the DBus instance from the sidecar caused SSSD errors in testing. SSSD only uses DBus for internal communications and creates its own socket as needed. Kerberos is the authentication mechanism natively supported by NFS-Ganesha. The Kerberos service principal used by NFS-Ganesha to authenticate with the Kerberos server is built up from 3 components: a configured PrincipalName (from `spec.security.kerberos.principalName` in Rook) that acts as the service name the hostname of the server on which NFS-Ganesha is running (using kernel call `getaddrinfo()`) the realm (as configured by the user's krb.conf file(s)) The full service principal name is constructed as `<principalName>/<hostname>@<realm>`. Users must add this service principal to their Kerberos server configuration. Therefore, this principal must be static, and for the benefit of users it should be deterministic. The hostname of Kubernetes pods is the partly-random name of the pod by default. In order to give NFS-Ganesha server pods a deterministic hostname, the `hostname` field of the pod spec will be set to the namespace plus name of the CephNFS resource. This also means that all servers will be able to use the same service principal, which will be valuable for auto-scaling NFS servers in the"
},
{
"data": "The principal then becomes easy to construct from known CephNFS fields as `<principalName>/<namespace>-<name>@<realm>`. Additionally, `getaddrinfo()` doesn't return the hostname by default because the default `resolv.conf` in the pod does not check localhost. The pod's DNS config must be updated as shown. ```yaml dnsConfig: searches: localhost ``` Volumes that should be mounted into nfs-ganesha container to support Kerberos: `keytabFile` volume: use `subPath` on the mount to add the `krb5.keytab` file to `/etc/krb5.keytab` `configFiles` volume: mount (without `subPath`) to `/etc/krb5.conf.rook/` to allow all files to be mounted (e.g., if a ConfigMap has multiple data items or hostPath has multiple conf.d files) Should add configuration. Docs say Active_krb5 is default true if krb support is compiled in, but most examples have this explicitly set. Default PrincipalName is \"nfs\". Default for keytab path is reportedly empty. Rook can use `/etc/krb5.keytab`. Create a new RADOS object named `kerberos` to configure Kerberos. ```ini NFS_KRB5 { PrincipalName = nfs ; # <-- set from spec.security.kerberos.principalName (or \"nfs\" if unset) KeytabPath = /etc/krb5.keytab ; Active_krb5 = YES ; } ``` Add the following line to to the config object (`conf-nfs.${nfs-name}`) to reference the new `kerberos` RADOS object. Remove this line from the config object if Kerberos is disabled. ``` %url \"rados://.nfs/${nfs-name}/kerberos\" ``` These steps can be done from the Rook operator. Rook should take steps to remove any default configuration. This means that it should create its own minimal krb.conf and ensure that any imported directories are empty. While it might be nice for some users to include the default configurations that are present in the container, it is extremely difficult in practice to adequately document the interactions between defaults that may change from time to time in the container image (or between different distros), and users will have to expend mental effort to understand how their configurations may override defaults. Upstream documentation for krb5.conf is unclear about file ordering and override behavior. Therefore, Rook will rely on users to specify nearly all of the configuration which will ensure users are able to easily supply the configuration they require. The minimal `/etc/krb5.conf` file Rook will create is as so: ```ini includedir /etc/krb5.conf.rook/ # include all user-defined config files [logging] default = STDERR # only log to stderr by default ``` Currently the `ceph nfs ...` CLI tool is unable to create exports with Kerberos security enabled. Users must manually add it by modifying the raw RADOS export object. This should be documented. Example export (with `sectype` manually added): ```ini EXPORT { FSAL { name = \"CEPH\"; user_id = \"nfs.my-nfs.1\"; filesystem = \"myfs\"; secretaccesskey = \"AQBsPf1iNXTRKBAAtw+D5VzFeAMV4iqbfI0IBA==\"; } export_id = 1; path = \"/\"; pseudo = \"/test\"; access_type = \"RW\"; squash = \"none\"; attrexpirationtime = 0; security_label = true; protocols = 4; transports = \"TCP\"; sectype = krb5,krb5i,krb5p; # <-- not included in ceph nfs exports by default } ``` <! LINKS >"
}
] |
{
"category": "Runtime",
"file_name": "ceph-nfs-ganesha.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Kata Containers on Google Compute Engine (GCE) makes use of . Most of the installation procedure is identical to that for Kata on your preferred distribution, but enabling nested virtualization currently requires extra steps on GCE. This guide walks you through creating an image and instance with nested virtualization enabled. Note that `kata-runtime check` checks for nested virtualization, but does not fail if support is not found. As a pre-requisite this guide assumes an installed and configured instance of the . For a zero-configuration option, all of the commands below were been tested under (as of Jun 2018). Verify your `gcloud` installation and configuration: ```bash $ gcloud info || { echo \"ERROR: no Google Cloud SDK\"; exit 1; } ``` VM images on GCE are grouped into families under projects. Officially supported images are automatically discoverable with `gcloud compute images list`. That command produces a list similar to the following (likely with different image names): ```bash $ gcloud compute images list NAME PROJECT FAMILY DEPRECATED STATUS centos-7-v20180523 centos-cloud centos-7 READY coreos-stable-1745-5-0-v20180531 coreos-cloud coreos-stable READY cos-beta-67-10575-45-0 cos-cloud cos-beta READY cos-stable-66-10452-89-0 cos-cloud cos-stable READY debian-9-stretch-v20180510 debian-cloud debian-9 READY rhel-7-v20180522 rhel-cloud rhel-7 READY sles-11-sp4-v20180523 suse-cloud sles-11 READY ubuntu-1604-xenial-v20180522 ubuntu-os-cloud ubuntu-1604-lts READY ubuntu-1804-bionic-v20180522 ubuntu-os-cloud ubuntu-1804-lts READY ``` Each distribution has its own project, and each project can host images for multiple versions of the distribution, typically grouped into families. We recommend you select images by project and family, rather than by name. This ensures any scripts or other automation always works with a non-deprecated image, including security updates, updates to GCE-specific scripts, etc. The following example (substitute your preferred distribution project and image family) produces an image with nested virtualization enabled in your currently active GCE project: ```bash $ SOURCEIMAGEPROJECT=ubuntu-os-cloud $ SOURCEIMAGEFAMILY=ubuntu-1804-lts $ IMAGENAME=${SOURCEIMAGE_FAMILY}-nested $ gcloud compute images create \\ --source-image-project $SOURCEIMAGEPROJECT \\ --source-image-family $SOURCEIMAGEFAMILY \\ --licenses=https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx \\ $IMAGE_NAME ``` If successful, `gcloud` reports that the image was created. Verify that the image has the nested virtualization license with `gcloud compute images describe"
},
{
"data": "This produces output like the following (some fields have been removed for clarity and to redact personal info): ```yaml diskSizeGb: '10' kind: compute#image licenseCodes: '1002001' '5926592092274602096' licenses: https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts name: ubuntu-1804-lts-nested sourceImage: https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/images/ubuntu-1804-bionic-v20180522 sourceImageId: '3280575157699667619' sourceType: RAW status: READY ``` The primary criterion of interest here is the presence of the `enable-vmx` license. Without that licence Kata will not work. Without that license Kata does not work. The presence of that license instructs the Google Compute Engine hypervisor to enable Intel's VT-x instructions in virtual machines created from the image. Note that nested virtualization is only available in VMs running on Intel Haswell or later CPU micro-architectures. Assuming you created a nested-enabled image using the previous instructions, verify that VMs created from this image are VMX-enabled with the following: Create a VM from the image created previously: ```bash $ gcloud compute instances create \\ --image $IMAGE_NAME \\ --machine-type n1-standard-2 \\ --min-cpu-platform \"Intel Broadwell\" \\ kata-testing ``` NOTE: In most zones the `--min-cpu-platform` argument can be omitted. It is only necessary in GCE Zones that include hosts based on Intel's Ivybridge platform. Verify that the VMX CPUID flag is set: ```bash $ gcloud compute ssh kata-testing $ [ -z \"$(lscpu|grep GenuineIntel)\" ] && { echo \"ERROR: Need an Intel CPU\"; exit 1; } ``` If this fails, ensure you created your instance from the correct image and that the previously listed `enable-vmx` license is included. The process for installing Kata itself on a virtualization-enabled VM is identical to that for bare metal. For detailed information to install Kata on your distribution of choice, see the . Optionally, after installing Kata, create an image to preserve the fruits of your labor: ```bash $ gcloud compute instances stop kata-testing $ gcloud compute images create \\ --source-disk kata-testing \\ kata-base ``` The result is an image that includes any changes made to the `kata-testing` instance as well as the `enable-vmx` flag. Verify this with `gcloud compute images describe kata-base`. The result, which omits some fields for clarity, should be similar to the following: ```yaml diskSizeGb: '10' kind: compute#image licenseCodes: '1002001' '5926592092274602096' licenses: https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx https://www.googleapis.com/compute/v1/projects/ubuntu-os-cloud/global/licenses/ubuntu-1804-lts name: kata-base selfLink: https://www.googleapis.com/compute/v1/projects/my-kata-project/global/images/kata-base sourceDisk: https://www.googleapis.com/compute/v1/projects/my-kata-project/zones/us-west1-a/disks/kata-testing sourceType: RAW status: READY ```"
}
] |
{
"category": "Runtime",
"file_name": "gce-installation-guide.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- toc --> - <!-- /toc --> Kubernetes provides the NetworkPolicy API as a simple way for developers to control traffic flows of their applications. While NetworkPolicy is embraced throughout the community, it was designed for developers instead of cluster admins. Therefore, traits such as the lack of explicit deny rules make securing workloads at the cluster level difficult. The Network Policy API working group (subproject of Kubernetes SIG-Network) has then introduced the which aims to solve the cluster admin policy usecases. Starting with v1.13, Antrea supports the `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` API types, except for advanced Namespace selection mechanisms (namely `sameLabels` and `notSameLabels` rules) which are still in the experimental phase and not required as part of conformance. AdminNetworkPolicy was introduced in v1.13 as an alpha feature and is disabled by default. A feature gate, `AdminNetworkPolicy`, must be enabled in antrea-controller.conf in the `antrea-config` ConfigMap when Antrea is deployed: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: antrea-config namespace: kube-system data: antrea-controller.conf: | featureGates: AdminNetworkPolicy: true ``` Note that the `AdminNetworkPolicy` feature also requires the `AntreaPolicy` featureGate to be set to true, which is enabled by default since Antrea v1.0. In addition, the AdminNetworkPolicy CRD types need to be installed in the K8s cluster. Refer to for more information. Please refer to the of the network-policy-api repo, which contains several user stories for the AdminNetworkPolicy APIs, as well as sample specs for each of the user story. Shown below are sample specs of `AdminNetworkPolicy` and `BaselineAdminNetworkPolicy` for demonstration purposes: ```yaml apiVersion: policy.networking.k8s.io/v1alpha1 kind: AdminNetworkPolicy metadata: name: cluster-wide-deny-example spec: priority: 10 subject: namespaces: matchLabels: kubernetes.io/metadata.name: sensitive-ns ingress: action: Deny from: namespaces: namespaceSelector: {} name: select-all-deny-all ``` ```yaml apiVersion: policy.networking.k8s.io/v1alpha1 kind: BaselineAdminNetworkPolicy metadata: name: default spec: subject: namespaces: {} ingress: action: Deny # zero-trust cluster default security posture from: namespaces: namespaceSelector: {} ``` Note that for a single cluster, the `BaselineAdminNetworkPolicy` resource is supported as a singleton with the name of `default`. AdminNetworkPolicy API objects and Antrea-native policies can co-exist with each other in the same cluster. AdminNetworkPolicy and BaselineAdminNetworkPolicy API types provide K8s upstream supported, cluster admin facing guardrails that are portable and CNI-agnostic. AntreaClusterNetworkPolicy and AntreaNetworkPolicy on the other hand, are designed for similar use cases but provide a richer feature set, including FQDN policies, nodeSelectors and L7 rules. See the and for details. Both the AdminNetworkPolicy object and Antrea-native policy objects use a `priority` field to determine its precedence compared to other policy objects. 
The following diagram describes the relative precedence between the AdminNetworkPolicy API types and Antrea-native policy types: ```text Antrea-native Policies (tier != baseline) > AdminNetworkPolicies > K8s NetworkPolicies > Antrea-native Policies (tier == baseline) > BaselineAdminNetworkPolicy ``` In other words, any Antrea-native policies that are not created in the `baseline` tier will have higher precedence over, and thus evaluated before, all AdminNetworkPolicies at any `priority`. Effectively, the AdminNetworkPolicy objects are associated with a tier priority lower than Antrea-native policies, but higher than K8s NetworkPolicies. Similarly, baseline-tier Antrea-native policies will have a higher precedence over the BaselineAdminNetworkPolicy object. For more information on policy and rule precedence, refer to ."
}
] |
{
"category": "Runtime",
"file_name": "admin-network-policy.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Previous change logs can be found at Optimize optimize build script, optimize log printing. Translate some document and code comment from Chinese to English. ansible script improve: - <hr> <hr>"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.3.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Configure the default OSD backend via the `cephCluster` CR so that new OSDs are built with it. Existing OSDs should be able to migrate to use a new backend store configured via `cephCluster` CR. The migration process should include destroying the existing OSDs one by one, wiping the drive, then recreating a new OSD on that drive with the same ID. Ceph uses BlueStore as a special-purpose storage backend designed specifically for Ceph OSD workloads. The upstream Ceph team is working on improving BlueStore in ways that will significantly improve performance, especially in disaster recovery scenarios. This new OSD backend is known as `bluestore-rdr`. In the future, other backends will also need to be supported, such as `seastore` from the Crimson effort. To employ a new torage backend, the existing OSD needs to be destroyed, drives wiped, and new OSD created using the specified storage backend. Newly provisioned clusters should be able to use the specific storage backend when creating OSDs. Existing clusters should be able to migrate existing OSDs to a specific backend, without downtime and without risking data loss. Note that this necessarily entails redeploying and backfilling each OSD in turn. Migration of OSDs with following configurations is deferred for now and will considered in the future: Hybrid OSDs where metadata (RocksDB+WAL) is placed on faster storage media and application data on slower media. Setups with multiple OSDs per drive, though with recent Ceph releases the motivation for deploying this way is mostly obviated. Add `spec.storage.store` in the Ceph cluster YAML. ```yaml storage: store: type: bluestore-rdr updateStore: yes-really-update-store ``` `type`: The backend to be used for OSDs: `bluestore`, `bluestore-rdr`, etc. The default type will be `bluestore` `updateStore`: Allows the operator to migrate existing OSDs to a different backend. This field can only take the value `yes-really-update-store`. If the user wants to change the `store.type` field for an existing cluster, they will also need to update `spec.storage.store.updateStore` with `yes-really-update-store`. Add `status.storage.osd` to the Ceph cluster status. This will help convey the progress of OSD migration ``` yaml status: storage: osd: storeType: bluestore: 3 bluestore-rdr: 5 ``` `storeType.bluestore`: Total number of BlueStore OSDs running `storeType.bluestore-rdr`: Total number of BlueStore-rdr OSDs running The cluster's `phase` should be set to `progressing` while OSDs are migrating The migration process will involve destroying existing OSDs one by one, wiping the drives, deploying a new OSD with the same ID, then waiting for all PGs to be `active+clean` before migrating the next OSD. Since this operation involves possible impact or downtime, users should be exercise caution before proceeding with this action. NOTE: Once the OSDs are migrated to a new backend, say `bluestore-rdr`, they won't be allowed to be migrated back to the legacy store (BlueStore). Add a new label `osd-store:<osd store type>` to all OSD"
},
{
"data": "This label will help to identify the current backend being used for the OSD and enable the operator to determine if the OSD should be migrated if the the admin changes the backend type in the spec (`spec.Storage.store.type`). Set the OSD backend provided by the user in `spec.storage.store.type` as an environment variable in the OSD prepare job. If no OSD store is provided in the spec, then set the environment variable to `bluestore`. The prepare pod will use this environment variable when preparing OSDs with the `ceph-volume` command. RAW MODE ``` ceph-volume raw prepare <OSDSTOREENV_VARIABLE> --data /dev/vda ``` LVM MODE ``` ceph-volume lvm prepare <OSDSTOREENV_VARIABLE> --data /dev/vda ``` The ceph-volume `activate` command doesn't require the OSD backend to be passed as an argument. It auto-detects the backend that was used during when the OSD was prepared. After upgrading an existing Rook Ceph cluster to a Ceph release that supports `bluestore-rdr`, admins can migrate `bluestore` OSDs to `bluestore-rdr`. The backend of OSDs can not be overridden, so this update will require the OSDs to be replaced one by one. In order to migrate OSDs to use `bluestore-rdr`, admins must patch the Ceph cluster spec as below: ```yaml storage: store: type: bluestore-rdr updateStore: yes-really-update-store ``` The operator's reconciler will replace one OSD at a time. A configmap will be used to store the OSD ID currently being migrated. OSD replacement steps: List all OSDs where `osd-store:<osd store type>` does not match `spec.storage.store.type`. If all PGs are not `active+clean`, do not proceed. If all PGs are `active+clean` but a previous OSD replacement is not completed, do not proceed. If all PGs are `active+clean` and no replacement is in progress, then select an OSD to be migrated. Delete the OSD deployment. Create an OSD prepare job with an environment variable indictating the OSD ID to be replaced. The OSD prepare pod will destroy the OSD and prepare it again using the same OSD ID. Refer for details. Once the destroyed OSD pod is recreated, delete the configmap. If there is any error during the OSD migration, then preserve the OSD ID being replaced in the configmap for next reconcile. Reconcile the operator and perform as same steps until all the OSDs have migrated to the new backend. The OSD prepare pod job will destroy an OSD using following steps: Check for OSD ID environment variable of the OSD to be destroyed. Use `ceph volume list` to fetch the OSD path. Destroy the OSD using following command: ``` ceph osd destroy <OSD_ID> --yes-i-really-mean-it ``` Wipe the OSD drive. This removes all the data on the device. Prepare the OSD with the new store type by using the same OSD ID. This is done by passing the OSD ID as `--osd-id` flag to `ceph-volume` command. ``` ceph-volume lvm prepare --osd-id <OSDID> --data <OSDPATH> ``` These changes require significant development effort to migrate existing OSDs to use a new backend. They will be divided into following phases: New OSDs (greenfield). Migrating existing OSDs on PVC without metadata devices. Migrating existing OSDs on PVC with metadata devices. Existing node-based OSDs (multiple OSDs are created at once via `ceph-volume batch` which adds additional complications)."
}
] |
{
"category": "Runtime",
"file_name": "osd-migration.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List ip-masq-agent CIDRs List ip-masq-agent CIDRs. Packets sent from pods to IPs from these CIDRs avoid masquerading. ``` cilium-dbg bpf ipmasq list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - ip-masq-agent CIDRs"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_ipmasq_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Get information of all metaNodes, including ID, address, read/write status, and survival status. ```bash cfs-cli metanode list ``` Show basic information of the metaNode, including status, usage, and the ID of the partition it carries. ```bash cfs-cli metanode info [Address] ``` Decommission the metaNode. The partitions on this node will be automatically transferred to other available nodes. ```bash cfs-cli metanode decommission [Address] ``` Transfer the meta partition on the source metaNode to the target metaNode. ```bash cfs-cli metanode migrate [srcAddress] [dstAddress] ```"
}
] |
{
"category": "Runtime",
"file_name": "metanode.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Generating bash completions from a cobra command is incredibly easy. An actual program which does so for the kubernetes kubectl binary is as follows: ```go package main import ( \"io/ioutil\" \"os\" \"github.com/GoogleCloudPlatform/kubernetes/pkg/kubectl/cmd\" ) func main() { kubectl := cmd.NewFactory(nil).NewKubectlCommand(os.Stdin, ioutil.Discard, ioutil.Discard) kubectl.GenBashCompletionFile(\"out.sh\") } ``` `out.sh` will get you completions of subcommands and flags. Copy it to `/etc/bash_completion.d/` as described and reset your terminal to use autocompletion. If you make additional annotations to your code, you can get even more intelligent and flexible behavior. Some more actual code that works in kubernetes: ```bash const ( bashcompletionfunc = `kubectlparseget() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } kubectlgetresource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi kubectlparseget ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } custom_func() { case ${last_command} in kubectlget | kubectldescribe | kubectldelete | kubectlstop) kubectlgetresource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bashcompletionfunc, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `custom_func()` to be called when the built in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod ` the `customcfunc()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `customfunc()` will see that the cobra.Command is \"kubectlget\" and will thus call another helper `kubectlgetresource()`. `kubectlgetresource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `kubectlparseget pod`. `kubectlparse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! In the above example \"pod\" was assumed to already be typed. But if you want `kubectl get ` to show a list of valid \"nouns\" you have to set them. Simplified code from `kubectl get` looks like: ```go validArgs []string = { \"pod\", \"node\", \"service\", \"replicationcontroller\" } cmd := &cobra.Command{ Use: \"get [(-o|--output=)json|yaml|template|...] (RESOURCE [NAME] | RESOURCE/NAME"
},
{
"data": "Short: \"Display one or many resources\", Long: get_long, Example: get_example, Run: func(cmd *cobra.Command, args []string) { err := RunGet(f, out, cmd, args) util.CheckErr(err) }, ValidArgs: validArgs, } ``` Notice we put the \"ValidArgs\" on the \"get\" subcommand. Doing so will give results like ```bash node pod replicationcontroller service ``` If your nouns have a number of aliases, you can define them alongside `ValidArgs` using `ArgAliases`: ```go argAliases []string = { \"pods\", \"nodes\", \"services\", \"svc\", \"replicationcontrollers\", \"rc\" } cmd := &cobra.Command{ ... ValidArgs: validArgs, ArgAliases: argAliases } ``` The aliases are not shown to the user on tab completion, but they are accepted as valid nouns by the completion algorithm if entered manually, e.g. in: ```bash backend frontend database ``` Note that without declaring `rc` as an alias, the completion algorithm would show the list of nouns in this example again instead of the replication controllers. Most of the time completions will only show subcommands. But if a flag is required to make a subcommand work, you probably want it to show up when the user types . Marking a flag as 'Required' is incredibly easy. ```go cmd.MarkFlagRequired(\"pod\") cmd.MarkFlagRequired(\"container\") ``` and you'll get something like ```bash -c --container= -p --pod= ``` In this example we use --filename= and expect to get a json or yaml file as the argument. To make this easier we annotate the --filename flag with valid filename extensions. ```go annotations := []string{\"json\", \"yaml\", \"yml\"} annotation := make(mapstring) annotation[cobra.BashCompFilenameExt] = annotations flag := &pflag.Flag{ Name: \"filename\", Shorthand: \"f\", Usage: usage, Value: value, DefValue: value.String(), Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` Now when you run a command with this filename flag you'll get something like ```bash test/ example/ rpmbuild/ hello.yml test.json ``` So while there are many other files in the CWD it only shows me subdirs and those with valid extensions. Similar to the filename completion and filtering using cobra.BashCompFilenameExt, you can specifiy a custom flag completion function with cobra.BashCompCustom: ```go annotation := make(mapstring) annotation[cobra.BashCompFilenameExt] = []string{\"kubectlgetnamespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition add the `handlenamespaceflag` implementation in the `BashCompletionFunction` value, e.g.: ```bash kubectlgetnamespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out}[*]\" -- \"$cur\" ) ) fi } ```"
}
] |
{
"category": "Runtime",
"file_name": "bash_completions.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Object Bucket Notifications Rook supports the creation of bucket notifications via two custom resources: a `CephBucketNotification` is a custom resource the defines: topic, events and filters of a bucket notification, and is described by a Custom Resource Definition (CRD) shown below. Bucket notifications are associated with a bucket by setting labels on the Object Bucket claim (OBC). See the Ceph documentation for detailed information: . a `CephBucketTopic` is custom resource which represents a bucket notification topic and is described by a CRD shown below. A bucket notification topic represents an endpoint (or a \"topic\" inside this endpoint) to which bucket notifications could be sent. A CephBucketNotification defines what bucket actions trigger the notification and which topic to send notifications to. A CephBucketNotification may also define a filter, based on the object's name and other object attributes. Notifications can be associated with buckets created via ObjectBucketClaims by adding labels to an ObjectBucketClaim with the following format: ```yaml bucket-notification-<notification name>: <notification name> ``` The CephBucketTopic, CephBucketNotification and ObjectBucketClaim must all belong to the same namespace. If a bucket was created manually (not via an ObjectBucketClaim), notifications on this bucket should also be created manually. However, topics in these notifications may reference topics that were created via CephBucketTopic resources. A CephBucketTopic represents an endpoint (of types: Kafka, AMQP0.9.1 or HTTP), or a specific resource inside this endpoint (e.g a Kafka or an AMQP topic, or a specific URI in an HTTP server). The CephBucketTopic also holds any additional info needed for a CephObjectStore's RADOS Gateways (RGW) to connect to the endpoint. Topics don't belong to a specific bucket or notification. Notifications from multiple buckets may be sent to the same topic, and one bucket (via multiple CephBucketNotifications) may send notifications to multiple topics. Notifications may be sent synchronously, as part of the operation that triggered them. In this mode, the operation is acknowledged only after the notification is sent to the topics configured endpoint, which means that the round trip time of the notification is added to the latency of the operation itself. The original triggering operation will still be considered as successful even if the notification fail with an error, cannot be delivered or times out. Notifications may also be sent asynchronously. They will be committed into persistent storage and then asynchronously sent to the topics configured endpoint. In this case, the only latency added to the original operation is of committing the notification to persistent storage. If the notification fail with an error, cannot be delivered or times out, it will be retried until successfully acknowledged. ```yaml apiVersion:"
},
{
"data": "kind: CephBucketTopic metadata: name: my-topic [1] namespace: my-app-space [2] spec: objectStoreName: my-store [3] objectStoreNamespace: rook-ceph [4] opaqueData: [email protected] [5] persistent: false [6] endpoint: [7] http: [8] uri: http://my-notification-endpoint:8080 disableVerifySSL: true [9] sendCloudEvents: false [10] ``` `name` of the `CephBucketTopic` In case of AMQP endpoint, the name is used for the AMQP topic (routing key for a topic exchange) In case of Kafka endpoint, the name is used as the Kafka topic `namespace`(optional) of the `CephBucketTopic`. Should match the namespace of the CephBucketNotification associated with this CephBucketTopic, and the OBC with the label referencing the CephBucketNotification `objectStoreName` is the name of the object store in which the topic should be created. This must be the same object store used for the buckets associated with the notifications referencing this topic. `objectStoreNamespace` is the namespace of the object store in which the topic should be created `opaqueData` (optional) is added to all notifications triggered by a notifications associated with the topic `persistent` (optional) indicates whether notifications to this endpoint are persistent (=asynchronous) or sent synchronously (false by default) `endpoint` to which to send the notifications to. Exactly one of the endpoints must be defined: `http`, `amqp`, `kafka` `http` (optional) hold the spec for an HTTP endpoint. The format of the URI would be: `http` port defaults to: 80/443 for HTTP/S accordingly `disableVerifySSL` indicates whether the RGW is going to verify the SSL certificate of the HTTP server in case HTTPS is used (\"false\" by default) `sendCloudEvents`: (optional) send the notifications with the . Supported for Ceph Quincy (v17) or newer (\"false\" by default) `amqp` (optional) hold the spec for an AMQP endpoint. The format of the URI would be: `amqp` port defaults to: 5672/5671 for AMQP/S accordingly user/password defaults to: guest/guest user/password may only be provided if HTTPS is used with the RGW. If not, topic creation request will be rejected vhost defaults to: / `disableVerifySSL` (optional) indicates whether the RGW is going to verify the SSL certificate of the AMQP server in case AMQPS is used (\"false\" by default) `ackLevel` (optional) indicates what kind of ack the RGW is waiting for after sending the notifications: none: message is considered delivered if sent to broker broker: message is considered delivered if acked by broker (default) routable: message is considered delivered if broker can route to a consumer `exchange` in the AMQP broker that would route the notifications. Different topics pointing to the same endpoint must use the same exchange `kafka` (optional) hold the spec for a Kafka endpoint. The format of the URI would be: `kafka://[<user>:<password>@]<fqdn>[:<port]` port defaults to: 9092 user/password may only be provided if HTTPS is used with the"
},
{
"data": "If not, topic creation request will be rejected user/password may only be provided together with `useSSL`, if not, the connection to the broker would fail `disableVerifySSL` (optional) indicates whether the RGW is going to verify the SSL certificate of the Kafka server in case `useSSL` flag is used (\"false\" by default) `ackLevel` (optional) indicates what kind of ack the RGW is waiting for after sending the notifications: none: message is considered delivered if sent to broker broker: message is considered delivered if acked by broker (default) `useSSL` (optional) indicates that secure connection will be used for connecting with the broker (false by default) !!! note In case of Kafka and AMQP, the consumer of the notifications is not required to ack the notifications, since the broker persists the messages before delivering them to their final destinations. ```yaml apiVersion: ceph.rook.io/v1 kind: CephBucketNotification metadata: name: my-notification [1] namespace: my-app-space [2] spec: topic: my-topic [3] filter: [4] keyFilters: [5] name: prefix value: hello name: suffix value: .png name: regex value: \"[a-z]\\\\.\" metadataFilters: [6] name: x-amz-meta-color value: blue name: x-amz-meta-user-type value: free tagFilters: [7] name: project value: brown events: [8] s3:ObjectCreated:Put s3:ObjectCreated:Copy ``` `name` of the `CephBucketNotification` `namespace`(optional) of the `CephBucketNotification`. Should match the namespace of the CephBucketTopic referenced in [3], and the OBC with the label referencing the CephBucketNotification `topic` to which the notifications should be sent `filter` (optional) holds a list of filtering rules of different types. Only objects that match all the filters will trigger notification sending `keyFilter` (optional) are filters based on the object key. There could be up to 3 key filters defined: `prefix`, `suffix` and `regex` `metadataFilters` (optional) are filters based on the object metadata. All metadata fields defined as filters must exists in the object, with the values defined in the filter. Other metadata fields may exist in the object `tagFilters` (optional) are filters based on object tags. All tags defined as filters must exists in the object, with the values defined in the filter. Other tags may exist in the object `events` (optional) is a list of events that should trigger the notifications. By default all events should trigger notifications. Valid Events are: s3:ObjectCreated: s3:ObjectCreated:Put s3:ObjectCreated:Post s3:ObjectCreated:Copy s3:ObjectCreated:CompleteMultipartUpload s3:ObjectRemoved: s3:ObjectRemoved:Delete s3:ObjectRemoved:DeleteMarkerCreated For a notifications to be associated with a bucket, a labels must be added to the OBC, indicating the name of the notification. To delete a notification from a bucket the matching label must be removed. When an OBC is deleted, all of the notifications associated with the bucket will be deleted as well. ```yaml apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: ceph-notification-bucket labels: some-label: some-value bucket-notification-my-notification: my-notification bucket-notification-another-notification: another-notification spec: generateBucketName: ceph-bkt storageClassName: rook-ceph-delete-bucket ```"
}
] |
{
"category": "Runtime",
"file_name": "ceph-object-bucket-notifications.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting [email protected], a members-only group that is world-postable. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. This Code of Conduct is adapted from the , version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html"
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- toc --> - - - - - - <!-- /toc --> Antrea versions are expressed as `x.y.z`, where `x` is the major version, `y` is the minor version, and `z` is the patch version, following [Semantic Versioning] terminology. Unlike minor releases, patch releases should not contain miscellaneous feature additions or improvements. No incompatibilities should ever be introduced between patch versions of the same minor version. API groups / versions must not be introduced or removed as part of patch releases. Patch releases are intended for important bug fixes to recent minor versions, such as addressing security vulnerabilities, fixes to problems preventing Antrea from being deployed & used successfully by a significant number of users, severe problems with no workaround, and blockers for products (including commercial products) which rely on Antrea. When it comes to dependencies, the following rules are observed between patch versions of the same Antrea minor versions: the same minor OVS version should be used the same minor version should be used for all Go dependencies, unless updating to a new minor / major version is required for an important bug fix for Antrea Docker images shipped as part of a patch release, the same version must be used for the base Operating System (Linux distribution / Windows server), unless an update is required to fix a critical bug. If important updates are available for a given Operating System version (e.g. which address security vulnerabilities), they should be included in Antrea patch releases. For every Antrea minor release, the stability level of supported features may be updated (from `Alpha` to `Beta` or from `Beta` to `GA`). Refer to the the [CHANGELOG] for information about feature stability level for each release. For features controlled by a feature gate, this information is also present in a more structured way in . New Antrea minor releases are currently shipped every 6 to 8 weeks. This fast release cadence enables us to ship new features quickly and frequently. It may change in the future. Compared to deploying the top-of-tree of the Antrea main branch, using a released version should provide more stability guarantees: despite our CI pipelines, some bugs can sneak into the branch and be fixed shortly after merge conflicts can break the top-of-tree temporarily some CI jobs are run periodically and not for every pull request before merge; as much as possible we run the entire test suite for each release candidate Antrea maintains release branches for the two most recent minor releases (e.g. the `release-0.10` and `release-0.11` branches are maintained until Antrea 0.12 is released). As part of this maintenance process, patch versions are released as frequently as needed, following these . With the current release cadence, this means that each minor release receives approximately 3 months of patch support. This may seem short, but was done on purpose to encourage users to upgrade Antrea often and avoid potential incompatibility"
},
{
"data": "In the future, we may reduce our release cadence for minor releases and simultaneously increase the support window for each release. Our goal is to support \"graceful\" upgrades for Antrea. By \"graceful\", we notably mean that there should be no significant disruption to data-plane connectivity nor to policy enforcement, beyond the necessary disruption incurred by the restart of individual components: during the Antrea Controller restart, new policies will not be processed. Because the Controller also runs the validation webhook for , an attempt to create an Antrea-native policy resource before the restart is complete may return an error. during an Antrea Agent restart, the Node's data-plane will be impacted: new connections to & from the Node will not be possible, and existing connections may break. In particular, it should be possible to upgrade Antrea without compromising enforcement of existing network policies for both new and existing Pods. In order to achieve this, the different Antrea components need to support version skew. Antrea Controller*: must be upgraded first Antrea Agent: must not be newer than the Antrea Controller*, and may be up to 4 minor versions older Antctl: must not be newer than the Antrea Controller*, and may be up to 4 minor versions older The supported version skew means that we only recommend Antrea upgrades to a new release up to 4 minor versions newer. For example, a cluster using 0.10 can be upgraded to one of 0.11, 0.12, 0.13 or 0.14, but we discourage direct upgrades to 0.15 and beyond. With the current release cadence, this provides a 6-month window of compatibility. If we reduce our release cadence in the future, we may revisit this policy as well. When directly applying a newer Antrea YAML manifest, as provided for each , there is no guarantee that the Antrea Controller will be upgraded first. In practice, the Controller would be upgraded simultaneously with the first Agent(s) to be upgraded by the rolling update of the Agent DaemonSet. This may create some transient issues and compromise the \"graceful\" upgrade. For upgrade scenarios, we therefore recommend that you \"split-up\" the manifest to ensure that the Controller is upgraded first. Each Antrea minor release should support [maintained K8s releases](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions) at the time of release (3 up to K8s 1.19, 4 after that). For example, at the time that Antrea 0.10 was released, the latest K8s version was 1.19; as a result we guarantee that 0.10 supports at least 1.19, 1.18 and 1.17 (in practice it also supports K8s 1.16). In addition, we strive to support the K8s versions used by default in cloud-managed K8s services ([EKS], [AKS] and [GKE] regular channel). Antrea follows a similar policy as for metrics deprecation. Alpha metrics have no stability guarantees; as such they can be modified or deleted at any time. Stable metrics are guaranteed to not change; specifically, stability means: the metric itself will not be renamed the type of metric will not be modified Eventually, even a stable metric can be"
},
{
"data": "In this case, the metric must be marked as deprecated first and the metric must stay deprecated for at least one minor release. The [CHANGELOG] must announce both metric deprecations and metric deletions. Before deprecation: ```bash some_counter 0 ``` After deprecation: ```bash some_counter 0 ``` In the future, we may introduce the same concept of [hidden metric](https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/#show-hidden-metrics) as K8s, as an additional part of the metric lifecycle. The Antrea APIs are built using K8s (they are a combination of CustomResourceDefinitions and aggregation layer APIServices) and we follow the same versioning scheme as the K8s APIs and the same [deprecation policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/). Other than the most recent API versions in each track, older API versions must be supported after their announced deprecation for a duration of no less than: GA: 12 months Beta: 9 months Alpha: N/A (can be removed immediately) This also applies to the `controlplane` API. In particular, introduction and removal of new versions for this API must respect the [\"graceful\" upgrade guarantee](#antrea-upgrade-and-supported-version-skew). The `controlplane` API (which is exposed using the aggregation layer) is often referred to as an \"internal\" API as it is used by the Antrea components to communicate with each other, and is usually not consumed by end users, e.g. cluster admins. However, this API may also be used for integration with other software, which is why we abide to the same deprecation policy as for other more \"user-facing\" APIs (e.g. Antrea-native policy CRDs). K8s has a on the removal of API object versions that have been persisted to storage. At the moment, none of Antrea APIServices (which use the aggregation layer) persist objects to storage. So the only objects we need to worry about are CustomResources, which are persisted by the K8s apiserver. For them, we adopt the following rules: Alpha API versions may be removed at any time. The [`deprecated` field] must be used for CRDs to indicate that a particular version of the resource has been deprecated. Beta and GA API versions must be supported after deprecation for the respective durations stipulated above before they can be removed. For deprecated Beta and GA API versions, a [conversion webhook] must be provided along with each Antrea release, until the API version is removed altogether. Starting with Antrea v1.0, all Custom Resource Definitions (CRDs) for Antrea are defined in the same API group, `crd.antrea.io`, and all CRDs in this group are versioned individually. For example, at the time of writing this (v1.3 release timeframe), the Antrea CRDs include: `ClusterGroup` in `crd.antrea.io/v1alpha2` `ClusterGroup` in `crd.antrea.io/v1alpha3` `Egress` in `crd.antrea.io/v1alpha2` etc. Notice how 2 versions of `ClusterGroup` are supported: the one in `crd.antrea.io/v1alpha2` was introduced in v1.0, and is being deprecated as it was replaced by the one in `crd.antrea.io/v1alpha3`, introduced in v1.1. When introducing a new version of a CRD, [the API deprecation policy should be followed](#apis-deprecation-policy). When introducing a CRD, the following rule should be followed in order to avoid potential dependency cycles (and thus import cycles in Go): if the CRD depends on other object types spread across potentially different versions of"
},
{
"data": "the CRD should be defined in a group version greater or equal to all of these versions. For example, if we want to introduce a new CRD which depends on types `v1alpha1.X` and `v1alpha2.Y`, it needs to go into `v1alpha2` or a more recent version of `crd.antrea.io`. As a rule it should probably go into `v1alpha2` unless it is closely related to other CRDs in a later version, in which case it can be defined alongside these CRDs, in order to avoid user confusion. If a new CRD does not have dependencies and is not closely related to an existing CRD, it will typically be defined in `v1alpha1`. In some rare cases, a CRD can be defined in `v1beta1` directly if there is enough confidence in the stability of the API. Several CRD API Alpha versions were removed as part of the major version bump to Antrea v2, following the introduction of Beta versions in earlier minor releases. For more details, refer to this . Because of these CRD version removals, you will need to make sure that you upgrade your existing CRs (for the affected CRDs) to the new (storage) version, before trying to upgrade to Antrea v2.0. You will also need to ensure that the `status.storedVersions` field for the affected CRDs is patched, with the old versions being removed. To simplify your upgrade process, we provide an antctl command which will automatically handle these steps for you: `antctl upgrade api-storage`. There are 3 possible scenarios: 1) You never installed an Antrea minor version older than v1.13 in your cluster. In this case you can directly upgrade to Antrea v2.0. 2) Your cluster is currently running Antrea v1.13 through v1.15 (included), but you previously ran an Antrea minor version older than v1.13. In this case, you will need to run `antctl upgrade api-storage` prior to upgrading to Antrea v2.0, regardless of whether you have created Antrea CRs or not. 3) Your cluster is currently running an Antrea minor version older than v1.13. In this case, you will first need to upgrade to Antrea v1.15.1, then run `antctl upgrade api-storage`, before being able to upgrade to Antrea v2.0. Even for scenario 1, feel free to run `antctl upgrade api-storage`. It is not strictly required, but it will not cause any harm either. In the sub-sections below, we give some detailed instructions for upgrade, based on your current Antrea version and installation method. For more information about CRD versioning, refer to the K8s . The `antctl upgrade api-storage` command aims at automating that process for our users. ```text $ antctl version antctlVersion: v1.13.4 controllerVersion: v1.13.4 $ antctl upgrade api-storage Skip upgrading CRD \"externalnodes.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"trafficcontrols.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"externalentities.crd.antrea.io\" since all stored objects are in the storage version. Skip upgrading CRD \"supportbundlecollections.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreaagentinfos.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"ippools.crd.antrea.io\" since it only has one version. Skip upgrading CRD"
},
{
"data": "since it only has one version. Upgrading 6 objects of CRD \"tiers.crd.antrea.io\". Successfully upgraded 6 objects of CRD \"tiers.crd.antrea.io\". $ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea.yml ``` ```text $ antctl version antctlVersion: v1.13.4 controllerVersion: v1.13.4 $ antctl upgrade api-storage Skip upgrading CRD \"externalnodes.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"trafficcontrols.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"externalentities.crd.antrea.io\" since all stored objects are in the storage version. Skip upgrading CRD \"supportbundlecollections.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreaagentinfos.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"ippools.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreacontrollerinfos.crd.antrea.io\" since it only has one version. Upgrading 6 objects of CRD \"tiers.crd.antrea.io\". Successfully upgraded 6 objects of CRD \"tiers.crd.antrea.io\". $ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea-crds.yml $ helm upgrade antrea antrea/antrea --namespace kube-system --version 2.0.0 ``` ```text $ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.1/antrea.yml $ antctl version antctlVersion: v1.15.1 controllerVersion: v1.15.1 $ antctl upgrade api-storage Skip upgrading CRD \"externalnodes.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"trafficcontrols.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"externalentities.crd.antrea.io\" since all stored objects are in the storage version. Skip upgrading CRD \"supportbundlecollections.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreaagentinfos.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"ippools.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreacontrollerinfos.crd.antrea.io\" since it only has one version. Upgrading 6 objects of CRD \"tiers.crd.antrea.io\". Successfully upgraded 6 objects of CRD \"tiers.crd.antrea.io\". $ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea.yml ``` ```text $ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v1.15.1/antrea-crds.yml $ helm upgrade antrea antrea/antrea --namespace kube-system --version 1.15.1 $ antctl version antctlVersion: v1.15.1 controllerVersion: v1.15.1 $ antctl upgrade api-storage Skip upgrading CRD \"externalnodes.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"trafficcontrols.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"externalentities.crd.antrea.io\" since all stored objects are in the storage version. Skip upgrading CRD \"supportbundlecollections.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreaagentinfos.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"ippools.crd.antrea.io\" since it only has one version. Skip upgrading CRD \"antreacontrollerinfos.crd.antrea.io\" since it only has one version. Upgrading 6 objects of CRD \"tiers.crd.antrea.io\". Successfully upgraded 6 objects of CRD \"tiers.crd.antrea.io\". 
$ kubectl apply -f https://github.com/antrea-io/antrea/releases/download/v2.0.0/antrea-crds.yml $ helm upgrade antrea antrea/antrea --namespace kube-system --version 2.0.0 ``` Some deprecated options have been removed from the Antrea configuration: `nplPortRange` has been removed from the Agent configuration, use `nodePortLocal.portRange` instead. `enableIPSecTunnel` has been removed from the Agent configuration, use `trafficEncryptionMode` instead. `multicastInterfaces` has been removed from the Agent configuration, use `multicast.multicastInterfaces` instead. `multicluster.enable` has been removed from the Agent configuration, as the Multicluster functionality is no longer gated by a boolean parameter. `legacyCRDMirroring` has been removed from the Controller configuration, as it dates back to the v1 major version bump, and it has been ignored for years. If you are porting your old Antrea configuration to Antrea v2, please make sure that you are no longer using these parameters. Unknown parameters will be ignored by Antrea, and the behavior may not be what you expect."
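For reference, a minimal sketch of how the replacement parameters might look in `antrea-agent.conf` (the values below are placeholders and the exact nesting may differ slightly between Antrea releases, so check the `antrea-config` ConfigMap shipped with your manifest):

```yaml
# antrea-agent.conf (excerpt) - replacements for the removed options
nodePortLocal:
  enable: true
  portRange: "61000-62000"        # replaces the removed nplPortRange
trafficEncryptionMode: "ipsec"    # replaces the removed enableIPSecTunnel
multicast:
  multicastInterfaces: ["eth0"]   # replaces the removed multicastInterfaces
```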
}
] |
{
"category": "Runtime",
"file_name": "versioning.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "CRICTL User Guide ================= This document presumes you already have `containerd` with the `cri` plugin installed and running. This document is for developers who wish to debug, inspect, and manage their pods, containers, and container images. Before generating issues against this document, `containerd`, `containerd/cri`, or `crictl` please make sure the issue has not already been submitted. If you have not already installed crictl please install the version compatible with the `cri` plugin you are using. If you are a user, your deployment should have installed crictl for you. If not, get it from your release tarball. If you are a developer the current version of crictl is specified . A helper command has been included to install the dependencies at the right version: ```console $ make install-deps ``` Note: The file named `/etc/crictl.yaml` is used to configure crictl so you don't have to repeatedly specify the runtime sock used to connect crictl to the container runtime: ```console $ cat /etc/crictl.yaml runtime-endpoint: unix:///run/containerd/containerd.sock image-endpoint: unix:///run/containerd/containerd.sock timeout: 10 debug: true ``` The pull command tells the container runtime to download a container image from a container registry. ```console $ crictl pull busybox ... $ crictl inspecti busybox ... displays information about the image. ``` *Note:* If you get an error similar to the following when running a `crictl` command (and your containerd instance is already running): ```console crictl info FATA[0000] getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService ``` This could be that you are using an incorrect containerd configuration (maybe from a Docker install). You will need to update your containerd configuration to the containerd instance that you are running. One way of doing this is as follows: ```console $ mv /etc/containerd/config.toml /etc/containerd/config.bak $ containerd config default > /etc/containerd/config.toml ``` Another way to load an image into the container runtime is with the load command. With the load command you inject a container image into the container runtime from a file. First you need to create a container image tarball. For example to create an image tarball for a pause container using Docker: ```console $ docker pull registry.k8s.io/pause:3.9 3.9: Pulling from pause 7582c2cc65ef: Pull complete Digest: sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 Status: Downloaded newer image for registry.k8s.io/pause:3.9 registry.k8s.io/pause:3.9 $ docker save registry.k8s.io/pause:3.9 -o pause.tar ``` Then use `ctr` to load the container image into the container runtime: ```console $ sudo ctr -n=k8s.io images import pause.tar Loaded image: registry.k8s.io/pause:3.9 ``` List images and inspect the pause image: ```console $ sudo crictl images IMAGE TAG IMAGE ID SIZE docker.io/library/busybox latest f6e427c148a76 728kB registry.k8s.io/pause 3.9 e6f181688397 311kB $ sudo crictl inspecti e6f181688397 ... displays information about the pause image. $ sudo crictl inspecti registry.k8s.io/pause:3.9 ... displays information about the pause image. 
``` ```console $ cat sandbox-config.json { \"metadata\": { \"name\": \"nginx-sandbox\", \"namespace\": \"default\", \"attempt\": 1, \"uid\": \"hdishd83djaidwnduwk28bcsb\" }, \"linux\": { } } $ crictl runp sandbox-config.json e1c83b0b8d481d4af8ba98d5f7812577fc175a37b10dc824335951f52addbb4e $ crictl pods PODSANDBOX ID CREATED STATE NAME NAMESPACE ATTEMPT e1c83b0b8d481 2 hours ago SANDBOX_READY nginx-sandbox default 1 $ crictl inspectp e1c8"
},
{
"data": "displays information about the pod and the pod sandbox pause container. ``` Note: As shown above, you may use truncated IDs if they are unique. Other commands to manage the pod include `stops ID` to stop a running pod and `rmp ID` to remove a pod sandbox. ```console $ cat container-config.json { \"metadata\": { \"name\": \"busybox\" }, \"image\":{ \"image\": \"busybox\" }, \"command\": [ \"top\" ], \"linux\": { } } $ crictl create e1c83 container-config.json sandbox-config.json 0a2c761303163f2acaaeaee07d2ba143ee4cea7e3bde3d32190e2a36525c8a05 $ crictl ps -a CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT 0a2c761303163 docker.io/busybox 2 hours ago CONTAINER_CREATED busybox 0 $ crictl start 0a2c 0a2c761303163f2acaaeaee07d2ba143ee4cea7e3bde3d32190e2a36525c8a05 $ crictl ps CONTAINER ID IMAGE CREATED STATE NAME ATTEMPT 0a2c761303163 docker.io/busybox 2 hours ago CONTAINER_RUNNING busybox 0 $ crictl inspect 0a2c7 ... show detailed information about the container ``` ```console $ crictl exec -i -t 0a2c ls bin dev etc home proc root sys tmp usr var ``` ```console $ crictl stats CONTAINER CPU % MEM DISK INODES 0a2c761303163f 0.00 983kB 16.38kB 6 ``` Other commands to manage the container include `stop ID` to stop a running container and `rm ID` to remove a container. ```console $ crictl version Version: 0.1.0 RuntimeName: containerd RuntimeVersion: v1.7.0 RuntimeApiVersion: v1 ``` <details> <p> ```console $ crictl info { \"status\": { \"conditions\": [ { \"type\": \"RuntimeReady\", \"status\": true, \"reason\": \"\", \"message\": \"\" }, { \"type\": \"NetworkReady\", \"status\": true, \"reason\": \"\", \"message\": \"\" } ] }, \"cniconfig\": { \"PluginDirs\": [ \"/opt/cni/bin\" ], \"PluginConfDir\": \"/etc/cni/net.d\", \"PluginMaxConfNum\": 1, \"Prefix\": \"eth\", \"Networks\": [] }, \"config\": { \"containerd\": { \"snapshotter\": \"overlayfs\", \"defaultRuntimeName\": \"runc\", \"defaultRuntime\": { \"runtimeType\": \"\", \"runtimePath\": \"\", \"runtimeEngine\": \"\", \"PodAnnotations\": [], \"ContainerAnnotations\": [], \"runtimeRoot\": \"\", \"options\": {}, \"privilegedwithouthost_devices\": false, \"privilegedwithouthostdevicesalldevicesallowed\": false, \"baseRuntimeSpec\": \"\", \"cniConfDir\": \"\", \"cniMaxConfNum\": 0, \"snapshotter\": \"\", \"sandboxMode\": \"\" }, \"untrustedWorkloadRuntime\": { \"runtimeType\": \"\", \"runtimePath\": \"\", \"runtimeEngine\": \"\", \"PodAnnotations\": [], \"ContainerAnnotations\": [], \"runtimeRoot\": \"\", \"options\": {}, \"privilegedwithouthost_devices\": false, \"privilegedwithouthostdevicesalldevicesallowed\": false, \"baseRuntimeSpec\": \"\", \"cniConfDir\": \"\", \"cniMaxConfNum\": 0, \"snapshotter\": \"\", \"sandboxMode\": \"\" }, \"runtimes\": { \"runc\": { \"runtimeType\": \"io.containerd.runc.v2\", \"runtimePath\": \"\", \"runtimeEngine\": \"\", \"PodAnnotations\": [], \"ContainerAnnotations\": [], \"runtimeRoot\": \"\", \"options\": { \"BinaryName\": \"\", \"CriuImagePath\": \"\", \"CriuPath\": \"\", \"CriuWorkPath\": \"\", \"IoGid\": 0, \"IoUid\": 0, \"NoNewKeyring\": false, \"NoPivotRoot\": false, \"Root\": \"\", \"ShimCgroup\": \"\", \"SystemdCgroup\": false }, \"privilegedwithouthost_devices\": false, \"privilegedwithouthostdevicesalldevicesallowed\": false, \"baseRuntimeSpec\": \"\", \"cniConfDir\": \"\", \"cniMaxConfNum\": 0, \"snapshotter\": \"\", \"sandboxMode\": \"podsandbox\" } }, \"noPivot\": false, \"disableSnapshotAnnotations\": true, \"discardUnpackedLayers\": false, \"ignoreBlockIONotEnabledErrors\": false, 
\"ignoreRdtNotEnabledErrors\": false }, \"cni\": { \"binDir\": \"/opt/cni/bin\", \"confDir\": \"/etc/cni/net.d\", \"maxConfNum\": 1, \"setupSerially\": false, \"confTemplate\": \"\", \"ipPref\": \"\" }, \"registry\": { \"configPath\": \"\", \"mirrors\": {}, \"configs\": {}, \"auths\": {}, \"headers\": {} }, \"imageDecryption\": { \"keyModel\": \"node\" }, \"disableTCPService\": true, \"streamServerAddress\": \"127.0.0.1\", \"streamServerPort\": \"0\", \"streamIdleTimeout\": \"4h0m0s\", \"enableSelinux\": false, \"selinuxCategoryRange\": 1024, \"sandboxImage\": \"registry.k8s.io/pause:3.9\", \"statsCollectPeriod\": 10, \"systemdCgroup\": false, \"enableTLSStreaming\": false, \"x509KeyPairStreaming\": { \"tlsCertFile\": \"\", \"tlsKeyFile\": \"\" }, \"maxContainerLogSize\": 16384, \"disableCgroup\": false, \"disableApparmor\": false, \"restrictOOMScoreAdj\": false, \"maxConcurrentDownloads\": 3, \"disableProcMount\": false, \"unsetSeccompProfile\": \"\", \"tolerateMissingHugetlbController\": true, \"disableHugetlbController\": true, \"deviceownershipfromsecuritycontext\": false, \"ignoreImageDefinedVolumes\": false, \"netnsMountsUnderStateDir\": false, \"enableUnprivilegedPorts\": false, \"enableUnprivilegedICMP\": false, \"enableCDI\": false, \"cdiSpecDirs\": [ \"/etc/cdi\", \"/var/run/cdi\" ], \"imagePullProgressTimeout\": \"1m0s\", \"drainExecSyncIOTimeout\": \"0s\", \"containerdRootDir\": \"/var/lib/containerd\", \"containerdEndpoint\": \"/run/containerd/containerd.sock\", \"rootDir\": \"/var/lib/containerd/io.containerd.grpc.v1.cri\", \"stateDir\": \"/run/containerd/io.containerd.grpc.v1.cri\" }, \"golang\": \"go1.20.3\", \"lastCNILoadStatus\": \"OK\", \"lastCNILoadStatus.default\": \"OK\" } ``` </p> </details> See for information about crictl."
}
] |
{
"category": "Runtime",
"file_name": "crictl.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "fwadm add [-f <file>] add firewall rules or remote VMs fwadm delete <rule uuid> delete a rule fwadm disable <rule uuid> disable a rule fwadm enable <rule uuid> enable a rule fwadm get <rule uuid> get a rule fwadm list list rules fwadm update [-f <file>] <rule uuid> update firewall rules or data fwadm vms <rule uuid> list the UUIDs of VMs affected by a rule fwadm add-rvm [-f <file>] add a remote VM fwadm delete-rvm <rvm uuid> delete a remote VM fwadm get-rvm <rvm uuid> get a remote VM fwadm list-rvms list remote VMs fwadm rvm-rules <rvm uuid> list rules that apply to a remote VM fwadm rules <vm uuid> list rules that apply to a VM fwadm start <vm uuid> start a VM's firewall fwadm status <vm uuid> get the status of a VM's firewall fwadm stats <vm uuid> get rule statistics for a VM's firewall fwadm stop <vm uuid> stop a VM's firewall fwadm help [command] help on a specific subcommand The fwadm tool allows you to manage firewall data on a SmartOS system. It is primarily used to manage firewall rules and remote VMs. Firewall rules are JSON objects. They contain a rule written in a Domain-Specific Language, as well as other metadata. See fwrule(7) and the \"EXAMPLES\" section below for rule syntax. Remote VMs are JSON objects. They represent VMs on other SmartOS hosts. The format is similar to the vmadm(8) format with most properties omitted and some simplified properties. See the \"REMOTE VMS\", \"REMOTE VM PROPERTIES\" and \"EXAMPLES\" sections below for details. Firewall rules only apply to VMs that have the firewall\\_enabled property set to true. Rules with an owner\\_uuid are scoped to VMs with a matching owner. Global rules, which are ownerless, will apply to all VMs described in their targets. Adding, updating or deleting firewall rules or remote VMs will reload the firewalls of any VMs affected. -h, --help Print help or subcommand help and exit. -v, --verbose Output verbose diagnostic information. When a command results in an error, output the stack trace for that error. -j, --json Output results or errors as JSON. The following commands and options are supported: fwadm help [command] Print general tool help or help on a specific command. fwadm add -f <file> fwadm add [-e] [--desc <description>] [-g] [-O <owner uuid>] <rule> Add firewall rules or remote VMs. A single rule and its properties can be added using arguments and options, or the -f option can be used to pass a file containing a JSON object with one or many rules and remote VMs to be added. See the \"EXAMPLES\" section below for details on what to pass in the JSON object. Options: --desc <description> Rule description --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. -e, --enable Set the enabled property for the rule. Default is false. -f <filename> Path to file containing JSON payload of firewall data to add. -g, --global Set the global property for the rule. -O, --owner_uuid Set the owner for the rule. --stdout Output ipf rule lists for VM firewalls that were updated. Arguments: <rule> Firewall rule, written in the rule DSL. See fwrule(7) for syntax. 
Examples: fwadm add --owner_uuid=e6c73bd2-fae4-4e0a-af76-2c05d088b066 FROM \\ any TO all vms ALLOW tcp PORT 22 fwadm add -g -e FROM any TO all vms ALLOW tcp PORT 22 echo '{ \"rules\": [ { \"enabled\": true, \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"rule\": \"FROM vm a223bec2-c62b-4fe7-babb-ad4c4d8441bb \\ TO all vms ALLOW tcp PORT 22\" } ], \"remoteVMs\": [ { \"uuid\": \"5baca016-6dda-11e3-a6f2-730593c54f04\", \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"nics\": [ { \"ip\":"
},
{
"data": "} ], \"tags\": { \"role\": \"web\" } } ] }' | fwadm add fwadm add-rvm -f <file> Add a remote VM. Options: --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. --stdout Output ipf rule lists for VM firewalls that were updated. fwadm delete <rule uuid> Delete a firewall rule. Options: --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. Arguments: <rule uuid> Firewall rule UUID fwadm disable <rule uuid> Disable a firewall rule. Options: --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. Arguments: <rule uuid> Firewall rule UUID fwadm enable <rule uuid> Enable a firewall rule. Options: --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. Arguments: <rule uuid> Firewall rule UUID fwadm get <rule uuid> Get a firewall rule. Arguments: <rule uuid> Firewall rule UUID fwadm list List firewall rules. Options: -d, --delim Set the delimiting character for parseable output. Default is \":\". -j, --json Output results as JSON. -o, --fields Rule properties to output. -p, --parseable Output results in parseable format. Examples: fwadm list -d \"|\" -p fwadm list -j -o uuid,rule fwadm update -f <file> fwadm update <rule uuid> [-e] [--desc <description>] [-g] \\ [-O <owner uuid>] <rule> Update firewall rules or remote VMs. A single rule and its properties can be updated using arguments, or the -f option can be used to pass a file containing a JSON object with one or many rules and remote VMs to be updated. See the \"EXAMPLES\" section below for details on what to pass in the JSON object. Options: --desc <description> Rule description --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. -e, --enable Set the enabled property for the rule. Default is false. -f <filename> Path to file containing JSON payload of firewall data to add. -g, --global Set the global property for the rule. -O, --owner_uuid Set the owner for the rule. --stdout Output ipf rule lists for VM firewalls that were updated. Arguments: <rule> Firewall rule, written in the rule DSL. See fwrule(7) for syntax. Examples: fwadm update 71bf3c29-bcd3-42a4-b4cb-222585429a70 'FROM (tag www \\ OR ip 172.30.0.250) TO tag db ALLOW tcp PORT 5432' echo '{ \"remoteVMs\": [ { \"uuid\": \"5baca016-6dda-11e3-a6f2-730593c54f04\", \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"ips\": [ \"172.29.0.2\", \"172.31.0.2\" ], \"tags\": { \"role\": \"web\" } } ] }' | fwadm update fwadm vms <rule uuid> List the UUIDs of VMs affected by a rule. Arguments: <rule uuid> Firewall rule UUID fwadm delete-rvm <rvm uuid> Delete a remote VM. Options: --dryrun Output changes to be made, but don't write files to disk or reload VM firewalls. Arguments: <rvm uuid> Remote VM UUID fwadm get-rvm <rvm uuid> Get a remote VM. Arguments: <rvm uuid> Remote VM UUID fwadm list-rvms List remote VMs in JSON format. fwadm rvm-rules <rvm uuid> List rules that apply to a remote VM. Arguments: <rvm uuid> Remote VM UUID fwadm rules <vm uuid> List rules that apply to a VM. Arguments: <vm uuid> VM UUID fwadm start <vm uuid> Start the firewall for a VM. Arguments: <vm uuid> VM UUID fwadm status [-v] <vm uuid> Get the firewall status (running, stopped) for a VM. Options: --v, --verbose Output additional information about the firewall Arguments: <vm uuid> VM UUID fwadm stats <vm uuid> Get ipfilter rule statistics for a VM's firewall. 
Arguments: <vm uuid> VM UUID fwadm stop <vm uuid> Stop the firewall for a"
},
{
"data": "Arguments: <vm uuid> VM UUID The purpose of remote VMs is to allow VMs on other SmartOS hosts to be included when generating rules. For example, if the following remote VM from another SmartOS host was added: { \"uuid\": \"86abf627-5398-45ee-8e65-8260d3466e3f\", \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"ips\": [ \"172.29.0.4\" ], \"tags\": { \"role\": \"bastion\" } } And the following rule: { \"description\": \"allow ssh from bastion host\", \"enabled\": true, \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"rule\": \"FROM tag role=bastion TO all vms ALLOW tcp PORT 22\" } The remote VM has the tag role with value bastion, which means that it matches the rule above. All VMs on this host with firewall_enabled set would then allow connections on TCP port 22 from that remote VM. This rule would also match, since it has the remote VM's UUID as a target: { \"description\": \"block UDP port 5400 to bastion host\", \"enabled\": true, \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"rule\": \"FROM all vms TO vm 86abf627-5398-45ee-8e65-8260d3466e3f \\ BLOCK udp PORT 54\" } Remote VMs are simplified versions of the VM objects used by vmadm(8). They are also in a JSON format, but only the properties below will be stored and used by fwadm. All other properties will be discarded. The properties used are: ips: Array of IP addresses for the remote VM. At least one IP from this property or the nics property below must be specified when creating or updating. nics: Array of nics, as per vmadm(8). Only the \"ip\" property of each of these nic objects is required - all other properties will be ignored. This property is used for creation of remote VMs only - it is not stored in the object. IPs from these objects will be added to the ips array. This property is supported so the output of \"vmadm get\" on one host can be used in the input to \"fwadm add\" on another host. owner_uuid: Owner UUID. Only rules with a matching owner_uuid can use IPs for remote VMs with this property set. tags: vmadm(8) tags object, mapping tag keys to values. uuid (required): UUID. This must not be the same as the UUID of any other remote VM or local VM managed by vmadm(8). Note that VMs can be added and updated in this simplified representation, or using the same representation as \"vmadm get\". This enables the output of \"vmadm get\" or \"vmadm lookup\" to be input to the commands listed in the \"SUBCOMMANDS\" section. fwadm relies on properties of VMs from vmadm(8) in order to generate firewall rules correctly. Therefore, when vmadm is used to create a new VM or update properties on an existing VM that can affect firewall rules, it will update firewall rules through fwadm accordingly. 
As an example, if the following rules are present on a SmartOS host: { \"description\": \"block all outgoing SMTP traffic\", \"enabled\": true, \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"rule\": \"FROM tag blocksmtp TO any BLOCK tcp PORT 25\" } { \"description\": \"allow HTTP and HTTPS traffic\", \"enabled\": true, \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"rule\": \"FROM any TO tag role=webserver ALLOW tcp (PORT 80 AND PORT \\ 443)\" } And then a VM is created with these parameters: { \"brand\": \"joyent\", \"image_uuid\": \"7b0b4140-6e98-11e5-b1ae-ff68fe257228\", \"firewall_enabled\": true, \"nics\": [ { \"nic_tag\": \"external\", \"ip\": \"10.88.88.59\", \"netmask\": \"255.255.255.0\", \"gateway\": \"10.88.88.2\", \"primary\": true } ], \"owner_uuid\": \"e6c73bd2-fae4-4e0a-af76-2c05d088b066\", \"ram\": 128, \"tags\": { \"blocksmtp\": true }, \"uuid\": \"60e90d15-fb48-4bb9-90e6-1e1bb8269d1e\" } The first rule would be applied to that VM. If the following vmadm command was then run: echo '{ \"set_tags\": { \"role\": \"webserver\" } }' | vmadm update \\ 60e90d15-fb48-4bb9-90e6-1e1bb8269d1e The second rule would then be applied to that VM in addition to the first. The following exit values are returned: 0"
}
] |
{
"category": "Runtime",
"file_name": "fwadm.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-operator-generic completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-operator-generic completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-generic_completion_powershell.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List IP addresses in the userspace IPcache ``` cilium-dbg ip list [flags] ``` ``` -h, --help help for list -n, --numeric Print numeric identities -o, --output string json| yaml| jsonpath='{}' -v, --verbose Print all fields of ipcache ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage IP addresses and associated information"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_ip_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->"
}
] |
{
"category": "Runtime",
"file_name": "contributing.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Run cilium-operator-azure ``` cilium-operator-azure [flags] ``` ``` --auto-create-cilium-pod-ip-pools map Automatically create CiliumPodIPPool resources on startup. Specify pools in the form of <pool>=ipv4-cidrs:<cidr>,[<cidr>...];ipv4-mask-size:<size> (multiple pools can also be passed by repeating the CLI flag) --azure-resource-group string Resource group to use for Azure IPAM --azure-subscription-id string Subscription ID to access Azure API --azure-use-primary-address Use Azure IP address from interface's primary IPConfigurations --azure-user-assigned-identity-id string ID of the user assigned identity used to auth with the Azure API --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cilium-endpoint-gc-interval duration GC interval for cilium endpoints (default 5m0s) --cilium-pod-labels string Cilium Pod's labels. Used to detect if a Cilium pod is running to remove the node taints where its running and set NetworkUnavailable to false (default \"k8s-app=cilium\") --cilium-pod-namespace string Name of the Kubernetes namespace in which Cilium is deployed in. Defaults to the same namespace defined in k8s-namespace --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --cluster-pool-ipv4-cidr strings IPv4 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' --cluster-pool-ipv4-mask-size int Mask size for each IPv4 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' (default 24) --cluster-pool-ipv6-cidr strings IPv6 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' --cluster-pool-ipv6-mask-size int Mask size for each IPv6 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' (default 112) --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. 
--clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --cnp-status-cleanup-burst int Maximum burst of requests to clean up status nodes updates in CNPs (default 20) --cnp-status-cleanup-qps float Rate used for limiting the clean up of the status nodes updates in CNP, expressed as qps (default 10) --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and"
},
{
"data": "The set of controller group names available is not guaranteed to be stable between Cilium versions. -D, --debug Enable debugging mode --enable-cilium-endpoint-slice If set to true, the CiliumEndpointSlice feature is enabled. If any CiliumEndpoints resources are created, updated, or deleted in the cluster, all those changes are broadcast as CiliumEndpointSlice updates to all of the Cilium agents. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-metrics Enable Prometheus metrics --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for cilium-operator-azure --identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. 
Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for"
},
{
"data": "(default \"cilium-ingress\") --instance-tags-filter map EC2 Instance tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --ipam string Backend to use for IPAM (default \"azure\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium Operator is deployed in --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kvstore string Key-value store type --kvstore-opt map Key-value store options e.g. etcd.address=127.0.0.1:4001 --leader-election-lease-duration duration Duration that non-leader operator candidates will wait before forcing to acquire leadership (default 15s) --leader-election-renew-deadline duration Duration that current acting master will retry refreshing leadership in before giving up the lock (default 10s) --leader-election-retry-period duration Duration that LeaderElector clients should wait between retries of the actions (default 2s) --limit-ipam-api-burst int Upper burst limit when accessing external APIs (default 20) --limit-ipam-api-qps float Queries per second limit when accessing external IPAM APIs (default 4) --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-operator, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local4\"} --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. 
(default 10s) --nodes-gc-interval duration GC interval for CiliumNodes (default 5m0s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --parallel-alloc-workers int Maximum number of parallel IPAM workers (default 50) --pod-restart-selector string cilium-operator will delete/restart any pods with these labels if the pod is not managed by Cilium. If this option is empty, then all pods may be restarted (default \"k8s-app=kube-dns\") --remove-cilium-node-taints Remove node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes once Cilium is up and running (default true) --set-cilium-is-up-condition Set CiliumIsUp Node condition to mark a Kubernetes Node that a Cilium pod is up and running in that node (default true) --set-cilium-node-taints Set node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes if Cilium is scheduled but not up and running --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created --subnet-ids-filter strings Subnets IDs (separated by commas) --subnet-tags-filter map Subnets tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --synchronize-k8s-nodes Synchronize Kubernetes nodes to kvstore and perform CNP GC (default true) --synchronize-k8s-services Synchronize Kubernetes services to kvstore (default true) --unmanaged-pod-watcher-interval int Interval to check for unmanaged kube-dns pods (0 to disable) (default 15) --version Print version information ``` - Generate the autocompletion script for the specified shell - Inspect"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Kube-router is built around concept of watchers and controllers. Watchers use Kubernetes watch API to get notification on events related to create, update, delete of Kubernetes objects. Each watcher gets notification related to a particular API object. On receiving an event from API server, watcher broadcasts events. Controller registers to get event updates from the watchers and act up on the events. Kube-router consists of 3 core controllers and multiple watchers as depicted in below diagram. Each of the follows below structure: ```go func Run() { for { Sync() // control loop that runs for ever and perfom sync at periodic interval } } func OnUpdate() { Sync() // on receiving update of a watched API object (namespace, node, pod, network policy etc) } Sync() { //re-concile any state changes } Cleanup() { // cleanup any changes (to iptables, ipvs, network etc) done to the system } ```"
}
] |
{
"category": "Runtime",
"file_name": "architecture.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The repository can help you deploy CubeFS cluster quickly in containers orchestrated by kubernetes. Kubernetes 1.12+ and Helm 3 are required. cubefs-helm has already integrated CubeFS CSI plugin ``` $ git clone https://github.com/cubefs/cubefs-helm $ cd cubefs-helm ``` CubeFS CSI driver will use client-go to connect the Kubernetes API Server. First you need to copy the kubeconfig file to `cubefs-helm/cubefs/config/` directory, and rename to kubeconfig ``` $ cp ~/.kube/config cubefs/config/kubeconfig ``` Create a `cubefs.yaml` file, and put it in a user-defined path. Suppose this is where we put it. ``` $ cat ~/cubefs.yaml ``` ``` yaml path: data: /cubefs/data log: /cubefs/log datanode: disks: /data0:21474836480 /data1:21474836480 metanode: total_mem: \"26843545600\" provisioner: kubelet_path: /var/lib/kubelet ``` Note that `cubefs-helm/cubefs/values.yaml` shows all the config parameters of CubeFS. The parameters `path.data` and `path.log` are used to store server data and logs, respectively. You should tag each Kubernetes node with the appropriate labels accorindly for server node and CSI node of CubeFS. ``` kubectl label node <nodename> cubefs-master=enabled kubectl label node <nodename> cubefs-metanode=enabled kubectl label node <nodename> cubefs-datanode=enabled kubectl label node <nodename> cubefs-csi-node=enabled ``` ``` $ helm install cubefs ./cubefs -f ~/cubefs.yaml ```"
}
] |
{
"category": "Runtime",
"file_name": "HELM.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "gVisor has implemented the (Recent ACKnowledgement) TCP loss-detection algorithm in our network stack, which improves throughput in the presence of packet loss and reordering. TCP is a connection-oriented protocol that detects and recovers from loss by retransmitting packets. is one of the recent loss-detection methods implemented in Linux and BSD, which helps in identifying packet loss quickly and accurately in the presence of packet reordering and tail losses. The TCP congestion window indicates the number of unacknowledged packets that can be sent at any time. When packet loss is identified, the congestion window is reduced depending on the type of loss. The sender will recover from the loss after all the packets sent before reducing the congestion window are acknowledged. If the loss is identified falsely by the connection, then the connection enters loss recovery unnecessarily, resulting in sending fewer packets. Packet loss is identified mainly in two ways: Three duplicate acknowledgments, which will result in either or recovery. The congestion window is reduced depending on the type of congestion control algorithm. For example, in the algorithm it is reduced to half. RTO (Retransmission Timeout) which will result in Timeout recovery. The congestion window is reduced to one . Both of these cases result in reducing the congestion window, with RTO being more expensive. Most of the existing algorithms do not detect packet reordering, which get incorrectly identified as packet loss, resulting in an RTO. Furthermore, the loss of an ACK at the end of a sequence (known as \"tail loss\") will also trigger RTO and slow down future transmissions unnecessarily. RACK helps us to identify loss accurately in all these scenarios, and will avoid entering RTO. Implementation of RACK requires support for: Per-packet transmission timestamps: RACK detects loss depending on the transmission times of the packet and the timestamp at which ACK was received. SACK and ability to detect DSACK: Selective Acknowledgement and Duplicate SACK are used to adjust the timer window after which a packet can be marked as lost. Packet reordering commonly occurs when different packets take different paths through a network. The diagram below shows the transmission of four packets which get reordered in transmission, and the resulting TCP behavior with and without RACK. In the above example, the sender sees three duplicate"
},
{
"data": "Without RACK, this is identified falsely as packet loss, and the congestion window will be reduced after entering Fast/SACK recovery. To detect packet reordering, RACK uses a reorder window, bounded between /4, RTT]. The reorder timer is set to expire after RTT+reorder\\window_. A packet is marked as lost when the packets following it were acknowledged using SACK and the reorder timer expires. The reorder window is increased when a DSACK is received (which indicates that there is a higher degree of reordering). Tail loss occurs when the packets are lost at the end of data transmission. The diagram below shows an example of tail loss when the last three packets are lost, and how it is handled with and without RACK. For tail losses, RACK uses a Tail Loss Probe (TLP), which relies on a timer for the last packet sent. The TLP timer is set to 2 \\* RTT, after which a probe is sent. The probe packet will allow the connection one more chance to detect a loss by triggering ACK feedback to avoid entering RTO. In the above example, the loss is recovered without entering the RTO. TLP will also help in cases where the ACK was lost but all the packets were received by the receiver. The below diagram shows that the ACK received for the probe packet avoided the RTO. If there was some loss, then the ACK for the probe packet will have the SACK blocks, which will be used to detect and retransmit the lost packets. In gVisor, we have support for and SACK loss recovery methods. We recently, and it is the default when SACK is enabled. After enabling RACK, our internal benchmarks in the presence of reordering and tail losses and the data we took from internal users inside Google have shown ~50% reduction in the number of RTOs. While RACK has improved one aspect of TCP performance by reducing the timeouts in the presence of reordering and tail losses, in gVisor we plan to implement the undoing of congestion windows and (once there is an RFC available) to further improve TCP performance in less ideal network conditions. If you havent already, try gVisor. The instructions to get started are in our . You can also get involved with the gVisor community via our , , , and ."
}
] |
{
"category": "Runtime",
"file_name": "2021-08-31-gvisor-rack.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "| json type \\ dest type | bool | int | uint | float |string| | | | | |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|originnal json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|"
}
] |
{
"category": "Runtime",
"file_name": "fuzzy_mode_convert_table.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "In this quick start guide, we will set up an Antrea Multi-cluster ClusterSet with two clusters. One cluster will serve as the leader of the ClusterSet, and meanwhile also join as a member cluster; another cluster will be a member only. Antrea Multi-cluster supports two types of IP addresses as multi-cluster Service endpoints - exported Services' ClusterIPs or backend Pod IPs. We use the default `ClusterIP` endpoint type for multi-cluster Services in this guide. The diagram below shows the two clusters and the ClusterSet to be created (for simplicity, the diagram just shows two Nodes for each cluster). <img src=\"assets/sample-clusterset.svg\" width=\"800\" alt=\"Antrea Multi-cluster Example ClusterSet\"> We assume an Antrea version >= `v1.8.0` is used in this guide, and the Antrea version is set to an environment variable `TAG`. For example, the following command sets the Antrea version to `v1.8.0`. ```bash export TAG=v1.8.0 ``` To use the latest version of Antrea Multi-cluster from the Antrea main branch, you can change the YAML manifest path to: `https://github.com/antrea-io/antrea/tree/main/multicluster/build/yamls/` when applying or downloading an Antrea YAML manifest. Antrea must be deployed in both cluster A and cluster B, and the `Multicluster` feature of `antrea-agent` must be enabled to support multi-cluster Services. As we use `ClusterIP` endpoint type for multi-cluster Services, an Antrea Multi-cluster Gateway needs be set up in each member cluster to route Service traffic across clusters, and two clusters must have non-overlapping Service CIDRs. Set the following configuration parameters in `antrea-agent.conf` of the Antrea deployment manifest to enable the `Multicluster` feature: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | featureGates: Multicluster: true multicluster: enableGateway: true namespace: \"\" ``` At the moment, Multi-cluster Gateway only works with the Antrea `encap` traffic mode, and all member clusters in a ClusterSet must use the same tunnel type. `antctl` provides a couple of commands to facilitate deployment, configuration, and troubleshooting of Antrea Multi-cluster. This section describes the steps to deploy Antrea Multi-cluster and set up the example ClusterSet using `antctl`. A will describe the steps to achieve the same using YAML manifests. To execute any command in this section, `antctl` needs access to the target cluster's API server, and it needs a kubeconfig file for that. Please refer to the to learn more about the kubeconfig file configuration, and the `antctl` Multi-cluster commands. For installation of `antctl`, please refer to the . Run the following commands to deploy Multi-cluster Controller for the leader into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be created by the commands), and Multi-cluster Controller for the member into Namepsace `kube-system`. 
```bash kubectl create ns antrea-multicluster antctl mc deploy leadercluster -n antrea-multicluster --antrea-version $TAG antctl mc deploy membercluster -n kube-system --antrea-version $TAG ``` You can run the following command to verify that the leader and member `antrea-mc-controller` Pods are deployed and running: ```bash $ kubectl get all -A -l=\"component=antrea-mc-controller\" NAMESPACE NAME READY STATUS RESTARTS AGE antrea-multicluster pod/antrea-mc-controller-cd7bf8f68-kh4kz 1/1 Running 0 50s kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 48s NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE antrea-multicluster deployment.apps/antrea-mc-controller 1/1 1 1 50s kube-system deployment.apps/antrea-mc-controller 1/1 1 1 48s ``` Run the following commands to create a ClusterSet with cluster A to be the leader, and also join the ClusterSet as a member cluster:"
},
{
"data": "```bash antctl mc init --clusterset test-clusterset --clusterid test-cluster-leader -n antrea-multicluster --create-token -j join-config.yml antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml ``` The above `antctl mc init` command creates a default token (with the `--create-token` flag) for member clusters to join the ClusterSet and authenticate to the leader cluster API server, and the command saves the token Secret manifest and other ClusterSet join arguments to file `join-config.yml` (specified with the `-o` option), which can be provided to the `antctl mc join` command (with the `--config-file` option) to join the ClusterSet with these arguments. If you want to use a separate token for each member cluster for security considerations, you can run the following commands to create a token and use the token (together with the previously generated configuration file `join-config.yml`) to join the ClusterSet: ```bash antctl mc create membertoken test-cluster-leader-token -n antrea-multicluster -o test-cluster-leader-token.yml antctl mc join --clusterid test-cluster-leader -n kube-system --config-file join-config.yml --token-secret-file test-cluster-leader-token.yml ``` Last, you need to choose at least one Node in cluster A to serve as the Multi-cluster Gateway. The Node should have an IP that is reachable from the cluster B's Gateway Node, so a tunnel can be created between the two Gateways. For more information about Multi-cluster Gateway, please refer to the . Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run the following command to annotate the Node with: `multicluster.antrea.io/gateway=true` (so Antrea can know it is the Gateway Node from the annotation): ```bash kubectl annotate node node-a1 multicluster.antrea.io/gateway=true ``` Let us switch to cluster B. All the `kubectl` and `antctl` commands in the following steps should be run with the `kubeconfig` for cluster B. Run the following command to deploy the member Multi-cluster Controller into Namespace `kube-system`. ```bash antctl mc deploy membercluster -n kube-system --antrea-version $TAG ``` You can run the following command to verify the `antrea-mc-controller` Pod is deployed and running: ```bash $ kubectl get all -A -l=\"component=antrea-mc-controller\" NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s ``` Run the following command to make cluster B join the ClusterSet: ```bash antctl mc join --clusterid test-cluster-member -n kube-system --config-file join-config.yml ``` `join-config.yml` is generated when creating the ClusterSet in cluster A. Again, you can also run the `antctl mc create membertoken` in the leader cluster (cluster A) to create a separate token for cluster B, and join using that token, rather than the default token in `join-config.yml`. Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster B, run the following command to annotate the Node: ```bash kubectl annotate node node-b1 multicluster.antrea.io/gateway=true ``` So far, we set up an Antrea Multi-cluster ClusterSet with two clusters following the above sections of this guide. Next, you can start to consume the Antrea Multi-cluster features with the ClusterSet, including , , and , Please check the relevant Antrea Multi-cluster User Guide sections to learn more. 
If you want to add a new member cluster to your ClusterSet, you can follow the steps for cluster B to do so. For example, you can run the following command to join the ClusterSet in a member cluster with ID `test-cluster-member2`: ```bash antctl mc join --clusterid test-cluster-member2 -n kube-system --config-file join-config.yml"
},
{
"data": "``` Run the following commands to deploy Multi-cluster Controller for the leader into Namespace `antrea-multicluster` (Namespace `antrea-multicluster` will be created by the commands), and Multi-cluster Controller for the member into Namepsace `kube-system`. ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-global.yml kubectl create ns antrea-multicluster kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-leader-namespaced.yml kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` Antrea provides several template YAML manifests to set up a ClusterSet quicker. You can run the following commands that use the template manifests to create a ClusterSet named `test-clusterset` in the leader cluster and a default token for the member clusters (both cluster A and B in our case) to join the ClusterSet. ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-clusterset-template.yml kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/leader-access-token-template.yml kubectl get secret default-member-token -n antrea-multicluster -o yaml | grep -w -e '^apiVersion' -e '^data' -e '^metadata' -e '^ *name:' -e '^kind' -e ' ca.crt' -e ' token:' -e '^type' -e ' namespace' | sed -e 's/kubernetes.io\\/service-account-token/Opaque/g' -e 's/antrea-multicluster/kube-system/g' > default-member-token.yml ``` The last command saves the token Secret manifest to `default-member-token.yml`, which will be needed for member clusters to join the ClusterSet. Note, in this example, we use a shared token for all member clusters. If you want to use a separate token for each member cluster for security considerations, you can follow the instructions in the . Next, run the following commands to make cluster A join the ClusterSet also as a member: ```bash kubectl apply -f default-member-token.yml curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml sed -e 's/test-cluster-member/test-cluster-leader/g' -e 's/<LEADERAPISERVERIP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f - ``` Here, `172.10.0.11` is the `kube-apiserver` IP of cluster A. You should replace it with the `kube-apiserver` IP of your leader cluster. Assuming K8s Node `node-a1` is selected for the Multi-cluster Gateway, run the following command to annotate the Node: ```bash kubectl annotate node node-a1 multicluster.antrea.io/gateway=true ``` Let us switch to cluster B. All the `kubectl` commands in the following steps should be run with the `kubeconfig` for cluster B. Run the following command to deploy the member Multi-cluster Controller into Namespace `kube-system`. 
```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/$TAG/antrea-multicluster-member.yml ``` You can run the following command to verify the `antrea-mc-controller` Pod is deployed and running: ```bash $ kubectl get all -A -l=\"component=antrea-mc-controller\" NAMESPACE NAME READY STATUS RESTARTS AGE kube-system pod/antrea-mc-controller-85dbf58b75-pjj48 1/1 Running 0 40s NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE kube-system deployment.apps/antrea-mc-controller 1/1 1 1 40s ``` Run the following commands to make cluster B join the ClusterSet: ```bash kubectl apply -f default-member-token.yml curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml sed -e 's/<LEADERAPISERVERIP>/172.10.0.11/g' member-clusterset.yml | kubectl apply -f - ``` `default-member-token.yml` saves the default member token which was generated when initializing the ClusterSet in cluster A. Assuming K8s Node `node-b1` is chosen to be the Multi-cluster Gateway for cluster B, run the following command to annotate the Node: ```bash kubectl annotate node node-b1 multicluster.antrea.io/gateway=true ``` If you want to add a new member cluster to your ClusterSet, you can follow the steps for cluster B to do so. Remember to update the member cluster ID `spec.clusterID` in `member-clusterset-template.yml` to the new member cluster's ID in the step 2 of joining ClusterSet. For example, you can run the following commands to join the ClusterSet in a member cluster with ID `test-cluster-member2`: ```bash kubectl apply -f default-member-token.yml curl -L https://raw.githubusercontent.com/antrea-io/antrea/$TAG/multicluster/config/samples/clusterset_init/member-clusterset-template.yml > member-clusterset.yml sed -e 's/<LEADERAPISERVERIP>/172.10.0.11/g' -e 's/test-cluster-member/test-cluster-member2/g' member-clusterset.yml | kubectl apply -f - ```"
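Once both clusters have joined, it can help to confirm that the ClusterSet and Gateway resources were actually created. The following is a hedged verification sketch that is not part of the original guide; the `clustersets.multicluster.crd.antrea.io` and `gateways.multicluster.crd.antrea.io` resource names are assumptions based on the Antrea Multi-cluster API group and may differ between Antrea versions.

```bash
# On the leader (cluster A): the ClusterSet should list the joined member clusters.
kubectl get clustersets.multicluster.crd.antrea.io -n antrea-multicluster

# On each member cluster: check the ClusterSet join status and the Gateway created
# from the Node annotated with multicluster.antrea.io/gateway=true.
kubectl get clustersets.multicluster.crd.antrea.io -n kube-system
kubectl get gateways.multicluster.crd.antrea.io -n kube-system
```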
}
] |
{
"category": "Runtime",
"file_name": "quick-start.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List endpoint IPs (local and remote) and their corresponding security identities List endpoint IPs (local and remote) and their corresponding security identities. ``` cilium-dbg bpf ipcache list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the IPCache mappings for IP/CIDR <-> Identity"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_ipcache_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"Upgrading to Velero 1.8\" layout: docs Velero installed. If you're not yet running at least Velero v1.6, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Install the Velero v1.8 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.8.0 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: Since velero v1.8.0 only v1 CRD will be supported during installation, therefore, the v1.8.0 will only work on kubernetes version >= v1.16 Update the container image used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.8.0 \\ --namespace velero kubectl set image daemonset/restic \\ restic=velero/velero:v1.8.0 \\ --namespace velero ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.8.0 Git commit: <git SHA> Server: Version: v1.8.0 ``` We have deprecated the way to indicate the default backup storage location. Previously, that was indicated according to the backup storage location name set on the velero server-side via the flag `velero server --default-backup-storage-location`. Now we configure the default backup storage location on the velero client-side. Please refer to the on how to indicate which backup storage location is the default one. After upgrading, if there is a previously created backup storage location with the name that matches what was defined on the server side as the default, it will be automatically set as the `default`."
}
] |
{
"category": "Runtime",
"file_name": "upgrade-to-1.8.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Google Cloud Platform (GCP) link: https://github.com/vmware-tanzu/velero-plugin-for-gcp objectStorage: true volumesnapshotter: true supportedByVeleroTeam: true This repository contains an object store plugin and a volume snapshotter plugin to support running Velero on Google Cloud Platform."
}
] |
{
"category": "Runtime",
"file_name": "01-google-cloud-platform.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: MinIO This guide describes how to configure Alluxio with {:target=\"_blank\"} as the under storage system. MinIO is a high-performance, S3 compatible object store. It is built for large scale AI/ML, data lake and database workloads. It runs on-prem and on any cloud (public or private) and from the data center to the edge. MinIO provides an open source alternative to AWS S3. Alluxio natively provides the `s3://` scheme (recommended for better performance). You can use this scheme to connect Alluxio with a MinIO server. For more information about MinIO, please read its {:target=\"_blank\"} If you haven't already, please see before you get started. Launch a MinIO server instance using the steps mentioned {:target=\"_blank\"}. Once the MinIO server is launched, keep a note of the server endpoint, accessKey and secretKey. In preparation for using MinIO with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<MINIO_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<MINIO_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the bucket, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<MINIO_ENDPOINT>`</td> <td markdown=\"span\"></td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3ACCESSKEY_ID>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<S3SECRETKEY>`</td> <td markdown=\"span\">Used to sign programmatic requests made to AWS. See {:target=\"_blank\"}</td> </tr> </table> To use MinIO as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify an existing MinIO bucket and directory as the underfs address. Because Minio supports the `s3` protocol, it is possible to configure Alluxio as if it were pointing to an AWS S3 endpoint. Modify `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=s3://<MINIOBUCKET>/<MINIODIRECTORY> alluxio.underfs.s3.endpoint=http://<MINIO_ENDPOINT>/ alluxio.underfs.s3.disable.dns.buckets=true alluxio.underfs.s3.inherit.acl=false s3a.accessKeyId=<S3ACCESSKEY_ID> s3a.secretKey=<S3SECRETKEY> ``` For these parameters, replace `<MINIO_ENDPOINT>` with the hostname and port of your MinIO service, e.g., `http://localhost:9000/`. If the port value is left unset, it defaults to port 80 for `http` and 443 for `https`. Once you have configured Alluxio to MinIO, try to see that everything works. There are a few variations of errors which might appear if Alluxio is incorrectly"
},
{
"data": "See below for a few common cases and their resolutions. If a message like this is returned, then you'll need to double check the name of the bucket in the `alluxio-site.properties` file and make sure that it exists in MinIO. The property for the bucket name is controlled by ``` Exception in thread \"main\" alluxio.exception.DirectoryNotEmptyException: Failed to delete /defaulttestsfiles (com.amazonaws.services.s3.model.AmazonS3Exception: The specified bucket does not exist (Service: Amazon S3; Status Code: 404; Error Code: NoSuchBucke t; Request ID: 158681CA87E59BA0; S3 Extended Request ID: 2d47b54e-7dd4-4e32-bc6e-48ffb8e2265c), S3 Extended Request ID: 2d47b54e-7dd4-4e32-bc6e-48ffb8e2265c) from the under file system at alluxio.client.file.BaseFileSystem.delete(BaseFileSystem.java:234) at alluxio.cli.TestRunner.runTests(TestRunner.java:118) at alluxio.cli.TestRunner.main(TestRunner.java:100) ``` If an exception like this is encountered then it may be that the Alluxio property is set to `false`. Setting this value to `true` for MinIO will allow Alluxio to resolve the proper bucket location. ``` Exception in thread \"main\" alluxio.exception.DirectoryNotEmptyException: Failed to delete /defaulttestsfiles (com.amazonaws.SdkClientException: Unable to execute HTTP request: {{BUCKET_NAME}}) from the under file system at alluxio.client.file.BaseFileSystem.delete(BaseFileSystem.java:234) at alluxio.cli.TestRunner.runTests(TestRunner.java:118) at alluxio.cli.TestRunner.main(TestRunner.java:100) ``` If an exception occurs where the client returns an error with information about a refused connection then Alluxio most likely cannot contact the MinIO server. Make sure that the value of is correct and that the node running the Alluxio master can reach the MinIO endpoint over the network. ``` Exception in thread \"main\" alluxio.exception.DirectoryNotEmptyException: Failed to delete /defaulttestsfiles (com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to localhost:9001 [localhost/127.0.0.1] failed: Connection refused (Connect ion refused)) from the under file system at alluxio.client.file.BaseFileSystem.delete(BaseFileSystem.java:234) at alluxio.cli.TestRunner.runTests(TestRunner.java:118) at alluxio.cli.TestRunner.main(TestRunner.java:100) ``` If an exception including a message about forbidden access is encountered, it's possible that the Alluxio master has been configured with incorrect credentials. Check the and . If this error is appearing, double check that both properties are set correctly. 
``` ERROR CliUtils - Exception running test: alluxio.examples.BasicNonByteBufferOperations@388526fb alluxio.exception.AlluxioException: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 1586BF770688AB20; S3 Extended Request ID: null), S3 Extended Request ID: null at alluxio.exception.status.AlluxioStatusException.toAlluxioException(AlluxioStatusException.java:111) at alluxio.client.file.BaseFileSystem.createFile(BaseFileSystem.java:200) at alluxio.examples.BasicNonByteBufferOperations.createFile(BasicNonByteBufferOperations.java:102) at alluxio.examples.BasicNonByteBufferOperations.write(BasicNonByteBufferOperations.java:85) at alluxio.examples.BasicNonByteBufferOperations.call(BasicNonByteBufferOperations.java:80) at alluxio.examples.BasicNonByteBufferOperations.call(BasicNonByteBufferOperations.java:49) at alluxio.cli.CliUtils.runExample(CliUtils.java:51) at alluxio.cli.TestRunner.runTest(TestRunner.java:164) at alluxio.cli.TestRunner.runTests(TestRunner.java:134) at alluxio.cli.TestRunner.main(TestRunner.java:100) Caused by: alluxio.exception.status.UnknownException: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 1586BF770688AB20; S3 Extended Request ID: null), S3 Extended Request ID: null ```"
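If the errors above persist, it can be worth confirming the endpoint, credentials, and bucket directly against MinIO before involving Alluxio at all. The following sketch uses the MinIO client (`mc`) with the same placeholders as the table above; it is an illustration and not part of the Alluxio documentation.

```bash
# Register the MinIO endpoint under an alias, using the same credentials as alluxio-site.properties.
mc alias set myminio http://<MINIO_ENDPOINT>/ <S3ACCESSKEY_ID> <S3SECRETKEY>

# Create the bucket first if it does not exist yet, then confirm the target directory is reachable.
mc mb myminio/<MINIO_BUCKET>
mc ls myminio/<MINIO_BUCKET>/<MINIO_DIRECTORY>
```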
}
] |
{
"category": "Runtime",
"file_name": "Minio.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(network-macvlan)= <!-- Include start macvlan intro --> Macvlan is a virtual {abbr}`LAN (Local Area Network)` that you can use if you want to assign several IP addresses to the same network interface, basically splitting up the network interface into several sub-interfaces with their own IP addresses. You can then assign IP addresses based on the randomly generated MAC addresses. <!-- Include end macvlan intro --> The `macvlan` network type allows to specify presets to use when connecting instances to a parent interface. In this case, the instance NICs can simply set the `network` option to the network they connect to without knowing any of the underlying configuration details. ```{note} If you are using a `macvlan` network, communication between the Incus host and the instances is not possible. Both the host and the instances can talk to the gateway, but they cannot communicate directly. ``` (network-macvlan-options)= The following configuration key namespaces are currently supported for the `macvlan` network type: `user` (free-form key/value for user metadata) ```{note} {{noteipaddresses_CIDR}} ``` The following configuration options are available for the `macvlan` network type: Key | Type | Condition | Default | Description :-- | :-- | :-- | :-- | :-- `gvrp` | bool | - | `false` | Register VLAN using GARP VLAN Registration Protocol `mtu` | integer | - | - | The MTU of the new interface `parent` | string | - | - | Parent interface to create `macvlan` NICs on `vlan` | integer | - | - | The VLAN ID to attach to `user.*` | string | - | - | User-provided free-form key/value pairs"
}
] |
{
"category": "Runtime",
"file_name": "network_macvlan.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Hydropper is a lightweight test framework based on pytest. It encapsulates virtualization-related test atoms and is used for stratovirt black-box tests.Hydropper has provided some testcases for lightweight virtualization and standard virtualization to help Developers find and locate Stratovirt problems. Ensure that python3 has been installed on your openEuler system. The requirements.txt file contains the Python3 dependency package. pytest>5.0.0 aexpect>1.5.0 retrying You can install these packages by running the following commands: ```sh $ pip3 install -r requirements.txt ``` Network dependency package: ```sh $ yum install nmap $ yum install iperf3 $ yum install bridge-utils ``` Network configurationtemplate ```sh brctl addbr strato_br0 ip link set strato_br0 up ip address add 1.1.1.1 dev strato_br0 ``` For details about how to build a test image, see docs/IMAGE_BUILD.md. Set parameters and corresponding paths in the config/config.ini. Generally, the kernel and rootfs must be configured for test cases. ```ini [env.params] ... VM_USERNAME = <usrname> VM_PASSWORD = <passwd> ... [stratovirt.params] ... STRATOVIRT_VMLINUX = /path/to/kernel STRATOVIRT_ROOTFS = /path/to/rootfs ... ``` Configure IPPREFIX and IP3RD in the \"config.ini\" file, which indicate the first 24 bits of the VM IPv4 address, The last 8 bits are automatically configured by the hydropper. Note that the VM and the host must be in the same network segment. ```ini [network.params] IP_PREFIX = 1.1 IP_3RD = 1 ``` You can run the following commands in the hydroper directory to execute cases: ```sh $ pytest $ pytest -k microvm $ pytest testcases/microvm/functional/testmicrovmcmdline.py $ pytest testcases/microvm/functional/testmicrovmcmdline.py::testmicrovmwithout_daemonize ``` Add customized cases to the microvm directory under testcases.You can add a python file or add a function to an existing python file.The file name and function name must be in the format of test_*. ```python testmicrovmxxx.py def testmicrovmxxx() ``` We have preset some virtual machine objects. You can test the virtual machine by generating their instances ```python def testmicrovmxxx(microvm): test_vm = microvm test_vm.launch() ``` In addition, Fixture is useful to write testcases.You can use Fixture in the following ways: ```python @pytest.mark.system def testmicrovmxxx(microvm): test_vm = microvm test_vm.launch() ``` Now you can use the pytest -m system command to run all the \"system\" cases. You can use the basic_config() function to configure VM parameters ```python def testmicrovmxxx(microvm): test_vm = microvm testvm.basicconfig(vcpucount=4, memsize='4G') test_vm.launch() ``` pytest default log path: /var/log/pytest.log stratovirt default log path: /var/log/stratovirt"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List BPF template objects. ``` cilium-dbg bpf sha list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage compiled BPF template objects"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_sha_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(server-configure)= See {ref}`server` for all configuration options that are available for the Incus server. If the Incus server is part of a cluster, some of the options apply to the cluster, while others apply only to the local server, thus the cluster member. In the {ref}`server` option tables, options that apply to the cluster are marked with a `global` scope, while options that apply to the local server are marked with a `local` scope. You can configure a server option with the following command: incus config set <key> <value> For example, to allow remote access to the Incus server on port 8443, enter the following command: incus config set core.https_address :8443 In a cluster setup, to configure a server option for a cluster member only, add the `--target` flag. For example, to configure where to store image tarballs on a specific cluster member, enter a command similar to the following: incus config set storage.images_volume my-pool/my-volume --target member02 To display the current server configuration, enter the following command: incus config show In a cluster setup, to show the local configuration for a specific cluster member, add the `--target` flag. To edit the full server configuration as a YAML file, enter the following command: incus config edit In a cluster setup, to edit the local configuration for a specific cluster member, add the `--target` flag."
}
] |
{
"category": "Runtime",
"file_name": "server_configure.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Image tagging policy\" layout: docs This document describes Velero's image tagging policy. `velero/velero:<SemVer>` Velero follows the standard for releases. Each tag in the `github.com/vmware-tanzu/velero` repository has a matching image, `velero/velero:v1.0.0`. `velero/velero:latest` The `latest` tag follows the most recently released version of Velero. `velero/velero:main` The `main` tag follows the latest commit to land on the `main` branch."
}
] |
{
"category": "Runtime",
"file_name": "image-tagging.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "When flannel starts up, it ensures that the host has a subnet lease. If there is an existing lease then it's used, otherwise one is assigned. Leases can be viewed by checking the contents of etcd. e.g. ``` $ export ETCDCTL_API=3 $ etcdctl get /coreos.com/network/subnets --prefix --keys-only /coreos.com/network/subnets/10.5.52.0-24 $ etcdctl get /coreos.com/network/subnets/10.5.52.0-24 /coreos.com/network/subnets/10.5.52.0-24 {\"PublicIP\":\"192.168.64.3\",\"PublicIPv6\":null,\"BackendType\":\"vxlan\",\"BackendData\":{\"VNI\":1,\"VtepMAC\":\"c6:d2:32:6f:8f:44\"}} $ etcdctl lease list found 1 leases 694d854330fc5110 $ etcdctl lease timetolive --keys 694d854330fc5110 lease 694d854330fc5110 granted with TTL(86400s), remaining(74737s), attached keys([/coreos.com/network/subnets/10.5.52.0-24]) ``` This shows that there is a single lease (`10.5.52.0/24`) which will expire in 74737 seconds. flannel will attempt to renew the lease before it expires, but if flannel is not running for an extended period then the lease will be lost. The `\"PublicIP\"` value is how flannel knows to reuse this lease when restarted. This means that if the public IP changes, then the flannel subnet will change too. In case a host is unable to renew its lease before the lease expires (e.g. a host takes a long time to restart and the timing lines up with when the lease would normally be renewed), flannel will then attempt to renew the last lease that it has saved in its subnet config file (which, unless specified, is located at `/var/run/flannel/subnet.env`) ```bash cat /var/run/flannel/subnet.env FLANNEL_NETWORK=10.5.0.0/16 FLANNEL_SUBNET=10.5.52.1/24 FLANNEL_MTU=1450 FLANNEL_IPMASQ=false ``` In this case, if flannel fails to retrieve an existing lease from etcd, it will attempt to renew lease specified in `FLANNEL_SUBNET` (`10.5.52.1/24`). It will only renew this lease if the subnet specified is valid for the current etcd network configuration otherwise it will allocate a new lease. flannel also supports reservations for the subnet assigned to a host. Reservations allow a fixed subnet to be used for a given host. The only difference between a lease and reservation is the etcd TTL value. Simply removing the TTL from a lease will convert it to a reservation. e.g. ``` $ export ETCDCTL_API=3 $ etcdctl put /coreos.com/network/subnets/10.5.1.0-24 $(etcdctl get /coreos.com/network/subnets/10.5.1.0-24) ```"
}
] |
{
"category": "Runtime",
"file_name": "reservations.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(server-expose)= By default, Incus can be used only by local users through a Unix socket and is not accessible over the network. To expose Incus to the network, you must configure it to listen to addresses other than the local Unix socket. To do so, set the {config:option}`server-core:core.https_address` server configuration option. For example, to allow access to the Incus server on port `8443`, enter the following command: incus config set core.https_address :8443 To allow access through a specific IP address, use `ip addr` to find an available address and then set it. For example: ```{terminal} :input: ip addr 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo validlft forever preferredlft forever inet6 ::1/128 scope host validlft forever preferredlft forever 2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 link/ether 00:16:3e:e3:f3:3f brd ff:ff:ff:ff:ff:ff inet 10.68.216.12/24 metric 100 brd 10.68.216.255 scope global dynamic enp5s0 validlft 3028sec preferredlft 3028sec inet6 fd42:e819:7a51:5a7b:216:3eff:fee3:f33f/64 scope global mngtmpaddr noprefixroute validlft forever preferredlft forever inet6 fe80::216:3eff:fee3:f33f/64 scope link validlft forever preferredlft forever 3: incusbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 00:16:3e:8d:f3:72 brd ff:ff:ff:ff:ff:ff inet 10.64.82.1/24 scope global incusbr0 validlft forever preferredlft forever inet6 fd42:f4ab:4399:e6eb::1/64 scope global validlft forever preferredlft forever :input: incus config set core.https_address 10.68.216.12 ``` All remote clients can then connect to Incus and access any image that is marked for public use. (server-authenticate)= To be able to access the remote API, clients must authenticate with the Incus server. There are several authentication methods; see {ref}`authentication` for detailed information. The recommended method is to add the client's TLS certificate to the server's trust store through a trust token. To authenticate a client using a trust token, complete the following steps: On the server, enter the following command: incus config trust add <client_name> The command generates and prints a token that can be used to add the client certificate. On the client, add the server with the following command: incus remote add <remote_name> <token> % Include content from ```{include} ../authentication.md :start-after: <!-- Include start NAT authentication --> :end-before: <!-- Include end NAT authentication --> ``` See {ref}`authentication` for detailed information and other authentication methods."
}
] |
{
"category": "Runtime",
"file_name": "server_expose.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<! Describe the change, including rationale and design decisions --> <! If you are fixing an existing issue, please include \"Fixes #nnn\" in your commit message and your description; but you should still explain what the change does. --> <! Pick one below and delete the rest: --> Feature Pull Request Bugfix Pull Request Docs Pull Request <! Name of the servicetype --> <! Paste verbatim output from \"openio --version\" between quotes below --> ``` ``` <! Include additional information to help people understand the change here. For bugs that don't have a linked bug report, a step-by-step reproduction of the problem is helpful. --> <! Paste verbatim command output below, e.g. before and after your change --> ``` ```"
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "OpenIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document describes how to run the Antrea agent simulator. The simulator is useful for Antrea scalability testing, without having to create a very large cluster. ```bash make build-scale-simulator ``` This demo uses 1 simulator, this command will create a yaml file build/yamls/antrea-scale.yml ```bash make manifest-scale ``` The above yaml will create one simulated Node/Pod, to change the number of instances, you can modify `spec.replicas` of the StatefulSet `antrea-agent-simulator` in the yaml, or scale it via `kubectl scale statefulset/antrea-agent-simulator -n kube-system --replicas=<COUNT>` after deploying it. To prevent Pods from being scheduled on the simulated Node(s), you can use the following taint. ```bash kubectl taint -l 'antrea/instance=simulator' node mocknode=true:NoExecute ``` ```bash kubectl create secret generic kubeconfig --type=Opaque --namespace=kube-system --from-file=admin.conf=<path to kubeconfig file> ``` ```bash kubectl apply -f build/yamls/antrea-scale.yml ``` check the simulated Node: ```bash kubectl get nodes -l 'antrea/instance=simulator' ```"
}
] |
{
"category": "Runtime",
"file_name": "antrea-agent-simulator.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "``` The operator manages multiple Rook storage clusters The operator manages all CRDs for the Rook clusters One instance of the operator is active Multiple instances of the operator can be on standby in an HA configuration The cluster CRD defines desired settings for a storage cluster All resources for a Rook cluster are created in the same Kubernetes namespace A cluster has an odd number of mons that form quorum A cluster has an osd per storage device A cluster has zero or more pools A cluster has zero or more block devices A cluster has zero or more object stores A cluster has zero or more shared file services The pool CRD defines desired settings for a pool A pool is created with either replication or erasure coding Replication can be 1 or more Erasure coding requires k >= 2 and m >= 1, where k is data chunks and m is coding chunks Erasure coding specifies a plugin (default=jerasure) Erasure coding specifies an encoding algorithm (default=reedsolvan) A pool can set its failure domain using a CRUSH rule (default=host) The object store CRD defines desired settings for an object store An object store has a set of pools dedicated to its instance Object store metadata pools can specify the same set of pool settings The object store data pool can specify all pool settings An object store has a unique set of authorized users An object store has one or more stateless RGW pods for load balancing An object store can specify an SSL certificate for secure connections An object store can specify a port for RGW services (default=53390) An object store represents a Ceph zone An object store can be configured for replication from an object store in the same cluster or another cluster The file system CRD defines desired settings for a file system A file system has one MDS service if not partitioned A file system has multiple MDS services if partitioned A file system has one metadata pool A file system has one data pool ```"
}
] |
{
"category": "Runtime",
"file_name": "data-model.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document defines a high level roadmap for Rook development and upcoming releases. The features and themes included in each milestone are optimistic in the sense that some do not have clear owners yet. Community and contributor involvement is vital for successfully implementing all desired items for each release. We hope that the items listed below will inspire further engagement from the community to keep Rook progressing and shipping exciting and valuable features. Any dates listed below and the specific issues that will ship in a given milestone are subject to change but should give a general idea of what we are planning. See the for the most up-to-date issues and their status. The following high level features are targeted for Rook v1.14 (April 2024). For more detailed project tracking see the . Support for Ceph Squid (v19) Allow setting the application name on a CephBlockPool Pool sharing for multiple object stores DNS subdomain style access to RGW buckets Replace a single OSD when a metadataDevice is configured with multiple OSDs Create a default service account for all Ceph daemons Enable the rook orchestrator mgr module by default for improved dashboard integration Option to run all components on the host network Multus-enabled clusters to begin \"holder\" pod deprecation Separate CSI image repository and tag for all images in the helm chart Ceph-CSI Add build support for Go 1.22 Add topology based provisioning for external clusters Features are planned in the 1.14 time frame for the . Collect details to help troubleshoot the csi driver Command to flatten an RBD image"
}
] |
{
"category": "Runtime",
"file_name": "ROADMAP.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Validate a policy ``` cilium-dbg policy validate <path> [flags] ``` ``` -h, --help help for validate --print Print policy after validation -v, --verbose Enable verbose output (default true) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage security policies"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_policy_validate.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "English | Spiderpool is an Underlay networking solution for Kubernetes, but the egress IP address is not fixed in a Kubernetes cluster when a Pod accesses an external service. In an Overlay network, the egress IP address is the address of the node on which the Pod resides, whereas in an Underlay network, the Pod communicates directly with the outside world using its own IP address. Therefore, when a Pod undergoes new scheduling, the IP address of the Pod when communicating with the outside world will change regardless of the network mode. This instability creates IP address management challenges for system maintainers. Especially when the cluster scale increases and network troubleshooting is required, it is difficult to control the egress traffic based on the Pod's original egress IP outside the cluster. Spiderpool can be used with the component to solve the problem of Pod egress traffic management in Underlay network. EgressGateway is an open source Egress gateway designed to solve the problem of exporting Egress IP addresses in different CNI network modes (Spiderpool, Calico, Flannel, Weave). By flexibly configuring and managing egress policies, Egress IP is set for tenant-level or cluster-level workloads, so that when a Pod accesses an external network, the system will uniformly use this set Egress IP as the egress address, thus providing a stable egress traffic management solution. However, all EgressGateway rules are effective on the host's network namespace. To make the EgressGateway policy effective, the traffic of Pods accessing the outside of the cluster has to go through the host's network namespace. Therefore, you can configure the subnet routes forwarded from the host via the `spec.hijackCIDR` field of `spidercoordinators` in Spiderpool, and then configure the subnet routes forwarded from the host via to forward matching traffic from the veth pair to the host. This enables egress traffic management on an underlay network by allowing access to external traffic to be matched by EgressGateway rules. Some of the features and benefits of Spiderpool with EgressGateway are as follows: Solve IPv4 IPv6 dual-stack connectivity, ensuring seamless communication across different protocol stacks. Solve the high availability of Egress Nodes, ensuring network connectivity remains unaffected by single-point failures. Support finer-grained policy control, allowing flexible filtering of Pods' Egress policies, including Destination CIDR. Support application-level control, allowing EgressGateway to filter Egress applications (Pods) for precise management of specific application outbound traffic. Support multiple egress gateways instance, capable of handling communication between multiple network partitions or clusters. Support namespaced egress IP. Support automatic detection of cluster traffic for egress gateways policies. Support namespace default egress instances. Can be used in low kernel version, making EgressGateway suitable for various Kubernetes deployment environments. A ready-to-use Kubernetes. has been already installed. Refer to to install Spiderpool and create SpiderMultusConfig CR and IPPool CR. After installing Spiderpool, Add the service addresses outside the cluster to the 'hijackCIDR' field in the 'default' object of spiderpool.spidercoordinators. This ensures that when Pods access these external services, the traffic is routed through the host where the Pod is located, allowing the EgressGateway rules to match. 
```bash ~# kubectl patch spidercoordinators default --type='merge' -p '{\"spec\": {\"hijackCIDR\": [\"10.6.168.63/32\"]}}' ``` Install EgressGateway via helm: ```shell helm repo add egressgateway https://spidernet-io.github.io/egressgateway/ helm repo update egressgateway helm install egressgateway egressgateway/egressgateway -n kube-system --set feature.tunnelIpv4Subnet=\"192.200.0.1/16\" --set"
},
{
"data": "--wait --debug ``` If IPv6 is required, enable it with the option `-set feature.enableIPv6=true` and set `feature.tunnelIpv6Subnet`, it is worth noting that when configuring IPv4 or IPv6 segments via `feature.tunnelIpv4Subnet` and `feature. tunnelIpv6Subnet`, it is worth noting that when configuring IPv4 or IPv6 segments via `feature.tunnelIpv4Subnet` and `feature.tunnelIpv6Subnet`, you need to make sure that the segments don't conflict with any other addresses in the cluster. `feature.enableGatewayReplyRoute` is true to enable return routing rules on gateway nodes, which must be enabled when pairing with Spiderpool to support underlay CNI. If you are a mainland user who is not available to access ghcr.io, you can specify the parameter `-set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for EgressGateway. Verify your EgressGateway installation: ```bash ~# kubectl get pod -n kube-system | grep egressgateway egressgateway-agent-4s8lt 1/1 Running 0 29m egressgateway-agent-thzth 1/1 Running 0 29m egressgateway-controller-77698899df-tln7j 1/1 Running 0 29m ``` For more installation details, refer to . An EgressGateway defines a set of nodes that act as an egress gateway for the cluster, through which egress traffic within the cluster will be forwarded out of the cluster. Therefore, an EgressGateway instance needs to be pre-defined. The following example Yaml creates an EgressGateway instance. `spec.ippools.ipv4`: defines a set of egress IP addresses, which need to be adjusted according to the actual situation of the specific environment. The CIDR of `spec.ippools.ipv4` should be the same as the subnet of the egress NIC on the gateway node (usually the NIC of the default route), or else the egress access may not work. `spec.nodeSelector`: the node affinity method provided by EgressGateway, when `selector.matchLabels` matches with a node, the node will be used as the egress gateway for the cluster, when `selector.matchLabels` does not match with a node, the When `selector.matchLabels` does not match with a node, the EgressGateway skips that node and it will not be used as an egress gateway for the cluster, which supports selecting multiple nodes for high availability. ```bash cat <<EOF | kubectl apply -f - apiVersion: egressgateway.spidernet.io/v1beta1 kind: EgressGateway metadata: name: default spec: ippools: ipv4: \"10.6.168.201-10.6.168.205\" nodeSelector: selector: matchLabels: egressgateway: \"true\" EOF ``` Label the node with the Label specified in `nodeSelector.selector.matchLabels` above so that the node can be selected by the EgressGateway to act as an egress gateway. ```bash ~# kubectl get node NAME STATUS ROLES AGE VERSION controller-node-1 Ready control-plane 5d17h v1.26.7 worker-node-1 Ready <none> 5d17h v1.26.7 ~# kubectl label node worker-node-1 egressgateway=\"true\" ``` When the creation is complete, check the EgressGateway status. `spec.ippools.ipv4DefaultEIP` represents the default VIP of the EgressGateway for the group, which is an IP address that will be randomly selected from `spec.ippools.ipv4`, and its function is: when creating an EgressPolicy object for an application, if no VIP address is specified, the is used if no VIP address is specified when creating an EgressPolicy object for the application. `status.nodeList` represents the status of the nodes identified as matching the `spec.nodeSelector` and the corresponding EgressTunnel object for that node. ```shell ~# kubectl get EgressGateway default -o yaml ... 
spec: ippools: ipv4DefaultEIP: 10.6.168.201 ... status: nodeList: name: worker-node-1 status: Ready ``` Create an application that will be used to test Pod access to destinations outside the cluster, and label it to be associated with the EgressPolicy, as shown in the following example Yaml."
},
{
"data": "used to specify the subnet used by the application, the Multus CR corresponding to this value needs to be created in advance by referring to the document to create it in advance. `ipam.spidernet.io/ippool`: Specify which SpiderIPPool resources are used by the Pod, the corresponding SpiderIPPool CR should be created in advance by referring to the . ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: labels: app: test-app name: test-app namespace: default spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: labels: app: test-app annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"v4-pool\"], } v1.multus-cni.io/default-network: kube-system/macvlan-conf spec: containers: image: nginx imagePullPolicy: IfNotPresent name: test-app ports: name: http containerPort: 80 protocol: TCP EOF ``` The EgressPolicy instance is used to define which Pods' egress traffic is to be forwarded through the EgressGateway node, as well as other configuration details. The following is an example of creating an EgressPolicy CR object for an application. `spec.egressGatewayName` is used to specify which set of EgressGateways to use. `spec.appliedTo.podSelector` is used to specify on which Pods within the cluster this policy takes effect. `namespace` is used to specify the tenant where the EgressPolicy object resides. Because EgressPolicy is tenant-level, it must be created under the same namespace as the associated application, so that when the matching Pod accesses any address outside the cluster, it can be forwarded by the EgressGateway Node. ```bash cat <<EOF | kubectl apply -f - apiVersion: egressgateway.spidernet.io/v1beta1 kind: EgressPolicy metadata: name: test namespace: default spec: egressGatewayName: default appliedTo: podSelector: matchLabels: app: \"test-app\" EOF ``` When creation is complete, check the status of the EgressPolicy. `status.eip` shows the egress IP address used by the group when applying out of the cluster. `status.node` shows which EgressGateway node is responsible for forwarding traffic out of the EgressPolicy. ```bash ~# kubectl get EgressPolicy -A NAMESPACE NAME GATEWAY IPV4 IPV6 EGRESSNODE default test default 10.6.168.201 worker-node-1 ~# kubectl get EgressPolicy test -o yaml apiVersion: egressgateway.spidernet.io/v1beta1 kind: EgressPolicy metadata: name: test namespace: default spec: appliedTo: podSelector: matchLabels: app: test-app egressIP: allocatorPolicy: default useNodeIP: false status: eip: ipv4: 10.6.168.201 node: worker-node-1 ``` After creating the EgressPolicy object, an EgressEndpointSlices object containing a collection of IP addresses of all the applications will be generated according to the application selected by the EgressPolicy, so that you can check whether the IP addresses in the EgressEndpointSlices object are normal or not when the application cannot be accessed by export. ```bash ~# kubectl get egressendpointslices -A NAMESPACE NAME AGE default test-4vbqf 41s ~# kubectl get egressendpointslices test-kvlp6 -o yaml apiVersion: egressgateway.spidernet.io/v1beta1 endpoints: ipv4: 10.6.168.208 node: worker-node-1 ns: default pod: test-app-f44846544-8dnzp kind: EgressEndpointSlice metadata: name: test-4vbqf namespace: default ``` Deploy the application `nettools` outside the cluster to emulate a service outside the cluster, and `nettools` will return the source IP address of the requester in the http reply. 
```bash ~# docker run -d --net=host ghcr.io/spidernet-io/egressgateway-nettools:latest /usr/bin/nettools-server -protocol web -webPort 8080 ``` To verify the effect of egress traffic in a test app within the cluster: test-app, you can see that the source IP returned by `nettools` complies with `EgressPolicy.status.eip` when accessing an external service in the Pod corresponding to this app. ```bash ~# kubectl get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-f44846544-8dnzp 1/1 Running 0 4m27s 10.6.168.208 worker-node-1 <none> <none> ~# kubectl exec -it test-app-f44846544-8dnzp bash ~# curl 10.6.168.63:8080 # IP address of the node outside the cluster + webPort Remote IP: 10.6.168.201 ```"
}
] |
{
"category": "Runtime",
"file_name": "egress.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(devices-tpm)= ```{note} The `tpm` device type is supported for both containers and VMs. It supports hotplugging only for containers, not for VMs. ``` TPM devices enable access to a {abbr}`TPM (Trusted Platform Module)` emulator. TPM devices can be used to validate the boot process and ensure that no steps in the boot chain have been tampered with, and they can securely generate and store encryption keys. Incus uses a software TPM that supports TPM 2.0. For containers, the main use case is sealing certificates, which means that the keys are stored outside of the container, making it virtually impossible for attackers to retrieve them. For virtual machines, TPM can be used both for sealing certificates and for validating the boot process, which allows using full disk encryption compatible with, for example, Windows BitLocker. `tpm` devices have the following device options: Key | Type | Default | Required | Description :-- | :-- | :-- | :-- | :-- `path` | string | - | for containers | Only for containers: path inside the instance (for example, `/dev/tpm0`) `pathrm` | string | - | for containers | Only for containers: resource manager path inside the instance (for example, `/dev/tpmrm0`)"
}
] |
{
"category": "Runtime",
"file_name": "devices_tpm.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "sidebar_position: 5 sidebar_label: \"Admission Controller\" `admission-controller` is a webhook that can automatically verify which pod uses the HwameiStor volume and help to modify the schedulerName to hwameistor-scheduler. For the specific principle, refer to . `admission-controller` gets all the PVCs used by a pod, and checks the of each PVC in turn. If the suffix of the provisioner name is `*.hwameistor.io`, it is believed that the pod is using the volume provided by HwameiStor. Only Pod resources will be verified, and the verification process occurs at the time of creation. :::info In order to ensure that the pods of HwameiStor can be started smoothly, the pods in the namespace where HwameiStor is deployed will not be verified. :::"
}
] |
{
"category": "Runtime",
"file_name": "admission_controller.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"Upgrading to Velero 1.10\" layout: docs Velero installed. If you're not yet running at least Velero v1.6, see the following: Before upgrading, check the to make sure your version of Kubernetes is supported by the new version of Velero. Caution: From Velero v1.10, except for using restic to do file-system level backup and restore, kopia is also been integrated, so there would be a little bit of difference when upgrading to v1.10 from a version lower than v1.10.0. Install the Velero v1.10 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.10.0 Git commit: <git SHA> ``` Update the Velero custom resource definitions (CRDs) to include schema changes across all CRDs that are at the core of the new features in this release: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: Since velero v1.10.0 only v1 CRD will be supported during installation, therefore, the v1.10.0 will only work on kubernetes version >= v1.16 Update the container image and objects fields used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl get deploy -n velero -ojson \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.10.0\\\"#g\" \\ | sed \"s#\\\"server\\\",#\\\"server\\\",\\\"--uploader-type=$uploader_type\\\",#g\" \\ | sed \"s#default-volumes-to-restic#default-volumes-to-fs-backup#g\" \\ | sed \"s#default-restic-prune-frequency#default-repo-maintain-frequency#g\" \\ | sed \"s#restic-timeout#fs-backup-timeout#g\" \\ | kubectl apply -f - echo $(kubectl get ds -n velero restic -ojson) \\ | sed \"s#\\\"image\\\"\\: \\\"velero\\/velero\\:v[0-9].[0-9].[0-9]\\\"#\\\"image\\\"\\: \\\"velero\\/velero\\:v1.10.0\\\"#g\" \\ | sed \"s#\\\"name\\\"\\: \\\"restic\\\"#\\\"name\\\"\\: \\\"node-agent\\\"#g\" \\ | sed \"s#\\[ \\\"restic\\\",#\\[ \\\"node-agent\\\",#g\" \\ | kubectl apply -f - kubectl delete ds -n velero restic --force --grace-period 0 ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.10.0 Git commit: <git SHA> Server: Version: v1.10.0 ``` If upgraded from v1.9.x, there still remains some resources left over in the cluster and never used in v1.10.x, which could be deleted through kubectl and it is based on your desire: resticrepository CRD and related CRs velero-restic-credentials secret in velero install namespace"
}
] |
{
"category": "Runtime",
"file_name": "upgrade-to-1.10.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "* * * * * * * * * * * * * * * * * * * * This section describes what features the DANM networking suite adds to a vanilla Kubernetes environment, and how can users utilize them. As DANM becomes more and more complex, we offer some level of control over the internal behaviour of how network provisioning is done. Unless stated otherwise, DANM behaviour can be configured purely through its CNI configuration file. The following configuration options are currently supported: cniDir: Users can define where should DANM search for the CNI config files for static delegates. Default value is /etc/cni/net.d namingScheme: if it is set to legacy, container network interface names are set exactly to the value of the respective network's Spec.Options.container_prefix parameter. Otherwise refer to for details\" The DANM CNI is a full-fledged CNI metaplugin, capable of provisioning multiple network interfaces to a Pod, on-demand! DANM can utilize any of the existing and already integrated CNI plugins to do so. DANM supports two kind of network management experiences as of DANM 4.0 - lightweight (the only supported mode before v4.0), and production-grade. Your experience depends on which CRD-based management APIs you chose to add to your cluster during installation. If you want you can even add all available APIs at the same time to see which method better fits your need! We advise new users, or users operating a single tenant Kubernetes cluster to start out with a streamlined, lightweight network management experience. In this \"mode\" DANM only recognizes one network management API, called DanmNet. Both administrators, and tenant users manage their networks through the same API. Everyone has the same level of access, and can configure all the parameters supported by DANM at their leisure. At the same time it is impossible to create networks, which can be used across tenants (disclaimer: we use the word \"tenant\" as a synonym to \"Kubernetes namespace\" throughout the document). In a real, production-grade cluster the lightweight management paradigm does not suffice, because usually there are different users, with different roles interacting with each other. There are possibly multiple users using their own segment of the cloud -or should we say tenant?- at the same time; while there can be administrator(s) overseeing that everything is configured, and works as it should be. The idea behind production-grade network management is that: tenant users shall be restricted to using only the network resources allocated to them by the administrators, but should be able to freely decide what to do with these resources within the confines of their tenant administrators, and only administrators shall have control over the network resources of the whole cloud To satisfy the needs of this complex ecosystem, DANM provides different APIs for the different purposes: TenantNetworks, and ClusterNetworks! TenantNetworks is a namespaced API, and can be freely created by tenant users. It basically is the same API as DanmNet, with one big difference: parameters any way related to host settings cannot be freely configured through this API. These parameters are automatically filled by DANM instead! Wonder how? Refer to chapter for more information. ClusterNetworks on the other hand is a cluster-wide API, and as such, can be -or should be- only provisioned by administrator level users. Administrators can freely set all available configuration options, even the physical"
},
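As a sketch of what such a CNI configuration file could look like with the two options above set explicitly (the file location, the `kubeconfig` entry and the overall structure are assumptions not taken from this document; only `cniDir` and `namingScheme` come from the text above, and JSON does not allow inline comments, so the hedging lives here):

```json
{
  "cniVersion": "0.3.1",
  "name": "meta_cni",
  "type": "danm",
  "kubeconfig": "/etc/cni/net.d/danm-kubeconfig",
  "cniDir": "/etc/cni/net.d",
  "namingScheme": "legacy"
}
```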
{
"data": "The other nice thing in ClusterNetworks is that all Pods, in any namespace can connect to them - unless the network administrator forbade it via the newly introduced AllowedTenants configuration list. Interested user can find reference manifests showcasing the features of the new APIs under . Regardless which paradigm thrives in your cluster, network objects are managed the exact same way - you just might not be allowed to execute a specific provisioning operation in case you are trying to overstep your boundaries! Don't worry, as DANM will always explicitly and instantly tell you when this happens. Unless explicitly stated in the description of a specific feature, all API features are generally supported, and supported the same way regardless through which network management API type you use them. Network management always starts with the creation of Kubernetes API objects, logically representing the characteristics of a network Pods can connect to. Users first need to create the manifest files of these objects according to the schema described in the , , or template files. A network object can be created just like any other Kubernetes object, for example by issuing: ``` kubectl create -f test-net1.yaml ``` Users can also interact with the existing network management objects just as they would with other core API objects: ``` / # kubectl describe danmnet test-net1 Name: test-net1 Namespace: default Labels: <none> Annotations: <none> API Version: kubernetes.nokia.com/v1 Kind: DanmNet Metadata: Cluster Name: Creation Timestamp: 2018-05-24T16:53:27Z Generation: 0 Resource Version: 3146 Self Link: /apis/kubernetes.nokia.com/v1/namespaces/default/danmnets/test-net1 UID: fb1fdfb5-5f72-11e8-a8d0-fa163e98af00 Spec: Network ID: test-net1 Network Type: ipvlan Options: Allocation _ Pool: Start: 192.168.1.10 End: 192.168.1.100 Container _ Prefix: eth0 Host _ Device: ens4 Rt _ Tables: 201 Validation: True Events: <none> ``` WARNING: DANM stores pretty important information in these API objects. Under no circumstances shall a network be manually deleted, if there are any running Pods still referencing it! Such action will undoubtedly lead to ruin and DANMation! From DANM 4.0 upward the Webhook component makes sure this cannot happpen, but it is better to be aware of this detail. Generally speaking, you need to care about how the network interfaces of your Pods are named inside their respective network namespaces. The hard reality to keep in mind is that you shall always have an interface literally called \"eth0\" created within all your Kubernetes Pods, because Kubelet will always search for the existence of such an interface at the end of Pod instantiation. If such an interface does not exist after CNI is invoked, the state of the Pod will be considered \"faulty\", and it will be re-created in a loop. To be able to comply with this Kubernetes limitation, DANM always names the first container interface \"eth0\", regardless your original intention. Sorry, but they made us do it :) Note: some CNI plugins try to be smart about this limitation on their own, and decided not to adhere to the CNI standard! An example of this behaviour can be found in"
},
{
"data": "It is the user's responsibility to put the network connection of such boneheaded backends to the first place in the Pod's annotation! Besides making sure the first interface is always named correctly, DANM also supports both explicit, and implicit interface naming schemes for all NetworkTypes to help you flexibly name the other -and CNI standard- interfaces! An interface connected to a network containing the container_prefix attribute is always named accordingly. You can use this API to explicitly set descriptive, unique names to NICs connecting to this network. In case container_prefix is not set in an interface's network descriptor, DANM automatically uses the \"eth\" as the prefix when naming the interface. Regardless which prefix is used, the interface name is also suffixed with an integer number corresponding to the sequence number of the network connection (e.g. the first interface defined in the annotation is called \"eth0\", second interface \"eth1\" etc.) DANM even supports the mixing of the networking schemes within the same Pod, and it supports the whole naming scheme for all network backends. This enables network administrators to even connect Pods to the same network more than once! We recognize that not all networking involves an overlay technology, so provisioning IP routes directly into the Pod's network namespace needs to be generally supported. Network administrators can define routing rules for both IPv4, and IPv6 destination subnets under the \"routes\", and \"routes6\" attributes respectively. These attributes take a map of string-string key (destination subnet)-value(gateway address) pairs. The configured routes will be added to the default routing table of all Pods connecting to this network. Configuring generic routes on the network level is a nice feature, but in more complex network configurations (e.g. Pod connects to multiple networks) it is desirable to support Pod-level route provisioning. The routing table to hold the Pods' policy-based IP routes can be configured via the \"rt_tables\" API attribute. Whenever a Pod asks for policy-based routes via the \"proutes\", and/or \"proutes6\" network connection attributes, the related routes will be added to the configured table. DANM also provisions the necessary rule pointing to the configured routing table. Pay special attention to the network attribute called \"NetworkType\". This parameter controls which CNI plugin is invoked by the DANM metaplugin during the execution of a CNI operation to setup, or delete exactly one network interface of a Pod. In case this parameter is set to \"ipvlan\", or is missing; then DANM's in-built IPVLAN CNI plugin creates the network (see next chapter for details). In case this attribute is provided and set to another value than \"ipvlan\", then network management is delegated to the CNI plugin with the same name. The binary will be searched in the configured CNI binary directory. Example: when a Pod is created and requests a connection to a network with \"NetworkType\" set to \"flannel\", then DANM will delegate the creation of this network interface to the <CONFIGUREDCNIPATHINKUBELET>/flannel binary. We strongly believe that network management in general should be driven by generic APIs -almost- completely adhering to the same schema. Therefore, DANM is capable of \"translating\" the generic options coming from network objects into the specific \"language\" the delegate CNI plugin understands. 
This way users can dynamically configure various networking solutions via the same, abstract API without caring about how a specific option is called exactly in the terminology of the delegate solution. A generic framework supporting this method is built into DANM's code, but still this level of integration requires case-by-case implementation. As a result, DANM currently supports two integration levels: Dynamic integration level: CNI-specific network attributes (e.g. name of parent host devices"
},
{
"data": "can be controlled on a per network level, exclusively taken directly from the CRD object Static integration level: CNI-specific network attributes are by default configured via static CNI configuration files (Note: this is the default CNI configuration method). Note: most of the DANM API supported attributes (e.g. IP route configuration, IP address management etc.) are generally supported for all CNIs, regardless their supported integration level. Always refer to the schema descriptors for more details on which parameters are universally supported! Our aim is to integrate all the popular CNIs into the DANM eco-system over time, but currently the following CNI's achieved dynamic integration level: DANM's own, in-built IPVLAN CNI plugin Set the \"NetworkType\" parameter to value \"ipvlan\" to use this backend Intel's Set the \"NetworkType\" parameter to value \"sriov\" to use this backend Generic MACVLAN CNI from the CNI plugins example repository Set the \"NetworkType\" parameter to value \"macvlan\" to use this backend No separate configuration file is required when DANM connects Pods to such networks, everything happens automatically purely based on the network manifest! When network management is delegated to CNI plugins with static integration level; DANM first reads their configuration from the configured CNI config directory. The directory can be configured via setting the \"CNICONFDIR\" environment variable in DANM CNI's context (be it in the host namespace, or inside a Kubelet container). Default value is \"/etc/cni/net.d\". In case there are multiple configuration files present for the same backend, users can control which one is used in a specific network provisioning operation via the NetworkID parameter. So, all in all: a Pod connecting to a network with \"NetworkType\" set to \"bridge\", and \"NetworkID\" set to \"examplenetwork\" gets an interface provisioned by the <CONFIGUREDCNIPATHINKUBELET>/bridge binary based on the <CNICONFDIR>/examplenetwork.conf file! In addition to simply delegating the interface creation operation, the universally supported features of the DANM management APIs -such as static and dynamic IP route provisioning, flexible interface naming, or centralized IPAM- are also configured either before, or after the delegation took place. Pods can request network connections to networks by defining one or more network connections in the annotation of their (template) spec field, according to the schema described in the schema/network_attach.yaml file. For each connection defined in such a manner DANM provisions exactly one interface into the Pod's network namespace, according to the way described in previous chapters (configuration taken from the referenced API object). In case you have added more than one network management APIs to your cluster, it is possible to connect the same Pod to different networks of different APIs. But please note, that physical network interfaces are 1:1 mapped to logical networks. In addition to simply invoking other CNI libraries to set-up network connections, Pod's can even influence the way their interfaces are created to a certain extent. For example Pods can ask DANM to provision L3 IP addresses to their network interfaces dynamically, statically, or not at all! Or, as described earlier; creation of policy-based L3 IP routes into their network namespace is also universally supported by the solution. 
If the Pod annotation is empty (no explicit connections are defined), DANM tries to fall back to a configured default network. In the lightweight network management paradigm default networks can be only configured on a per namespace level, by creating one DanmNet object with ObjectMeta.Name field set to \"default\" in the Pod's"
},
{
"data": "In a production grade cluster, default networks can be configured both on the namespace, and on the cluster level. If both are configured for a Pod -both a TenantNetwork named default in the Pod's namespace, and a ClusterNetwork named default exist in the cluster-; the namespace level default takes precedence. There are no restrictions as to what DANM supported attributes can be configured for a default network. However, in this case users cannot specify any further fine-grained properties for the Pod (i.e. static IP address, policy-based IP routes). This feature is beneficial for cluster operators who would like to use unmodified upstream manifest files (i.e. community maintained Helm charts or Pods created by K8s operators), or would like to use DANM in the \"vanilla K8s\" way. Regardless which CNI plugins are involved in managing the networks of a Pod, and how they are configured; DANM invokes all of them at the same time, in parallel threads. DANM waits for the CNI result of all executors before converting, and merging them together into one summarized result object. The aggregated result is then sent back to kubelet. If any executor reported an error, or hasn't finished its job even after 10 seconds; the result of the whole operation will be an error. DANM reports all errors towards kubelet in case multiple CNI plugins failed to do their job. DANM includes a fully generic and very flexible IPAM module in-built into the solution. The usage of this module is seamlessly integrated together with all the natively supported CNI plugins (DANM's IPVLAN, Intel's SR-IOV, and the CNI project's reference MACVLAN plugins); as well as with any other CNI backend fully adhering to the v0.3.1 CNI standard! The main feature of DANM's IPAM is that it's fully integrated into DANM's network management APIs through the attributes called \"cidr\", \"allocationpool\", \"net6\", and \"allocationpool_v6\". Therefore users of the module can easily configure all aspects of network management by manipulating solely dynamic Kubernetes API objects! This native integration also enables a very tempting possibility. As IP allocations belonging to a network are dynamically tracked *within the same API object*, it becomes possible to define: discontinuous subnets 1:1 mapped to a logical network cluster-wide usable subnets* (instead of node restricted sub CIDRs) Network administrators can simply provision their desired CIDRs, and the allocation pools into the network object. Whenever a Pod is instantiated or deleted on any host within the cluster, DANM updates the respective allocation record belonging to the network through the Kubernetes API before provisioning the chosen IP to the Pod's interface. The flexible IPAM module also allows Pods to define the IP allocation scheme best suited for them. Pods can ask dynamically allocated IPs from the defined allocation pool, or can ask for one, specific, static address. The application can even ask DANM to forego the allocation of any IPs to their interface in case a L2 network interface is required. DANM IPAM is capable of handling 8 million -that's right!- IP allocations per network object, IPv4, and IPv6"
},
{
"data": "If this is still not enough to impress you, we honestly don't know what else you might need from your IPAM! So please come, and tell us :) While using the DANM IPAM with dynamic backends is mandatory, netadmins can freely choose if they want their static CNI backends to be also integrated to DANM's IPAM; or they would prefer these interfaces to be statically configured by another IPAM module. By default the \"ipam\" section of a static delegate is always configured from the CNI configuration file identified by the network's NetworkID parameter. However, users can overwrite this inflexible -and most of the time host-local- option by defining \"cidr\", and/or \"net6\" in their network manifest just as they would with a dynamic backend. When a Pod connects to a network with static NetworkType but containing allocation subnets, and explicitly asks for an \"ip\", and/or \"ip6\" address from DANM in its annotation; DANM overwrites the \"ipam\" section coming from the static config with its own, dynamically allocated address. If a Pod does not ask DANM to allocate an IP, or the network does not define the necessary parameters; the delegation automatically falls back to the \"ipam\" defined in the static config file. Note: DANM can only integrate static backends to its flexible IPAM if the CNI itself is fully compliant to the standard, i.e. uses the plugin defined in the \"ipam\" section of its configuration. It is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI! DANM's IPAM module supports both pure IPv6, and dual-stack (one IPv4, and one IPv6 address provisioned to the same interface) addresses with full feature parity! To configure an IPv6 CIDR for a network, network administrators shall configure the \"net6\" attribute. Similarly to IPv4 addess management operators can define a desired allocation pool for their V6 subnet via the \"allocationpolv6\" structure. Additionally, IP routes for IPv6 subnets can be configured via \"routes6\". If both \"cidr\", and \"net6\" are configured for the same network, Pods connecting to that network can ask either one IPv4 or IPv6 address - or even both at the same time! This feature is generally supported the same way even for static CNI backends! However the promise that every specific CNI plugin is compatible and comfortable with both IPv6, and dual IPs allocated by an IPAM cannot be guaranteed by DANM. Therefore, it is the administrator's responsibility to configure the DANM management APIs according to the capabilities of every CNI! DANM's IPVLAN CNI uses the Linux kernel's IPVLAN module to provision high-speed, low-latency network interfaces for applications which need better performance than a bridge (or any other overlay technology) can provide. *Keep in mind that the IPVLAN module is a fairly recent addition to the Linux kernel, so the feature cannot be used on systems whose kernel is older than 4.4! 
4.14+ would be even better (lotta bug fixes)* The CNI provisions IPVLAN interfaces in L2 mode, and supports the following extra features: attaching IPVLAN sub-interfaces to any host interface attaching IPVLAN sub-interfaces to dynamically created VLAN or VxLAN host interfaces renaming the created interfaces according to the \"container_prefix\" attribute defined in the network object allocating IP addresses by using DANM's flexible, in-built IPAM module provisioning generic IP routes into a configured routing table inside the Pod's network namespace Pod-level controlled provisioning of policy-based IP routes into Pod's network namespace DANM provides general support for CNIs interworking with Kubernetes' Device Plugin mechanism. A practical example of such a network provisioner is the SR-IOV"
},
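Pulling together the attributes described above (interface naming, static routes, the policy-based route table, and the dual-stack IPAM fields), a DanmNet manifest could look roughly like the sketch below. The `apiVersion` and exact field casing are assumptions borrowed from the other manifests in this document, and spellings such as `allocation_pool_v6` should be double-checked against the schema files:

```yaml
apiVersion: danm.io/v1
kind: DanmNet
metadata:
  name: external
spec:
  NetworkID: external
  NetworkType: ipvlan            # served by DANM's in-built IPVLAN CNI
  Options:
    host_device: ens4            # parent host interface
    container_prefix: ext        # Pod interfaces become ext0, ext1, ...
    rt_tables: 201               # table used for Pod-requested policy-based routes (proutes/proutes6)
    cidr: 192.168.1.0/24
    allocation_pool:
      start: 192.168.1.10
      end: 192.168.1.100
    routes:                      # destination subnet -> gateway, added to the Pods' default table
      "10.20.0.0/16": "192.168.1.1"
    net6: 2001:db8:1::/64        # adding net6 next to cidr makes the network dual-stack
    routes6:
      "2001:db8:99::/64": "2001:db8:1::1"
```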
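For a backend with static integration level, the network object mostly just names the delegate and the CNI configuration file to use, as in this sketch (field casing again assumed):

```yaml
apiVersion: danm.io/v1
kind: DanmNet
metadata:
  name: examplenetwork
spec:
  NetworkID: examplenetwork      # DANM reads <CNI_CONF_DIR>/examplenetwork.conf for this network
  NetworkType: bridge            # delegated to the bridge binary in the configured CNI binary directory
```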
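On the Pod side, the per-interface controls mentioned earlier (dynamic, static or no IP, plus policy-based routes) are requested through the annotation; a hypothetical example, whose exact static-IP and `proutes` syntax should be verified against schema/network_attach.yaml:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    danm.io/interfaces: |
      [
        {"network":"management", "ip":"dynamic"},
        {"network":"external", "ip":"192.168.1.50/24",
         "proutes":{"10.20.0.0/16":"192.168.1.1"}},
        {"network":"examplenetwork", "ip":"none"}
      ]
spec:
  containers:
  - name: app
    image: busybox:latest
    args: ["sleep", "1000"]
```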
{
"data": "When a properly configured Network Device Plugin runs, the allocatable resource list for the node should be updated with resource discovered by the plugin. SR-IOV Network Device Plugin allows to create a list of netdevice type resource definitions with sriovMode, where each resource definition can have one or more assigned rootDevice (Physical Function). The plugin looks for Virtual Functions (VF) for each configured Physical Function (PF) and adds all discovered VFs to the allocatable resource's list of the given Kubernetes Node. The Device Plugin resource name will be the device pool name on the Node. These device pools can be referred in Pod definition's resource request part on the usual way. In the following example, the \"nokia.k8s.io/sriovens1f0\" device pool name consists of the \"nokia.k8s.io\" prefix and \"sriovens1f0\" resourceName. ``` kubectl get nodes 172.30.101.104 -o json | jq '.status.allocatable' { \"cpu\": \"48\", \"ephemeral-storage\": \"48308001098\", \"hugepages-1Gi\": \"16Gi\", \"memory\": \"246963760Ki\", \"nokia.k8s.io/default\": \"0\", \"nokia.k8s.io/sriov_ens1f0\": \"8\", \"nokia.k8s.io/sriov_ens1f1\": \"8\", \"pods\": \"110\" } ``` All network management APIs contain an optional device_pool field where a specific device pool can be assigned to the given network. Note: device_pool and host_device parameters are mutually exclusive! Before DANM invokes a CNI which expects a given resource to be attached to the Pod, it gathers all the Kubelet assigned device IDs belonging to device pool defined in the Pod's network, and passes one ID from the list to the CNI. Note: Pods connecting to networks depending on a device_pool must declare their respective resource requests through their Pod.Spec.Resources API! The following example network definition shows how to configure device_pool parameter for sriov network type. ``` apiVersion: danm.io/v1 kind: DanmNet metadata: name: sriov-a namespace: example-sriov spec: NetworkID: sriov-a NetworkType: sriov Options: devicepool: \"nokia.k8s.io/sriovens1f0\" ``` The following Pod definition shows how to combine K8s Device resource requests and multiple network connections using the assigned resources: ``` apiVersion: v1 kind: Pod metadata: name: sriov-pod namespace: example-sriov labels: env: test annotations: danm.io/interfaces: | [ {\"network\":\"management\", \"ip\":\"dynamic\"}, {\"network\":\"sriov-a\", \"ip\":\"none\"}, {\"network\":\"sriov-b\", \"ip\":\"none\"} ] spec: containers: name: sriov-pod image: busybox:latest args: sleep \"1000\" resources: requests: nokia.k8s.io/sriov_ens1f0: '1' nokia.k8s.io/sriov_ens1f1: '1' limits: nokia.k8s.io/sriov_ens1f0: '1' nokia.k8s.io/sriov_ens1f1: '1' nodeSelector: sriov: enabled ``` DANM's SR-IOV integration supports -and is tested with- both Intel, and Mellanox manufactured physical functions. Moreover Pods can use the allocated Virtual Functions for either kernel, or user space networking. The only restriction to keep in mind is when a DPDK using application requests VFs from an Intel NIC for the purpose of user space networking (i.e. DPDK), those VFs shall be already bound to the vfio-pci kernel driver before the Pod is instantiated. To guarantee such VFs are always available on the Node the Pod is scheduled to, we strongly suggest advertising vfio-pci bound VFs as a separate Device Pool. When an already vfio bound function is mounted to an application, DANM also creates a dummy kernel interface in its stead in the Pod's network namespace. 
The dummy interface can be easily identified by the application, because it's named exactly as the VF would be, following the standard DANM interface naming conventions. The dummy interface is used to convey all the information the user space application requires to start its own networking stack in a standardized manner. The list includes: the IPAM details belonging to the user space device, such as IP addresses, IP routes"
},
{
"data": "VLAN tag of the VF, if any PCI address of the specific device -as a link alias- so applications know which IPs/VLANs belong to which user space device the original MAC address of the VF User space applications can interrogate this information via the usual kernel APIs, and then configure the allocated resources into their own network stack without the need of requesting any extra kernel privileges! The Webhook component introduced in DANM V4 is responsible for three things: it initializes essential, but not human configurable API attributes (i.e. allocation tracking bitmasks) at the time of object creation it matches, and connects TenantNetworks to administrator configured physical profiles allowed for tenant users it validates the syntactic and semantic integrity of all API objects before any CREATE, or PUT REST operation are allowed to be persisted in the K8s API server's data store TenantNetworks cannot freely define the following attributes: host_devices device_pool vlan vxlan NetworkID Reason is that all these attributes are related to physical resources, which might not be allowed to be used by the specific tenants: VLANs might not be configured in the switches, specific NICs are reserved for infrastructure use, static CNI configuration files might not exist on the container host's disk etc. Instead, these parameters are either entirely, or partially managed by DANM in TenantNetwork provisioning time. DANM does this by introducing a third new API with v4.0 called TenantConfig. TenantConfig is a mandatory API when DANM is used in the production grade mode. TenantConfig is a cluster-wide API, containing two major parameters: physical interface profiles usable by TenantNetworks, and NetworkType:NetworkID mappings. Refer to for more information on TenantConfigs. There are multiple ways of how DANM can select the appropriate interface profile for a tenant user's network. Note: physical interface profiles are only relevant for dynamic backends. For backends dependent on the host_device option (such as IPVLAN, and MACVLAN): if the TenantNetwork contains host_device attribute, DANM selects the entry from the TenantConfig with the matching name if host_device is not provided by user, DANM randomly selects an interface profile from the TenantConfig For backends dependent on the devicepool option (such as SR-IOV), the user needs to explicitly state which devicepool it wants to use. The reasoning behind not supporting random profile selection for K8s Devices based backends is that the Pod using such Devices anyway need to explicitly request resources from a specific pool in its own Pod manifest. Randomly matching its network with a possibly different pool could result in run-time failures. If there are no suitable physical interface profiles configured by the cluster's network administrator, or the TenantNetwork tried to select a physical device which is not allowed; webhook denies the creation of the TenantNetwork. 
If a suitable profile could be selected, DANM: mutates the physical interface profile's name into either the TenantNetwork's hostdevice, or devicepool attribute (DANM automatically figures out which one based on the name of the profile, and the NetworkType parameter) if the interface profile is a virtual profile, DANM automatically reserves the next previously unused VNI from the configured VNI range then mutates the reserved VNI into the TenantNetwork's respective attribute (vlan, or vxlan) To avoid the leaking of VNIs in the cluster, DANM also takes care of freeing the reserved VNI of a TenantNetwork when it is deleted. Delegation to backends with static integration level (e.g. Calico, Flannel etc.) is configured via static CNI config files read from the container host's"
},
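Based on the TenantConfig fields referenced in this section (HostDevices entries with VniType/VniRange, plus NetworkType: NetworkID mappings), such an object could be sketched as below; the apiVersion, casing and exact structure are assumptions to verify against the TenantConfig schema:

```yaml
apiVersion: danm.io/v1
kind: TenantConfig
metadata:
  name: tenantconfig
hostDevices:
- name: ens4               # physical profile tenant networks can be matched to
  vniType: vlan            # vniType and vniRange must be defined together
  vniRange: 700-999        # pool of VNIs DANM reserves for TenantNetworks using this profile
- name: ens5               # a profile without a VNI range is used as-is
networkIds:
  flannel: flannel_conf    # TenantNetworks of NetworkType flannel get this NetworkID
```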
{
"data": "These files are selected based on the NetworkType parameter of the TenantNetwork. Network administrators can configure NetworkType: NetworkID mappings into the TenantConfig. When a TenantNetwork is created with a NetworkType having a configured mapping, DANM automatically overwrites it's NetworkID with the provided value. Thus it becomes guaranteed that the tenant user's network will use the right CNI configuration file during Pod creation! Every CREATE, and PUT (see ) DanmNet operation is subject to the following validation rules: spec.Options.Cidr must be supplied in a valid IPv4 CIDR notation all gateway addresses belonging to an entry of spec.Options.Routes shall be in the defined IPv4 CIDR spec.Options.Net6 must be supplied in a valid IPv6 CIDR notation all gateway addresses belonging to an entry of spec.Options.Routes6 shall be in the defined IPv6 CIDR spec.Options.Alloc shall not be manually defined spec.Options.Alloc6 shall not be manually defined spec.Options.Allocation_pool cannot be defined without defining spec.Options.Cidr spec.Options.Allocation_pool.Start shall be in the provided IPv4 CIDR spec.Options.Allocation_pool.End shall be in the provided IPv4 CIDR spec.Options.Allocationpool.End shall be smaller than spec.Options.Allocationpool.Start spec.Options.AllocationpoolV6 cannot be defined without defining spec.Options.Cidr spec.Options.AllocationpoolV6.Start shall be in the provided IPv6 CIDR spec.Options.AllocationpoolV6.End shall be in the provided IPv6 CIDR spec.Options.AllocationpoolV6.End shall be smaller than spec.Options.AllocationpoolV6.Start spec.Options.AllocationpoolV6.Cidr must be supplied in a valid IPv6 CIDR notation, and must be in the provided IPv6 CIDR The combined number of allocatable IP addresses of the manually provided IPv4 and IPv6 allocation CIDRs cannot be higher than 8 million spec.Options.Vlan and spec.Options.Vxlan cannot be provided together spec.NetworkID cannot be longer than 10 characters for dynamic backends spec.AllowedTenants is not a valid parameter for this API type spec.Options.Devicepool must be, and spec.Options.Hostdevice mustn't be provided for K8s Devices based networks (such as SR-IOV) Any of spec.Options.Device, spec.Options.Vlan, or spec.Options.Vxlan attributes cannot be changed if there are any Pods currently connected to the network Every DELETE DanmNet operation is subject to the following validation rules: the network cannot be deleted if there are any Pods currently connected to the network Not complying with any of these rules results in the denial of the provisioning operation. Every CREATE, and PUT (see ) TenantNetwork operation is subject to the DanmNet validation rules no. 1-16, 18, 19. In addition TenantNetwork provisioning has the following extra rules: spec.Options.Vlan cannot be provided spec.Options.Vxlan cannot be provided spec.Options.Vlan cannot be modified spec.Options.Vxlan cannot be modified spec.Options.Host_device cannot be modified spec.Options.Device_pool cannot be modified Every DELETE TenantNetwork operation is subject to the DanmNet validation rule no.22. Not complying with any of these rules results in the denial of the provisioning operation. Every CREATE, and PUT (see ) ClusterNetwork operation is subject to the DanmNet validation rules no. 1-18, 20-21. Every DELETE ClusterNetwork operation is subject to the DanmNet validation rule no.22. Not complying with any of these rules results in the denial of the provisioning operation. 
Every CREATE, and PUT TenantConfig operation is subject to the following validation rules: Either HostDevices, or NetworkIDs must not be empty VniType and VniRange must be defined together for every HostDevices entry Both key, and value must not be empty in every NetworkType: NetworkID mapping entry A NetworkID cannot be longer than 10 characters in a NetworkType: NetworkID mapping belonging to a dynamic NetworkType Netwatcher is a standalone Network Operator responsible for dynamically managing"
},
{
"data": "creation and deletion) VxLAN and VLAN interfaces on all the hosts based on dynamic network management K8s APIs. Netwatcher is a mandatory component of the DANM networking suite, but can be a great standalone add to Multus, or any other NetworkAttachmentDefinition driven K8s clusters! When netwatcher is deployed it runs as a DaemonSet, brought-up on all hosts where a meta CNI plugin is configured. Whenever a DANM network is created, modified, or deleted -any network, belonging to any of the supported API types- within the Kubernetes cluster, netwatcher will be triggered. If the network in question contained either the \"vxlan\", or the \"vlan\" attributes; then netwatcher immediately creates, or deletes the VLAN or VxLAN host interface with the matching VID. If the Spec.Options.host_device, .vlan, or .vxlan attributes are modified netwatcher first deletes the old, and then creates the new host interface. This feature is the most beneficial when used together with a dynamic network provisioning backend supporting connecting Pod interfaces to virtual host devices (IPVLAN, MACVLAN, SR-IOV for VLANs). Whenever a Pod is connected to such a network containing a virtual network identifier, the CNI component automatically connects the created interface to the VxLAN or VLAN host interface created by the netwatcher; instead of directly connecting it to the configured host device. But wait that's not all - Netwatcher is an API agnostic standalone Operator! This means all of its supported features can be used even in clusters where DANM is not the configured meta CNI solution! If your cluster uses a CNI solution driven by the NetworkAttachmentDefinition API -such as Multus, or Genie-, you can deploy netwatcher as-is to automate various network management operatios of TelCo workloads. Whenever you deploy a NAD Netwatcher will inspect the CNI config portion stored under Spec.Config. If there is a VLAN, or VxLAN identifier added to a CNI configuration it will trigger Netwatcher to create the necessary host interfaces, the exact same way as if these attributes were added to a DANM API object. For example if you want your IPVLAN type NAD to be connected to a specific VLAN just add the tag to your object the following way: ``` apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ipvlan-conf spec: config: '{ \"name\": \"vlantest\", \"cniVersion\": \"0.3.1\", \"type\": \"ipvlan\", \"master\": \"tenant-bond\", \"vlan\": 500, \"ipam\": { \"type\": \"static\", \"routes\": [ { \"dst\": \"0.0.0.0/0\", \"gw\": \"10.1.1.1\" } ] } }' ``` When it comes to dealing with NADs Netwatcher understands that these extra tags are not recognized by the existing CNI eco-system. So to achieve E2E automation Netwatcher will also modify the CNI configuration of the NAD to point to the right host interface! Let's use the above example to show how this works! First, upon seeing this network Netwatcher creates the appropriate host interface with the tag: ``` 568: vlantest.500@tenant-bond: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default ``` Then it also initiates an Update operation on the NAD, exchanging the old host interface reference to the correct one: ``` apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: ... apiVersion: k8s.cni.cncf.io/v1 fieldsType: FieldsV1 fieldsV1: f:spec: f:config: {} manager: netwatcher operation: Update time: \"2021-03-01T17:21:11Z\" name: ipvlan-conf namespace: default spec: config:"
},
{
"data": "``` This approach ensures users can seamlessly integrate Netwatcher into their existing clusters and enjoy its extra capabilities without any extra hassle - just the way we like it! Svcwatcher component showcases the whole reason why DANM exists, and is designed the way it is. It is the first higher-level feature accomplishing our true goal described in the introduction section, that is, extending basic Kubernetes constructs to seamlessly work with multiple network interfaces. The first such construct is the Kubernetes Service! Let's see how it works. Svcwatcher basically works the same way as the default Service controller inside Kubernetes. It continuously monitors both the Service and the Pod APIs, and provisions Endpoints whenever the cluster administrator creates, updates, or deletes relevant API objects (e.g. creates a new Service, updates a Pod label etc.). DANM svcwatcher does the same, and more! The default Service controller assumes the Pod has one interface, so whenever a logical Service Endpoint is created it will be always done with the IP of the Pod's first (the infamous \"eth0\" in Kubernetes), and supposedly only network interface. DANM svcwatcher on the other hand makes this behaviour configurable! DANM enhances the same Service API so an object will always explicitly select one logical network, rather than implicitly choosing the one with the hard-coded name of \"eth0\". Then, svcwatcher provisions a Service Endpoint with the address of the selected Pod's chosen network interface. This enhancement basically upgrades the in-built Kubernetes Service Discovery concept to work over multiple network interfaces, making Service Discovery only return truly relevant Endpoints in every scenario! The services of the svcwatcher component work with all supported network management APIs! Based on the feature description experienced Kubernetes users are probably already thinking \"but wait, there is no \"network selector\" field in the Kubernetes Service core API\". That is indeed true right now, but consider the core concept behind the creation of DANM: \"what use-cases would become possible if Networks would be part of the core Kubernetes API\"? So, we went ahead and simulated exactly this scenario, while making sure our solution also works with a vanilla Kubernetes today; just as we did with all our other API enhancements. This is possible by leveraging the so-called \"headless and selectorless Services\" concept in Kubernetes. Headless plus selectorless Services do not contain Pod selector field, which tells the Kubernetes native Service controller that Endpoint administration is handled by a 3rd party service. DANM svcwatcher is triggered when such a service is created, if it contains the DANM \"core API\" attributes in their annotation. These extra attributes are the following: \"danm.io/selector\": this selector serves the exact same purpose as the default Pod selector field (which is missing from a selectorless Service by definition). Endpoints are created for Pods which match all labels provided in this list \"danm.io/network\": this is the \"special sauce\" of DANM. 
When svcwatcher creates an Endpoint, it's IP will be taken from the selected Pod's physical interface connected to the DanmNet with the matching name \"danm.io/tenantNetwork\": serves the exact same purpose as the network selector, but it selects interfaces connected to TenantNetworks, rather than DanmNets \"danm.io/clusterNetwork\": serves the exact same purpose as the network selector, but it selects interfaces connected to ClusterNetworks, rather than DanmNets This means that DANM controlled Services behave exactly as in Kubernetes: a selected Pod's availability is advertised through one of its network"
},
{
"data": "The big difference is that operators can now decide through which interface(s) they want the Pod to be discoverable! (Of course nothing forbids the creation of multiple Services selecting different interfaces of the same Pod, in case a Pod should be discoverable by different kind of communication partners). The schema of the enhanced, DANM-compatible Service object is described in detail in schema/DanmService.yaml file. Why is this feature useful, the reader might ask? The answer depends on the use-case your application serves. If you share one, cloud-wide network between all application and infrastructure components, and everyone communicates with everyone through this -most probably overlay- network, then you are probably not excited by DANM's svcwatcher. However, if you believe in physically separated interfaces (or certain government organizations made you believe in it), non-default networks, multi-domain gateway components; then this is the feature you probably already built-in to your application's Helm chart in the form of an extra Consul, or Etcd component. This duplication of platform responsibility ends today! :) Allow us to demonstrate the usage of this feature via an every-day common TelCo inspired example located in the project's example/svcwatcher_demo directory. The example contains three Pods running in the same cluster: A LoadBalancer Pod, whose job is to accept connections over any exotic but widely used non-L7 protocols (e.g. DIAMETER, LDAP, SIP, SIGTRAN etc.), and distribute the workload to backend services An ExternalClient Pod, supplying the LoadBalancer with traffic through an external network An InternalProcessor Pod, receiving requests to be served from the LoadBalancer Pod Our cluster contains three physical networks: external, internal, management. LoadBalancer connects to all three, because it needs to be able to establish connections to entities both supplying, and serving traffic. LoadBalancer also wishes to be scaled via Prometheus, hence it connects to the cluster's management network to expose its own \"packetservedper_second\" custom metric. ExternalClient only connects to the LoadBalancer Pod, because it simply wants to send traffic to the application (VNF), and deal with the result of transactions. It doesn't care, or know anything about the internal architecture of the application (VNF). Because ExternalClient is not part of the same application (namespace) as LoadBalancer and InternalProcessor, it can't have access to their internal network. It doesn't require scaling, being a lightweight, non-critical component, therefore it also does not connect to the cluster's management network. InternalProcessor only connects to the LoadBalancer Pod, but being a small, dynamically changing component, we don't want to expose it to external clients. InternalProcessor wants to have access to the many network-based features of Kubernetes, so it also connects to the management network, similarly to LoadBalancer. So, how can ExternalClient(S) discover LoadBalancer(S), how can LoadBalancer(S) discover InternalProcessor(S), and how can we avoid making LoadBalancer(S) and InternalProcessor(S) discoverable through their management interface? With DANM, the answer is as simple as instantiating the demonstration Kubernetes manifest files in the following order: Namespaces -> DanmNets -> Deployments -> Services \"vnf-internal-processor\" will make the InternalProcessors discoverable through their application-internal network interface. 
LoadBalancers can use this Service to discover working backends serving transactions. \"vnf-internal-lb\" will make the LoadBalancers discoverable through their application-internal network interface. InternalProcessors can use this Service to discover application egress points/gateway components. Lastly, \"vnf-external-svc\" makes the same LoadBalancer instances discoverable but this time through their external network interfaces. External clients connecting to the same network can use this Service to find the ingress/gateway interfaces of the whole application (VNF)! As a closing note: remember to delete the now unnecessary Service Discovery tool's Deployment manifest from your Helm chart :)"
}
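As a concrete illustration of the demo above, the \"vnf-internal-lb\" Service could be written along these lines: a headless, selectorless Service whose Endpoints are taken from the LoadBalancers' interfaces on the internal network (a sketch only; the exact `danm.io/selector` value format, labels and port number are assumptions, see schema/DanmService.yaml):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: vnf-internal-lb
  namespace: vnf
  annotations:
    danm.io/selector: '{"app":"loadbalancer"}'   # selected Pods must match all of these labels
    danm.io/network: internal                    # Endpoint IPs come from the interface on this DanmNet
spec:
  clusterIP: None    # headless, and no spec.selector, so the default Endpoint controller stays out of the way
  ports:
  - port: 3868
    protocol: TCP
```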
] |
{
"category": "Runtime",
"file_name": "user-guide.md",
"project_name": "DANM",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Docker is a platform designed to help developers build, share, and run container applications. In gVisor, all basic docker commands should function as expected. However, it's important to note that, currently, only the host network driver is supported. This means that both 'docker run' and 'docker build' commands must be executed with the `--network=host` option. First, prepare a container image with pre-installed Docker: ```shell $ cd images/basic/docker/ $ docker build -t docker-in-gvisor . ``` Since Docker requires root privileges and a full set of capabilities, a gVisor sandbox needs to be started in privileged mode: ```shell $ docker run --runtime runsc -d --rm --privileged --name docker-in-gvisor docker-in-gvisor ``` Now, we can build and run Docker containers. Let's enter in the gvisor sandbox and run some docker commands: ```shell docker exec -it docker-in-gvisor bash ``` ```shell $ mkdir whalesay && cd whalesay $ cat > Dockerfile <<EOF FROM ubuntu RUN apt-get update && apt-get install -y cowsay curl RUN mkdir -p /usr/share/cowsay/cows/ RUN curl -o /usr/share/cowsay/cows/docker.cow https://raw.githubusercontent.com/docker/whalesay/master/docker.cow ENTRYPOINT [\"/usr/games/cowsay\", \"-f\", \"docker.cow\"] EOF $ docker build --network=host -t whalesay . .... Successfully tagged whalesay:latest $ docker run --network host -it --rm whalesay \"Containers do not contain, but gVisor-s do!\" _ / Containers do not contain, but gVisor-s \\ \\ do! / -- \\ ## . \\ ## ## ## == /\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\\_/ === ~ { ~ ~ / ===- ~~~ \\ o / \\ \\ / \\\\/ ```"
}
] |
{
"category": "Runtime",
"file_name": "docker-in-gvisor.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-agent --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-health completion zsh) To load completions for every new session, execute once: cilium-health completion zsh > \"${fpath[1]}/_cilium-health\" cilium-health completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-health You will need to start a new shell for this setup to take effect. ``` cilium-health completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` ``` -D, --debug Enable debug messages -H, --host string URI to cilium-health server API --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-health e.g. syslog.level=info,syslog.facility=local5,syslog.tag=cilium-agent ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-health_completion_zsh.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This list follows the , and collects useful resources about CRI-O. <!-- markdownlint-disable-file MD039 MD051 --> <!-- TOC start --> - - - - <!-- TOC end -->"
}
] |
{
"category": "Runtime",
"file_name": "awesome.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Documentation updates. Small linter cleanups. Added example in test. Document the usage of Cleanup when re-reading a file (thanks to @lesovsky) for issue #18. Add example directories with example and tests for issues. Fix of checksum problem because of forced tag. No changes to the code. Incorporated PR 162 by by Mohammed902: \"Simplify non-Windows build tag\". Incorporated PR 9 by mschneider82: \"Added seekinfo to Tail\". Incorporated PR 7: \"Fix deadlock when stopping on non-empty file/buffer\", fixes upstream issue 93. Incorporated changes of unmerged upstream PR 149 by mezzi: \"added line num to Line struct\". Incorporated changes of unmerged upstream PR 128 by jadekler: \"Compile-able code in readme\". Incorporated changes of unmerged upstream PR 130 by fgeller: \"small change to comment wording\". Incorporated changes of unmerged upstream PR 133 by sm3142: \"removed spurious newlines from log messages\". Incorporated changes of unmerged upstream PR 126 by Code-Hex: \"Solved the problem for never return the last line if it's not followed by a newline\". Incorporated changes of unmerged upstream PR 131 by StoicPerlman: \"Remove deprecated os.SEEK consts\". The changes bumped the minimal supported Go release to 1.9. migration to go modules. release of master branch of the dormant upstream, because it contains fixes and improvement no present in the tagged release."
}
] |
{
"category": "Runtime",
"file_name": "CHANGES.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "[TOC] To enable debug and system call logging, add the `runtimeArgs` below to your configuration (`/etc/docker/daemon.json`): ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--debug-log=/tmp/runsc/\", \"--debug\", \"--strace\" ] } } } ``` Note: the last `/` in `--debug-log` is needed to interpret it as a directory. Then each `runsc` command executed will create a separate log file. Otherwise, log messages from all commands will be appended to the same file. You may also want to pass `--log-packets` to troubleshoot network problems. Then restart the Docker daemon: ```bash sudo systemctl restart docker ``` Run your container again, and inspect the files under `/tmp/runsc`. The log file ending with `.boot` will contain the strace logs from your application, which can be useful for identifying missing or broken system calls in gVisor. If you are having problems starting the container, the log file ending with `.create` may have the reason for the failure. The command `runsc debug --stacks` collects stack traces while the sandbox is running which can be useful to troubleshoot issues or just to learn more about gVisor. It connects to the sandbox process, collects a stack dump, and writes it to the console. For example: ```bash docker run --runtime=runsc --rm -d alpine sh -c \"while true; do echo running; sleep 1; done\" 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b sudo runsc --root /var/run/docker/runtime-runsc/moby debug --stacks 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b ``` Note: `--root` variable is provided by docker and is normally set to `/var/run/docker/runtime-[runtime-name]/moby`. If in doubt, `--root` is logged to `runsc` logs. You can debug gVisor like any other Golang program. If you're running with Docker, you'll need to find the sandbox PID and attach the debugger as root. Here is an example: Install a runsc with debug symbols (you can also use the ): ```bash make dev BAZEL_OPTIONS=\"-c dbg --define gotags=debug\" ``` Start the container you want to debug using the runsc runtime with debug options: ```bash docker run --runtime=$(git branch --show-current)-d --rm --name=test -p 8080:80 -d nginx ``` Find the PID and attach your favorite debugger: ```bash sudo dlv attach $(docker inspect test | grep Pid | head -n 1 | grep -oe \"[0-9]*\") ``` Set a breakpoint for accept: ```bash break"
},
{
"data": "continue ``` In a different window connect to nginx to trigger the breakpoint: ```bash curl http://localhost:8080/ ``` It's also easy to attach a debugger to one of the predefined syscall tests when you're working on specific gVisor features. With the `delay-for-debugger` flag you can pause the test runner before execution so that you can attach the sandbox process to a debugger. Here is an example: ```bash make test BAZEL_OPTIONS=\"-c dbg --define gotags=debug\" \\ OPTIONS=\"--testarg=--delay-for-debugger=5m --testoutput=streamed\" \\ TARGETS=//test/syscalls:mounttestrunsc_systrap ``` The `delay-for-debugger=5m` flag means the test runner will pause for 5 minutes before running the test. To attach to the sandbox process, you can run the following in a separate window. ```bash dlv attach $(ps aux | grep -m 1 -e 'runsc-sandbox' | awk '{print $2}') ``` Once you've attached to the process and set a breakpoint, you can signal the test to start by running the following in another separate window. ```bash kill -SIGUSR1 $(ps aux | grep -m 1 -e 'bash.*test/syscalls' | awk '{print $2}') ``` `runsc` integrates with Go profiling tools and gives you easy commands to profile CPU and heap usage. First you need to enable `--profile` in the command line options before starting the container: ```json { \"runtimes\": { \"runsc-prof\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--profile\" ] } } } ``` Note: Enabling profiling loosens the seccomp protection added to the sandbox, and should not be run in production under normal circumstances. Then restart docker to refresh the runtime options. While the container is running, execute `runsc debug` to collect profile information and save to a file. Here are the options available: --profile-heap:* Generates heap profile to the speficied file. --profile-cpu:* Enables CPU profiler, waits for `--duration` seconds and generates CPU profile to the speficied file. For example: ```bash docker run --runtime=runsc-prof --rm -d alpine sh -c \"while true; do echo running; sleep 1; done\" 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b sudo runsc --root /var/run/docker/runtime-runsc-prof/moby debug --profile-heap=/tmp/heap.prof 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b sudo runsc --root /var/run/docker/runtime-runsc-prof/moby debug --profile-cpu=/tmp/cpu.prof --duration=30s 63254c6ab3a6989623fa1fb53616951eed31ac605a2637bb9ddba5d8d404b35b ``` The resulting files can be opened using `go tool pprof` or . The examples below create image file (`.svg`) with the heap profile and writes the top functions using CPU to the console: ```bash go tool pprof -svg /usr/local/bin/runsc /tmp/heap.prof go tool pprof -top /usr/local/bin/runsc /tmp/cpu.prof ``` When forwarding a port to the container, Docker will likely route traffic through the . This proxy may make profiling noisy, so it can be helpful to bypass it. Do so by sending traffic directly to the container IP and port. e.g., if the `docker0` IP is `192.168.9.1`, the container IP is likely a subsequent IP, such as `192.168.9.2`."
}
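For instance, instead of guessing the address you can ask Docker for the container IP and then send the profiling traffic straight to it, bypassing the proxy (a sketch; the `NetworkSettings` path can differ depending on the network driver in use):

```bash
# Resolve the container's IP address directly from Docker metadata.
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' test)

# Hit the container port directly while the CPU profile is being collected.
curl "http://${CONTAINER_IP}:80/"
```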
] |
{
"category": "Runtime",
"file_name": "debugging.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Ceph Operator Helm Chart <! Document is generated by `make helm-docs`. DO NOT EDIT. Edit the corresponding *.gotmpl.md file instead --> Installs to create, configure, and manage Ceph clusters on Kubernetes. This chart bootstraps a deployment on a cluster using the package manager. Kubernetes 1.22+ Helm 3.x See the for more details. The Ceph Operator helm chart will install the basic components necessary to create a storage platform for your Kubernetes cluster. Install the Helm chart . The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace (you will install your clusters into separate namespaces). Rook currently publishes builds of the Ceph operator to the `release` and `master` channels. The release channel is the most recent release of Rook that is considered stable for the community. ```console helm repo add rook-release https://charts.rook.io/release helm install --create-namespace --namespace rook-ceph rook-ceph rook-release/rook-ceph -f values.yaml ``` For example settings, see the next section or The following table lists the configurable parameters of the rook-operator chart and their default values. | Parameter | Description | Default | |--|-|| | `allowLoopDevices` | If true, loop devices are allowed to be used for osds in test clusters | `false` | | `annotations` | Pod annotations | `{}` | | `cephCommandsTimeoutSeconds` | The timeout for ceph commands in seconds | `\"15\"` | | `containerSecurityContext` | Set the container security context for the operator | `{\"capabilities\":{\"drop\":[\"ALL\"]},\"runAsGroup\":2016,\"runAsNonRoot\":true,\"runAsUser\":2016}` | | `crds.enabled` | Whether the helm chart should create and update the CRDs. If false, the CRDs must be managed independently with deploy/examples/crds.yaml. WARNING Only set during first deployment. If later disabled the cluster may be DESTROYED. If the CRDs are deleted in this case, see to restore them. | `true` | | `csi.allowUnsupportedVersion` | Allow starting an unsupported ceph-csi image | `false` | | `csi.attacher.repository` | Kubernetes CSI Attacher image repository | `\"registry.k8s.io/sig-storage/csi-attacher\"` | | `csi.attacher.tag` | Attacher image tag | `\"v4.5.1\"` | | `csi.cephFSAttachRequired` | Whether to skip any attach operation altogether for CephFS PVCs. See more details . If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the CephFS PVC fast. WARNING It's highly discouraged to use this for CephFS RWO volumes. Refer to this for more details. | `true` | | `csi.cephFSFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `\"File\"` | | `csi.cephFSKernelMountOptions` | Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options. Set to \"ms_mode=secure\" when connections.encrypted is enabled in CephCluster CR | `nil` | | `csi.cephFSPluginUpdateStrategy` | CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` | | `csi.cephFSPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy. 
| `1` | | `csi.cephcsi.repository` | Ceph CSI image repository | `\"quay.io/cephcsi/cephcsi\"` | | `csi.cephcsi.tag` | Ceph CSI image tag | `\"v3.11.0\"` | |"
},
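Relating back to the `helm install ... -f values.yaml` command above, a minimal override file might pin only a few of the parameters listed in this table (illustrative values only; anything not listed keeps the chart defaults):

```yaml
# values.yaml -- small example override for the rook-ceph operator chart
crds:
  enabled: true                 # let the chart manage the CRDs (see the WARNING above)
csi:
  enableRbdDriver: true
  enableCephfsDriver: true
  enableCSIHostNetwork: true
  provisionerReplicas: 2
  logLevel: 0
allowLoopDevices: false         # only set to true in test clusters
```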
{
"data": "| CSI CephFS driver metrics port | `9081` | | `csi.cephfsPodLabels` | Labels to add to the CSI CephFS Deployments and DaemonSets Pods | `nil` | | `csi.clusterName` | Cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster | `nil` | | `csi.csiAddons.enabled` | Enable CSIAddons | `false` | | `csi.csiAddons.repository` | CSIAddons sidecar image repository | `\"quay.io/csiaddons/k8s-sidecar\"` | | `csi.csiAddons.tag` | CSIAddons sidecar image tag | `\"v0.8.0\"` | | `csi.csiAddonsPort` | CSI Addons server port | `9070` | | `csi.csiCephFSPluginResource` | CEPH CSI CephFS plugin resource requirement list | see values.yaml | | `csi.csiCephFSPluginVolume` | The volume of the CephCSI CephFS plugin DaemonSet | `nil` | | `csi.csiCephFSPluginVolumeMount` | The volume mounts of the CephCSI CephFS plugin DaemonSet | `nil` | | `csi.csiCephFSProvisionerResource` | CEPH CSI CephFS provisioner resource requirement list | see values.yaml | | `csi.csiDriverNamePrefix` | CSI driver name prefix for cephfs, rbd and nfs. | `namespace name where rook-ceph operator is deployed` | | `csi.csiLeaderElectionLeaseDuration` | Duration in seconds that non-leader candidates will wait to force acquire leadership. | `137s` | | `csi.csiLeaderElectionRenewDeadline` | Deadline in seconds that the acting leader will retry refreshing leadership before giving up. | `107s` | | `csi.csiLeaderElectionRetryPeriod` | Retry period in seconds the LeaderElector clients should wait between tries of actions. | `26s` | | `csi.csiNFSPluginResource` | CEPH CSI NFS plugin resource requirement list | see values.yaml | | `csi.csiNFSProvisionerResource` | CEPH CSI NFS provisioner resource requirement list | see values.yaml | | `csi.csiRBDPluginResource` | CEPH CSI RBD plugin resource requirement list | see values.yaml | | `csi.csiRBDPluginVolume` | The volume of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDPluginVolumeMount` | The volume mounts of the CephCSI RBD plugin DaemonSet | `nil` | | `csi.csiRBDProvisionerResource` | CEPH CSI RBD provisioner resource requirement list csi-omap-generator resources will be applied only if `enableOMAPGenerator` is set to `true` | see values.yaml | | `csi.disableCsiDriver` | Disable the CSI driver. | `\"false\"` | | `csi.disableHolderPods` | Deprecation note: Rook uses \"holder\" pods to allow CSI to connect to the multus public network without needing hosts to the network. Holder pods are being removed. See issue for details: https://github.com/rook/rook/issues/13055. New Rook deployments should set this to \"true\". | `true` | | `csi.enableCSIEncryption` | Enable Ceph CSI PVC encryption support | `false` | | `csi.enableCSIHostNetwork` | Enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary in some network configurations where the SDN does not provide access to an external cluster or there is significant drop in read/write performance | `true` | | `csi.enableCephfsDriver` | Enable Ceph CSI CephFS driver | `true` | | `csi.enableCephfsSnapshotter` | Enable Snapshotter in CephFS provisioner pod | `true` | | `csi.enableLiveness` | Enable Ceph CSI Liveness sidecar deployment | `false` | | `csi.enableMetadata` | Enable adding volume metadata on the CephFS subvolumes and RBD"
},
{
"data": "Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images. Hence enable metadata is false by default | `false` | | `csi.enableNFSSnapshotter` | Enable Snapshotter in NFS provisioner pod | `true` | | `csi.enableOMAPGenerator` | OMAP generator generates the omap mapping between the PV name and the RBD image which helps CSI to identify the rbd images for CSI operations. `CSIENABLEOMAP_GENERATOR` needs to be enabled when we are using rbd mirroring feature. By default OMAP generator is disabled and when enabled, it will be deployed as a sidecar with CSI provisioner pod, to enable set it to true. | `false` | | `csi.enablePluginSelinuxHostMount` | Enable Host mount for `/etc/selinux` directory for Ceph CSI nodeplugins | `false` | | `csi.enableRBDSnapshotter` | Enable Snapshotter in RBD provisioner pod | `true` | | `csi.enableRbdDriver` | Enable Ceph CSI RBD driver | `true` | | `csi.enableVolumeGroupSnapshot` | Enable volume group snapshot feature. This feature is enabled by default as long as the necessary CRDs are available in the cluster. | `true` | | `csi.forceCephFSKernelClient` | Enable Ceph Kernel clients on kernel < 4.17. If your kernel does not support quotas for CephFS you may want to disable this setting. However, this will cause an issue during upgrades with the FUSE client. See the | `true` | | `csi.grpcTimeoutInSeconds` | Set GRPC timeout for csi containers (in seconds). It should be >= 120. If this value is not set or is invalid, it defaults to 150 | `150` | | `csi.imagePullPolicy` | Image pull policy | `\"IfNotPresent\"` | | `csi.kubeletDirPath` | Kubelet root directory path (if the Kubelet uses a different path for the `--root-dir` flag) | `/var/lib/kubelet` | | `csi.logLevel` | Set logging level for cephCSI containers maintained by the cephCSI. Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity. | `0` | | `csi.nfs.enabled` | Enable the nfs csi driver | `false` | | `csi.nfsAttachRequired` | Whether to skip any attach operation altogether for NFS PVCs. See more details . If cephFSAttachRequired is set to false it skips the volume attachments and makes the creation of pods using the NFS PVC fast. WARNING It's highly discouraged to use this for NFS RWO volumes. Refer to this for more details. | `true` | | `csi.nfsFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `\"File\"` | | `csi.nfsPluginUpdateStrategy` | CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` | | `csi.nfsPodLabels` | Labels to add to the CSI NFS Deployments and DaemonSets Pods | `nil` | | `csi.pluginNodeAffinity` | The node labels for affinity of the CephCSI RBD plugin DaemonSet [^1] | `nil` | | `csi.pluginPriorityClassName` | PriorityClassName to be set on csi driver plugin pods | `\"system-node-critical\"` | | `csi.pluginTolerations` | Array of tolerations in YAML format which will be added to CephCSI plugin DaemonSet | `nil` | | `csi.provisioner.repository` | Kubernetes CSI provisioner image repository | `\"registry.k8s.io/sig-storage/csi-provisioner\"` | |"
},
{
"data": "| Provisioner image tag | `\"v4.0.1\"` | | `csi.provisionerNodeAffinity` | The node labels for affinity of the CSI provisioner deployment [^1] | `nil` | | `csi.provisionerPriorityClassName` | PriorityClassName to be set on csi driver provisioner pods | `\"system-cluster-critical\"` | | `csi.provisionerReplicas` | Set replicas for csi provisioner deployment | `2` | | `csi.provisionerTolerations` | Array of tolerations in YAML format which will be added to CSI provisioner deployment | `nil` | | `csi.rbdAttachRequired` | Whether to skip any attach operation altogether for RBD PVCs. See more details . If set to false it skips the volume attachments and makes the creation of pods using the RBD PVC fast. WARNING It's highly discouraged to use this for RWO volumes as it can cause data corruption. csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false since we'll have no VolumeAttachments to determine which node the PVC is mounted on. Refer to this for more details. | `true` | | `csi.rbdFSGroupPolicy` | Policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted. supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html | `\"File\"` | | `csi.rbdLivenessMetricsPort` | Ceph CSI RBD driver metrics port | `8080` | | `csi.rbdPluginUpdateStrategy` | CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate | `RollingUpdate` | | `csi.rbdPluginUpdateStrategyMaxUnavailable` | A maxUnavailable parameter of CSI RBD plugin daemonset update strategy. | `1` | | `csi.rbdPodLabels` | Labels to add to the CSI RBD Deployments and DaemonSets Pods | `nil` | | `csi.registrar.repository` | Kubernetes CSI registrar image repository | `\"registry.k8s.io/sig-storage/csi-node-driver-registrar\"` | | `csi.registrar.tag` | Registrar image tag | `\"v2.10.1\"` | | `csi.resizer.repository` | Kubernetes CSI resizer image repository | `\"registry.k8s.io/sig-storage/csi-resizer\"` | | `csi.resizer.tag` | Resizer image tag | `\"v1.10.1\"` | | `csi.serviceMonitor.enabled` | Enable ServiceMonitor for Ceph CSI drivers | `false` | | `csi.serviceMonitor.interval` | Service monitor scrape interval | `\"10s\"` | | `csi.serviceMonitor.labels` | ServiceMonitor additional labels | `{}` | | `csi.serviceMonitor.namespace` | Use a different namespace for the ServiceMonitor | `nil` | | `csi.sidecarLogLevel` | Set logging level for Kubernetes-csi sidecar containers. Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity. | `0` | | `csi.snapshotter.repository` | Kubernetes CSI snapshotter image repository | `\"registry.k8s.io/sig-storage/csi-snapshotter\"` | | `csi.snapshotter.tag` | Snapshotter image tag | `\"v7.0.2\"` | | `csi.topology.domainLabels` | domainLabels define which node labels to use as domains for CSI nodeplugins to advertise their domains | `nil` | | `csi.topology.enabled` | Enable topology based provisioning | `false` | | `currentNamespaceOnly` | Whether the operator should watch cluster CRD in its own namespace or not | `false` | | `disableDeviceHotplug` | Disable automatic orchestration when new devices are discovered. | `false` | | `discover.nodeAffinity` | The node labels for affinity of `discover-agent` [^1] | `nil` | | `discover.podLabels` | Labels to add to the discover pods | `nil` | | `discover.resources` | Add resources to discover daemon pods | `nil` | | `discover.toleration` | Toleration for the discover pods. 
Options: `NoSchedule`, `PreferNoSchedule` or `NoExecute` | `nil` | |"
},
{
"data": "| The specific key of the taint to tolerate | `nil` | | `discover.tolerations` | Array of tolerations in YAML format which will be added to discover deployment | `nil` | | `discoverDaemonUdev` | Blacklist certain disks according to the regex provided. | `nil` | | `discoveryDaemonInterval` | Set the discovery daemon device discovery interval (default to 60m) | `\"60m\"` | | `enableDiscoveryDaemon` | Enable discovery daemon | `false` | | `enableOBCWatchOperatorNamespace` | Whether the OBC provisioner should watch on the operator namespace or not, if not the namespace of the cluster will be used | `true` | | `hostpathRequiresPrivileged` | Runs Ceph Pods as privileged to be able to write to `hostPaths` in OpenShift with SELinux restrictions. | `false` | | `image.pullPolicy` | Image pull policy | `\"IfNotPresent\"` | | `image.repository` | Image | `\"rook/ceph\"` | | `image.tag` | Image tag | `master` | | `imagePullSecrets` | imagePullSecrets option allow to pull docker images from private docker registry. Option will be passed to all service accounts. | `nil` | | `logLevel` | Global log level for the operator. Options: `ERROR`, `WARNING`, `INFO`, `DEBUG` | `\"INFO\"` | | `monitoring.enabled` | Enable monitoring. Requires Prometheus to be pre-installed. Enabling will also create RBAC rules to allow Operator to create ServiceMonitors | `false` | | `nodeSelector` | Kubernetes to add to the Deployment. | `{}` | | `obcProvisionerNamePrefix` | Specify the prefix for the OBC provisioner in place of the cluster namespace | `ceph cluster namespace` | | `priorityClassName` | Set the priority class for the rook operator deployment if desired | `nil` | | `pspEnable` | If true, create & use PSP resources | `false` | | `rbacAggregate.enableOBCs` | If true, create a ClusterRole aggregated to for objectbucketclaims | `false` | | `rbacEnable` | If true, create & use RBAC resources | `true` | | `resources` | Pod resource requests & limits | `{\"limits\":{\"memory\":\"512Mi\"},\"requests\":{\"cpu\":\"200m\",\"memory\":\"128Mi\"}}` | | `scaleDownOperator` | If true, scale down the rook operator. This is useful for administrative actions where the rook operator must be scaled down, while using gitops style tooling to deploy your helm charts. | `false` | | `tolerations` | List of Kubernetes to add to the Deployment. | `[]` | | `unreachableNodeTolerationSeconds` | Delay to use for the `node.kubernetes.io/unreachable` pod failure toleration to override the Kubernetes default of 5 minutes | `5` | | `useOperatorHostNetwork` | If true, run rook operator on the host network | `nil` | To deploy from a local build from your development environment: Build the Rook docker image: `make` Copy the image to your K8s cluster, such as with the `docker save` then the `docker load` commands Install the helm chart: ```console cd deploy/charts/rook-ceph helm install --create-namespace --namespace rook-ceph rook-ceph . ``` To see the currently installed Rook chart: ```console helm ls --namespace rook-ceph ``` To uninstall/delete the `rook-ceph` deployment: ```console helm delete --namespace rook-ceph rook-ceph ``` The command removes all the Kubernetes components associated with the chart and deletes the release. After uninstalling you may want to clean up the CRDs as described on the ."
}
] |
{
"category": "Runtime",
"file_name": "operator-chart.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "klog ==== klog is a permanent fork of https://github.com/golang/glog. The decision to create klog was one that wasn't made lightly, but it was necessary due to some drawbacks that are present in . Ultimately, the fork was created due to glog not being under active development; this can be seen in the glog README: The code in this repo [...] is not itself under development This makes us unable to solve many use cases without a fork. The factors that contributed to needing feature development are listed below: `glog` and introduces challenges in containerized environments, all of which aren't well documented. `glog` doesn't provide an easy way to test logs, which detracts from the stability of software using it A long term goal is to implement a logging interface that allows us to add context, change output format, etc. Historical context is available here: https://github.com/kubernetes/kubernetes/issues/61006 https://github.com/kubernetes/kubernetes/issues/70264 https://groups.google.com/forum/#!msg/kubernetes-sig-architecture/wCWiWf3Juzs/hXRVBH90CgAJ https://groups.google.com/forum/#!msg/kubernetes-dev/7vnijOMhLS0/1oRiNtigBgAJ How to use klog =============== Replace imports for `\"github.com/golang/glog\"` with `\"k8s.io/klog/v2\"` Use `klog.InitFlags(nil)` explicitly for initializing global flags as we no longer use `init()` method to register the flags You can now use `logfile` instead of `logdir` for logging to a single file (See `examples/logfile/usagelog_file.go`) If you want to redirect everything logged using klog somewhere else (say syslog!), you can use `klog.SetOutput()` method and supply a `io.Writer`. (See `examples/setoutput/usageset_output.go`) For more logging conventions (See ) NOTE: please use the newer go versions that support semantic import versioning in modules, ideally go 1.11.4 or greater. See to see how to coexist with both klog/v1 and klog/v2. This package can be used side by side with glog. shows how to initialize and synchronize flags from the global `flag.CommandLine` FlagSet. In addition, the example makes use of stderr as combined output by setting `alsologtostderr` (or `logtostderr`) to `true`. Learn how to engage with the Kubernetes community on the . You can reach the maintainers of this project at: Participation in the Kubernetes community is governed by the . glog ==== Leveled execution logs for Go. This is an efficient pure Go implementation of leveled logs in the manner of the open source C++ package https://github.com/google/glog By binding methods to booleans it is possible to use the log package without paying the expense of evaluating the arguments to the log. Through the -vmodule flag, the package also provides fine-grained control over logging at the file level. The comment from glog.go introduces the ideas: Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: glog.Info(\"Prepare to repel boarders\") glog.Fatalf(\"Initialization failed: %s\", err) See the documentation for the V function for an explanation of these examples: if glog.V(2) { glog.Info(\"Starting transaction...\") } glog.V(2).Infoln(\"Processed\", nItems, \"elements\") The repository contains an open source version of the log package used inside Google. The master copy of the source lives inside Google, not here. 
The code in this repo is for export only and is not itself under development. Feature requests will be ignored. Send bug reports to [email protected]."
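Returning to the klog usage steps above, a minimal self-contained program wiring them together could look like the following sketch (the log messages and verbosity levels are only illustrative):

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's global flags (-v, -logtostderr, -log_file, ...) explicitly,
	// then parse them together with any application flags.
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.Info("starting up")
	klog.V(2).Infoln("verbose details, shown only with -v=2 or higher")
	klog.Warningf("disk space low: %d%% used", 91)
}
```

Redirecting the output elsewhere only requires an additional `klog.SetOutput(w)` call with any `io.Writer`, as noted above.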
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Inclavare Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This directory contains an example `storage_class.lua` on how to use to read and write the Storage Class field of a put request. Create Zonegroup placement info for a Storage Class (QLCCLASS in this example) and point class to a data pool (qlcpool in this example) NOTE: RGW will need restarted due to the Zonegroup placement info change. See: https://docs.ceph.com/en/latest/radosgw/placement/#zonegroup-zone-configuration for more information. ```bash ./bin/radosgw-admin zonegroup placement add --rgw-zonegroup default --placement-id default-placement --storage-class QLC_CLASS ./bin/radosgw-admin zone placement add --rgw-zone default --placement-id default-placement --storage-class QLCCLASS --data-pool qlcpool ``` Restart radosgw for Zone/ZoneGroup placement changes to take effect. Upload the script: ```bash ./bin/radosgw-admin script put --infile=storage_class.lua --context=preRequest ``` Create a bucket and put and object with a Storage Class header (no modification will occur): ```bash aws --profile=ceph --endpoint=http://localhost:8000 s3api create-bucket --bucket test-bucket aws --profile=ceph --endpoint=http://localhost:8000 s3api put-object --bucket test-bucket --key truv-0 --body ./64KiB_object.bin --storage-class STANDARD ``` Send a request without a Storage Class header (Storage Class will be changed to QLC_CLASS by Lua script): ```bash aws --profile=ceph --endpoint=http://localhost:8000 s3api put-object --bucket test-bucket --key truv-0 --body ./64KiB_object.bin ``` NOTE: If you use s3cmd instead of aws command-line, s3cmd adds \"STANDARD\" StorageClass to any put request so the example Lua script will not modify it. Verify S3 object had its StorageClass header added ```bash grep Lua ceph/build/out/radosgw.8000.log 2021-11-01T17:10:14.048-0400 7f9c7f697640 20 Lua INFO: Put_Obj with StorageClass: 2021-11-01T17:10:14.048-0400 7f9c7f697640 20 Lua INFO: No StorageClass for Object and size >= threshold: truv-0 adding QLC StorageClass ``` Lua 5.3"
}
] |
{
"category": "Runtime",
"file_name": "storage_class.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Providers description: Extend the Virtual Kubelet interface weight: 4 The Virtual Kubelet provides a pluggable provider interface that developers can implement to define the actions of a typical kubelet. This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without needing to manage VM infrastructure. Each provider may have its own configuration file and required environment variables. Virtual Kubelet providers must provide the following functionality to be considered a fully compliant integration: Provide the back-end plumbing necessary to support the lifecycle management of Pods, containers, and supporting resources in the context of Kubernetes. Conform to the current API provided by Virtual Kubelet. Restrict all access to the and provide a well-defined callback mechanism for retrieving data like or . Virtual Kubelet currently has a wide variety of providers: {{< providers >}} To add a new Virtual Kubelet provider, create a new directory for your provider. In that created directory, implement interface in . For an example implementation of the Virtual Kubelet `PodLifecycleHandler` interface, see the , especially . Each Virtual Kubelet provider can be configured using its own configuration file and environment variables. You can see the list of required methods, with relevant descriptions of each method, below: ```go // PodLifecycleHandler defines the interface used by the PodController to react // to new and changed pods scheduled to the node that is being managed. // // Errors produced by these methods should implement an interface from // github.com/virtual-kubelet/virtual-kubelet/errdefs package in order for the // core logic to be able to understand the type of failure. type PodLifecycleHandler interface { // CreatePod takes a Kubernetes Pod and deploys it within the provider. CreatePod(ctx context.Context, pod *corev1.Pod) error // UpdatePod takes a Kubernetes Pod and updates it within the provider. UpdatePod(ctx context.Context, pod *corev1.Pod) error // DeletePod takes a Kubernetes Pod and deletes it from the provider. DeletePod(ctx context.Context, pod *corev1.Pod) error // GetPod retrieves a pod by name from the provider (can be"
},
{
"data": "// The Pod returned is expected to be immutable, and may be accessed // concurrently outside of the calling goroutine. Therefore it is recommended // to return a version after DeepCopy. GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) // GetPodStatus retrieves the status of a pod by name from the provider. // The PodStatus returned is expected to be immutable, and may be accessed // concurrently outside of the calling goroutine. Therefore it is recommended // to return a version after DeepCopy. GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) // GetPods retrieves a list of all pods running on the provider (can be cached). // The Pods returned are expected to be immutable, and may be accessed // concurrently outside of the calling goroutine. Therefore it is recommended // to return a version after DeepCopy. GetPods(context.Context) ([]*corev1.Pod, error) } ``` In addition to `PodLifecycleHandler`, there's an optional interface that providers can implement to expose Kubernetes Pod stats: ```go type PodMetricsProvider interface { GetStatsSummary(context.Context) (*stats.Summary, error) } ``` For a Virtual Kubelet provider to be considered viable, it must support the following functionality: It must provide the backend plumbing necessary to support the lifecycle management of Pods, containers, and supporting resources in the Kubernetes context. It must conform to the current API provided by Virtual Kubelet (see ) It won't have access to the , so it must provide a well-defined callback mechanism for fetching data like and . No Virtual Kubelet provider is complete without solid documentation. We strongly recommend providing a README for your provider in its directory. The READMEs for the currently existing implementations can provide a blueprint. You'll also likely want your provider to appear in the . That list is generated from a file. Add a `name` field for the displayed name of the provider and the subdirectory as the `tag` field. The `name` field supports Markdown, so feel free to use bold text or a hyperlink. In order to test the provider you're developing, simply run `make test` from the root of the Virtual Kubelet directory."
}
] |
{
"category": "Runtime",
"file_name": "providers.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Update go.mod for submodules to depend on the new release which will happen in the next step. Run the pre-release script. It creates a branch `prerelease<new tag>` that will contain all release changes. ``` ./pre_release.sh -t <new tag> ``` Verify the changes. ``` git diff main ``` This should have changed the version for all modules to be `<new tag>`. Update the . Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand. To verify this, you can look directly at the commits since the `<last tag>`. ``` git --no-pager log --pretty=oneline \"<last tag>..HEAD\" ``` Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`). Update all the appropriate links at the bottom. Push the changes to upstream and create a Pull Request on GitHub. Be sure to include the curated changes from the in the description. Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit. *IMPORTANT*: It is critical you use the same tag that you used in the Pre-Release step! Failure to do so will leave things in a broken state. *IMPORTANT*: . It is critical you make sure the version you push upstream is correct. . Run the tag.sh script using the `<commit-hash>` of the commit on the main branch for the merged Pull Request. ``` ./tag.sh <new tag> <commit-hash> ``` Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`). Make sure you push all sub-modules as well. ``` git push upstream <new tag> git push upstream <submodules-path/new tag> ... ``` Finally create a Release for the new `<new tag>` on GitHub. The release body should include all the release notes from the Changelog for this release. Additionally, the `tag.sh` script generates commit logs since last release which can be used to supplement the release notes. After releasing verify that examples build outside of the repository. ``` ./verify_examples.sh ``` The script copies examples into a different directory removes any `replace` declarations in `go.mod` and builds them. This ensures they build with the published release, not the local copy. Once verified be sure to that uses this release."
}
] |
{
"category": "Runtime",
"file_name": "RELEASING.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Due to the complicated steps of building and installation form source, it is recommended that you install iSuald through the rpm package. For details, please refer to: . If you still want to build and install iSulad from source, please follow the steps below. You can automatically install isulad on openEuler directly by compiling dependencies (other rpm distributions can also refer to this method, but some package names are inconsistent). ```bash dnf builddep iSulad.spec ``` `tips` iSulad.spec directly uses the files in the isulad source code. Since isulad depends on capability.h, you need to use yum to install the libcap-devel library additionally. Then, you should build and install iSulad: ```sh $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad $ mkdir build $ cd build $ sudo -E cmake .. $ sudo -E make $ sudo -E make install ``` `tips` The communication between isula and isulad uses grpc by default. If you want to use rest for communication, you can replace it with the following compilation options ```c cmake -DENABLE_GRPC=OFF ../ ``` We provided a script to auto install iSulad on centos7, you can just execute the script to install iSulad. ```sh $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad/docs/build_docs/guide/script $ sudo ./installiSuladonCentos7.sh ``` We also provided a script to auto install iSulad on Ubuntu20.04, you can just execute the script to install iSulad. ```sh $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad/docs/build_docs/guide/script $ sudo chmod +x ./installiSuladonUbuntu2004LTS.sh $ sudo ./installiSuladonUbuntu2004LTS.sh ``` `tips`: If you want to keep the source of all dependencies, you can comment `rm -rf $BUILD_DIR` in the script. After executing the automated installation command, if there are dependencies that do not exist in the package management or the versions do not meet the requirements, you can optionally build them from source. Note: grpc-1.22 can not support GCC 9+. Similarly, if you want to build and install iSulad from source step by step, you can follow the steps below to build and install basic dependencies, and then build and install specific versions of key dependencies. The default installation path is `/usr/local/lib/`, which needs to be added to `PKGCONFIGPATH` and `LDLIBRARYPATH`, so that the system can find the packages and lib libraries. If the installed path is `/usr/lib/`, you can ignore this step. ```bash $ export PKGCONFIGPATH=/usr/local/lib/pkgconfig:$PKGCONFIGPATH $ export LDLIBRARYPATH=/usr/local/lib:/usr/lib:$LDLIBRARYPATH $ sudo -E echo \"/usr/local/lib\" >> /etc/ld.so.conf ``` Note: If the build is interrupted, you must re-execute the above commands to set the paths of ldconfig and pkgconfig before building the source again. ```bash $ git clone https://gitee.com/src-openeuler/protobuf.git $ cd protobuf $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf protobuf-all-3.9.0.tar.gz $ cd protobuf-3.9.0 $ sudo -E ./autogen.sh $ sudo -E ./configure $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` ```bash $ git clone https://gitee.com/src-openeuler/c-ares.git $ cd c-ares $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf c-ares-1.15.0.tar.gz $ cd c-ares-1.15.0 $ sudo -E autoreconf -if $ sudo -E"
},
{
"data": "--enable-shared --disable-dependency-tracking $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` ```bash $ git clone https://gitee.com/src-openeuler/grpc.git $ cd grpc $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf grpc-1.22.0.tar.gz $ cd grpc-1.22.0 $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` ```bash $ git clone https://gitee.com/src-openeuler/libevent.git $ cd libevent $ git checkout -b openEuler-20.03-LTS-tag openEuler-20.03-LTS-tag $ tar -xzvf libevent-2.1.11-stable.tar.gz $ cd libevent-2.1.11-stable && ./autogen.sh && ./configure $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` ```bash $ git clone https://gitee.com/src-openeuler/libevhtp.git $ cd libevhtp && git checkout -b openEuler-20.03-LTS-tag openEuler-20.03-LTS-tag $ tar -xzvf libevhtp-1.2.16.tar.gz $ cd libevhtp-1.2.16 $ patch -p1 -F1 -s < ../0001-support-dynamic-threads.patch $ patch -p1 -F1 -s < ../0002-close-openssl.patch $ rm -rf build && mkdir build && cd build $ sudo -E cmake -D EVHTPBUILDSHARED=on -D EVHTPDISABLESSL=on ../ $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` ```bash $ git clone https://gitee.com/src-openeuler/libwebsockets.git $ cd libwebsockets $ git checkout openEuler-20.03-LTS-tag $ tar -xzvf libwebsockets-2.4.2.tar.gz $ cd libwebsockets-2.4.2 $ patch -p1 -F1 -s < ../libwebsockets-fix-coredump.patch $ mkdir build $ cd build $ sudo -E cmake -DLWSWITHSSL=0 -DLWSMAXSMP=32 -DCMAKEBUILDTYPE=Debug ../ $ sudo -E make -j $(nproc) $ sudo -E make install $ sudo -E ldconfig ``` Finally, because iSulad depends on some specific versions of key dependencies, and each dependency is called through a functional interface, you must ensure that the versions of all key dependencies are consistent. The consistency of the version includes the following four aspects: Branch consistency: build and install the same branch of each dependency; Consistent releases: Since each isulad release has an adapted dependency release, you need to build and install the dependencies of the specified release; Specific OS: If you use a specific OS version of , you need to obtain the `src.rpm` package of each dependency through the package management tool to obtain the source to build and install; Src-openeuler: If the dependencies are obtained from the community, it is also necessary to keep the dependencies built with the same branch; ```bash $ git clone https://gitee.com/src-openeuler/lxc.git $ cd lxc $ ./apply-patches $ cd lxc-4.0.3 $ sudo -E ./autogen.sh $ sudo -E ./configure $ sudo -E make -j $ sudo -E make install ``` ```bash $ git clone https://gitee.com/openeuler/lcr.git $ cd lcr $ mkdir build $ cd build $ sudo -E cmake .. $ sudo -E make -j $ sudo -E make install ``` ```bash $ git clone https://gitee.com/openeuler/clibcni.git $ cd clibcni $ mkdir build $ cd build $ sudo -E cmake .. $ sudo -E make -j $ sudo -E make install ``` ```sh $ git clone https://gitee.com/openeuler/iSulad.git $ cd iSulad $ mkdir build $ cd build $ sudo -E cmake .. $ sudo -E make $ sudo -E make install ``` Tips The communication between isula and isulad uses grpc by default. If you want to use rest for communication, you can replace it with the following compilation options ```c cmake -DENABLE_GRPC=OFF ../ ```"
}
] |
{
"category": "Runtime",
"file_name": "build_guide.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "We currently do not have a way for users to monitor and alert about events happen in Longhorn such as volume is full, backup is failed, CPU usage, memory consumption. This enhancement exports Prometheus metrics so that users can use Prometheus or other monitoring systems to monitor Longhorn. https://github.com/longhorn/longhorn/issues/1180 We are planing to expose 22 metrics in this release: longhornvolumecapacity_bytes longhornvolumeactualsizebytes longhornvolumestate longhornvolumerobustness longhornnodestatus longhornnodecount_total longhornnodecpucapacitymillicpu longhornnodecpuusagemillicpu longhornnodememorycapacitybytes longhornnodememoryusagebytes longhornnodestoragecapacitybytes longhornnodestorageusagebytes longhornnodestoragereservationbytes longhorndiskcapacity_bytes longhorndiskusage_bytes longhorndiskreservation_bytes longhorninstancemanagercpuusage_millicpu longhorninstancemanagercpurequests_millicpu longhorninstancemanagermemoryusage_bytes longhorninstancemanagermemoryrequests_bytes longhornmanagercpuusagemillicpu longhornmanagermemoryusagebytes See the section for definition of each metric. We are not planing to expose 6 metrics in this release: longhornbackupstatsnumberfailed_backups longhornbackupstatsnumbersucceed_backups longhornbackupstatsbackupstatus (status for this backup (0=InProgress,1=Done,2=Failed)) longhornvolumeio_ops longhornvolumeioreadthroughput longhornvolumeiowritethroughput Longhorn already has a great UI with many useful information. However, Longhorn doesn't have any alert/notification mechanism yet. Also, we don't have any dashboard or graphing support so that users can have overview picture of the storage system. This enhancement will address both of the above issues. In many cases, a problem/issue can be quickly discovered if we have a monitoring dashboard. For example, there are many times users ask us for supporting and the problems were that the Longhorn engines were killed due to over-use CPU limit. If there is a CPU monitoring dashboard for instance managers, those problems can be quickly detected. User want to be notified about abnormal event such as disk space limit approaching. We can expose metrics provide information about it and user can scrape the metrics and setup alert system. After this enhancement is merged, Longhorn expose metrics at end point `/metrics` in Prometheus' . Users can use Prometheus or other monitoring systems to collect those metrics by scraping the end point `/metrics` in longhorn manager. Then, user can display the collected data using tools such as Grafana. User can also setup alert by using tools such as Prometheus Alertmanager. Below are the descriptions of metrics which Longhorn exposes and how users can use them: longhornvolumecapacity_bytes This metric reports the configured size in bytes for each volume which is managed by the current longhorn manager. This metric contains 2 labels (dimensions): `node`: the node of the longhorn manager which is managing this volume `volume`: the name of this volume Example of a sample of this metric could be: ``` longhornvolumecapacity_bytes{node=\"worker-2\",volume=\"testvol\"} 6.442450944e+09 ``` Users can use this metrics to draw graph about and quickly see the big volumes in the storage system. 
longhornvolumeactualsizebytes This metric reports the actual space used by each replica of the volume on the corresponding nodes This metric contains 2 labels (dimensions): `node`: the node of the longhorn manager which is managing this volume `volume`: the name of this volume Example of a sample of this metric could be: ``` longhornvolumeactualsizebytes{node=\"worker-2\",volume=\"testvol\"} 1.1917312e+08 ``` Users can use this metrics to the actual size occupied on disks of Longhorn volumes longhornvolumestate This metric reports the state of the volume. The states are: 1=creating, 2=attached, 3=Detached, 4=Attaching, 5=Detaching, 6=Deleting. This metric contains 2 labels (dimensions): `node`: the node of the longhorn manager which is managing this volume `volume`: the name of this volume Example of a sample of this metric could be: ``` longhornvolumestate{node=\"worker-3\",volume=\"testvol1\"} 2 ``` longhornvolumerobustness This metric reports the robustness of the"
},
{
"data": "Possible values are: 0=unknown, 1=healthy, 2=degraded, 3=faulted This metric contains 2 labels (dimensions): `node`: the node of the longhorn manager which is managing this volume `volume`: the name of this volume Example of a sample of this metric could be: ``` longhornvolumerobustness{node=\"worker-3\",volume=\"testvol1\"} 1 ``` longhornnodestatus This metric reports the `ready`, `schedulable`, `mountPropagation` condition for the current node. This metric contains 3 labels (dimensions): `node` `condition`: the name of the condition (`ready`, `schedulable`, `mountPropagation`) `condition_reason` Example of a sample of this metric could be: ``` longhornnodestatus{condition=\"allowScheduling\",condition_reason=\"\",node=\"worker-3\"} 1 longhornnodestatus{condition=\"mountpropagation\",condition_reason=\"\",node=\"worker-3\"} 1 longhornnodestatus{condition=\"ready\",condition_reason=\"\",node=\"worker-3\"} 1 longhornnodestatus{condition=\"schedulable\",condition_reason=\"\",node=\"worker-3\"} 1 ``` Users can use this metrics to setup alert about node status. longhornnodecount_total This metric reports the total nodes in Longhorn system. Example of a sample of this metric could be: ``` longhornnodecount_total 3 ``` Users can use this metric to detect the number of down nodes longhornnodecpucapacitymillicpu Report the maximum allocatable cpu on this node Example of a sample of this metric could be: ``` longhornnodecpucapacitymillicpu{node=\"worker-3\"} 2000 ``` longhornnodecpuusagemillicpu Report the cpu usage on this node Example of a sample of this metric could be: ``` longhornnodecpuusagemillicpu{node=\"worker-3\"} 149 ``` longhornnodememorycapacitybytes Report the maximum allocatable memory on this node Example of a sample of this metric could be: ``` longhornnodememorycapacitybytes{node=\"worker-3\"} 4.031217664e+09 ``` longhornnodememoryusagebytes Report the memory usage on this node Example of a sample of this metric could be: ``` longhornnodememoryusagebytes{node=\"worker-3\"} 1.643794432e+09 ``` longhornnodestoragecapacitybytes Report the storage capacity of this node Example of a sample of this metric could be: ``` longhornnodestoragecapacitybytes{node=\"worker-3\"} 8.3987283968e+10 ``` longhornnodestorageusagebytes Report the used storage of this node Example of a sample of this metric could be: ``` longhornnodestorageusagebytes{node=\"worker-3\"} 9.060212736e+09 ``` longhornnodestoragereservationbytes Report the reserved storage for other applications and system on this node Example of a sample of this metric could be: ``` longhornnodestoragereservationbytes{node=\"worker-3\"} 2.519618519e+10 ``` longhorndiskcapacity_bytes Report the storage capacity of this disk. Example of a sample of this metric could be: ``` longhorndiskcapacity_bytes{disk=\"default-disk-8b28ee3134628183\",node=\"worker-3\"} 8.3987283968e+10 ``` longhorndiskusage_bytes Report the used storage of this disk Example of a sample of this metric could be: ``` longhorndiskusage_bytes{disk=\"default-disk-8b28ee3134628183\",node=\"worker-3\"} 9.060212736e+09 ``` longhorndiskreservation_bytes Report the reserved storage for other applications and system on this disk Example of a sample of this metric could be: ``` longhorndiskreservation_bytes{disk=\"default-disk-8b28ee3134628183\",node=\"worker-3\"} 2.519618519e+10 ``` longhorninstancemanagercpurequests_millicpu This metric reports the requested CPU resources in Kubernetes of the Longhorn instance managers on the current node. 
The unit of this metric is milliCPU. See more about the unit at https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units This metric contains 3 labels (dimensions): `node` `instance_manager` `instancemanagertype` Example of a sample of this metric could be: ``` longhorninstancemanagercpurequestsmillicpu{instancemanager=\"instance-manager-r-61ffe369\",instancemanagertype=\"replica\",node=\"worker-3\"} 250 ``` longhorninstancemanagercpuusage_millicpu This metric reports the CPU usage of the Longhorn instance managers on the current node. The unit of this metric is milliCPU. See more about the unit at https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units This metric contains 3 labels (dimensions): `node` `instance_manager` `instancemanagertype` Example of a sample of this metric could be: ``` longhorninstancemanagercpuusagemillicpu{instancemanager=\"instance-manager-r-61ffe369\",instancemanager_type=\"replica\",node=\"worker-3\"} 0 ``` longhorninstancemanagermemoryrequests_bytes This metric reports the requested memory in Kubernetes of the Longhorn instance managers on the current node. This metric contains 3 labels (dimensions): `node` `instance_manager` `instancemanagertype` Example of a sample of this metric could be: ``` longhorninstancemanagermemoryrequestsbytes{instancemanager=\"instance-manager-e-0a67975b\",instancemanagertype=\"engine\",node=\"worker-3\"} 0 ``` longhorninstancemanagermemoryusage_bytes This metric reports the memory usage of the Longhorn instance managers on the current node. This metric contains 3 labels (dimensions): `node` `instance_manager` `instancemanagertype` Example of a sample of this metric could be: ``` longhorninstancemanagermemoryusagebytes{instancemanager=\"instance-manager-e-0a67975b\",instancemanagertype=\"engine\",node=\"worker-3\"} 1.374208e+07 ``` longhornmanagercpuusagemillicpu This metric reports the CPU usage of the Longhorn manager on the current node. The unit of this metric is milliCPU. See more about the unit
},
{
"data": "https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#cpu-units This metric contains 2 labels (dimensions): `node` `manager` Example of a sample of this metric could be: ``` longhornmanagercpuusagemillicpu{manager=\"longhorn-manager-x5cjj\",node=\"phan-cluster-23-worker-3\"} 15 ``` longhornmanagermemoryusagebytes This metric reports the memory usage of the Longhorn manager on the current node. This metric contains 2 labels (dimensions): `node` `manager` Example of a sample of this metric could be: ``` longhornmanagermemoryusagebytes{manager=\"longhorn-manager-x5cjj\",node=\"worker-3\"} 2.7979776e+07 ``` We add a new end point `/metrics` to exposes all longhorn Prometheus metrics. We follow the , each Longhorn manager reports information about the components it manages. Prometheus can use service discovery mechanism to find all longhorn-manager pods in longhorn-backend service. We create a new collector for each type (volumeCollector, backupCollector, nodeCollector, etc..) and have a common baseCollector. This structure is similar to the controller package: we have volumeController, nodeController, etc.. which have a common baseController. The end result is a structure like a tree: ``` a custom registry <- many custom collectors share the same base collector <- many metrics in each custom collector ``` When a scrape request is made to endpoint `/metric`, a handler gathers data in the Longhorn custom registry, which in turn gathers data in custom collectors, which in turn gathers data in all metrics. Below are how we collect data for each metric: longhornvolumecapacity_bytes We get the information about volumes' capacity by reading volume CRD from datastore. When volume move to a different node, the current longhorn manager stops reporting the vol. The volume will be reported by a new longhorn manager. longhornactualsize_bytes We get the information about volumes' actual size by reading volume CRD from datastore. When volume move to a different node, the current longhorn manager stops reporting the vol. The volume will be reported by a new longhorn manager. longhornvolumestate We get the information about volumes' state by reading volume CRD from datastore. longhornvolumerobustness We get the information about volumes' robustness by reading volume CRD from datastore. longhornnodestatus We get the information about node status by reading node CRD from datastore. Nodes don't move likes volume, so we don't have to decide which longhorn manager reports which node. 
longhornnodecount_total We get the information about total number node by reading from datastore longhornnodecpucapacitymillicpu We get the information about the maximum allocatable cpu on this node by reading Kubernetes node resource longhornnodecpuusagemillicpu We get the information about the cpu usage on this node from metric client longhornnodememorycapacitybytes We get the information about the maximum allocatable memory on this node by reading Kubernetes node resource longhornnodememoryusagebytes We get the information about the memory usage on this node from metric client longhornnodestoragecapacitybytes We get the information by reading node CRD from datastore longhornnodestorageusagebytes We get the information by reading node CRD from datastore longhornnodestoragereservationbytes We get the information by reading node CRD from datastore longhorndiskcapacity_bytes We get the information by reading node CRD from datastore longhorndiskusage_bytes We get the information by reading node CRD from datastore longhorndiskreservation_bytes We get the information by reading node CRD from datastore longhorninstancemanagercpurequests_millicpu We get the information by reading instance manager Pod objects from datastore. longhorninstancemanagercpuusage_millicpu We get the information by using kubernetes metric client. longhorninstancemanagermemoryusage_bytes We get the information by using kubernetes metric client. longhorninstancemanagermemoryrequests_bytes We get the information by reading instance manager Pod objects from datastore. longhornmanagercpuusagemillicpu We get the information by using kubernetes metric client. longhornmanagermemoryusagebytes We get the information by using kubernetes metric client. The manual test plan is detailed at This enhancement doesn't require any upgrade."
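To make the collector structure described above more concrete, here is a heavily simplified Go sketch of one custom collector registered on a custom registry and served on `/metrics` (names such as `volumeCollector` follow the text; the real implementation reads volumes from the datastore rather than a hard-coded map):

```go
package metricscollector

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// volumeCollector exposes one gauge per volume managed by this node.
type volumeCollector struct {
	currentNodeID string
	capacityDesc  *prometheus.Desc
}

func newVolumeCollector(nodeID string) *volumeCollector {
	return &volumeCollector{
		currentNodeID: nodeID,
		capacityDesc: prometheus.NewDesc(
			"longhorn_volume_capacity_bytes",
			"Configured size in bytes for this volume",
			[]string{"node", "volume"}, nil),
	}
}

func (c *volumeCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.capacityDesc
}

func (c *volumeCollector) Collect(ch chan<- prometheus.Metric) {
	// Placeholder data; the real collector lists the volume CRDs owned by this node.
	for volume, sizeBytes := range map[string]float64{"testvol": 6.442450944e9} {
		ch <- prometheus.MustNewConstMetric(c.capacityDesc, prometheus.GaugeValue,
			sizeBytes, c.currentNodeID, volume)
	}
}

// InstallMetricsHandler wires the custom registry into an HTTP mux.
func InstallMetricsHandler(mux *http.ServeMux, nodeID string) {
	registry := prometheus.NewRegistry()
	registry.MustRegister(newVolumeCollector(nodeID))
	mux.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
}
```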
}
] |
{
"category": "Runtime",
"file_name": "20200909-prometheus-support.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Traditional web applications follows the client-server model. In the past era of application servers, the entire UI is dynamically generated from the server. The browser is simply a thin client that displays the rendered web pages at real time. However, as the browser becomes more capable and sophisticated, the client can now take on more workload to improve application UX, performance, and security. That gives rise to the era of Jamstack. There is now a clear separation between frontend and backend services. The frontend is a static web site (HTML + JavaScript + WebAssembly) generated from UI frameworks such as React.js, Vue.js, Yew or Percy, and the backend consists of microservices. Yet, as Jamstack gains popularity, the diversity of clients (both browsers and apps) makes it very difficult to achieve great performance across all use cases. The solution is server-side rendering (SSR). That is to have edge servers run the \"client side\" UI code (ie the React generated JavaScript OR Percy generated WebAssembly), and send back the rendered HTML DOM objects to the browser. In this case, the edge server must execute the exact same code (i.e. and WebAssembly) as the browser to render the UI. That is called isomorphic Jamstack applications. The WasmEdge runtime provides a lightweight, high performance, OCI complaint, and polyglot container to run all kinds of SSR functions on edge servers. Vue JS SSR function (coming soon) Yew Rust compiled to WebAssembly SSR function (coming soon) We also exploring ways to render more complex UI and interactions on WasmEdge-based edge servers, and then stream the rendered results to the client application. Potential examples include Render Unity3D animations on the edge server (based on ) Render interactive video (generated from AI) on the edge server Of course, the edge cloud could grow well beyond SSR for UI components. It could also host high-performance microservices for business logic and serverless functions. Read on to the next chapter."
}
] |
{
"category": "Runtime",
"file_name": "server_side_render.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: RBD Asynchronous DR Failover and Failback Rook comes with the volume replication support, which allows users to perform disaster recovery and planned migration of clusters. The following document will help to track the procedure for failover and failback in case of a Disaster recovery or Planned migration use cases. !!! note The document assumes that RBD Mirroring is set up between the peer clusters. For information on rbd mirroring and how to set it up using rook, please refer to the . !!! info Use cases: Datacenter maintenance, technology refresh, disaster avoidance, etc. The Relocation operation is the process of switching production to a backup facility(normally your recovery site) or vice versa. For relocation, access to the image on the primary site should be stopped. The image should now be made primary on the secondary cluster so that the access can be resumed there. !!! note :memo: Periodic or one-time backup of the application should be available for restore on the secondary site (cluster-2). Follow the below steps for planned migration of workload from the primary cluster to the secondary cluster: Scale down all the application pods which are using the mirrored PVC on the Primary Cluster. of PVC and PV object from the primary cluster. This can be done using some backup tools like . to set `replicationState` to `secondary` at the Primary Site. When the operator sees this change, it will pass the information down to the driver via GRPC request to mark the dataSource as `secondary`. If you are manually recreating the PVC and PV on the secondary cluster, remove the `claimRef` section in the PV objects. (See for details) Recreate the storageclass, PVC, and PV objects on the secondary site. As you are creating the static binding between PVC and PV, a new PV wont be created here, the PVC will get bind to the existing PV. on the secondary site. for all the PVCs for which mirroring is enabled `replicationState` should be `primary` for all the PVCs on the secondary site. to verify if the image is marked `primary` on the secondary"
},
{
"data": "Once the Image is marked as `primary`, the PVC is now ready to be used. Now, we can scale up the applications to use the PVC. !!! warning :memo: In Async Disaster recovery use case, we don't get the complete data. We will only get the crash-consistent data based on the snapshot interval time. !!! info Use cases: Natural disasters, Power failures, System failures, and crashes, etc. !!! note To effectively resume operations after a failover/relocation, backup of the kubernetes artifacts like deployment, PVC, PV, etc need to be created beforehand by the admin; so that the application can be restored on the peer cluster. For more information, see . In case of Disaster recovery, create VolumeReplication CR at the Secondary Site. Since the connection to the Primary Site is lost, the operator automatically sends a GRPC request down to the driver to forcefully mark the dataSource as `primary` on the Secondary Site. If you are manually creating the PVC and PV on the secondary cluster, remove the claimRef section in the PV objects. (See for details) Create the storageclass, PVC, and PV objects on the secondary site. As you are creating the static binding between PVC and PV, a new PV wont be created here, the PVC will get bind to the existing PV. and on the secondary site. to verify if the image is marked `primary` on the secondary site. Once the Image is marked as `primary`, the PVC is now ready to be used. Now, we can scale up the applications to use the PVC. Once the failed cluster is recovered on the primary site and you want to failback from secondary site, follow the below steps: Scale down the running applications (if any) on the primary site. Ensure that all persistent volumes in use by the workload are no longer in use on the primary cluster. replicationState from `primary` to `secondary` on the primary site. Scale down the applications on the secondary site. replicationState state from `primary` to `secondary` in secondary site. On the primary site, is marked as volume ready to use. Once the volume is marked to ready to use, change the replicationState state from `secondary` to `primary` in primary site. Scale up the applications again on the primary site."
}
] |
{
"category": "Runtime",
"file_name": "rbd-async-disaster-recovery-failover-failback.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The metadata service is designed to help running apps introspect their execution environment and assert their pod identity. In particular, the metadata service exposes the contents of the pod and image manifests as well as a convenient method of looking up annotations. Finally, the metadata service provides a pod with cryptographically verifiable identity. The metadata service is implemented by the `rkt metadata-service` command. When started, it will listen for registration events over Unix socket on `/run/rkt/metadata-svc.sock`. For systemd-based distributions, it also supports . If using socket activation, ensure the socket is named `/run/rkt/metadata-svc.sock`, as `rkt run` uses this name during registration. Please note that when started under socket activation, the metadata service will not remove the socket on exit. Use the `RemoveOnStop` directive in the relevant `.socket` file to clean up. Example systemd unit files for running the metadata service are available in . In addition to listening on a Unix socket, the metadata service will also listen on a TCP port 2375. When contacting the metadata service, the apps utilize this port. The IP and port of the metadata service are passed by rkt to pods via the `ACMETADATAURL` environment variable. See for more information about the metadata service including a list of supported endpoints and their usage. | Flag | Default | Options | Description | | | | | | | `--listen-port` | `18112` | A port number | Listen port | See the table with ."
}
] |
{
"category": "Runtime",
"file_name": "metadata-service.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Short for IO Buffer. It is one allocatable unit for the consumers of the IOBUF API, each unit hosts @page_size(defined in arena structure) bytes of memory. As initial step of processing a fop, the IO buffer passed onto GlusterFS by the other applications (FUSE VFS/ Applications using gfapi) is copied into GlusterFS space i.e. iobufs. Hence Iobufs are mostly allocated/deallocated in Fuse, gfapi, protocol xlators, and also in performance xlators to cache the IO buffers etc. ``` struct iobuf { union { struct list_head list; struct { struct iobuf *next; struct iobuf *prev; }; }; struct iobufarena *iobufarena; gflockt lock; / for ->ptr and ->ref / int ref; / 0 == passive, >0 == active / void ptr; / usable memory region by the consumer */ void free_ptr; / in case of stdalloc, this is the one to be freed not the ptr / }; ``` There may be need of multiple iobufs for a single fop, like in vectored read/write. Hence multiple iobufs(default 16) are encapsulated under one iobref. ``` struct iobref { gflockt lock; int ref; struct iobuf iobrefs; / list of iobufs / int alloced; / 16 by default, grows as required / int used; / number of iobufs added to this iobref / }; ``` One region of memory MMAPed from the operating system. Each region MMAPs @arenasize bytes of memory, and hosts @arenasize / @page_size IOBUFs. The same sized iobufs are grouped into one arena, for sanity of access. ``` struct iobuf_arena { union { struct list_head list; struct { struct iobuf_arena *next; struct iobuf_arena *prev; }; }; sizet pagesize; / size of all iobufs in this arena / sizet arenasize; /* this is equal to (iobufpool->arenasize / page_size) page_size / sizet pagecount; struct iobufpool *iobufpool; void *mem_base; struct iobuf iobufs; / allocated iobufs list */ int active_cnt; struct listhead passivelist; / list of passive iobufs / struct listhead activelist; / list of active iobufs / int passive_cnt; (unused by itself) */ uint64t alloccnt; / total allocs in this pool / int max_active; / max active buffers at a given time / }; ``` Pool of Iobufs. As there may be many Io buffers required by the filesystem, a pool of iobufs are preallocated and kept, if these preallocated ones are exhausted only then the standard malloc/free is called, thus improving the performance. Iobuf pool is generally one per process, allocated during glusterfsctxt init (glusterfsctxdefaults_init), currently the preallocated iobuf pool memory is freed on process exit. Iobuf pool is globally accessible across GlusterFs, hence iobufs allocated by any xlator can be accessed by any other xlators(unless iobuf is not"
},
{
"data": "``` struct iobuf_pool { pthreadmutext mutex; sizet arenasize; /* size of memory region in arena */ sizet defaultpage_size; / default size of iobuf / int arena_cnt; struct listhead arenas[GFVARIABLEIOBUFCOUNT]; /* array of arenas. Each element of the array is a list of arenas holding iobufs of particular page_size */ struct listhead filled[GFVARIABLEIOBUFCOUNT]; / array of arenas without free iobufs / struct listhead purge[GFVARIABLEIOBUFCOUNT]; / array of of arenas which can be purged / uint64t requestmisses; /* mostly the requests for higher value of iobufs */ }; ``` ~~~ The default size of the iobuf_pool(as of yet): 1024 iobufs of 128Bytes = 128KB 512 iobufs of 512Bytes = 256KB 512 iobufs of 2KB = 1MB 128 iobufs of 8KB = 1MB 64 iobufs of 32KB = 2MB 32 iobufs of 128KB = 4MB 8 iobufs of 256KB = 2MB 2 iobufs of 1MB = 2MB Total ~13MB ~~~ As seen in the datastructure iobuf_pool has 3 arena lists. arenas: The arenas allocated during iobuf_pool create, are part of this list. This list also contains arenas that are partially filled i.e. contain few active and few passive iobufs (passivecnt !=0, activecnt!=0 except for initially allocated arenas). There will be by default 8 arenas of the sizes mentioned above. filled: If all the iobufs in the arena are filled(passive_cnt = 0), the arena is moved to the filled list. If any of the iobufs from the filled arena is iobuf_put, then the arena moves back to the 'arenas' list. purge: If there are no active iobufs in the arena(active_cnt = 0), the arena is moved to purge list. iobuf_put() triggers destruction of the arenas in this list. The arenas in the purge list are destroyed only if there is atleast one arena in 'arenas' list, that way there won't be spurious mmap/unmap of buffers. (e.g: If there is an arena (page_size=128KB, count=32) in purge list, this arena is destroyed(munmap) only if there is an arena in 'arenas' list with page_size=128KB). ``` struct iobuf iobuf_get (struct iobuf_pool iobuf_pool); ``` Creates a new iobuf of the default page size(128KB hard coded as of yet). Also takes a reference(increments ref count), hence no need of doing it explicitly after getting iobuf. ``` struct iobuf iobuf_get2 (struct iobuf_pool iobufpool, sizet page_size); ``` Creates a new iobuf of a specified page size, if page_size=0 default page size is considered. ``` if (requested iobuf size > Max iobuf size in the pool(1MB as of yet)) { Perform standard allocation(CALLOC) of the requested size and add it to the list iobufpool->arenas[IOBUFARENAMAXINDEX]. } else { -Round the page size to match the stndard sizes in iobuf pool. (eg: if 3KB is requested, it is rounded to 8KB). -Select the arena list corresponding to the rounded size (eg: select 8KB arena) If the selected arena has passive count > 0, then return the iobuf from this arena, set the counters(passive/active/etc.)"
},
{
"data": "else the arena is full, allocate new arena with rounded size and standard page numbers and add to the arena list (eg: 128 iobufs of 8KB is allocated). } ``` Also takes a reference(increments ref count), hence no need of doing it explicitly after getting iobuf. ``` struct iobuf iobuf_ref (struct iobuf iobuf); ``` Take a reference on the iobuf. If using an iobuf allocated by some other xlator/function/, its a good practice to take a reference so that iobuf is not deleted by the allocator. ``` void iobuf_unref (struct iobuf *iobuf); ``` Unreference the iobuf, if the ref count is zero iobuf is considered free. ``` -Delete the iobuf, if allocated from standard alloc and return. -set the active/passive count appropriately. -if passive count > 0 then add the arena to 'arena' list. -if active count = 0 then add the arena to 'purge' list. ``` Every iobufref should have a corresponding iobufunref, and also every iobufget/2 should have a correspondning iobufunref. ``` struct iobref *iobref_new (); ``` Creates a new iobref structure and returns its pointer. ``` struct iobref iobref_ref (struct iobref iobref); ``` Take a reference on the iobref. ``` void iobref_unref (struct iobref *iobref); ``` Decrements the reference count of the iobref. If the ref count is 0, then unref all the iobufs(iobuf_unref) in the iobref, and destroy the iobref. ``` int iobref_add (struct iobref iobref, struct iobuf iobuf); ``` Adds the given iobuf into the iobref, it takes a ref on the iobuf before adding it, hence explicit iobuf_ref is not required if adding to the iobref. ``` int iobref_merge (struct iobref to, struct iobref from); ``` Adds all the iobufs in the 'from' iobref to the 'to' iobref. Merge will not cause the delete of the 'from' iobref, therefore it will result in another ref on all the iobufs added to the 'to' iobref. Hence iobref_unref should be performed both on 'from' and 'to' iobrefs (performing iobref_unref only on 'to' will not free the iobufs and may result in leak). ``` void iobref_clear (struct iobref *iobref); ``` Unreference all the iobufs in the iobref, and also unref the iobref. If all iobufrefs/iobufnew do not have correspondning iobuf_unref, then the iobufs are not freed and recurring execution of such code path may lead to huge memory leaks. The easiest way to identify if a memory leak is caused by iobufs is to take a statedump. If the statedump shows a lot of filled arenas then it is a sure sign of leak. Refer doc/debugging/statedump.md for more details. If iobufs are leaking, the next step is to find where the iobuf_unref went missing. There is no standard/easy way of debugging this, code reading and logs are the only ways. If there is a liberty to reproduce the memory leak at will, then logs(gfcallinginfo) in iobufref/unref might help. TODO: A easier way to debug iobuf leaks."
}
] |
{
"category": "Runtime",
"file_name": "datastructure-iobuf.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Tech debt about: Track internal tech debt title: '' labels: type-debt assignees: '' Summary Describe the tech debt Urgency Explain why it is important to address the tech debt"
}
] |
{
"category": "Runtime",
"file_name": "tech_debt.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "[toc] Cloud Cluster:a standard k8s cluster, located at the cloud side, providing the cloud computing capability. Edge Cluster: a standard k8s cluster, located at the edge side, providing the edge computing capability. Connector Node: a k8s node, located at the cloud side, connector is responsible for communication between the cloud side and edge side. Since a connector node will have a large traffic burden, it's better not to run other programs on them. Edge Node: a k8s node, located at the edge side, joining the cloud cluster using the framework, such as KubeEdge. Host Cluster: a selective cloud cluster, used to manage cross-cluster communication. The 1st cluster deployed by FabEdge must be the host cluster. Member Cluster: an edge cluster, registered into the host cluster, reports the network information to the host cluster. Community: an K8S CRD defined by FabEdge there are two types: Node Type: to define the communication between nodes within the same cluster Cluster Type: to define the cross-cluster communication Kubernetes (v1.22.5+) Flannel (v0.14.0) or Calico (v3.16.5) KubeEdge (>= v1.9.0) or SuperEdge(v0.8.0) or OpenYurt(>= v1.2.0) PS1: For flannel, only Vxlan mode is supported. Support dual-stack environment. PS2: For calico, only IPIP mode is supported. Support IPv4 environment only. Make sure the following ports are allowed by the firewall or security group. ESP(50)UDP/500UDP/4500 Turn off firewalld if your machine has it. Collect the configuration of the current cluster ```shell $ curl -s https://fabedge.github.io/helm-chart/scripts/getclusterinfo.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : cluster.local cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` Use helm to add fabedge repo: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` Deploy FabEdge ```shell $ curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name beijing \\ --cluster-role host \\ --cluster-zone beijing \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.47 \\ --chart fabedge/fabedge ``` > Note: > --connectors: The names of k8s nodes in which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector > --edges: The names of edge nodes those nodes will be labeled as node-role.kubernetes.io/edge > --edge-pod-cidr: The range of IPv4 addresses for the edge pod, it is required if you use Calico. Please make sure the value is not overlapped with cluster CIDR of your cluster. > --connector-public-addresses: IP addresses of k8s nodes which connectors are located PS: The `quickstart.sh` script has more parameters the example above only uses the necessary parameters, execute `quickstart.sh --help` to check all of them. Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m"
},
{
"data": "$ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7dd5ccf489-5dc29 1/1 Running 0 24h fabedge-agent-bvnvj 2/2 Running 2 (23h ago) 24h fabedge-agent-c9bsx 2/2 Running 2 (23h ago) 24h fabedge-cloud-agent-lgqkw 1/1 Running 3 (24h ago) 24h fabedge-connector-54c78b5444-9dkt6 2/2 Running 0 24h fabedge-operator-767bc6c58b-rk7mr 1/1 Running 0 24h service-hub-7fd4659b89-h522c 1/1 Running 0 24h ``` Create a community for edges that need to communicate with each other ```shell $ cat > all-edges.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes # community name spec: members: beijing.edge1 # format:{cluster name}.{edge node name} beijing.edge2 EOF $ kubectl apply -f all-edges.yaml ``` Update the dependent configuration Update the dependent configuration If you have any member cluster, register it in the host cluster first, then deploy FabEdge in it. Before you that, you'd better to make sure none of the addresses of host network and container network of those clusters overlap. In the host clustercreate an edge cluster named \"shanghai\". Get the token for registration. ```shell $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # cluster name EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJomitted--9u0 ``` Use helm to add fabedge repo: ```shell helm repo add fabedge https://fabedge.github.io/helm-chart ``` Deploy FabEdge in the member cluster ```shell curl https://fabedge.github.io/helm-chart/scripts/quickstart.sh | bash -s -- \\ --cluster-name shanghai \\ --cluster-role member \\ --cluster-zone shanghai \\ --cluster-region china \\ --connectors node1 \\ --edges edge1,edge2 \\ --edge-pod-cidr 10.233.0.0/16 \\ --connector-public-addresses 10.22.46.26 \\ --chart fabedge/fabedge \\ --service-hub-api-server https://10.22.46.47:30304 \\ --operator-api-server https://10.22.46.47:30303 \\ --init-token ey...Jh ``` > Note: > --connectors: The names of k8s nodes in which connectors are located, those nodes will be labeled as node-role.kubernetes.io/connector > --edges: The names of edge nodes those nodes will be labeled as node-role.kubernetes.io/edge > --edge-pod-cidr: The range of IPv4 addresses for the edge pod, if you use Calico, this is required. Please make sure the value is not overlapped with cluster CIDR of your cluster. > --connector-public-addresses: ip address of k8s nodes on which connectors are located in the member cluster > --init-token: token when the member cluster is added in the host cluster > --service-hub-api-server: endpoint of serviceHub in the host cluster > --operator-api-server: endpoint of operator-api in the host cluster Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.22.6-kubeedge-v1.12.2 edge2 Ready edge 5h21m v1.22.6-kubeedge-v1.12.2 master Ready master 5h29m v1.22.5 node1 Ready connector 5h23m"
},
{
"data": "$ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-m55h5 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` In the host clustercreate a community for all clusters which need to communicate with each other ```shell $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: shanghai.connector # format: {cluster name}.connector beijing.connector # format: {cluster name}.connector EOF $ kubectl apply -f community.yaml ``` Change the coredns configmap: ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # cluster-ip of fab-dns service } .:53 { ... } ``` Reboot coredns to take effect Enable dynamicController of cloudcore: ``` dynamicController: enable: true ``` This configuration item is in the cloudcore configuration file cloudcore.yaml, please find the file yourself according to your environment. Make sure cloudcore has permissions to access endpointslices resources (only if cloudcore is running in cluster): ``` kubectl edit clusterrole cloudcore apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: labels: app.kubernetes.io/managed-by: Helm k8s-app: kubeedge kubeedge: cloudcore name: cloudcore rules: apiGroups: discovery.k8s.io resources: endpointslices verbs: get list watch ``` Restart cloudcore. Update `edgecore` on all edge nodes ( kubeedge < v.1.12.0) ```shell $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 clusterDomain: \"cluster.local\" # clusterDomain from getclusterinfo script output metaManager: metaServer: enable: true ``` or ( kubeedge >= v.1.12.2) ```yaml $ vi /etc/kubeedge/config/edgecore.yaml edged: enable: true ... networkPluginName: cni networkPluginMTU: 1500 tailoredKubeletConfig: clusterDNS: [\"169.254.25.10\"] clusterDomain: \"cluster.local\" # clusterDomain from getclusterinfo script output metaManager: metaServer: enable: true ``` Reboot `edgecore` on all edge nodes ```shell $ systemctl restart edgecore ``` Since v0.7.0, fabedge can manage calico ippools of CIDRS from other clusters, the function is enabled when you use quickstart.sh to install fabedge. If you prefer to configure ippools by yourself, provide `--auto-keep-ippools false` when you install fabedge. If you choose to let fabedge configure ippools, the following content can be skipped. Regardless of the cluster role, add all Pod and Service network segments of all other clusters to the cluster with Calico, which prevents Calico from doing source address translation. 
One example with the clusters of host (Calico) + member1 (Calico) + member2 (Flannel): on the host (Calico) cluster, add the addresses of the member1 (Calico) cluster and the member2 (Flannel) cluster; on the member1 (Calico) cluster, add the addresses of the host (Calico) cluster and the member2 (Flannel) cluster; on the member2 (Flannel) cluster, there is no configuration required. ```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` The cidr should be one of the following values: the edge-pod-cidr of the current cluster, the cluster-cidr parameter of another cluster, or the service-cluster-ip-range of another cluster. This document introduces how to install FabEdge via a script which helps you try FabEdge quickly, but we would recommend reading the manual installation guide, which might fit a production environment better."
}
] |
{
"category": "Runtime",
"file_name": "get-started.md",
"project_name": "FabEdge",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "layout: global title: FUSE SDK Advanced Tuning Alluxio now supports both libfuse2 and libfuse3. Alluxio FUSE on libfuse2 is more stable and has been tested in production. Alluxio FUSE on libfuse3 is currently experimental but under active development. Alluxio will focus more on libfuse3 and utilize new features provided. libfuse3 is used by default. Set to use libfuse2 via: ```shell $ sudo yum install fuse $ alluxio-fuse mount understoragedataset mount_point -o fuse=2 ``` See `logs/fuse.out` for which version is used. ``` INFO NativeLibraryLoader - Loaded libjnifuse with libfuse version 2(or 3). ``` You can use `alluxio-fuse mount -o mountoptiona -o mountoptionb=value` to set mount options when launching the standalone Fuse process. Different versions of `libfuse` and `osxfuse` may support different mount options. The available Linux mount options are listed . The mount options of MacOS with osxfuse are listed . Some mount options (e.g. `allowother` and `allowroot`) need additional set-up and the set-up process may be different depending on the platform. ```shell $ alluxio-fuse mount <understoragedataset> <mountpoint> -o mountoption ``` By default, Alluxio-FUSE mount point can only be accessed by the user mounting the Alluxio namespace to the local filesystem. For Linux, add the following line to file `/etc/fuse.conf` to allow other users or allow root to access the mounted directory: ``` userallowother ``` Only after this step that non-root users have the permission to specify the `allowother` or `allowroot` mount options. For MacOS, follow the to allow other users to use the `allowother` and `allowroot` mount options. After setting up, pass the `allowother` or `allowroot` mount options when mounting Alluxio-FUSE: ```shell $ alluxio-fuse mount <understoragedataset> <mountpoint> -o allowother ``` ```shell $ alluxio-fuse mount <understoragedataset> <mountpoint> -o allowroot ``` Note that only one of the `allowother` or `allowroot` could be set. This section talks about how to troubleshoot issues related to Alluxio POSIX API. When encountering the out of direct memory issue, add the following JVM opts to `${ALLUXIO_HOME}/conf/alluxio-env.sh` to increase the max amount of direct memory. ```sh ALLUXIOFUSEJAVA_OPTS+=\" -XX:MaxDirectMemorySize=8G\" ```"
}
] |
{
"category": "Runtime",
"file_name": "Advanced-Tuning.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "storage/posix translator ======================== Notes -- This is so that all filesystem checks are done with the user's uid/gid and not GlusterFS's uid/gid. This macro concatenates the base directory of the posix volume ('option directory') with the given path. If this flag is passed, lookup returns a xattr dictionary that contains the file's create time, the file's contents, and the version number of the file. This is a hack to increase small file performance. If an application wants to read a small file, it can finish its job with just a lookup call instead of a lookup followed by read. These are used by unify to set and get directory entries. Macro to align an address to a page boundary (4K). In some cases, two exported volumes may reside on the same partition on the server. Sending statvfs info for both the volumes will lead to erroneous df output at the client, since free space on the partition will be counted twice. In such cases, user can disable exporting statvfs info on one of the volumes by setting this option. This fop is used by replicate to set version numbers on files. A key, `GLUSTERFSFILECONTENT_STRING`, is handled in a special way by `getxattr`/`setxattr`. A getxattr with the key will return the entire content of the file as the value. A `setxattr` with the key will write the value as the entire content of the file. This calculates a simple XOR checksum on all entry names in a directory that is used by unify to compare directory contents."
}
] |
{
"category": "Runtime",
"file_name": "posix.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This guide will guide you through updating the version of the coreos flavor of stage1. We usually want to do this to update the systemd version used by the stage1. The process is quite manual because it's not done often, but improvements are welcomed. Let's assume you want to update CoreOS Container Linux from version 991.0.0 to version 1032.0.0. First, you need to download and verify the image. Make sure you trust the . Since 1032.0.0 is currently only available in the Alpha channel, we'll use the alpha URL: ``` $ mkdir /tmp/coreos-image $ curl -O https://alpha.release.core-os.net/amd64-usr/1032.0.0/coreosproductionpxe_image.cpio.gz % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 222M 100 222M 0 0 7769k 0 0:00:29 0:00:29 --:--:-- 7790k $ curl -O http://alpha.release.core-os.net/amd64-usr/1032.0.0/coreosproductionpxe_image.cpio.gz.sig % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 543 100 543 0 0 495 0 0:00:01 0:00:01 --:--:-- 495 $ gpg --verify coreosproductionpxe_image.cpio.gz.sig gpg: assuming signed data in 'coreosproductionpxe_image.cpio.gz' gpg: Signature made Thu 28 Apr 2016 04:54:00 AM CEST using RSA key ID 1CB5FA26 gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: PGP gpg: depth: 0 valid: 5 signed: 5 trust: 0-, 0q, 0n, 0m, 0f, 5u gpg: depth: 1 valid: 5 signed: 0 trust: 3-, 0q, 0n, 0m, 2f, 0u gpg: next trustdb check due at 2017-01-19 gpg: Good signature from \"CoreOS Buildbot (Offical Builds) <[email protected]>\" [ultimate] ``` Then you need to extract it: ``` $ gunzip coreosproductionpxe_image.cpio.gz $ cpio -i < coreosproductionpxe_image.cpio 457785 blocks $ unsquashfs usr.squashfs Parallel unsquashfs: Using 4 processors 13445 inodes (14861 blocks) to write write_xattr: could not write xattr security.capability for file squashfs-root/bin/arping because you're not superuser! write_xattr: to avoid this error message, either specify -user-xattrs, -no-xattrs, or run as superuser! Further error messages of this type are suppressed! [======================================================================================================================================-] 14861/14861 100% created 12391 files created 1989 directories created 722 symlinks created 0 devices created 0 fifos ``` You should have now the rootfs of the image in the `squashfs-root` directory. Back to the rkt repo, in the directory"
},
{
"data": "there are some manifest files that define which files are copied from the Container Linux image to the stage1 image. You need to go through all of them and check that the files listed correspond to files that are in the actual rootfs of the image (which we extracted in the previous step). Do this from your root directory: ```bash for f in $(cat stage1/usrfromcoreos/manifest-amd64-usr.d/*.manifest); do fspath=/tmp/coreos-image/squashfs-root/$f if [ ! -e $fspath -a ! -h $fspath ]; then echo missing: $f fi done ``` Usually, there are some updated libraries which need an update on their version numbers. In our case, there are no updates and all the files mentioned in the manifest are present in the updated Container Linux image. If any of the manifest files have been modified run the script `scripts/sort-stage1-manifests.sh` to keep the manifest files in sorted order. In the file `stage1/usrfromcoreos/coreos-common.mk`, we define which Container Linux image version we use for the coreos flavor. Update `CCNIMGRELEASE` to 1032.0.0 and `CCNSYSTEMDVERSION` to the systemd version shipped with the image (in our case, v229). ```diff diff --git a/stage1/usrfromcoreos/coreos-common.mk b/stage1/usrfromcoreos/coreos-common.mk index b5bfa77..f864f56 100644 a/stage1/usrfromcoreos/coreos-common.mk +++ b/stage1/usrfromcoreos/coreos-common.mk @@ -9,9 +9,9 @@ CCNINCLUDED_ := x $(call setup-tmp-dir,CCN_TMPDIR) -CCNSYSTEMDVERSION := v225 +CCNSYSTEMDVERSION := v229 -CCNIMGRELEASE := 991.0.0 +CCNIMGRELEASE := 1032.0.0 CCNIMGURL := https://alpha.release.core-os.net/amd64-usr/$(CCNIMGRELEASE)/coreosproductionpxe_image.cpio.gz ``` Once you're finished updating the manifest files and `coreos-common.mk`, we'll do some sanity checks. First, do a clean build. Make sure that every binary links: ```bash for f in $(cat stage1/usrfromcoreos/manifest-amd64-usr.d/*.manifest); do if [[ $f =~ ^bin/ ]]; then sudo chroot build*/aci-for-coreos-flavor/rootfs /usr/lib64/ld-linux-x86-64.so.2 --list $f >/dev/null st=$? if [ $st -ne 0 ] ; then echo $f failed with exit code $st break fi fi done ``` Run a quick smoketest: ```bash sudo build*/target/bin/rkt run quay.io/coreos/alpine-sh ``` If there are some new libraries missing from the image, you need to add them to the corresponding manifest file. For example, this update breaks systemd. When you try to run rkt, you get this error: ``` /usr/lib/systemd/systemd: error while loading shared libraries: libpam.so.0: cannot open shared object file: No such file or directory ``` This means that we need to add libpam to the systemd manifest file: ```diff diff --git a/stage1/usrfromcoreos/manifest.d/systemd.manifest b/stage1/usrfromcoreos/manifest.d/systemd.manifest index fca30bb..51d5fbc 100644 a/stage1/usrfromcoreos/manifest.d/systemd.manifest +++ b/stage1/usrfromcoreos/manifest.d/systemd.manifest @@ -61,6 +61,9 @@ lib64/libmount.so.1 lib64/libmount.so.1.1.0 lib64/libnss_files-2.21.so lib64/libnss_files.so.2 +lib64/libpam.so +lib64/libpam.so.0 +lib64/libpam.so.0.84.1 lib64/libpcre.so lib64/libpcre.so.1 lib64/libpcre.so.1.2.4 ``` Then build and test again."
}
] |
{
"category": "Runtime",
"file_name": "update-coreos-stage1.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This section demonstrates how to use the Rust APIs of the `wasmedge-sys` crate to run a host function. As you may know, several mainstream programming languages, such as C/C++, Rust, Go, and Python, support compiling their programs into WebAssembly binary. In this demo, we'll introduce how to use the APIs defined in `Vm` of `wasmedge-sys` crate to call a WebAssembly function which could be coded in any programming language mentioned above. The code in the example is verified on wasmedge-sys v0.10.0 wasmedge-types v0.3.0 We use `fibonacci.wasm` in this demo, and the contents of the WebAssembly file are presented below. The statement, `(export \"fib\" (func $fib))`, declares an exported function named `fib`. This function computes a Fibonacci number with a given `i32` number as input. We'll use the function name later to achieve the goal of computing a Fibonacci number. ```wasm (module (export \"fib\" (func $fib)) (func $fib (param $n i32) (result i32) (if (i32.lt_s (get_local $n) (i32.const 2) ) (return (i32.const 1) ) ) (return (i32.add (call $fib (i32.sub (get_local $n) (i32.const 2) ) ) (call $fib (i32.sub (get_local $n) (i32.const 1) ) ) ) ) ) ) ``` In this step, we'll create a WasmEdge `AST Module` instance from a WebAssembly file. First, create a `Loader` context; Then, load a specified WebAssebly file (\"fibonacci.wasm\") via the `from_file` method of the `Loader` context. If the process is successful, then a WasmEdge `AST Module` instance is returned. ```rust use wasmedge_sys::Loader; use std::path::PathBuf; // create a Loader context let loader = Loader::create(None).expect(\"fail to create a Loader context\"); // load a wasm module from a specified wasm file, and return a WasmEdge AST Module instance let path = PathBuf::from(\"fibonacci.wasm\"); let module = loader.from_file(path).expect(\"fail to load the WebAssembly file\"); ``` In WasmEdge, a `Vm` defines a running environment, in which all varieties of instances and contexts are stored and maintained. In the demo code below, we explicitly create a WasmEdge `Store` context, and then use it as one of the inputs in the creation of a `Vm` context. If not specify a `Store` context explicitly, then `Vm` will create a store by itself. ```rust use wasmedge_sys::{Config, Store, Vm}; // create a Config context let config = Config::create().expect(\"fail to create a Config context\"); // create a Store context let mut store = Store::create().expect(\"fail to create a Store context\"); // create a Vm context with the given Config and Store let mut vm = Vm::create(Some(config), Some(&mut store)).expect(\"fail to create a Vm context\"); ``` In Step 1, we got a module that hosts the target `fib` function defined in the WebAssembly. Now, we can call the function via the `runwasmfrom_module` method of the `Vm` context by passing the exported function name, `fib`. ```rust use wasmedge_sys::WasmValue; // run a function let returns = vm.runwasmfrommodule(module, \"fib\", [WasmValue::fromi32(5)]).expect(\"fail to run the target function in the module\"); println!(\"The result of fib(5) is {}\", returns[0].to_i32()); ``` This is the final result printing on the screen: ```bash The result of fib(5) is 8 ```"
}
] |
{
"category": "Runtime",
"file_name": "sys_run_host_func.md",
"project_name": "WasmEdge Runtime",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document describes the current status, the plan ahead and general thoughts about IPv6 / Dual-Stack support in kube-router. Dual-Stack (e.g. IPv4 and IPv6) has been supported in Kubernetes since version `v1.21`: kube-router's current approach is to implement dual-stack functionality function-by-function: CNI `--enable-cni` Proxy `--run-service-proxy` Router `--run-router` Network policies `--run-firewall` Support for dual-stack in kube-router is feature complete. Release v2.0.0 and above of kube-router has all controllers updated for dual-stack compatibility. This represents a major release for kube-router and as such, user's should approach deploying this into an established kube-router environment carefully. While there aren't any huge bugs that the maintainers are aware of at this time, there are several small breaks in backwards compatibility. We'll try to detail these below as best we can. In order to enable dual-stack functionality please ensure the following: kube-router option `--enable-ipv4=true` is set (this is the default) kube-router option `--enable-ipv6=true` is set Your Kubernetes node has both IPv4 and IPv6 addresses on its physical interfaces Your Kubernetes node has both IPv4 and IPv6 addresses in its node spec: ```shell $ kubectl describe node foo ... Addresses: InternalIP: 10.95.0.202 InternalIP: 2001:1f18:3d5:ed00:d61a:454f:b886:7000 Hostname: foo ... ``` Add additional `--service-cluster-ip-range` and `--service-external-ip-range` kube-router parameters for your IPv6 addresses. If you use `--enable-cni=true`, ensure `kube-controller-manager` has been started with both IPv4 and IPv6 cluster CIDRs (e.g. `--cluster-cidr=10.242.0.0/16,2001:db8:42:1000::/56`) Ensure `kube-controller-manager` & `kube-apiserver` have been started with both IPv4 and IPv6 service cluster IP ranges (e.g. `--service-cluster-ip-range=10.96.0.0/16,2001:db8:42:1::/112`) In order to facilitate both IPv4 and IPv6 tunnels, we had to change the hashing format for our current tunnel names. As such, if you do a kube-router upgrade in place (i.e. without reboot), it is very likely that kube-router will not clean up old tunnels. This will only impact users that were utilizing the overlay feature of kube-router to some extent. Such as if you were running kube-router with `--enable-overlay` or `--overlay-type=full` or `--overlay-type=subnet` (it should be noted that these options default to on currently). If you are upgrading kube-router from a pre v2.0.0 release to a v2.0.0 release, we recommend that you coordinate your upgrade of kube-router with a rolling reboot of your Kubernetes fleet to clean up any tunnels that were left from previous versions of kube-router. While v2.X and above versions of kube-router are IPv6 compatible and advertise both IPv4 and IPv6 addresses, it still does this over a single BGP peering. This peering is made from what kube-router considers the node's primary IP address. Which is typically the first internal IP address listed in the node's Kubernetes metadata (e.g. `kubectl get node`) unless it is overriden by a configuration. This address with be either an IPv4 or IPv6 address and kube-router will use this to make the peering. Without `--override-nexthop` kube-router does the work to ensure that an IP or subnet is advertised by the matching IP family for the IP or subnet. However, with `--override-nexthop` enabled kube-router doesn't have control over what the next-hop for the advertised route will be. 
Instead the next-hop will be overridden by the IP that is being used to peer with kube-router. This can cause trouble for many configurations and so it is not recommended to use `--override-nexthop` in dual-stack kube-router configurations. One place where this was particularly problematic was when advertising the Pod IP subnets between different kube-router enabled Kubernetes worker"
},
{
"data": "Workers that use overlay networking in a kube-router cluster are made aware of their neighbors via BGP protocol advertisements and `--override-nexthop` would mean that one family of addresses would never work correctly. As such, we no longer apply the `--override-nexthop` setting to pod subnet advertisements between kube-router nodes. This is different functionality between version v1.X of kube-router and v2.x. Due to implementation restrictions with GoBGP, the annotation `kube-router.io/node.bgp.customimportreject`, which allows user's to add rules for rejecting specific routes sent to GoBGP, can only accept a single IP family (e.g. IPv4 or IPv6). Attempting to add IPs of two different families will result in a GoBGP error when it attempts to import BGP policy from kube-router. Network Policy in Kubernetes allows users to specify ranges for ingress and egress policies. These blocks are string-based network CIDRs and allow the user to specify any ranges that they wish in order to allow ingress or egress from network ranges that are not selectable using Kubernetes pod selectors. Currently, kube-router is only able to work with CIDRs for IP families that it has been enabled for using the `--enable-ipv4=true` & `--enable-ipv6=true` CLI flags. If a user adds a network policy for an IP family that kube-router is not enabled for, you will see a warning in your kube-router logs and no firewall rule will be added. Now that kube-router has dual-stack capability, it doesn't make sense to have an annotation that can only represent a single pod CIDR any longer. As such, with this release we are announcing the deprecation of the `kube-router.io/pod-cidr` annotation in favor of the new `kube-router.io/pod-cidrs` annotation. The new `kube-router.io/pod-cidrs` annotation is a comma-separated list of CIDRs and can hold either IPv4 or IPv6 CIDRs in string form. It should be noted, that until `kube-router.io/pod-cidr` is fully removed, at some point in the future, it will still be preferred over the `kube-router.io/pod-cidrs` annotation in order to preserve as much backwards compatibility as possible. Until `kube-router.io/pod-cidr` has been fully retired, users that use the old annotation will get a warning in their kube-router logs saying that they should change to the new annotation. The recommended action here, is that upon upgrade, you convert nodes from using the `kube-router.io/pod-cidr` to the new `kube-router.io/pod-cidrs` annotation. Since kube-router currently only updates node annotations at start and not as they are updated, this is a safe change to make before updating kube-router. If neither annotation is specified, kube-router will use the `PodCIDRs` field of the Kubernetes node spec which is populated by the `kube-controller-manager` as part of it's `--allocate-node-cidrs` functionality. This should be a sane default for most users of kube-router. Now that kube-router supports dual-stack, it also supports multiple ranges in the CNI file. While kube-router will still add your pod CIDRs to your CNI configuration via node configuration like `kube-router.io/pod-cidr`, `kube-router.io/pod-cidrs`, or `.node.Spec.PodCIDRs`, you can also customize your own CNI to add additional ranges or plugins. 
A CNI configuration with multiple ranges will typically look something like the following: ```json { \"cniVersion\": \"0.3.0\", \"name\": \"mynet\", \"plugins\": [ { \"bridge\": \"kube-bridge\", \"ipam\": { \"ranges\": [ [ { \"subnet\": \"10.242.0.0/24\" } ], [ { \"subnet\": \"2001:db8:42:1000::/64\" } ] ], \"type\": \"host-local\" }, \"isDefaultGateway\": true, \"mtu\": 9001, \"name\": \"kubernetes\", \"type\": \"bridge\" } ] } ``` All kube-router's handling of the CNI file attempts to minimize disruption to any user made edits to the file."
}
] |
{
"category": "Runtime",
"file_name": "ipv6.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Network Providers Rook deploys CephClusters using Kubernetes' software-defined networks by default. This is simple for users, provides necessary connectivity, and has good node-level security. However, this comes at the expense of additional latency, and the storage network must contend with Kubernetes applications for network bandwidth. It also means that Kubernetes applications coexist on the same network as Ceph daemons and can reach the Ceph cluster easily via network scanning. Rook allows selecting alternative network providers to address some of these downsides, sometimes at the expense of others. Selecting alternative network providers is an advanced topic. !!! Note This is an advanced networking topic. See also the Ceph daemons can operate on up to two distinct networks: public, and cluster. Ceph daemons always use the public network. The public network is used for client communications with the Ceph cluster (reads/writes). Rook configures this as the Kubernetes pod network by default. Ceph-CSI uses this network for PVCs. The cluster network is optional and is used to isolate internal Ceph replication traffic. This includes additional copies of data replicated between OSDs during client reads/writes. This also includes OSD data recovery (re-replication) when OSDs or nodes go offline. If the cluster network is unspecified, the public network is used for this traffic instead. Refer to for deeper explanations of any topics. `network.addressRanges` This configuration is always optional but is important to understand. Some Rook network providers allow specifying the public and network interfaces that Ceph will use for data traffic. Use `addressRanges` to specify this. For example: ```yaml network: provider: host addressRanges: public: \"192.168.100.0/24\" \"192.168.101.0/24\" cluster: \"192.168.200.0/24\" ``` This `public` and `cluster` translate directly to Ceph's `publicnetwork` and `hostnetwork` configurations. The default network provider cannot make use of these configurations. Ceph public and cluster network configurations are allowed to change, but this should be done with great care. When updating underlying networks or Ceph network settings, Rook assumes that the current network configuration used by Ceph daemons will continue to operate as intended. Network changes are not applied to Ceph daemon pods (like OSDs and MDSes) until the pod is restarted. When making network changes, ensure that restarted pods will not lose connectivity to existing pods, and vice versa. `network.provider: host` Host networking allows the Ceph cluster to use network interfaces on Kubernetes hosts for communication. This eliminates latency from the software-defined pod network, but it provides no host-level security isolation. Ceph daemons will use any network available on the host for communication. To restrict Ceph to using only a specific specific host interfaces or networks, use `addressRanges` to select the network CIDRs Ceph will bind to on the host. If the host networking setting is changed in a cluster where mons are already running, the existing mons will remain running with the same network settings with which they were created. To complete the conversion to or from host networking after you update this setting, you will need to in order to have mons on the desired network configuration. `network.provider: multus` Rook supports using Multus NetworkAttachmentDefinitions for Ceph public and cluster"
},
{
"data": "This allows Rook to attach any CNI to Ceph as a public and/or cluster network. This provides strong isolation between Kubernetes applications and Ceph cluster daemons. While any CNI may be used, the intent is to allow use of CNIs which allow Ceph to be connected to specific host interfaces. This improves latency and bandwidth while preserving host-level network isolation. In order for host network-enabled Ceph-CSI to communicate with a Multus-enabled CephCluster, some setup is required for Kubernetes hosts. These prerequisites require an understanding of how Multus networks are configured and how Rook uses them. Following sections will help clarify questions that may arise here. Two basic requirements must be met: Kubernetes hosts must be able to route successfully to the Multus public network. Pods on the Multus public network must be able to route successfully to Kubernetes hosts. These two requirements can be broken down further as follows: For routing Kubernetes hosts to the Multus public network, each host must ensure the following: the host must have an interface connected to the Multus public network (the \"public-network-interface\"). the \"public-network-interface\" must have an IP address. a route must exist to direct traffic destined for pods on the Multus public network through the \"public-network-interface\". For routing pods on the Multus public network to Kubernetes hosts, the public NetworkAttachementDefinition must be configured to ensure the following: The definition must have its IP Address Management (IPAM) configured to route traffic destined for nodes through the network. To ensure routing between the two networks works properly, no IP address assigned to a node can overlap with any IP address assigned to a pod on the Multus public network. These requirements require careful planning, but some methods are able to meet these requirements more easily than others. to help understand and implement these requirements. !!! Tip Keep in mind that there are often ten or more Rook/Ceph pods per host. The pod address space may need to be an order of magnitude larger (or more) than the host address space to allow the storage cluster to grow in the future. Refer to for details about how to set up and select Multus networks. Rook will attempt to auto-discover the network CIDRs for selected public and/or cluster networks. This process is not guaranteed to succeed. Furthermore, this process will get a new network lease for each CephCluster reconcile. Specify `addressRanges` manually if the auto-detection process fails or if the selected network configuration cannot automatically recycle released network leases. Only OSD pods will have both public and cluster networks attached (if specified). The rest of the Ceph component pods and CSI pods will only have the public network attached. The Rook operator will not have any networks attached; it proxies Ceph commands via a sidecar container in the mgr pod. A NetworkAttachmentDefinition must exist before it can be used by Multus for a Ceph network. A recommended definition will look like the following: ```yaml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: ceph-multus-net spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\":"
},
{
"data": "} }' ``` Note that the example above does not specify information that would route pods on the network to Kubernetes hosts. Ensure that `master` matches the network interface on hosts that you want to use. It must be the same across all hosts. CNI type is highly recommended. It has less CPU and memory overhead compared to traditional Linux `bridge` configurations. IPAM type is recommended because it ensures each pod gets an IP address unique within the Kubernetes cluster. No DHCP server is required. If a DHCP server is present on the network, ensure the IP range does not overlap with the DHCP server's range. NetworkAttachmentDefinitions are selected for the desired Ceph network using `selectors`. Selector values should include the namespace in which the NAD is present. `public` and `cluster` may be selected independently. If `public` is left unspecified, Rook will configure Ceph to use the Kubernetes pod network for Ceph client traffic. Consider the example below which selects a hypothetical Kubernetes-wide Multus network in the default namespace for Ceph's public network and selects a Ceph-specific network in the `rook-ceph` namespace for Ceph's cluster network. The commented-out portion shows an example of how address ranges could be manually specified for the networks if needed. ```yaml network: provider: multus selectors: public: default/kube-multus-net cluster: rook-ceph/ceph-multus-net ``` We highly recommend validating your Multus configuration before you install Rook. A tool exists to facilitate validating the Multus configuration. After installing the Rook operator and before installing any Custom Resources, run the tool from the operator pod. The tool's CLI is designed to be as helpful as possible. Get help text for the multus validation tool like so: ```console kubectl --namespace rook-ceph exec -it deploy/rook-ceph-operator -- rook multus validation run --help ``` Then, update the args in the job template. Minimally, add the NAD names(s) for public and/or cluster as needed and then, create the job to validate the Multus configuration. If the tool fails, it will suggest what things may be preventing Multus networks from working properly, and it will request the logs and outputs that will help debug issues. Check the logs of the pod created by the job to know the status of the validation test. Daemons leveraging Kubernetes service IPs (Monitors, Managers, Rados Gateways) are not listening on the NAD specified in the `selectors`. Instead the daemon listens on the default network, however the NAD is attached to the container, allowing the daemon to communicate with the rest of the cluster. There is work in progress to fix this issue in the repository. At the time of writing it's unclear when this will be supported. The network plan for this cluster will be as follows: The underlying network supporting the public network will be attached to hosts at `eth0` Macvlan will be used to attach pods to `eth0` Pods and nodes will have separate IP ranges Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 hosts) Nodes will have IPs assigned dynamically via DHCP (DHCP configuration is outside the scope of this document) Pods will get the IP range"
},
{
"data": "(this allows up to 16,384 Rook/Ceph pods) Whereabouts will be used to assign IPs to the Multus public network Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is different from the pod IP range, a route must be added to include the pod range. Such a configuration should be equivalent to the following: ```console ip link add public-shim link eth0 type macvlan mode bridge ip link set public-shim up dhclient public-shim # gets IP in range 192.168.252.0/22 ip route add 192.168.0.0/18 dev public-shim ``` The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' `exclude` option to simplify the `range` request. The Whereabouts `routes[].dst` option () but allows adding routing pods to hosts via the Multus public network. ```yaml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"1192.168.0.0/18\", \"routes\": [ {\"dst\": \"192.168.252.0/22\"} ] } }' ``` The network plan for this cluster will be as follows: The underlying network supporting the public network will be attached to hosts at `eth0` Macvlan will be used to attach pods to `eth0` Pods and nodes will share the IP range 192.168.0.0/16 Nodes will get the IP range 192.168.252.0/22 (this allows up to 1024 hosts) Pods will get the remainder of the ranges (192.168.0.1 to 192.168.251.255) Whereabouts will be used to assign IPs to the Multus public network Nodes will have IPs assigned statically via PXE configs (PXE configuration and static IP management is outside the scope of this document) PXE configuration for the nodes must apply a configuration to nodes to allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Because the host IP range is a subset of the whole range, a route must be added to include the whole range. Such a configuration should be equivalent to the following: ```console ip link add public-shim link eth0 type macvlan mode bridge ip addr add 192.168.252.${STATIC_IP}/22 dev public-shim ip link set public-shim up ip route add 192.168.0.0/16 dev public-shim ``` The NetworkAttachmentDefinition for the public network would look like the following, using Whereabouts' `exclude` option to simplify the `range` request. The Whereabouts `routes[].dst` option ensures pods route to hosts via the Multus public network. ```yaml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.0.0/16\", \"exclude\": [ \"192.168.252.0/22\" ], \"routes\": [ {\"dst\": \"192.168.252.0/22\"} ] } }' ``` The network plan for this cluster will be as follows: The underlying network supporting the public network will be attached to hosts at `eth0` Macvlan will be used to attach pods to `eth0` Pods and nodes will share the IP range"
},
{
"data": "DHCP will be used to ensure nodes and pods get unique IP addresses Node configuration must allow nodes to route to pods on the Multus public network. Because pods will be connecting via Macvlan, and because Macvlan does not allow hosts and pods to route between each other, the host must also be connected via Macvlan. Such a configuration should be equivalent to the following: ```console ip link add public-shim link eth0 type macvlan mode bridge ip link set public-shim up dhclient public-shim # gets IP in range 192.168.0.0/16 ``` The NetworkAttachmentDefinition for the public network would look like the following. ```yaml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: public-net spec: config: '{ \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"eth0\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\", } }' ``` Rook plans to remove CSI \"holder\" pods in Rook v1.16. CephCluster with `csi-plugin-holder-` pods present in the Rook operator namespace must plan to set `CSIDISABLEHOLDER_PODS` to `\"true\"` after Rook v1.14 is installed and before v1.16 is installed by following the migration sections below. CephClusters with no holder pods do not need to follow migration steps. Helm users will set `csi.disableHolderPods: true` in values.yaml instead of `CSIDISABLEHOLDER_PODS`. CephClusters that do not use `network.provider: multus` can follow the section. CephClusters that use `network.provider: multus` will need to plan the migration more carefully. Read the section in full before beginning. !!! hint To determine if holder pods are deployed, use `kubectl --namespace $ROOK_OPERATOR get pods | grep plugin-holder` This migration section applies when any CephCluster `network.provider` is `\"multus\"`. If the scenario does not apply, skip ahead to the section. Step 1 Before setting `CSIENABLEHOSTNETWORK: \"true\"` and `CSIDISABLEHOLDERPODS: \"true\"`, thoroughly read through the . Use the prerequisites section to develop a plan for modifying host configurations as well as the public NetworkAttachmentDefinition. Once the plan is developed, execute the plan by following the steps below. Step 2 First, modify the public NetworkAttachmentDefinition as needed. For example, it may be necessary to add the `routes` directive to the Whereabouts IPAM configuration as in . Step 3 Next, modify the host configurations in the host configuration system. The host configuration system may be something like PXE, ignition config, cloud-init, Ansible, or any other such system. A node reboot is likely necessary to apply configuration updates, but wait until the next step to reboot nodes. Step 4 After the NetworkAttachmentDefinition is modified, OSD pods must be restarted. It is easiest to complete this requirement at the same time nodes are being rebooted to apply configuration updates. For each node in the Kubernetes cluster: `cordon` and `drain` the node Wait for all pods to drain Reboot the node, ensuring the new host configuration will be applied `uncordon` and `undrain` the node Wait for the node to be rehydrated and stable Proceed to the next node By following this process, host configurations will be updated, and OSDs are also automatically restarted as part of the `drain` and `undrain` process on each node. OSDs can be restarted manually if node configuration updates do not require reboot. Step 5 Once all nodes are running the new configuration and all OSDs have been restarted, check that the new node and NetworkAttachmentDefinition configurations are"
},
{
"data": "To do so, verify that each node can `ping` OSD pods via the public network. Use the or the to list OSD IPs. The example below uses the kubectl plugin, and the OSD public network is 192.168.20.0/24. ```console $ kubectl rook-ceph ceph osd dump | grep 'osd\\.' osd.0 up in weight 1 upfrom 7 upthru 0 downat 0 lastclean_interval [0,0) [v2:192.168.20.19:6800/213587265,v1:192.168.20.19:6801/213587265] [v2:192.168.30.1:6800/213587265,v1:192.168.30.1:6801/213587265] exists,up 7ebbc19a-d45a-4b12-8fef-0f9423a59e78 osd.1 up in weight 1 upfrom 24 upthru 24 downat 20 lastclean_interval [8,23) [v2:192.168.20.20:6800/3144257456,v1:192.168.20.20:6801/3144257456] [v2:192.168.30.2:6804/3145257456,v1:192.168.30.2:6805/3145257456] exists,up 146b27da-d605-4138-9748-65603ed0dfa5 osd.2 up in weight 1 upfrom 21 upthru 0 downat 20 lastclean_interval [18,20) [v2:192.168.20.21:6800/1809748134,v1:192.168.20.21:6801/1809748134] [v2:192.168.30.3:6804/1810748134,v1:192.168.30.3:6805/1810748134] exists,up ff3d6592-634e-46fd-a0e4-4fe9fafc0386 ``` Now check that each node (NODE) can reach OSDs over the public network: ```console $ ssh user@NODE $ user@NODE $> ping -c3 192.168.20.19 $ user@NODE $> ping -c3 192.168.20.20 $ user@NODE $> ping -c3 192.168.20.21 ``` If any node does not get a successful `ping` to a running OSD, it is not safe to proceed. A problem may arise here for many reasons. Some reasons include: the host may not be properly attached to the Multus public network, the public NetworkAttachmentDefinition may not be properly configured to route back to the host, the host may have a firewall rule blocking the connection in either direction, or the network switch may have a firewall rule blocking the connection. Diagnose and fix the issue, then return to Step 1. Step 6 If the above check succeeds for all nodes, proceed with the steps below. Step 1 If any CephClusters have Multus enabled (`network.provider: \"multus\"`), follow the steps above before continuing. Step 2 Begin by setting `CSIDISABLEHOLDERPODS: \"true\"`. If `CSIENABLEHOSTNETWORK` is set to `\"false\"`, also set this value to `\"true\"` at the same time. After this, `csi-plugin-` pods will restart, and `csi-plugin-holder-` pods will remain running. Step 3 Check that CSI pods are using the correct host networking configuration using the example below as guidance (in the example, `CSIENABLEHOST_NETWORK` is `\"true\"`): ```console $ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-rbdplugin | grep -i hostnetwork hostNetwork: true $ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-cephfsplugin | grep -i hostnetwork hostNetwork: true $ kubectl -n rook-ceph get -o yaml daemonsets.apps csi-nfsplugin | grep -i hostnetwork hostNetwork: true ``` Step 4 At this stage, PVCs for running applications are still using the holder pods. These PVCs must be migrated from the holder to the new network. Follow the below process to do so. For each node in the Kubernetes cluster: `cordon` and `drain` the node Wait for all pods to drain Delete all `csi-plugin-holder` pods on the node (a new holder will take it's place) `uncordon` and `undrain` the node Wait for the node to be rehydrated and stable Proceed to the next node Step 5 After this process is done for all Kubernetes nodes, it is safe to delete the `csi-plugin-holder` daemonsets. 
Delete the holder daemonsets using the example below as guidance: ```console $ kubectl -n rook-ceph get daemonset -o name | grep plugin-holder daemonset.apps/csi-cephfsplugin-holder-my-cluster daemonset.apps/csi-rbdplugin-holder-my-cluster $ kubectl -n rook-ceph delete daemonset.apps/csi-cephfsplugin-holder-my-cluster daemonset.apps \"csi-cephfsplugin-holder-my-cluster\" deleted $ kubectl -n rook-ceph delete daemonset.apps/csi-rbdplugin-holder-my-cluster daemonset.apps \"csi-rbdplugin-holder-my-cluster\" deleted ``` Step 6 The migration is now complete! Congratulations!"
}
] |
{
"category": "Runtime",
"file_name": "network-providers.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Kata Containers, a second layer of isolation is created on top of those provided by traditional namespace-containers. The hardware virtualization interface is the basis of this additional layer. Kata will launch a lightweight virtual machine, and use the guests Linux kernel to create a container workload, or workloads in the case of multi-container pods. In Kubernetes and in the Kata implementation, the sandbox is carried out at the pod level. In Kata, this sandbox is created using a virtual machine. This document describes how Kata Containers maps container technologies to virtual machines technologies, and how this is realized in the multiple hypervisors and virtual machine monitors that Kata supports. A typical deployment of Kata Containers will be in Kubernetes by way of a Container Runtime Interface (CRI) implementation. On every node, Kubelet will interact with a CRI implementer (such as containerd or CRI-O), which will in turn interface with Kata Containers (an OCI based runtime). The CRI API, as defined at the , implies a few constructs being supported by the CRI implementation, and ultimately in Kata Containers. In order to support the full with the CRI-implementer, Kata must provide the following constructs: These constructs can then be further mapped to what devices are necessary for interfacing with the virtual machine: Ultimately, these concepts map to specific para-virtualized devices or virtualization technologies. Each hypervisor or VMM varies on how or if it handles each of these. Kata Containers . Details of each solution and a summary are provided below. Kata Containers with QEMU has complete compatibility with Kubernetes. Depending on the host architecture, Kata Containers supports various machine types, for example `q35` on x86 systems, `virt` on ARM systems and `pseries` on IBM Power systems. The default Kata Containers machine type is `q35`. The machine type and its can be changed by editing the runtime file. Devices and features used: virtio VSOCK or virtio serial virtio block or virtio SCSI virtio fs or virtio 9p (recommend: virtio fs) VFIO hotplug machine accelerators Machine accelerators and hotplug are used in Kata Containers to manage resource constraints, improve boot time and reduce memory footprint. These are documented below. Machine accelerators are architecture specific and can be used to improve the performance and enable specific features of the machine types. The following machine accelerators are used in Kata Containers: NVDIMM: This machine accelerator is x86 specific and only supported by `q35` machine types. `nvdimm` is used to provide the root filesystem as a persistent memory device to the Virtual Machine. The Kata Containers VM starts with a minimum amount of resources, allowing for faster boot time and a reduction in memory footprint. As the container launch progresses, devices are hotplugged to the"
},
{
"data": "For example, when a CPU constraint is specified which includes additional CPUs, they can be hot added. Kata Containers has support for hot-adding the following devices: Virtio block Virtio SCSI VFIO CPU Firecracker, built on many rust crates that are within , has a very limited device model, providing a lighter footprint and attack surface, focusing on function-as-a-service like use cases. As a result, Kata Containers with Firecracker VMM supports a subset of the CRI API. Firecracker does not support file-system sharing, and as a result only block-based storage drivers are supported. Firecracker does not support device hotplug nor does it support VFIO. As a result, Kata Containers with Firecracker VMM does not support updating container resources after boot, nor does it support device passthrough. Devices used: virtio VSOCK virtio block virtio net , based on , is designed to have a lighter footprint and smaller attack surface for running modern cloud workloads. Kata Containers with Cloud Hypervisor provides mostly complete compatibility with Kubernetes comparable to the QEMU configuration. As of the 1.12 and 2.0.0 release of Kata Containers, the Cloud Hypervisor configuration supports both CPU and memory resize, device hotplug (disk and VFIO), file-system sharing through virtio-fs, block-based volumes, booting from VM images backed by pmem device, and fine-grained seccomp filters for each VMM threads (e.g. all virtio device worker threads). Please check for details of ongoing integration efforts. Devices and features used: virtio VSOCK or virtio serial virtio block virtio net virtio fs virtio pmem VFIO hotplug seccomp filters is an enterprise-level open source VMM oriented to cloud data centers, implements a unified architecture to support Standard-VMs, containers and serverless (Micro-VM). StratoVirt has some competitive advantages, such as lightweight and low resource overhead, fast boot, hardware acceleration, and language-level security with Rust. Currently, StratoVirt in Kata supports Micro-VM machine type, mainly focus on FaaS cases, supporting device hotplug (virtio block), file-system sharing through virtio fs and so on. Kata Containers with StratoVirt now use virtio-mmio bus as driver, and doesn't support CPU/memory resize nor VFIO, thus doesn't support updating container resources after booted. Devices and features used currently: Micro-VM machine type for FaaS(mmio, no ACPI) Virtual Socket(vhost VSOCKvirtio console) Virtual Storage(virtio block, mmio) Virtual Networking(virtio net, mmio) Shared Filesystem(virtio fs) Device Hotplugging(virtio block hotplug) Entropy Source(virtio RNG) QMP API | Solution | release introduced | brief summary | |-|-|-| | Cloud Hypervisor | 1.10 | upstream Cloud Hypervisor with rich feature support, e.g. hotplug, VFIO and FS sharing| | Firecracker | 1.5 | upstream Firecracker, rust-VMM based, no VFIO, no FS sharing, no memory/CPU hotplug | | QEMU | 1.0 | upstream QEMU, with support for hotplug and filesystem sharing | | StratoVirt | 3.3 | upstream StratoVirt with FS sharing and virtio block hotplug, no VFIO, no CPU/memory resize |"
}
] |
{
"category": "Runtime",
"file_name": "virtualization.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "``` ${EDITOR} integration/install/danm-installer-config.yaml kubectl apply -f integration/install ``` This (currently experimental) method installs DANM using a Kubernetes Job. Just like the manual deployment, it assumes that a previous CNI (also referred to as \"bootstrap CNI\") is already installed. In the setup deployed by this installer, the bootstrap CNI will both be used by DANM components themselves (ie. netwatcher and svcwatcher will utilize that bootstrap CNI for their own network connectivity), as well as being configured as a DanmNet or ClusterNetwork with the name \"default\", that will be used by any Kubernetes applications without a `danm.io` annotation. Please be aware that the existing (bootstrap) CNI configuration must be a single CNI, not a list of CNIs. This means that in your CNI configuration directory, `/etc/cni/net.d`, there should be an existing file with a `.conf` extension, often named something like `10-flannel.conf` or `10-calico.conf`. If there is a file with a `.conflist` extension (such as `10-calico.conflist`), then that is a chained list of multiple CNIs. DANM does not currently support using such a `.conflist` chain as a bootstrap network. Depending on your setup, you may be able to to extract only the first CNI from the list using a command such as the following: ``` jq -M '{ name: .name, cniVersion: .cniVersioni} + .plugins[0]' \\ /etc/cni/net.d/${EXISTINGCONFLISTFILE}.conflist \\ > /etc/cni/net.d/${FIRSTPLUGINFROMLISTCONFIG_FILE}.conf ``` Either way, please be sure that you have a functional `/etc/cni/net.d/*.conf` CNI configuration before proceeding, and know the name of that `.conf` file. This file will need modification to match your setup. Please review/edit `integration/install/danm-installer-config.yaml`. This file will need modification only if the installation container needs to be pulled from an external registry. If this is the case, then please review/edit `integration/install/danm-installer.yaml`. If you have built DANM locally and do not need to pull images, this file does not need updating. ``` kubectl apply -f integration/install ``` After applying the installer CRD, in `kubectl get pods -n kube-system` you should first see a `danm-installer-*` pod starting, and shortly after, the `danm-cni` and `netwatcher` daemonsets, `svcwatcher`, and `danm-webhook-deployment` pods. The `danm-installer-*` pod should end up in \"Completed\" status - if not, please check the pod logs for any errors. After the installer pod ran to completion, you can remove the installer itself: ``` kubectl delete -f integration/install ```"
}
] |
{
"category": "Runtime",
"file_name": "deployment-installer-job.md",
"project_name": "DANM",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Due to its dependency on `dmsetup`, executing the snapshotter process in an environment where a udev daemon is not accessible (such as a container) may result in unexpected behavior. In this case, try executing the snapshotter with the `DMDISABLEUDEV=1` environment variable, which tells `dmsetup` to ignore udev and manage devices itself. See and for more information. `containerd` project contains AWS CloudFormation template to run an EC2 instance suitable for benchmarking. It installs dependencies, prepares EBS volumes with same performance characteristics, and creates thin-pool device. You can make stack with the following command (note: there is a charge for using AWS resources): ```bash aws cloudformation create-stack \\ --stack-name benchmark-instance \\ --template-body file://benchmark_aws.yml \\ --parameters \\ ParameterKey=Key,ParameterValue=SSH_KEY \\ ParameterKey=SecurityGroups,ParameterValue=sg-XXXXXXXX \\ ParameterKey=VolumesSize,ParameterValue=20 \\ ParameterKey=VolumesIOPS,ParameterValue=1000 ``` You can find an IP address of newly created EC2 instance in AWS Console or via AWS CLI: ```bash $ aws ec2 describe-instances \\ --instance-ids $(aws cloudformation describe-stack-resources --stack-name benchmark-instance --query 'StackResources[*].PhysicalResourceId' --output text) \\ --query 'Reservations[].Instances[].PublicIpAddress' \\ --output text ``` SSH to an instance and prepare `containerd`: ```bash ssh -i SSH_KEY ec2-user@IP mkdir /mnt/disk1/data /mnt/disk2/data /mnt/disk3/data go get github.com/containerd/containerd cd $(go env GOPATH)/src/github.com/containerd/containerd make ``` Now you're ready to run the benchmark: ```bash sudo su - cd snapshots/benchsuite/ go test -bench . \\ -dm.thinPoolDev=bench-docker--pool \\ -dm.rootPath=/mnt/disk1/data \\ -overlay.rootPath=/mnt/disk2/data \\ -native.rootPath=/mnt/disk3/data ``` The output will look like: ```bash goos: linux goarch: amd64 pkg: github.com/containerd/containerd/snapshots/testsuite BenchmarkOverlay/run-4 1 1019730210 ns/op 164.53 MB/s BenchmarkOverlay/prepare 1 26799447 ns/op BenchmarkOverlay/write 1 968200363 ns/op BenchmarkOverlay/commit 1 24582560 ns/op BenchmarkDeviceMapper/run-4 1 3139232730 ns/op 53.44 MB/s BenchmarkDeviceMapper/prepare 1 1758640440 ns/op BenchmarkDeviceMapper/write 1 1356705388 ns/op BenchmarkDeviceMapper/commit 1 23720367 ns/op PASS ok github.com/containerd/containerd/snapshots/testsuite 185.204s ``` Don't forget to tear down the stack so it does not continue to incur charges: ```bash aws cloudformation delete-stack --stack-name benchmark-instance ```"
}
] |
{
"category": "Runtime",
"file_name": "snapshotter_bench_readme.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "To safeguard the integrity of container images and prevent tampering from the host side, we propose guest image management. This method, employed for Confidential Containers, ensures container images remain unaltered and secure. Containerd 1.7 introduced `remote snapshotter` feature which is the foundation for pulling images in the guest for Confidential Containers. While it's beyond the scope of this document to fully explain how the container rootfs is created to the point it can be executed, a fundamental grasp of the snapshot concept is essential. Putting it in a simple way, containerd fetches the image layers from an OCI registry into its local content storage. However, they cannot be mounted as is (e.g. the layer can be tar+gzip compressed) as well as they should be immutable so the content can be shared among containers. Thus containerd leverages snapshots of those layers to build the container's rootfs. The role of `remote snapshotter` is to reuse snapshots that are stored in a remotely shared place, thus enabling containerd to prepare the containers rootfs in a manner similar to that of a local `snapshotter`. The key behavior that makes this the building block of Kata's guest image management for Confidential Containers is that containerd will not pull the image layers from registry, instead it assumes that `remote snapshotter` and/or an external entity will perform that operation on his behalf. Maybe the simplest example of `remote snapshotter` in Confidential Containers is the pulling of images in the guest VM. Once ensuring the VM is part of a Trusted Computing Base (TCB) and throughout a chain of delegations involving containerd, `remote snapshotter` and kata-runtime, it is possible for the kata-agent to pull the image directly. `Remote snapshotter` is containerd plug-ins that should implement the . The following `remote snapshotter` is leveraged by Kata Containers: `nydus snapshotter` This `snapshotter` is implemented as an external containerd proxy plug-in for . Currently it supports a couple of runtime backend, notably, `FUSE`, `virtiofs`, and `EROFS`, being the former leveraged on the tested Kata Containers CI. ```go // KataVirtualVolume encapsulates information for extra mount options and direct volumes. type KataVirtualVolume struct { VolumeType string `json:\"volume_type\"` Source string `json:\"source,omitempty\"` FSType string `json:\"fs_type,omitempty\"` Options []string `json:\"options,omitempty\"` DirectVolume *DirectAssignedVolume `json:\"direct_volume,omitempty\"` ImagePull *ImagePullVolume `json:\"image_pull,omitempty\"` //<-Used for pulling images in the guest NydusImage *NydusImageVolume `json:\"nydus_image,omitempty\"` DmVerity *DmVerityInfo `json:\"dm_verity,omitempty\"` } ``` Pull the container image directly from the guest VM using `nydus snapshotter`"
},
{
"data": "Container image pulled in the guest Pause image should be built in the guest's rootfs Confidentiality for image manifest and config: No Confidentiality for blob data: Yes Use `nydus snapshotter` as `remote snapshotter` configured with the FUSE runtime backend The following diagram provides an overview of the architecture for pulling image in the guest with key components. The following sequence diagram depicted below offers a detailed overview of the messages/calls exchanged to pull an unencrypted unsigned image from an unauthenticated container registry. This involves the kata-runtime, kata-agent, and the guest-components image-rs to use the guest pull mechanism. First and foremost, the guest pull code path is only activated when `nydus snapshotter` requires the handling of a volume which type is `imageguestpull`, as can be seen on the message below: ```json { { \"volumetype\": \"imageguest_pull\", \"source\":\"quay.io/kata-containers/confidential-containers:unsigned\", \"fs_type\":\"overlayfs\" \"options\": [ \"containerd.io/snapshot/cri.layer-digest=sha256:24fb2886d6f6c5d16481dd7608b47e78a8e92a13d6e64d87d57cb16d5f766d63\", \"containerd.io/snapshot/nydus-proxy-mode=true\" ], \"image_pull\": { \"metadata\": { \"containerd.io/snapshot/cri.layer-digest\": \"sha256:24fb2886d6f6c5d16481dd7608b47e78a8e92a13d6e64d87d57cb16d5f766d63\", \"containerd.io/snapshot/nydus-proxy-mode\": \"true\" } } } } ``` In other words, `VolumeType` of `KataVirtualVolumeType` is set to `imageguestpull`. Next the `handleImageGuestPullBlockVolume()` is called to build the Storage object that will be attached to the message later sent to kata-agent via the `CreateContainerRequest()` RPC. It is in the `handleImageGuestPullBlockVolume()` that it will begin the handling of the pause image if the request is for a sandbox container type (see more about pause image below). Below is an example of storage information packaged in the message sent to the kata-agent: ```json \"driver\": \"imageguestpull\", \"driver_options\": [ \"imageguestpull\"{ \"metadata\":{ \"containerd.io/snapshot/cri.layer-digest\": \"sha256:24fb2886d6f6c5d16481dd7608b47e78a8e92a13d6e64d87d57cb16d5f766d63\", \"containerd.io/snapshot/nydus-proxy-mode\": \"true\", \"io.katacontainers.pkg.oci.bundle_path\": \"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb0b47276ea66ee9f44cc53afa94d7980b57a52c3f306f68cb034e58d9fbd3c6\", \"io.katacontainers.pkg.oci.containertype\": \"podcontainer\", \"io.kubernetes.cri.container-name\": \"coco-container\", \"io.kubernetes.cri.container-type\": \"container\", \"io.kubernetes.cri.image-name\": \"quay.io/kata-containers/confidential-containers:unsigned\", \"io.kubernetes.cri.sandbox-id\":\"7a0d058477e280604ae02de6a016959e8a05fcd3165c47af41eabcf205b55517\", \"io.kubernetes.cri.sandbox-name\": \"coco-pod\",\"io.kubernetes.cri.sandbox-namespace\": \"default\", \"io.kubernetes.cri.sandbox-uid\": \"de7c6a0c-79c0-44dc-a099-69bb39f180af\", } } ], \"source\": \"quay.io/kata-containers/confidential-containers:unsigned\", \"fstype\": \"overlay\", \"options\": [], \"mount_point\": \"/run/kata-containers/cb0b47276ea66ee9f44cc53afa94d7980b57a52c3f306f68cb034e58d9fbd3c6/rootfs\", ``` Next, the kata-agent's RPC module will handle the create container request which, among other things, involves adding storages to the sandbox. 
The storage module contains implementations of the `StorageHandler` interface for various storage types, with `ImagePullHandler` in charge of handling the storage object for the container image (the storage manager instantiates the handler based on the value of the \"driver\"). `ImagePullHandler` delegates the image pulling operation to `ImageService.pull_image()`, which creates the image's bundle directory on the guest filesystem and, in turn, calls image-rs to actually fetch and unpack the image's bundle. Notes: In this flow, `ImageService.pull_image()` parses the image metadata, looking for either the `io.kubernetes.cri.container-type: sandbox` or `io.kubernetes.cri-o.ContainerType: sandbox` (CRI-O case) annotation; when the metadata marks a sandbox, it never calls `image-rs.pull_image()` because the pause image is expected to already be inside the guest's filesystem, so `ImageService.unpack_pause_image()` is called instead. References: [2] https://github.com/containerd/containerd/blob/main/docs/content-flow.md"
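For completeness, a hedged sketch of how the flow above is commonly exercised from the host side. The `nydus` snapshotter name matches the usual proxy-plugin registration for nydus snapshotter, while the runtime handler name is an assumption that depends on your containerd and Kata configuration; this is not the only way to trigger guest pull.

```bash
# Hedged sketch: trigger guest pull through the nydus remote snapshotter.
# The runtime handler name is an assumption; the image is the one used above.
sudo ctr image pull --snapshotter nydus quay.io/kata-containers/confidential-containers:unsigned
sudo ctr run --snapshotter nydus --runtime io.containerd.kata-qemu.v2 --rm -t \
    quay.io/kata-containers/confidential-containers:unsigned guest-pull-demo uname -r
```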
}
] |
{
"category": "Runtime",
"file_name": "kata-guest-image-management-design.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<picture> <source media=\"(prefers-color-scheme: dark)\" srcset=\"assets/libbpf-logo-sideways-darkbg.png\" width=\"40%\"> <img src=\"assets/libbpf-logo-sideways.png\" width=\"40%\"> </picture> Libbpf sync =========== Libbpf authoritative source code is developed as part of [bpf-next Linux source tree](https://kernel.googlesource.com/pub/scm/linux/kernel/git/bpf/bpf-next) under `tools/lib/bpf` subdirectory and is periodically synced to Github. Most of the mundane mechanical things like bpf and bpf-next tree merge, Git history transformation, cherry-picking relevant commits, re-generating auto-generated headers, etc. are taken care by . But occasionally human needs to do few extra things to make everything work nicely. This document goes over the process of syncing libbpf sources from Linux repo to this Github repository. Feel free to contribute fixes and additions if you run into new problems not outlined here. Setup expectations Sync script has particular expectation of upstream Linux repo setup. It expects that current HEAD of that repo points to bpf-next's master branch and that there is a separate local branch pointing to bpf tree's master branch. This is important, as the script will automatically merge their histories for the purpose of libbpf sync. Below, we assume that Linux repo is located at `~/linux`, it's current head is at latest `bpf-next/master`, and libbpf's Github repo is located at `~/libbpf`, checked out to latest commit on `master` branch. It doesn't matter from where to run `sync-kernel.sh` script, but we'll be running it from inside `~/libbpf`. ``` $ cd ~/linux && git remote -v | grep -E '^(bpf|bpf-next)' bpf https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git (fetch) bpf ssh://[email protected]/pub/scm/linux/kernel/git/bpf/bpf.git (push) bpf-next https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git (fetch) bpf-next ssh://[email protected]/pub/scm/linux/kernel/git/bpf/bpf-next.git (push) $ git branch -vv | grep -E '^? (master|bpf-master)' bpf-master 2d311f480b52 [bpf/master] riscv, bpf: Fix patch_text implicit declaration master c8ee37bde402 [bpf-next/master] libbpf: Fix bpfxdpquery() in old kernels $ git checkout bpf-master && git pull && git checkout master && git pull ... $ git log --oneline -n1 c8ee37bde402 (HEAD -> master, bpf-next/master) libbpf: Fix bpfxdpquery() in old kernels $ cd ~/libbpf && git checkout master && git pull Your branch is up to date with 'libbpf/master'. Already up to date. ``` Running setup script -- First step is to always run `sync-kernel.sh` script. It expects three arguments: ``` $ scripts/sync-kernel.sh <libbpf-repo> <kernel-repo> <bpf-branch> ``` Note, that we'll store script's entire output in `/tmp/libbpf-sync.txt` and put it into PR summary later on. Please store scripts output and include it in PR summary for others to check for anything unexpected and suspicious. ``` $ scripts/sync-kernel.sh ~/libbpf ~/linux bpf-master | tee /tmp/libbpf-sync.txt Dumping existing libbpf commit signatures... WORKDIR: /home/andriin/libbpf LINUX REPO: /home/andriin/linux LIBBPF REPO: /home/andriin/libbpf ... ``` Most of the time this will go very uneventful. One expected case when sync script might require user intervention is if `bpf` tree has some libbpf fixes, which is nowadays not a very frequent occurence. But if that happens, script will show you a diff between expected state as of latest bpf-next and synced Github repo state. And will ask if these changes look good. 
Please use your best judgement to verify that differences are indeed from expected `bpf` tree fixes. E.g., it might look like below: ``` Comparing list of files... Comparing file contents... /home/andriin/linux/include/uapi/linux/netdev.h 2023-02-27 16:54:42.270583372 -0800 +++ /home/andriin/libbpf/include/uapi/linux/netdev.h 2023-02-27"
},
{
"data": "-0800 @@ -19,7 +19,7 @@ @NETDEVXDPACTXSKZEROCOPY: This feature informs if netdev supports AF_XDP in zero copy mode. @NETDEVXDPACTHWOFFLOAD: This feature informs if netdev supports XDP hw * oflloading. * offloading. @NETDEVXDPACTRXSG: This feature informs if netdev implements non-linear XDP buffer support in the driver napi callback. @NETDEVXDPACTNDOXMIT_SG: This feature informs if netdev implements /home/andriin/linux/include/uapi/linux/netdev.h and /home/andriin/libbpf/include/uapi/linux/netdev.h are different! Unfortunately, there are some inconsistencies, please double check. Does everything look good? [y/N]: ``` If it looks sensible and expected, type `y` and script will proceed. If sync is successful, your `~/linux` repo will be left in original state on the original HEAD commit. `~/libbpf` repo will now be on a new branch, named `libbpf-sync-<timestamp>` (e.g., `libbpf-sync-2023-02-28T00-53-40.072Z`). Push this branch into your fork of `libbpf/libbpf` Github repo and create a PR: ``` $ git push --set-upstream origin libbpf-sync-2023-02-28T00-53-40.072Z Enumerating objects: 130, done. Counting objects: 100% (115/115), done. Delta compression using up to 80 threads Compressing objects: 100% (28/28), done. Writing objects: 100% (32/32), 5.57 KiB | 1.86 MiB/s, done. Total 32 (delta 21), reused 0 (delta 0), pack-reused 0 remote: Resolving deltas: 100% (21/21), completed with 9 local objects. remote: remote: Create a pull request for 'libbpf-sync-2023-02-28T00-53-40.072Z' on GitHub by visiting: remote: https://github.com/anakryiko/libbpf/pull/new/libbpf-sync-2023-02-28T00-53-40.072Z remote: To github.com:anakryiko/libbpf.git [new branch] libbpf-sync-2023-02-28T00-53-40.072Z -> libbpf-sync-2023-02-28T00-53-40.072Z Branch 'libbpf-sync-2023-02-28T00-53-40.072Z' set up to track remote branch 'libbpf-sync-2023-02-28T00-53-40.072Z' from 'origin'. ``` Please, adjust PR name to have a properly looking timestamp. Libbpf maintainers will be very thankful for that! By default Github will turn above branch name into PR with subject \"Libbpf sync 2023 02 28 t00 53 40.072 z\". Please fix this into a proper timestamp, e.g.: \"Libbpf sync 2023-02-28T00:53:40.072Z\". Thank you! Please don't forget to paste contents of /tmp/libbpf-sync.txt into PR summary! Once PR is created, libbpf CI will run a bunch of tests to check that everything is good. In simple cases that would be all you'd need to do. In more complicated cases some extra adjustments might be necessary. Please, keep naming and style consistent. Prefix CI-related fixes with `ci: ` prefix. If you had to modify sync script, prefix it with `sync: `. Also make sure that each such commit has `Signed-off-by: Your Full Name <[email protected]>`, just like you'd do that for Linux upstream patch. Libbpf closely follows kernel conventions and styling, so please help maintaining that. Including new sources If entirely new source files (typically `*.c`) were added to the library in the kernel repository, it may be necessary to add these to the build system manually (you may notice linker errors otherwise), because the script cannot handle such changes automatically. To that end, edit `src/Makefile` as necessary. Commit is an example of how to go about doing that. Similarly, if new public API header files were added, the `Makefile` will need to be adjusted as well. Updating allow/deny lists Libbpf CI intentionally runs a subset of latest BPF selftests on old kernel (4.9 and 5.5, currently). 
It happens from time to time that some tests that previously were successfully running on old kernels now don't, typically due to reliance on some freshly added kernel"
},
{
"data": "It might look something like this in : ``` All error logs: serialtestxdpinfo:FAIL:getxdp_none errno=2 Summary: 49/166 PASSED, 5 SKIPPED, 1 FAILED ``` In such case we can either work with upstream to fix test to be compatible with old kernels, or we'll have to add a test into a denylist (or remove it from allowlist, like was for the case above). ``` $ find . -name 'LIST' ./ci/vmtest/configs/ALLOWLIST-4.9.0 ./ci/vmtest/configs/DENYLIST-5.5.0 ./ci/vmtest/configs/DENYLIST-latest.s390x ./ci/vmtest/configs/DENYLIST-latest ./ci/vmtest/configs/ALLOWLIST-5.5.0 ``` Please determine which tests need to be added/removed from which list. And then add that as a separate commit. Please keep using the same branch name, so that the same PR can be updated. There is no need to open new PRs for each such fix. Regenerating vmlinux.h header -- To compile latest BPF selftests against old kernels, we check in pre-generated header file, located at `.github/actions/build-selftests/vmlinux.h`, which contains type definitions from latest upstream kernel. When after libbpf sync upstream BPF selftests require new kernel types, we'd need to regenerate `vmlinux.h` and check it in as well. This will looks something like this in : ``` In file included from progs/testspinlock_fail.c:5: /home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpfexperimental.h:73:53: error: declaration of 'struct bpfrb_root' will not be visible outside of this function [-Werror,-Wvisibility] extern struct bpfrbnode bpf_rbtree_remove(struct bpf_rb_root root, ^ /home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpfexperimental.h:81:35: error: declaration of 'struct bpfrb_root' will not be visible outside of this function [-Werror,-Wvisibility] extern void bpfrbtreeadd(struct bpfrbroot root, struct bpf_rb_node node, ^ /home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/bpfexperimental.h:90:52: error: declaration of 'struct bpfrb_root' will not be visible outside of this function [-Werror,-Wvisibility] extern struct bpfrbnode bpf_rbtree_first(struct bpf_rb_root root) ksym; ^ 3 errors generated. make: * [Makefile:572: /home/runner/work/libbpf/libbpf/.kernel/tools/testing/selftests/bpf/testspinlock_fail.bpf.o] Error 1 make: * Waiting for unfinished jobs.... Error: Process completed with exit code 2. ``` You'll need to build latest upstream kernel from `bpf-next` tree, using BPF selftest configs. Concat arch-agnostic and arch-specific configs, build kernel, then use bpftool to dump `vmlinux.h`: ``` $ cd ~/linux $ cat tools/testing/selftests/bpf/config \\ tools/testing/selftests/bpf/config.x86_64 > .config $ make -j$(nproc) olddefconfig all ... $ bpftool btf dump file ~/linux/vmlinux format c > ~/libbpf/.github/actions/build-selftests/vmlinux.h $ cd ~/libbpf && git add . && git commit -s ``` Check in generated `vmlinux.h`, don't forget to use `ci: ` commit prefix, add it on top of sync commits. Push to Github and let libbpf CI do the checking for you. See for reference. Troubleshooting If something goes wrong and sync script exits early or is terminated early by user, you might end up with `~/linux` repo on temporary sync-related branch. Don't worry, though, sync script never destroys repo state, it follows \"copy-on-write\" philosophy and creates new branches where necessary. So it's very easy to restore previous state. 
So if anything goes wrong, it's easy to start fresh: ``` $ git branch | grep -E 'libbpf-.*Z' libbpf-baseline-2023-02-28T00-43-35.146Z libbpf-bpf-baseline-2023-02-28T00-43-35.146Z libbpf-bpf-tip-2023-02-28T00-43-35.146Z libbpf-squash-base-2023-02-28T00-43-35.146Z libbpf-squash-tip-2023-02-28T00-43-35.146Z $ git cherry-pick --abort $ git checkout master && git branch | grep -E 'libbpf-.*Z' | xargs git br -D Switched to branch 'master' Your branch is up to date with 'bpf-next/master'. Deleted branch libbpf-baseline-2023-02-28T00-43-35.146Z (was 951bce29c898). Deleted branch libbpf-bpf-baseline-2023-02-28T00-43-35.146Z (was 3a70e0d4c9d7). Deleted branch libbpf-bpf-tip-2023-02-28T00-43-35.146Z (was 2d311f480b52). Deleted branch libbpf-squash-base-2023-02-28T00-43-35.146Z (was 957f109ef883). Deleted branch libbpf-squash-tip-2023-02-28T00-43-35.146Z (was be66130d2339). Deleted branch libbpf-tip-2023-02-28T00-43-35.146Z (was 2d311f480b52). ``` You might need to do the same for your `~/libbpf` repo sometimes, depending at which stage sync script was terminated."
}
] |
{
"category": "Runtime",
"file_name": "SYNC.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Memory hotplug is a key feature for containers to allocate memory dynamically in deployment. As Kata Container bases on VM, this feature needs support both from VMM and guest kernel. Luckily, it has been fully supported for the current default version of QEMU and guest kernel used by Kata on arm64. For other VMMs, e.g, Cloud Hypervisor, the enablement work is on the road. Apart from VMM and guest kernel, memory hotplug also depends on ACPI which depends on firmware either. On x86, you can boot a VM using QEMU with ACPI enabled directly, because it boots up with firmware implicitly. For arm64, however, you need specify firmware explicitly. That is to say, if you are ready to run a normal Kata Container on arm64, what you need extra to do is to install the UEFI ROM before use the memory hotplug feature. We have offered a helper script for you to install the UEFI ROM. If you have installed Kata normally on your host, you just need to run the script as fellows: ```bash $ pushd $GOPATH/src/github.com/kata-containers/tests $ sudo .ci/aarch64/installromaarch64.sh $ popd ``` After executing the above script, two files will be generated under the directory `/usr/share/kata-containers/` by default, namely `kata-flash0.img` and `kata-flash1.img`. Next we need to change the configuration file of `kata qemu`, which is in `/opt/kata/share/defaults/kata-containers/configuration-qemu.toml` by default, specify in the configuration file to use the UEFI ROM installed above. The above is an example of `kata deploy` installation. For package management installation, please use `kata-runtime env` to find the location of the configuration file. Please refer to the following configuration. ``` [hypervisor.qemu] pflashes = [\"/usr/share/kata-containers/kata-flash0.img\", \"/usr/share/kata-containers/kata-flash1.img\"] ``` Let's test if the memory hotplug is ready for Kata after install the UEFI ROM. Make sure containerd is ready to run Kata before test. ```bash $ sudo ctr image pull docker.io/library/ubuntu:latest $ sudo ctr run --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/ubuntu:latest hello sh -c \"free -h\" $ sudo ctr run --runtime io.containerd.run.kata.v2 -t --memory-limit 536870912 --rm docker.io/library/ubuntu:latest hello sh -c \"free -h\" ``` Compare the results between the two tests. If the latter is 0.5G larger than the former, you have done what you want, and congratulation!"
}
] |
{
"category": "Runtime",
"file_name": "how-to-hotplug-memory-arm64.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Moving more functionality into containerd means more requirements from users. One thing that we have ran into was the disconnect of what our Container model is and what users expect, `docker create;docker start;docker rm` vs a container that is destroyed when it exits. To users, containers are more of a metadata object that resources(rw, configuration) and information(state, last exit status) are attached to. We have been reworking what we call a \"container\" today to be called a \"task\" and a Container metadata object. The task only has runtime state: a pid, namespaces, cgroups, etc. A container has an id, root filesystem, configuration, and other metadata from a user. Managing static state and runtime state in the same object is very tricky so we choose to keep execution and metadata separate. We are hoping to not cause more confusion with this additional task object. You can see a mockup of a client interacting with a container and how the task is handled below: ```go container, err := client.NewContainer(id, spec, rootfs) task, err := container.CreateTask(containerd.Stdio()) task.Start() task.Pid() task.Kill(syscall.SIGKILL) container.Delete() ``` There is a PR open for checkpoint and restore within containerd. We needed to port over this code from the existing containerd branch but we are able to do more in terms of functionality since we have filesystem and distribution built in. The overall functionality for checkpoint/restore is that you can still checkpoint containers but instead of having a directory on disk with the checkpointed data, it is checkpointed to the content store. Having checkpoints in the content store allows you to push checkpoints that include the container's memory data, rw layer with the file contents that the container has written, and other resources like bind(volumes) to a registry that is running so that you can live migrate containers to other hosts in your cluster. And you can do all this with your existing registries; no other services required to migrate containers around your data center. The RootFS service has been removed and replaced with a snapshot service and diff service. The snapshot service provides access to all snapshotter methods and allows clients to operate directly against the snapshotter"
},
{
"data": "This enables clients to prepare RW layers as it could before with the RootFS service, but also has full access to commit or remove those snapshots directly. The diff service provides 2 methods, extract and diff. The extract takes in a set of mounts and a descriptor to a layer tar, mounts, then extracts the tar from the content store into the mount. The diff service takes in 2 sets of mounts, computes the diff, sends the diff to the content store and returns the content descriptor for the computed diff. The diff service is designed to allow clients to pull content into snapshotters without requiring the privileges to mount and handle root-owned files. We have a few remaining tasks to finish up in the next few weeks before we consider containerd to be feature complete. We want the ability to have a single containerd running on a system but allow multiple consumers like Docker, swarmkit, and Kube all consuming the same containerd daemon without stepping on each others containers and images. We will do this with namespaces. ```go client, err := containerd.NewClient(address, namespace) ``` Right now we have pull support for fetching content but we need to have the ability to produce content in terms of builds and pushing checkpoints. We need to finish the events service so that consumers have a single service that they can consume to handle events produced by containerd, images, and their containers. After we have these features completed, the majority of the time in June will be spent reviewing the API and client packages so that we have an API we are comfortable supporting in an LTS release. We currently have a few integrations in the works for using containerd with Kube, Swarmkit, and Docker and in June we hope to complete some of these integrations and have testing infrastructure setup for these. So that is the basic plan for the next 6 weeks or so. Feature complete with the last remaining tasks then iterate on our API and clients to make sure we have the features that consumers need and an API that we can support for the long term."
}
] |
{
"category": "Runtime",
"file_name": "2017-05-19.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(dev-incus)= Communication between the hosted workload (instance) and its host while not strictly needed is a pretty useful feature. In Incus, this feature is implemented through a `/dev/incus/sock` node which is created and set up for all Incus instances. This file is a Unix socket which processes inside the instance can connect to. It's multi-threaded so multiple clients can be connected at the same time. ```{note} {config:option}`instance-security:security.guestapi` must be set to `true` (which is the default) for an instance to allow access to the socket. ``` Incus on the host binds `/var/lib/incus/guestapi/sock` and starts listening for new connections on it. This socket is then exposed into every single instance started by Incus at `/dev/incus/sock`. The single socket is required so we can exceed 4096 instances, otherwise, Incus would have to bind a different socket for every instance, quickly reaching the FD limit. Queries on `/dev/incus/sock` will only return information related to the requesting instance. To figure out where a request comes from, Incus will extract the initial socket's user credentials and compare that to the list of instances it manages. The protocol on `/dev/incus/sock` is plain-text HTTP with JSON messaging, so very similar to the local version of the Incus protocol. Unlike the main Incus API, there is no background operation and no authentication support in the `/dev/incus/sock` API. `/` `/1.0` `/1.0/config` `/1.0/config/{key}` `/1.0/devices` `/1.0/events` `/1.0/images/{fingerprint}/export` `/1.0/meta-data` Description: List of supported APIs Return: list of supported API endpoint URLs (by default `['/1.0']`) Return value: ```json [ \"/1.0\" ] ``` Description: Information about the 1.0 API Return: JSON object Return value: ```json { \"api_version\": \"1.0\", \"location\": \"foo.example.com\", \"instance_type\": \"container\", \"state\": \"Started\", } ``` Description: Update instance state (valid states are `Ready` and `Started`) Return: none Input: ```json { \"state\": \"Ready\" } ``` Description: List of configuration keys Return: list of configuration keys URL Note that the configuration key names match those in the instance configuration, however not all configuration namespaces will be exported to `/dev/incus/sock`. Currently only the `cloud-init.` and `user.` keys are accessible to the instance. At this time, there also aren't any instance-writable namespace. Return value: ```json [ \"/1.0/config/user.a\" ] ``` Description: Value of that key Return: Plain-text value Return value: blah Description: Map of instance devices Return: JSON object Return value: ```json { \"eth0\": { \"name\": \"eth0\", \"network\": \"incusbr0\", \"type\": \"nic\" }, \"root\": { \"path\": \"/\", \"pool\": \"default\", \"type\": \"disk\" } } ``` Description: WebSocket upgrade Return: none (never ending flow of events) Supported arguments are: type: comma-separated list of notifications to subscribe to (defaults to all) The notification types are: `config` (changes to any of the `user.` configuration keys) `device` (any device addition, change or removal) This never returns. 
Each notification is sent as a separate JSON object: ```json { \"timestamp\": \"2017-12-21T18:28:26.846603815-05:00\", \"type\": \"device\", \"metadata\": { \"name\": \"kvm\", \"action\": \"added\", \"config\": { \"type\": \"unix-char\", \"path\": \"/dev/kvm\" } } } ``` ```json { \"timestamp\": \"2017-12-21T18:28:26.846603815-05:00\", \"type\": \"config\", \"metadata\": { \"key\": \"user.foo\", \"old_value\": \"\", \"value\": \"bar\" } } ``` Description: Download a public/cached image from the host Return: raw image or error Access: Requires `security.guestapi.images` set to `true` Return value: See /1.0/images/<FINGERPRINT>/export in the daemon API. Description: Container meta-data compatible with cloud-init Return: cloud-init meta-data Return value: instance-id: af6a01c7-f847-4688-a2a4-37fddd744625 local-hostname: abc"
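Since the protocol is plain-text HTTP over a Unix socket, the endpoints above can be exercised from inside an instance with curl; the `incus` host name in the URL is arbitrary and only the socket path matters.

```bash
# Minimal sketch: query the guest API from inside an instance.
curl --unix-socket /dev/incus/sock http://incus/1.0
curl --unix-socket /dev/incus/sock http://incus/1.0/config
curl --unix-socket /dev/incus/sock http://incus/1.0/devices
```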
}
] |
{
"category": "Runtime",
"file_name": "dev-incus.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"ark get schedules\" layout: docs Get schedules Get schedules ``` ark get schedules [flags] ``` ``` -h, --help help for schedules --label-columns stringArray a comma-separated list of labels to be displayed as columns -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. (default \"table\") -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Get ark resources"
}
] |
{
"category": "Runtime",
"file_name": "ark_get_schedules.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "[TOC] gVisor implements its own network stack called netstack. All aspects of the network stack are handled inside the Sentry including TCP connection state, control messages, and packet assembly keeping it isolated from the host network stack. Data link layer packets are written directly to the virtual device inside the network namespace setup by Docker or Kubernetes. Configuring the network stack may provide performance benefits, but isn't the only step to optimizing gVisor performance. See the [Production guide] for more. The IP address and routes configured for the device are transferred inside the sandbox. The loopback device runs exclusively inside the sandbox and does not use the host. You can inspect them by running: ```bash docker run --rm --runtime=runsc alpine ip addr ``` For high-performance networking applications, you may choose to disable the user space network stack and instead use the host network stack, including the loopback. Note that this mode decreases the isolation to the host. Add the following `runtimeArgs` to your Docker configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--network=host\" ] } } } ``` To completely isolate the host and network from the sandbox, external networking can be disabled. The sandbox will still contain a loopback provided by netstack. Add the following `runtimeArgs` to your Docker configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--network=none\" ] } } } ``` If your Linux is older than 4.14.77, you can disable Generic Segmentation Offload (GSO) to run with a kernel that is newer than 3.17. Add the `--gso=false` flag to your Docker runtime configuration (`/etc/docker/daemon.json`) and restart the Docker daemon: Note: Network performance, especially for large payloads, will be greatly reduced. ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--gso=false\" ] } } } ```"
}
] |
{
"category": "Runtime",
"file_name": "networking.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This page contains a list of organizations who are using Carina in production or at stages of testing. If you'd like to be included here, please send a pull request which put the picture in the static directory,modifies this file and add information in README.md known users. : BoCloud Cloud Container Platform was awarded by Gratner as the Representative Vendor in China's Cloud Container CaaS segment and was ranked in the top 5 market share in China Mainland (IDC Container Software Market Report). Many of the existing customers have been using carina in production or testing environments for years, typically for cloudnative middlewares. |Logo| Organization | Status | |-| | | || | |"
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Carina",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Securing Connections Across Untrusted Networks menu_order: 40 search_type: Documentation To connect containers across untrusted networks, Weave Net peers can be instructed to encrypt traffic by supplying a `--password` option or by using the `WEAVE_PASSWORD` environment variable during `weave launch`. For example: host1$ weave launch --password wfvAwt7sj or host1$ export WEAVE_PASSWORD=wfvAwt7sj host1$ weave launch NOTE: The command line option takes precedence over the environment variable. To avoid leaking your password via the kernel process table or your shell history, we recommend you store it in a file and capture it in a shell variable prior to launching weave: `export WEAVE_PASSWORD=$(cat /path/to/password-file)` To guard against dictionary attacks, the password needs to be reasonably strong, with at least 50 bits of entropy. An easy way to generate a random password that satisfies this requirement is: < /dev/urandom tr -dc A-Za-z0-9 | head -c9 ; echo The same password must be specified for all Weave Net peers, by default both control and data plane traffic will then use authenticated encryption. As an optimization, you can selectively disable data plane encryption if some of your peers are co-located in a trusted network, for example within the boundary of your own data center. List these networks using the `--trusted-subnets` argument with `weave launch`: weave launch --password wfvAwt7sj --trusted-subnets 10.0.2.0/24,192.168.48.0/24 If both peers at the end of a connection consider the other to be in the same trusted subnet, Weave Net attempts to establish non-encrypted connectivity. Otherwise communication is encrypted which imposes overheads. Configured trusted subnets are shown in . Be aware that: Containers will be able to access the router REST API if fast datapath is disabled. You can prevent this by setting: . Containers are able to access the router control and data plane ports, but this can be mitigated by enabling encryption. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "security-untrusted-networks.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Before we begin be sure to . confd supports the following backends: etcd consul vault environment variables redis zookeeper dynamodb stackengine rancher metad This guide assumes you have a working , or server up and running and the ability to add new keys. ``` etcdctl set /myapp/database/url db.example.com etcdctl set /myapp/database/user rob ``` ``` curl -X PUT -d 'db.example.com' http://localhost:8500/v1/kv/myapp/database/url curl -X PUT -d 'rob' http://localhost:8500/v1/kv/myapp/database/user ``` ``` vault mount -path myapp generic vault write myapp/database url=db.example.com user=rob ``` ``` export MYAPPDATABASEURL=db.example.com export MYAPPDATABASEUSER=rob ``` ``` redis-cli set /myapp/database/url db.example.com redis-cli set /myapp/database/user rob ``` ``` [zk: localhost:2181(CONNECTED) 1] create /myapp \"\" [zk: localhost:2181(CONNECTED) 2] create /myapp/database \"\" [zk: localhost:2181(CONNECTED) 3] create /myapp/database/url \"db.example.com\" [zk: localhost:2181(CONNECTED) 4] create /myapp/database/user \"rob\" ``` First create a table with the following schema: ``` aws dynamodb create-table \\ --region <YOURREGION> --table-name <YOURTABLE> \\ --attribute-definitions AttributeName=key,AttributeType=S \\ --key-schema AttributeName=key,KeyType=HASH \\ --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 ``` Now create the items. The attribute value `value` must be of type : ``` aws dynamodb put-item --table-name <YOURTABLE> --region <YOURREGION> \\ --item '{ \"key\": { \"S\": \"/myapp/database/url\" }, \"value\": {\"S\": \"db.example.com\"}}' aws dynamodb put-item --table-name <YOURTABLE> --region <YOURREGION> \\ --item '{ \"key\": { \"S\": \"/myapp/database/user\" }, \"value\": {\"S\": \"rob\"}}' ``` ``` curl -k -X PUT -d 'value' https://mesh-01:8443/api/kv/key --header \"Authorization: Bearer stackengineapikey\" ``` This backend consumes the metadata service. For available keys, see the . ``` curl http://127.0.0.1:9611/v1/data -X PUT -d '{\"myapp\":{\"database\":{\"url\":\"db.example.com\",\"user\":\"rob\"}}}' ``` The confdir is where template resource configs and source templates are stored. ``` sudo mkdir -p /etc/confd/{conf.d,templates} ``` Template resources are defined in config files under the `confdir`. /etc/confd/conf.d/myconfig.toml ``` [template] src = \"myconfig.conf.tmpl\" dest = \"/tmp/myconfig.conf\" keys = [ \"/myapp/database/url\", \"/myapp/database/user\", ] ``` Source templates are . /etc/confd/templates/myconfig.conf.tmpl ``` [myconfig] database_url = {{getv \"/myapp/database/url\"}} database_user = {{getv \"/myapp/database/user\"}} ``` confd supports two modes of operation daemon and onetime. In daemon mode confd polls a backend for changes and updates destination configuration files if necessary. ``` confd -onetime -backend etcd -node"
},
{
"data": "``` ``` confd -onetime -backend consul -node 127.0.0.1:8500 ``` ``` ROOT_TOKEN=$(vault read -field id auth/token/lookup-self) confd -onetime -backend vault -node http://127.0.0.1:8200 \\ -auth-type token -auth-token $ROOT_TOKEN ``` ``` confd -onetime -backend dynamodb -table <YOUR_TABLE> ``` ``` confd -onetime -backend env ``` ``` confd -onetime -backend stackengine -auth-token stackengineapikey -node 192.168.255.210:8443 -scheme https ``` ``` confd -onetime -backend redis -node 192.168.255.210:6379 ``` ``` confd -onetime -backend rancher -prefix /2015-07-25 ``` ``` confd --onetime --backend metad --node 127.0.0.1:80 --watch ``` ======= Note: The metadata api prefix can be defined on the cli, or as part of your keys in the template toml file. Output: ``` 2014-07-08T20:38:36-07:00 confd[16252]: INFO Target config /tmp/myconfig.conf out of sync 2014-07-08T20:38:36-07:00 confd[16252]: INFO Target config /tmp/myconfig.conf has been updated ``` The `dest` configuration file should now be in sync. ``` cat /tmp/myconfig.conf ``` Output: ``` [myconfig] database_url = db.example.com database_user = rob ``` In this example we will use confd to manage two nginx config files using a single template. ``` etcdctl set /myapp/subdomain myapp etcdctl set /myapp/upstream/app2 \"10.0.1.100:80\" etcdctl set /myapp/upstream/app1 \"10.0.1.101:80\" etcdctl set /yourapp/subdomain yourapp etcdctl set /yourapp/upstream/app2 \"10.0.1.102:80\" etcdctl set /yourapp/upstream/app1 \"10.0.1.103:80\" ``` ``` curl -X PUT -d 'myapp' http://localhost:8500/v1/kv/myapp/subdomain curl -X PUT -d '10.0.1.100:80' http://localhost:8500/v1/kv/myapp/upstream/app1 curl -X PUT -d '10.0.1.101:80' http://localhost:8500/v1/kv/myapp/upstream/app2 curl -X PUT -d 'yourapp' http://localhost:8500/v1/kv/yourapp/subdomain curl -X PUT -d '10.0.1.102:80' http://localhost:8500/v1/kv/yourapp/upstream/app1 curl -X PUT -d '10.0.1.103:80' http://localhost:8500/v1/kv/yourapp/upstream/app2 ``` /etc/confd/conf.d/myapp-nginx.toml ``` [template] prefix = \"/myapp\" src = \"nginx.tmpl\" dest = \"/tmp/myapp.conf\" owner = \"nginx\" mode = \"0644\" keys = [ \"/subdomain\", \"/upstream\", ] check_cmd = \"/usr/sbin/nginx -t -c {{.src}}\" reload_cmd = \"/usr/sbin/service nginx reload\" ``` /etc/confd/conf.d/yourapp-nginx.toml ``` [template] prefix = \"/yourapp\" src = \"nginx.tmpl\" dest = \"/tmp/yourapp.conf\" owner = \"nginx\" mode = \"0644\" keys = [ \"/subdomain\", \"/upstream\", ] check_cmd = \"/usr/sbin/nginx -t -c {{.src}}\" reload_cmd = \"/usr/sbin/service nginx reload\" ``` /etc/confd/templates/nginx.tmpl ``` upstream {{getv \"/subdomain\"}} { {{range getvs \"/upstream/*\"}} server {{.}}; {{end}} } server { server_name {{getv \"/subdomain\"}}.example.com; location / { proxy_pass http://{{getv \"/subdomain\"}}; proxy_redirect off; proxysetheader Host $host; proxysetheader X-Real-IP $remote_addr; proxysetheader X-Forwarded-For $proxyaddxforwardedfor; } } ```"
}
] |
{
"category": "Runtime",
"file_name": "quick-start-guide.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "A Gomega release is a tagged sha and a GitHub release. To cut a release: Ensure CHANGELOG.md is up to date. Use ```bash LAST_VERSION=$(git tag --sort=version:refname | tail -n1) CHANGES=$(git log --pretty=format:'- %s [%h]' HEAD...$LAST_VERSION) echo -e \"## NEXT\\n\\n$CHANGES\\n\\n### Features\\n\\n### Fixes\\n\\n### Maintenance\\n\\n$(cat CHANGELOG.md)\" > CHANGELOG.md ``` to update the changelog Categorize the changes into Breaking Changes (requires a major version) New Features (minor version) Fixes (fix version) Maintenance (which in general should not be mentioned in `CHANGELOG.md` as they have no user impact) Update GOMEGAVERSION in `gomegadsl.go` Commit, push, and release: ``` git commit -m \"vM.m.p\" git push gh release create \"vM.m.p\" git fetch --tags origin master ```"
}
] |
{
"category": "Runtime",
"file_name": "RELEASING.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "name: Refactor task about: Suggest a refactoring request for an existing implementation title: \"[REFACTOR] \" labels: kind/refactoring assignees: '' <!--A clear and concise description of what the problem is.--> <!--A clear and concise description of what you want to happen.--> <!--A clear and concise description of any alternative solutions or features you've considered.--> <!--Add any other context or screenshots about the request here.-->"
}
] |
{
"category": "Runtime",
"file_name": "refactor.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Add ability to use Fully Qualified Domain Names (FQDNs) in egress policy rules when defining Antrea-native policies: both exact matches and wildcards are supported. ( , [@Dyanngg] [@antoninbas] [@GraysonWu] [@madhukark] [@lzhecheng]) Add support for WireGuard to encrypt inter-Node Pod traffic (as an alternative to IPsec); traffic mode must be set to encap and the \"tunnelType\" option will be ignored. ( , [@xliuxu] [@tnqn]) Support for configurable transport interface for Pod traffic. (, [@wenyingd]) Use the \"transportInterface\" configuration parameter for the Antrea Agent to choose an interface by name; the default behavior is unchanged (interface to which the K8s Node IP is assigned is used) On Windows, SNAT is now performed by the host and no longer by OVS, to accommodate for this change [Windows] Support for dual-stack transport interfaces (the IPv4 and IPv6 addresses have to be assigned to the same interface); this in turn enables support for the noEncap traffic mode in dual-stack clusters. (, [@lzhecheng]) Add Status field to the ExternalIPPool CRD: it is used to report usage information for the pool (total number of IPs in the pool and number of IPs that are currently assigned). (, [@wenqiq]) Add Egress support for IPv6 and dual-stack clusters. ( , [@wenqiq]) Add ability to filter logs by timestamp with the \"antctl supportbundle\" command. (, [@hangyan] [@weiqiangt]) Support for IPv6 / dual-stack Kind clusters. (, [@adobley] [@christianang] [@gwang550]) Add support for sending JSON records from the Flow Aggregator instead of IPFIX records (which is still the default), as it can achieve better performance with Logstash. (, [@zyiou]) Support \"--sort-by\" flag for \"antctl get networkpolicy\" in Agent mode. (, [@antoninbas]) Remove the restriction that a ClusterGroup must exist before it can be used as a child group to define other ClusterGroups. (, [@Dyanngg]) Remove the restriction that a ClusterGroup must exist before it can be used in an Antrea ClusterNetworkPolicy. (, [@Dyanngg] [@abhiraut]) Remove \"controlplane.antrea.tanzu.vmware.com/v1beta1\" API as per our API deprecation policy. ( , [@luolanzone]) Controller responses to ClusterGroup membership queries (\"/clustergroupmembers\" API) now include the list of IPBlocks when appropriate. (, [@Dyanngg] [@abhiraut]) Install all Endpoint flows belonging to a Service via a single OpenFlow bundle, to reduce flow installation time when the Agent starts. (, [@tnqn]) Improve the batch installation of NetworkPolicy rules when the Agent starts: only generate flow operations based on final desired state instead of incrementally. (, [@tnqn] [@Dyanngg]) Use GroupMemberSet.Merge instead of GroupMemberSet.Union to reduce CPU usage and memory footprint in the Agent's policy"
},
{
"data": "(, [@tnqn]) When checking for the existence of an iptables chain, stop listing all the chains and searching through them; this change reduces the Agent's memory footprint. (, [@tnqn]) Tolerate more failures for the Agent's readiness probe, as the Agent may stay disconnected from the Controller for a long time in some scenarios. (, [@tnqn]) Remove restriction that only GRE tunnels can be used when enabling IPsec: VXLAN can also be used, and so can Geneve (if the Linux kernel version for the Nodes is recent enough). (, [@luolanzone]) Automatically perform deduplication on NetworkPolicy audit logs for denied connections: all duplicate connections received within a 1 second buffer window will be merged and the corresponding log entry will include the connection count. ( , [@qiyueyao]) Support returning partial supportbundle results when some Nodes fail to respond. (, [@hangyan]) When listing NetworkPolicyStats through the Controller API, return an empty list if the `NetworkPolicyStats` Feature Gate is disabled, instead of returning an error. (, [@PeterEltgroth]) Update OVS version from 2.14.2 to 2.15.1: the new version fixes Geneve tunnel support in the userspace datapath (used for Kind clusters). (, [@antoninbas]) Update , [@srikartati] [@zyiou]) Support pretty-printing for AntreaAgentInfo and AntreaControllerInfo CRDs. (, [@antoninbas]) Improve the process of updating the Status of an Egress resource to report the name of the Node to which the Egress IP is assigned. (, [@wenqiq]) Change the singular name of the ClusterGroup CRD from \"group\" to \"clustergroup\". (, [@abhiraut]) Officially-supported Go version is no longer 1.15 but 1.17. ( , [@antoninbas]) There was a notable change in the implementation of the \"ParseIP\" and \"ParseCIDR\" functions, but Antrea users should not be affected; refer to this Standardize the process of reserving OVS register ranges and defining constant values for them; OVS registers are used to store per-packet information when required to implement specific features. (, [@wenyingd]) Update ELK stack reference configuration to support TCP transport. (, [@zyiou]) Update Windows installation instructions. (, [@lzheheng]) Update Antrea-native policies documentation to reflect the addition of the \"kubernetes.io/metadata.name\" in upstream K8s. (, [@abhiraut]) Default to containerd as the container runtime in the Vagrant-based test K8s cluster. (, [@stanleywbwong]) Update AllowToCoreDNS example in Antrea-native policies documentation. (, [@btrieger]) Update actions/setup-go to v2 in all Github workflows. (, [@MysteryBlokHed]) Fix panic in Agent when calculating the stats for a rule newly added to an existing NetworkPolicy. (, [@tnqn]) Fix bug in iptables rule installation for dual-stack clusters: if a rule was already present for one protocol but not the other, its installation may have been"
},
{
"data": "(, [@lzhecheng]) Fix deadlock in the Agent's FlowExporter, between the export goroutine and the conntrack polling goroutine. (, [@srikartati]) Upgrade OVS version to 2.14.2-antrea.1 for Windows Nodes; this version of OVS is built on top of the upstream 2.14.2 release and also includes a patch to fix TCP checksum computation when the DNAT action is used. (, [@lzhecheng]) [Windows] Handle transient iptables-restore failures (caused by xtables lock contention) in the NodePortLocal initialization logic. (, [@antoninbas]) Query and check the list of features supported by the OVS datapath during Agent initialization: if any required feature is not supported, the Agent will log an error and crash, instead of continuing to run which makes it hard to troubleshoot such issues. (, [@tnqn]) On Linux, wait for the ovs-vswitchd PID file to be ready before running ovs-apptcl commands. (, [@tnqn]) Periodically delete stale connections in the Flow Exporter if they cannot be exported (e.g. because the collector is not available), to avoid running out-of-memory. (, [@srikartati]) Fix handling of the \"reject\" packets generated by the Antrea Agent in the OVS pipeline, to avoid infinite looping when traffic between two endpoints is rejected by network policies in both directions. (, [@GraysonWu]) Fix Linux kernel version parsing to accommodate for more Linux distributions, in particular RHEL / CentOS. (, [@Jexf]) Fix interface naming for IPsec tunnels: based on Node names, the first char could sometimes be a dash, which is not valid. (, [@luolanzone]) When creating an IPsec OVS tunnel port to a remote Node, handle the case where the port already exists but with a stale config graciously: delete the existing port first, then recreate it. (, [@luolanzone]) Fix the policy information reported by the Flow Exporter when a Baseline Antrea-native policy is applied to the flow. (, [@zyiou]) Clean up log files for the Flow Aggregator periodically: prior to this fix, the \"--logfilemaxsize\" and \"--logfilemaxnum\" command-line flags were ignore for the flow-aggregator Pod. (, [@srikartati]) Fix missing template ID when sending the first IPFIX flow record from the FlowAggregator. (, [@zyiou]) Ensure that the Windows Node name obtained from the environment or from hostname is converted to lower-case. (, [@shettyg]) [Windows] Fix Antrea network clean-up script for Windows; in particular remove Hyper-V binding on network adapter used as OVS uplink so that it can recover its IP address correctly. (, [@wenyingd]) [Windows] Fix reference Logstash configuration to avoid division by zero in throughput calculation. (, [@zyiou]) Fix nil pointer error when collecting a supportbundle on a Node for which the antrea-agent container image does not include \"iproute2\"; this does not affect the standard antrea/antrea-ubuntu container image. (, [@liu4480])"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.3.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document describes how you can build and test Kilo. To follow along, you need to install the following utilities: `go` not for building but formatting the code and running unit tests `make` `jq` `git` `curl` `docker` Clone the Repository and `cd` into it. ```shell git clone https://github.com/squat/kilo.git cd kilo ``` For consistency, the Kilo binaries are compiled in a Docker container, so make sure the `docker` package is installed and the daemon is running. To compile the `kg` and `kgctl` binaries run: ```shell make ``` Binaries are always placed in a directory corresponding to the local system's OS and architecture following the pattern `bin/<os>/<architecture>/`, so on an AMD64 machine running Linux, the binaries will be stored in `bin/linux/amd64/`. You can build the binaries for a different architecture by setting the `ARCH` environment variable before invoking `make`, e.g.: ```shell ARCH=<arm|arm64|amd64> make ``` Likewise, to build `kg` for another OS, set the `OS` environment variable before invoking `make`: ```shell OS=<windows|darwin|linux> make ``` To execute the unit tests, run: ```shell make unit ``` To lint the code in the repository, run: ```shell make lint ``` To execute basic end to end tests, run: ```shell make e2e ``` Note: The end to end tests are currently flaky, so try running them again if they fail. To instead run all of the tests with a single command, run: ```shell make test ``` If you want to build containers for a processor architecture that is different from your computer's, then you will first need to configure QEMU as the interpreter for binaries built for non-native architectures: ```shell docker run --rm --privileged multiarch/qemu-user-static --reset -p yes ``` Set the `$IMAGE` environment variable to `<your Docker Hub user name>/kilo`. This way the generated container images and manifests will be named accordingly. By skipping this step, you will be able to tag images but will not be able to push the containers and manifests to your own Docker Hub. ```shell export IMAGE=<docker hub user name>/kilo ``` If you want to use a different container registry, run: ```shell export REGISTRY=<your registry without a trailing slash> ``` To build containers with the `kg` image for `arm`, `arm64` and `amd64`, run: ```shell make all-container ``` Push the container images and build a manifest with: ```shell make manifest ``` To tag and push the manifest with `latest`, run: ```shell make manifest-latest ``` Now you can deploy the custom build of Kilo to your cluster. If you are already running Kilo, change the image from `squat/kilo` to `[registry/]<username>/kilo[:sha]`."
}
] |
{
"category": "Runtime",
"file_name": "building_kilo.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
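Putting the pieces above together, a typical cross-build-and-publish session looks roughly like the following; the target architecture, registry host, and Docker Hub user name are placeholders, so substitute your own:

```bash
# Cross-compile the binaries for a 64-bit ARM Linux machine from an amd64 host.
ARCH=arm64 OS=linux make
ls bin/linux/arm64/          # kg and kgctl are placed here

# Build and publish multi-arch images; register QEMU first so non-native
# architectures can be emulated during the container builds.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
export REGISTRY=registry.example.com   # placeholder registry, no trailing slash
export IMAGE=alice/kilo                # placeholder Docker Hub user name
make all-container
make manifest
make manifest-latest
```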
[
{
"data": "{{ template \"chart.header\" . }} {{ template \"chart.deprecationWarning\" . }} {{ template \"chart.badgesSection\" . }} {{ template \"chart.description\" . }} {{ template \"chart.homepageLine\" . }} ```bash helm repo add k8up-io https://k8up-io.github.io/k8up helm install {{ template \"chart.name\" . }} k8up-io/{{ template \"chart.name\" . }} ```"
}
] |
{
"category": "Runtime",
"file_name": "helm-docs-header.gotmpl.md",
"project_name": "K8up",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document explains how to analyze core-dumps obtained from regression machines, with examples. Download the core-tarball and extract it. `cd` into directory where the tarball is extracted. ``` [sh]# pwd /home/user/Downloads [sh]# ls build build-install-201506250542_39.tar.bz2 lib64 usr ``` Determine the core file you need to examine. There can be more than one core file. You can list them from './build/install/cores' directory. ``` [sh]# ls build/install/cores/ core.9341 liblist.txt liblist.txt.tmp ``` In case you are unsure which binary generated the core-file, executing 'file' command on it will help. ``` [sh]# file ./build/install/cores/core.9341 ./build/install/cores/core.9341: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style, from '/build/install/sbin/glusterfsd -s slave26.cloud.gluster.org --volfile-id patchy' ``` As seen, the core file was generated by glusterfsd binary, and path to it is provided (/build/install/sbin/glusterfsd). Now, run the following command on the core: ``` gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/core.xxx' <target, say ./build/install/sbin/glusterd> In this case, gdb -ex 'set sysroot ./' -ex 'core-file ./build/install/cores/core.9341' ./build/install/sbin/glusterfsd ``` You can cross check if all shared libraries are available and loaded by using 'info sharedlibrary' command from inside gdb. Once verified, usual gdb commands based on requirement can be used to debug the core. `bt` or `backtrace` from gdb of core used in examples: ``` Core was generated by `/build/install/sbin/glusterfsd -s slave26.cloud.gluster.org --volfile-id patchy'. Program terminated with signal SIGABRT, Aborted. (gdb) bt at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/features/marker/src/marker-quota.c:2921 at /home/jenkins/root/workspace/rackspace-regression-2GB-triggered/xlators/features/marker/src/marker-quota.c:2789 (gdb) ```"
}
] |
{
"category": "Runtime",
"file_name": "analyzing-regression-cores.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
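To avoid re-running `file` and `gdb` by hand for every core, the workflow above can be wrapped in a short loop; a sketch, assuming the same extracted-tarball layout and that it is run from the directory containing `build/` (the script name and the parsing of the `file` output are illustrative assumptions):

```bash
#!/usr/bin/env bash
# inspect-cores.sh -- hypothetical helper: print a backtrace for every core in
# the extracted tarball. Run it from the directory that contains build/.
set -euo pipefail

for core in build/install/cores/core.*; do
    [ -e "$core" ] || continue
    # 'file' reports which binary produced the core, e.g. .../sbin/glusterfsd.
    binary=$(file "$core" | grep -o "from '[^ ]*" | cut -d"'" -f2 || true)
    echo "=== $core (generated by ${binary:-unknown}) ==="
    # Use the extracted tree as sysroot so the bundled shared libraries are found.
    gdb -batch -ex 'set sysroot ./' -ex "core-file $core" \
        -ex 'info sharedlibrary' -ex 'bt' ".${binary}" || true
done
```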