[ { "data": "In order to cut a new release, a few things must be done: auto-generate the CHANGELOG using the provided script bump version.go and docs/installation.md to the new release push a tag for the new release draft a bump version.go to the next release, appending `-dev` For the former, you can use the following script: $ echo -e \"$(./contrib/generate-changelog.sh v$LATEST_RELEASE)\\n\" | cat - CHANGELOG | sponge CHANGELOG You can find `sponge` in the `moreutils` package on Ubuntu. This script will generate all merged changes since $LATEST_RELEASE and append it to the top of the CHANGELOG. However, this will show up as \"HEAD\" at the top: $ ./contrib/generate-changelog.sh v$LATEST_RELEASE abc123 Some merged PR summary ... You'll need to manually modify \"HEAD\" to show up as the latest release. When drafting a new release, you must make sure that a `darwin` and `linux` build of confd have been uploaded. If you have cross-compile support, you can use the following command to generate those binaries: $ CONFDCROSSPLATFORMS=\"darwin/amd64 linux/amd64\" NEWRELEASE=\"x.y.z\" $ for platform in $CONFD_CROSSPLATFORMS; do \\ GOOS=${platform%/} GOARCH=${platform##/} ./build; \\ mv bin/confd bin/confd-$NEW_RELEASE-${platform%/}-${platform##/}; \\ done You can then drag and drop these binaries into the release draft." } ]
{ "category": "Runtime", "file_name": "release-checklist.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "Longhorn system supports volume snapshotting and stores the snapshot disk files on the local disk. However, it is impossible to check the data integrity of snapshots due to the lack of the checksums of the snapshots in current implementation. As a result, if the underlying storage bit rots, no way is available to detect the data corruption and repair the replicas. In the enhancement, the snapshot checksum is calculated after the snapshot is taken and is checked periodically. When a corrupted snapshot is detected, a replica rebuild is triggered to repair the snapshot. Automatic snapshot hashing Identify corrupted snapshot Trigger replica rebuild when a corrupted snapshot is detected The hashing/checking mechanism is applied to detached volumes Support concurrent snapshot hashing In current architecture, the instance-manager-r does not have a proxy, so the snapshot requests are directly sent to the replica processes sync-agent servers. Hence, the concurrent limit cannot achieved in the instance-manager-r internally. From the benchmarking result, the checksum calculation eats too much io resource and impacts the system performance a lot. We also dont know if the longhorn disks on a same physical disk or not. If they are on the same physical disk and the concurrent limit is larger than 1, the other workloads will be impacted significantly, and there might be a disaster for the entire system. Bit rot in storage is rare but real, and it can corrupt the data silently. Longhorn supports volume snapshotting and restoring a volume to a previous version. However, due to the lack of the checksums of the snapshots in current implementation, it is impossible to ensure the data integrity of the replicas/snapshots. Although, we provide a method () to identify the corrupted snapshots/replicas, the process is tedious and time-consuming for users. Users' operations will not be affected by snapshot hashing and checking. The system will consume computing and disk IO resources while hashing snapshot disk files. In the meantime, the CPU usages are 380m and 900m when computing the CRC64 (ISO) and SHA256 values, respectively. In the implementation, the CRC64 (ISO) is utilized for detecting corruption. The snapshot hashing benchmarking result is provided The read performance will be impacted as well, as summarized in the below table. Environment Host: AWS EC2 c5d.2xlarge CPU: Intel(R) Xeon(R) Platinum 8124M CPU @ 3.00GHz Memory: 16 GB Network: Up to 10Gbps Kubernetes: v1.24.4+rke2r1 Result Disk: 200 GiB NVMe SSD as the instance store 100 GiB snapshot with full random data Disk: 200 GiB throughput optimized HDD (st1) 30 GiB snapshot with full random data Add `snapshot hash` and `snapshot hash-status` commands `snaphost hash` issues a snapshot hashing request to engine. Usage: `longhorn --url ${engine-ip}:${engine-port} snapshot hash tcp://${replica-sync-agent-ip}:${replica-sync-agent-port} --snapshot-name ${name}` `snapshot hash-status` requests the snapshot hashing status from engine. Usage: `longhorn --url ${engine-ip}:${engine-port} snapshot hash-status tcp://${replica-sync-agent-ip}:${replica-sync-agent-port}` `snapshot hash-cancel` cancels the snapshot hashing task. Usage: `longhorn --url ${engine-ip}:${engine-port} snapshot hash-cancel tcp://${replica-sync-agent-ip}:${replica-sync-agent-port}` Add `SnapshotHash`, `SnapshotHashStatus` and `SnapshotHashCancel` methods and their request and response messages. `SnapshotHash` issues a snapshot hashing request to engine. 
`SnapshotHashStatus` requests the snapshot hashing status from engine. `SnapshotHashCancel` cancels the snapshot hashing task. Add `SnapshotHash`, `SnapshotHashStatus` and `SnapshotHashCancel` methods and their request and response" }, { "data": "`SnapshotHash` issues a snapshot hashing request to replica sync-agent. `SnapshotHashStatus` requests the snapshot hashing status from replica sync-agent. `SnapshotHashCancel` cancels the snapshot hashing task. snapshot-data-integrity Description: A global setting for enabling or disabling snapshot data integrity checking mode. Type: string Value: disabled: Disable snapshot disk file hashing and data integrity checking. enabled: Enables periodic snapshot disk file hashing and data integrity checking. To detect the filesystem-unaware corruption caused by bit rot or other issues in snapshot disk files, Longhorn system periodically hashes files and finds corrupted ones. Hence, the system performance will be impacted during the periodical checking. fast-check: Enable snapshot disk file hashing and fast data integrity checking. Longhorn system only hashes snapshot disk files if they are not hashed or if the modification time changed. In this mode, filesystem-unaware corruption cannot be detected, but the impact on system performance can be minimized. Default: `disabled` snapshot-data-integrity-immediate-checking-after-snapshot-creation Description: Hashing snapshot disk files impacts the performance of the system. The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot. Type: bool Default: `false` snapshot-data-integrity-cron-job Description: The setting is a set of five fields in a line, indicating when Longhorn checks the data integrity of snapshot disk files. Type: string (Cron job format) Default: `0 0 /7 *` (once a week) Volume Add `volume.spec.snapshotDataIntegrity` for setting the volume's snapshot data integrity checking mode. The value can be `ignored`, `disabled`, `enabled` or `fast-check`. `ignored` means the the volume's snapshot check is following the global setting `snapshot-data-integrity`. After upgrading Longhorn-system, the value is set to `ignored` for an existing volumes whose `volume.spec.snapshotDataIntegrity` is not set. For a newly created volume, the value is `ignored` by default. Snapshot Add `snapshot.status.checksum` for recording the snapshot `crc64(iso)` checksum. Node Add `node.status.snapshotPeriodicCheckStatus.state` for indicating current periodic check state. The value can be `idle` or `in-progress`. Add `node.status.snapshotPeriodicCheckStatus.lastCheckedAt` for recording the start timestamp of the last checksum checking. Node controller creates a snapshot monitor for hashing snapshot disk files as well as checking their data integrity. The monitor is consist of 1 goroutine `processSnapshotChangeEvent()`: Send a snapshot hashing/checking task to the `snapshotCheckTaskQueue` workqueue after receiving one snapshot `UPDATE` event. 1 goroutine `processPeriodicSnapshotCheck()`: Periodically create snapshot hashing/checking tasks. The period is determined by the global setting `snapshot-data-integrity-cron-job`. When the job is started, it populates engines' snapshots and sends snapshot hashing/checking tasks to the `snapshotCheckTaskQueue` channel. N task workers: Issue hashing requests to engines and detect corrupted replicas according to the results. Task workers fetch tasks from `snapshotCheckTaskQueue` and check if the snapshot disk file needs to be hashed. 
The rules are If one of the following conditions are met, do not hash the file Volume-head disk file, i.e. `Volume Head` in the following figure System-generated snapshot disk file, e.g. `ccb017f6` and `9a8d5c9c`. Issue snapshot hashing requests to their associated engines. Then, the checksum of the snapshot disk file is calculated individually in the replica process. To ensure only one in-progress calculation, the worker holds the per-node file lock (`/host/var/lib/longhorn/.lock/hash`) when calculating the checksum to avoid significant storage performance drop caused by the concurrent calculations. The worker waits until each snapshot disk file's checksum calculation has been" }, { "data": "It periodically polls the engine and checks the status during the waiting period. The worker gets the result once the calculation is completed. The result is like ``` map[string]string{ \"pvc-abc-r-001\": 0470c08fbc4dc702, \"pvc-abc-r-002\": 0470c08fbc4dc702, \"pvc-abc-r-003\": ce7c12a4d568fddf, } ``` The final checksum is determined by the majority of the checksums with `SilentCorrupted=false` from replicas. For instance, the final checksum of the result in 4. is `0470c08fbc4dc702`. When all checksums differ, the final checksum is unable to be determined If `snapshot.status.checksum` is empty Set all replicas to `ERR` If `snapshot.status.checksum` is already set Use the `snapshot.status.checksum` as the final checksum, and set the replicas that have mismatching checksums to `ERR` When the final checksum is successfully determined Assign the final checksum to `snapshot.status.checksum` Set the replica to `ERR` if its snapshot disk file's checksum is not equal to `snapshot.status.checksum` If the final checksum cannot be determined, the event of the corruption detected is also emitted. For example, Longhorn will not do any error handling and just emits a event when the silent corruption is found in a single-replica volume. Then, the replicas in `ERR` mode will be rebuilt and fixed. The event of the corruption detected is also emitted. When the replica process received the request of snapshot disk file hashing, the checking mode is determined by `volume.spec.snapshotDataIntegrity`. If the value is `ignored`, the checking mode follows the global setting `snapshot-data-integrity`. `fask-check` Flow Get `ctime` information of the snapshot disk file. Get the value of the extended attribute `user.longhorn-system.metadata` recording the checksum and `ctime` of the file in the last calculation. The value of `user.longhorn-system.metadata` is JSON formatted string and records `hashing method`, `checksum`, `ctime` and etc. Compare the `ctime` from 1. and 2. Recalculate the checksum if one of the conditions is met The two `ctime` are mismatched. 2's `ctime` is not existing. Ensure that the checksum is reliable by getting the `ctime` information of the disk file again after the checksum calculation. If it is matched with 1's `ctime`, update the extended attribute with the latest result. Instead, it indicates the file is changed by snapshot pruning, merging or other operations. Thus, recalculate the checksum. A maximum retries controls the recalculation. Return checksum or error to engine. enabled Because the silent data corruption in snapshot disk files can be caused by the host's storage device such as bit rot or somewhere within the storage stack. Filesystem cannot be aware of the corruption. To detect the corruption, the checksums of the disk files are always be recalculated and return back to engine. 
Silent corruption is detected when the disk file's `ctime` matches the `ctime` in the extended attribute, but the checksums do not match. The extended attribute will not be updated, and the `SilentCorrupted` of the hash status will be set to `true`. disabled Do nothing. Integration tests Test snapshot disk files hashing Compare the checksum recorded in `snapshot.status.checksum` and the checksum (calculated by a of each replica's snapshot disk file. Test snapshot disk files check Corrupt a snapshot disk file in one of the replicas. Then, check the corruption is detected by Longhorn, and the replica rebuilding should be triggered. Install Java (`apt install default-jre default-jdk`) Download jacksum (`wget https://github.com/jonelo/jacksum/releases/download/v3.4.0/jacksum-3.4.0.jar`) Calculate checksum by `java -jar jacksum-3.4.0.jar -a crc64_go-iso ${path-to-file}`" } ]
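The `fast-check` flow above comes down to comparing the snapshot file's current `ctime` with the `ctime` recorded in the `user.longhorn-system.metadata` extended attribute and only recomputing the CRC64 (ISO) checksum on a mismatch. The sketch below reproduces that decision outside of Longhorn with standard tools plus the jacksum invocation from the test section; the attribute layout and the output parsing are simplified assumptions, not Longhorn's actual implementation.

```shell
#!/bin/bash
# Illustrative fast-check for a single snapshot disk file (not Longhorn's real code path).
# Assumes getfattr/setfattr and jacksum-3.4.0.jar (see the test notes above) are available.
FILE="$1"

current_ctime=$(stat -c %Z "$FILE")
recorded=$(getfattr -n user.longhorn-system.metadata --only-values "$FILE" 2>/dev/null)
recorded_ctime=$(echo "$recorded" | sed -n 's/.*"ctime":"\([0-9]*\)".*/\1/p')

if [ -n "$recorded_ctime" ] && [ "$recorded_ctime" = "$current_ctime" ]; then
  # ctime unchanged since the last calculation: reuse the stored checksum.
  echo "$recorded" | sed -n 's/.*"checksum":"\([^"]*\)".*/\1/p'
else
  # ctime missing or changed: recompute CRC64 (ISO); the awk field assumes jacksum's default output.
  checksum=$(java -jar jacksum-3.4.0.jar -a crc64_go-iso "$FILE" | awk '{print $1}')
  # Persist the result only if the file was not modified while hashing (mirrors the reliability check).
  if [ "$(stat -c %Z "$FILE")" = "$current_ctime" ]; then
    setfattr -n user.longhorn-system.metadata \
      -v "{\"checksum\":\"$checksum\",\"ctime\":\"$current_ctime\"}" "$FILE"
  fi
  echo "$checksum"
fi
```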
{ "category": "Runtime", "file_name": "20220922-snapshot-checksum-and-bit-rot-detection.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Manual Testing Requirements for Velero\" layout: docs Although we have automated unit and end-to-end tests, there is still a need for Velero to undergo manual tests during a release. This document outlines the manual test operations that Velero needs to correctly perform in order to be considered ready for release. The following are test cases that are currently performed as part of a Velero release. Verify that Velero CRDs are compatible with the earliest and latest versions of Kubernetes that we support: Kubernetes v1.12 Kubernetes v1.20 Verify that Velero upgrade instructions work The \"Backup and Restore\" test cases below describe general backup and restore functionality that needs to run successfully on all the following providers that we maintain plugins for: AWS GCP Microsoft Azure VMware vSphere Verify that a backup and restore using Volume Snapshots can be performed Verify that a backup and restore using Restic can be performed Verify that a backup of a cluster workload can be restored in a new cluster Verify that an installation using the latest version can be used to restore from backups created with the last 3 versions. e.g. Install Velero 1.6 and use it to restore backups from Velero v1.3, v1.4, v1.5. The following are test cases that exercise Velero behaviour when interacting with multiple providers: Verify that a backup and restore to multiple BackupStorageLocations using the same provider with unique credentials can be performed Verify that a backup and restore to multiple BackupStorageLocations using different providers with unique credentials can be performed Verify that a backup and restore that includes volume snapshots using different providers for the snapshots and object storage can be performed e.g. perform a backup and restore using AWS for the VolumeSnapshotLocation and Azure Blob Storage as the BackupStorageLocation The following are test cases that are not currently performed as part of a Velero release but cases that we will want to cover with future releases. 
Verify that schedules create a backup upon creation and create Backup resources at the correct frequency Verify that deleted backups are successfully removed from object storage Verify that backups that have been removed from object storage can still be deleted with `velero delete backup` Verify that Volume Snapshots associated with a deleted backup are removed Verify that backups that exceed their TTL are deleted Verify that existing backups in object storage are synced to Velero Verify that restic repository maintenance is performed as the specified interval Verify that a pre backup hook provided via pod annotation is performed during backup Verify that a pre backup hook provided via Backup spec is performed during backup Verify that a post backup hook provided via pod annotation is performed during backup Verify that a post backup hook provided via Backup spec is performed during backup Verify that an InitContainer restore hook provided via pod annotation is performed during restore Verify that an InitContainer restore hook provided via Restore spec is performed during restore Verify that an InitContainer restore hook provided via Restore spec is performed during restore that includes restoring restic volumes Verify that an Exec restore hook provided via pod annotation is performed during restore Verify that an Exec restore hook provided via Restore spec is performed during restore Verify that backups and restores correctly apply the following resource filters: `--include-namespaces` `--include-resources` `--include-cluster-resources` `--exclude-namespaces` `--exclude-resources` `velero.io/exclude-from-backup=true` label" } ]
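The resource-filter cases at the end of the list above can be exercised manually with ordinary `velero` commands. A minimal sketch, assuming a disposable `nginx-example` namespace and a working default BackupStorageLocation (both assumptions for illustration):

```shell
# Mark one resource so it should be skipped by the label-based filter.
kubectl label deployment nginx -n nginx-example velero.io/exclude-from-backup=true

# Back up only that namespace, additionally excluding Secrets.
velero backup create filter-test \
  --include-namespaces nginx-example \
  --exclude-resources secrets

# Confirm the labeled deployment and the Secrets are absent, then clean up.
velero backup describe filter-test --details
velero delete backup filter-test
```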
{ "category": "Runtime", "file_name": "manual-testing.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This document is meant to contain all related information about implementation and usability. DPDK Cryptodev is an asynchronous crypto API that supports both Hardware and Software implementations (for more details refer to ). When there are enough Cryptodev resources for all workers, the node graph is reconfigured by adding and changing the default next nodes. The following nodes are added: dpdk-crypto-input : polling input node, dequeuing from crypto devices. dpdk-esp-encrypt : internal node. dpdk-esp-decrypt : internal node. dpdk-esp-encrypt-post : internal node. dpdk-esp-decrypt-post : internal node. Set new default next nodes: for esp encryption: esp-encrypt -> dpdk-esp-encrypt for esp decryption: esp-decrypt -> dpdk-esp-decrypt When building DPDK with VPP, Cryptodev support is always enabled. Additionally, on x86_64 platforms, DPDK is built with SW crypto support. VPP allocates crypto resources based on a best effort approach: first allocate Hardware crypto resources, then Software. if there are not enough crypto resources for all workers, the graph node is not modified and the default VPP IPsec implementation based in OpenSSL is used. The following message is displayed: 0: dpdkipsecinit: not enough Cryptodevs, default to OpenSSL IPsec To enable DPDK Cryptodev the user just need to provide cryptodevs in the startup.conf. Below is an example startup.conf, it is not meant to be a default configuration: ``` dpdk { dev 0000:81:00.0 dev 0000:81:00.1 dev 0000:85:01.0 dev 0000:85:01.1 vdev cryptoaesnimb0,socket_id=1 vdev cryptoaesnimb1,socket_id=1 } ``` In the above configuration: 0000:81:01.0 and 0000:81:01.1 are Ethernet device BDFs. 0000:85:01.0 and 0000:85:01.1 are Crypto device BDFs and they require the same driver binding as DPDK Ethernet devices but they do not support any extra configuration options. Two AESNI-MB Software (Virtual) Cryptodev PMDs are created in NUMA node 1. For further details refer to The following CLI command displays the Cryptodev/Worker mapping: show crypto device mapping [verbose] Building the DPDK Crypto Libraries requires the open source project nasm (The Netwide Assembler) to be installed. Recommended version of nasm is 2.12.02. Minimum supported version of nasm is 2.11.06. Use the following command to determine the current nasm version: nasm -v CentOS 7.3 and earlier and Fedora 21 and earlier use unsupported versions of nasm. Use the following set of commands to build a supported version: wget http://www.nasm.us/pub/nasm/releasebuilds/2.12.02/nasm-2.12.02.tar.bz2 tar -xjvf nasm-2.12.02.tar.bz2 cd nasm-2.12.02/ ./configure make sudo make install" } ]
{ "category": "Runtime", "file_name": "dpdk_crypto_ipsec_doc.md", "project_name": "FD.io", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- If you need help or think you have found a bug, please help us with your issue by entering the following information (otherwise you can delete this text): --> Output of `lsblk`: ``` ``` Output of `blkid`: ``` ``` Output of `kubectl version`: ``` ``` Output of `kubectl get lsn -o yaml`: ``` ``` Cloud Provider/Platform (AKS, GKE, Minikube etc.): ``` ```" } ]
{ "category": "Runtime", "file_name": "issue_template.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "slug: /comparison/juicefsvss3ql Similar to JuiceFS, S3QL is also an open source network file system driven by object storage and database. All data will be split into blocks and stored in object storage services such as Amazon S3, Backblaze B2, or OpenStack Swift, and the corresponding metadata will be stored in the database. Both support the standard POSIX file system interface through the FUSE module, so that massive cloud storage can be mounted locally and used like local storage. Both provide standard file system features: hard links, symbolic links, extended attributes, file permissions. Both support data compression and encryption, but the algorithms used are different. Both support metadata backup, S3QL automatically backs up SQLite databases to object storage, and JuiceFS automatically exports metadata to JSON format files every hour and backs them up to object storage for easy recovery and migration between various metadata engines. S3QL only supports SQLite. But JuiceFS supports more databases, such as Redis, TiKV, MySQL, PostgreSQL, and SQLite. S3QL has no distributed capability and does not support multi-host shared mounting. JuiceFS is a typical distributed file system. When using a network-based database, it supports multi-host distributed mount read and write. S3QL commits a data block to S3 when it has not been accessed for more than a few seconds. After a file closed or even fsynced, it is only guaranteed to stay in system memory, which may result in data loss if node fails. JuiceFS ensures high data durability, uploading all blocks synchronously when a file is closed. S3QL provides data deduplication. Only one copy of the same data is stored, which can reduce the storage usage, but it will also increase the performance overhead of the system. JuiceFS pays more attention to performance, and it is too expensive to perform deduplication on large-scale data, so this function is temporarily not provided. | | S3QL | JuiceFS | | : | :-- | :- | | Project status | Active development | Active development | | Metadata engine | SQLite | Redis, MySQL, SQLite, TiKV | | Storage engine | Object Storage, Local | Object Storage, WebDAV, Local | | Operating system | Unix-like | Linux, macOS, Windows | | Compression algorithm | LZMA, bzip2, gzip | LZ4, zstd | | Encryption algorithm | AES-256 | AES-GCM, RSA | | POSIX compatible | | | | Hard link | | | | Symbolic link | | | | Extended attributes | | | | Standard Unix permissions | | | | Data block | | | | Local cache | | | | Elastic storage | | | | Metadata backup | | | | Data deduplication | | | | Immutable trees | | | | Snapshots | | | | Share mount | | | | Hadoop SDK | | | | Kubernetes CSI Driver | | | | S3 gateway | | | | Language | Python | Go | | Open source license | GPLv3 | Apache License 2.0 | | Open source date | 2011 |" }, { "data": "| This part mainly evaluates the ease of installing and using the two products. During the installation process, we use Rocky Linux 8.4 operating system (kernel version 4.18.0-305.12.1.el84.x8664). S3QL is developed in Python and requires `python-devel` 3.7 or higher to be installed. In addition, at least the following dependencies must be satisfied: `fuse3-devel`, `gcc`, `pyfuse3`, `sqlite-devel`, `cryptography`, `defusedxml`, `apsw`, `dugong`. In addition, you need to pay special attention to Python's package dependencies and location issues. S3QL will install 12 binary programs in the system, and each program provides an independent function, as shown in the figure below. 
JuiceFS is developed in Go and can be used directly by downloading the pre-compiled binary file. The JuiceFS client has only one binary program `juicefs`. You can just copy it to any executable path of the system, for example: `/usr/local/bin`. Both S3QL and JuiceFS use database to store metadata. S3QL only supports SQLite databases, while JuiceFS supports databases such as Redis, TiKV, MySQL, MariaDB, PostgreSQL, and SQLite. Here we create a file system using S3QL and JuiceFS separately with locally created MinIO as object storage: S3QL uses `mkfs.s3ql` to create a file system: ```shell mkfs.s3ql --plain --backend-options no-ssl -L s3ql s3c://127.0.0.1:9000/s3ql/ ``` Mount a file system using `mount.s3ql`: ```shell mount.s3ql --compress none --backend-options no-ssl s3c://127.0.0.1:9000/s3ql/ mnt-s3ql ``` S3QL needs the access key of the object storage API to be interactively provided through the command line when creating and mounting a file system. JuiceFS uses the `format` subcommand to create a file system: ```shell juicefs format --storage minio \\ --bucket http://127.0.0.1:9000/myjfs \\ --access-key minioadmin \\ --secret-key minioadmin \\ sqlite3://myjfs.db \\ myjfs ``` Mount a file system using `mount` subcommand: ```shell sudo juicefs mount -d sqlite3://myjfs.db mnt-juicefs ``` JuiceFS only sets the object storage API access key when creating a file system, and the relevant information will be written into the metadata engine. After created, there is no need to repeatedly provide the object storage url, access key and other information. S3QL adopts the storage structure of object storage + SQLite. Storing data in blocks can not only improve the read and write efficiency of the file but also reduce the resource overhead when the file is modified. The advanced features such as snapshots, data deduplication, and data retention, as well as the default data compression and data encryption make S3QL very suitable for individuals to store files in cloud storage at a lower cost and with higher security. JuiceFS supports object storage, HDFS, WebDAV, and local disks as data storage engines, and supports popular databases such as Redis, TiKV, MySQL, MariaDB, PostgreSQL, and SQLite as metadata storage engines. It provides a standard POSIX file system interface through FUSE and a Java API, which can directly replace HDFS to provide storage for Hadoop. At the same time, it also provides , which can be used as the storage layer of Kubernetes for data persistent storage. JuiceFS is a file system designed for enterprise-level distributed data storage scenarios. It is widely used in various scenarios such as big data analysis, machine learning, container shared storage, data sharing, and backup." } ]
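The SQLite example above is effectively single-host. The shared-mount difference called out earlier can be shown by pointing JuiceFS at a network database instead of SQLite; a sketch assuming Redis and MinIO are reachable from every client at the placeholder address 192.168.1.10:

```shell
# Create the file system once, with metadata in Redis instead of SQLite.
juicefs format --storage minio \
    --bucket http://192.168.1.10:9000/myjfs \
    --access-key minioadmin \
    --secret-key minioadmin \
    redis://192.168.1.10:6379/1 \
    myjfs

# Any host that can reach Redis and MinIO can now mount the same file system.
sudo juicefs mount -d redis://192.168.1.10:6379/1 /mnt/myjfs
```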
{ "category": "Runtime", "file_name": "juicefs_vs_s3ql.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: REST API While users should use for data I/O operations, admins can interact with Alluxio through REST API for actions not supported by S3 API. For example, mount and unmount operations. For portability with other language, the ) is also accessible via an HTTP proxy in the form of a REST API. Alluxio's Python and Go clients rely on this REST API to talk to Alluxio. The is generated as part of the Alluxio build and accessible through `${ALLUXIO_HOME}/core/server/proxy/target/miredot/index.html`. The main difference between the REST API and the Alluxio Java API is in how streams are represented. While the Alluxio Java API can use in-memory streams, the REST API decouples the stream creation and access (see the `create` and `open` REST API methods and the `streams` resource endpoints for details). The HTTP proxy is a standalone server that can be started using `${ALLUXIOHOME}/bin/alluxio process start proxy` and stopped using `${ALLUXIOHOME}/bin/alluxio process stop proxy`. By default, the REST API is available on port 39999. There are performance implications of using the HTTP proxy. In particular, using the proxy requires an extra network hop to perform filesystem operations. For optimal performance, it is recommended to run the proxy server and an Alluxio worker on each compute node. Alluxio has a for interacting with Alluxio through its . The Python client exposes an API similar to the ). See the for detailed documentation about all available methods. See the on how to perform basic file system operations in Alluxio. The Python client requires an Alluxio proxy that exposes the to function. ```shell $ pip install alluxio ``` The following program includes examples of how to create directory, download, upload, check existence for, and list status for files in Alluxio. This example can also be found in the Python package's repository. 
```python import json import sys import alluxio from alluxio import option def colorize(code): def _(text, bold=False): c = code if bold: c = '1;%s' % c return '\\033[%sm%s\\033[0m' % (c, text) return _ green = colorize('32') def info(s): print(green(s)) def pretty_json(obj): return json.dumps(obj, indent=2) pytestroot_dir = '/py-test-dir' pytestnested_dir = '/py-test-dir/nested' pytest = pytestnesteddir + '/py-test' pytestrenamed = pytestroot_dir + '/py-test-renamed' client =" }, { "data": "39999) info(\"creating directory %s\" % pytestnested_dir) opt = option.CreateDirectory(recursive=True) client.createdirectory(pytestnesteddir, opt) info(\"done\") info(\"writing to %s\" % py_test) with client.open(py_test, 'w') as f: f.write('Alluxio works with Python!\\n') with open(sys.argv[0]) as this_file: f.write(this_file) info(\"done\") info(\"getting status of %s\" % py_test) stat = client.getstatus(pytest) print(pretty_json(stat.json())) info(\"done\") info(\"renaming %s to %s\" % (pytest, pytest_renamed)) client.rename(pytest, pytest_renamed) info(\"done\") info(\"getting status of %s\" % pytestrenamed) stat = client.getstatus(pytest_renamed) print(pretty_json(stat.json())) info(\"done\") info(\"reading %s\" % pytestrenamed) with client.open(pytestrenamed, 'r') as f: print(f.read()) info(\"done\") info(\"listing status of paths under /\") rootstats = client.liststatus('/') for stat in root_stats: print(pretty_json(stat.json())) info(\"done\") info(\"deleting %s\" % pytestroot_dir) opt = option.Delete(recursive=True) client.delete(pytestroot_dir, opt) info(\"done\") info(\"asserting that %s is deleted\" % pytestroot_dir) assert not client.exists(pytestroot_dir) info(\"done\") ``` Alluxio has a for interacting with Alluxio through its . The Go client exposes an API similar to the . See the for detailed documentation about all available methods. The godoc includes examples of how to download, upload, check existence for, and list status for files in Alluxio. The Go client requires an Alluxio proxy that exposes the to function. ```shell $ go get -d github.com/Alluxio/alluxio-go ``` If there is no Alluxio proxy running locally, replace \"localhost\" below with a hostname of a proxy. ```go package main import ( \"fmt\" \"io/ioutil\" \"log\" \"strings\" \"time\" alluxio \"github.com/Alluxio/alluxio-go\" \"github.com/Alluxio/alluxio-go/option\" ) func write(fs *alluxio.Client, path, s string) error { id, err := fs.CreateFile(path, &option.CreateFile{}) if err != nil { return err } defer fs.Close(id) _, err = fs.Write(id, strings.NewReader(s)) return err } func read(fs *alluxio.Client, path string) (string, error) { id, err := fs.OpenFile(path, &option.OpenFile{}) if err != nil { return \"\", err } defer fs.Close(id) r, err := fs.Read(id) if err != nil { return \"\", err } defer r.Close() content, err := ioutil.ReadAll(r) if err != nil { return \"\", err } return string(content), err } func main() { fs := alluxio.NewClient(\"localhost\", 39999, 10*time.Second) path := \"/test_path\" exists, err := fs.Exists(path, &option.Exists{}) if err != nil { log.Fatal(err) } if exists { if err := fs.Delete(path, &option.Delete{}); err != nil { log.Fatal(err) } } if err := write(fs, path, \"Success\"); err != nil { log.Fatal(err) } content, err := read(fs, path) if err != nil { log.Fatal(err) } fmt.Printf(\"Result: %v\\n\", content) } ```" } ]
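Both clients above require the standalone HTTP proxy. A short sketch of bringing one up next to a worker and confirming that port 39999 is reachable before running the Python or Go examples; the process commands and port come from this page, while the connectivity check itself is only illustrative.

```shell
# Start the REST proxy on this node (repeat on each compute node for best performance).
${ALLUXIO_HOME}/bin/alluxio process start proxy

# The REST API listens on port 39999 by default; check it is reachable before
# pointing alluxio.Client(..., 39999) or alluxio.NewClient(..., 39999, ...) at it.
nc -z localhost 39999 && echo "proxy is up"

# Stop the proxy when finished.
${ALLUXIO_HOME}/bin/alluxio process stop proxy
```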
{ "category": "Runtime", "file_name": "REST-API.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "The NodePortLocal feature is graduated from Alpha to Beta. Support for proxying all Service traffic by Antrea Proxy, including NodePort, LoadBalancer, and ClusterIP traffic. Therefore, running kube-proxy is no longer required. ( , [@hongliangl] [@lzhecheng]) The feature works for both Linux and Windows The feature is experimental and therefore disabled by default. Use the `antreaProxy.proxyAll` configuration parameter for the Antrea Agent to enable it If kube-proxy is removed, the `kubeAPIServerOverride` configuration parameter for the Antrea Agent must be set to access kube-apiserver directly Add , [@gran-vmv] [@annakhm]) Add new IPPool API to define ranges of IP addresses which can be used as Pod IPs; the IPs in the IPPools must be in the same \"underlay\" subnet as the Node IP A Pod's IP will be allocated from the IPPool specified by the `ipam.antrea.io/ippools` annotation of the Pod's Namespace if there is one When the feature is enabled, the Node's network interface will be connected to the OVS bridge, in order to forward cross-Node traffic of AntreaIPAM Pods through the underlay network Refer to the for more information Add , [@ksamoray]) Refer to the for instructions on how to configure it Support for configurable transport interface CIDRs for Pod traffic. (, [@Jexf]) Use the `transportInterfaceCIDRs` configuration parameter for the Antrea Agent to choose an interface by network CIDRs Add UDP support for NodePortLocal. (, [@chauhanshubham]) Add the `nodePortLocal.enable` configuration parameter for the Antrea Agent to enable NodePortLocal. (, [@antoninbas]) Add more visibility metrics to report the connection status of the Antrea Agent to the Flow Aggregator. (, [@zyiou]) Add the `antreaProxy.skipServices` configuration parameter for the Antrea Agent to specify Services which should be ignored by AntreaProxy. (, [@luolanzone]) A typical use case is setting `antreaProxy.skipServices` to `[\"kube-system/kube-dns\"]` to make [NodeLocal DNSCache] work when AntreaProxy is enabled Add support for `ToServices` in the rules of Antrea-native policies to allow matching traffic intended for Services. (, [@GraysonWu]) Add the `egress.exceptCIDRs` configuration parameter for the Antrea Agent, to specify IP destinations for which SNAT should not be performed on outgoing traffic. (, [@leonstack]) Add user documentation for , [@jianjuns]) Add user documentation for , [@jianjuns]) Remove chmod for OVSDB file from start_ovs, as the permissions are set correctly by OVS" }, { "data": "(, [@antoninbas]) Reduce memory usage of antctl when collecting supportbundle. (, [@tnqn]) Do not perform SNAT for egress traffic to Kubernetes Node IPs. (, [@leonstack]) Send gratuitous ARP for EgressIP via the transport interface, as opposed to the interface with Node IP (if they are different). (, [@Jexf]) Ignore hostNetwork Pods selected by Egress, as they are not supported. (, [@Jexf]) Avoid duplicate processing of Egress. (, [@Jexf]) Ignore the IPs of kube-ipvs0 for Egress as they cannot be used for SNAT. (, [@Jexf]) Change flow exporter export expiry mechanism to priority queue based, to reduce CPU usage and memory footprint. (, [@heanlan]) Make Pod labels optional in the flow records. By default, they will not be included in the flow records. Use the `recordContents.podLabels` configuration parameter for the Flow Aggregator to include them. 
(, [@yanjunz97]) Wait for AntreaProxy to be ready before accessing any K8s Service if `antreaProxy.proxyAll` is enabled, to avoid connection issues on Agent startup. (, [@tnqn]) Update , [@hongliangl]) Remove offensive words from scripts and documentation. (, [@xiaoxiaobaba]) Use readable names for OpenFlow tables. (, [@wenyingd]) Improve the OpenAPI schema for CRDs to validate the `matchExpressions` field. (, [@wenqiq]) Fail fast if the source Pod for non-live-traffic Traceflow is invalid. (, [@gran-vmv]) Use the `RenewIPConfig` parameter to indicate whether to renew ipconfig on the host for `Clean-AntreaNetwork.ps1`. It defaults to false. (, [@wenyingd]) [Windows] Add Windows task delay up to 30s to improve job resiliency of `Prepare-AntreaAgent.ps1`, to avoid a failure in initialization after Windows startup. (, [@perithompson]) [Windows] Fix nil pointer error when antrea-agent updates OpenFlow priorities of Antrea-native policies without Service ports. (, [@wenyingd]) Fix panic in the Antrea Controller when it processes ClusterGroups that are used by multiple ClusterNetworkPolicies. (, [@tnqn]) Fix an issue with NodePortLocal when a given Pod port needs to be exposed for both TCP and UDP. (, [@antoninbas]) Fix handling of the \"Reject\" action of Antrea-native policies when the traffic is intended for Services. (, [@GraysonWu]) Fix Agent crash when removing the existing NetNat on Windows Nodes. (, [@wenyingd]) [Windows] Fix container network interface MTU configuration error when using containerd as the runtime on Windows. (, [@wenyingd]) [Windows] Fix path to Prepare-AntreaAgent.ps1 in Windows docs. (, [@perithompson]) [Windows] Fix NetNeighbor Powershell error handling. (, [@lzhecheng]) [Windows]" } ]
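The AntreaIPAM feature above is driven by a Namespace annotation. A hedged example of attaching a Namespace to a pre-created IPPool; the pool name `prod-underlay-pool` and the `demo-underlay` Namespace are placeholders, while the annotation key is the one listed in the release note.

```shell
# Assign Pods in this Namespace addresses from an existing Antrea IPPool.
kubectl create namespace demo-underlay
kubectl annotate namespace demo-underlay ipam.antrea.io/ippools=prod-underlay-pool

# Pods created here should now get IPs from the pool's underlay ranges.
kubectl -n demo-underlay run test-pod --image=nginx
kubectl -n demo-underlay get pod test-pod -o wide
```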
{ "category": "Runtime", "file_name": "CHANGELOG-1.4.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "`Access` is the access module, mainly responsible for data upload, download, deletion, etc. Access configuration is based on the , and the following configuration instructions mainly apply to private configuration for Access. | Configuration Item | Description | Required | |:|:-|:| | Public Configuration Items | Refer to the section | Yes | | service_register | | Yes, can be used for service discovery in Access after configuration | | limit | | No, single-machine rate limiting configuration | | stream | Main Access configuration item | Yes, refer to the following second-level configuration options | | Configuration Item | Description | Required | |:--|:|:| | idc | IDC for the service | Yes | | maxblobsize | File segment blob size | No, default is 4MB | | mempoolsize_classes | Memory control for file read/write | No | | encoder_concurrency | EC encoding/decoding concurrency | No, default is 1000 | | encoder_enableverify | Whether to enable EC encoding/decoding verification | No, default is enabled | | minreadshards_x | Number of shards to download concurrently for EC reading | No, default is 1. The larger the number, the higher the fault tolerance, but also the higher the" }, { "data": "| | shardcrcdisabled | Whether to verify the data CRC of the blobnode | No, default is enabled | | diskpunishinterval_s | Interval for temporarily marking a bad disk | No, default is 60s | | servicepunishinterval_s | Interval for temporarily marking a bad service | No, default is 60s | | blobnode_config | Blobnode RPC configuration | Refer to the RPC configuration section | | proxy_config | Proxy RPC configuration | Refer to the RPC configuration section | | cluster_config | Main cluster configuration | Yes, refer to the following third-level configuration options | | Configuration Item | Description | Required | |:-|:--|:| | region | Region information | Yes, do not change after configuration | | region_magic | CRC field used for encoding file location | Yes, do not change after configuration. If changed, all locations will be invalidated. | | consulagentaddr | Consul address for cluster information | Yes | | clusterreloadsecs | Interval for synchronizing cluster information | No, default is 3s | | servicereloadsecs | Interval for synchronizing service information | No, default is 3s | | clustermgrclientconfig | Clustermgr RPC configuration | Refer to the RPC configuration example | ::: tip Note Support for `health_port` began with version v3.2.1. 
::: consul_addr: Consul address for Access service registration service_ip: Access service bind IP node: Hostname health_port: Health check port range for Consul ```json { \"consul_addr\": \"127.0.0.1:8500\", \"service_ip\": \"127.0.0.1\", \"node\": \"access-node1\", \"health_port\": [9700, 9799] } ``` reader_mbps: Single-machine download bandwidth (MB/s) writer_mbps: Single-machine upload bandwidth (MB/s) name_rps: RPS limit for each interface ```json { \"name_rps\": { \"alloc\": 0, \"put\": 100, \"putat\": 0, \"get\": 0, \"delete\": 0, \"sign\": 0 }, \"reader_mbps\": 1000, \"writer_mbps\": 200 } ``` key: Memory allocation ladder value: Limit on the number of items, 0 means no limit (Access currently does not enable quantity limits) ```json { \"2048\": 0, \"65536\": 0, \"524288\": 0, \"2097152\": 10240, \"8389632\": 4096, \"16777216\": 1024, \"33554432\": 512, \"67108864\": 64 } ``` ```json { \"max_procs\": 0, \"shutdowntimeouts\": 30, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/access.log\", \"maxsize\": 1024, \"maxage\": 7, \"maxbackups\": 7 }, \"bind_addr\": \":9500\", \"service_register\": { \"consul_addr\": \"127.0.0.1:8500\", \"service_ip\": \"127.0.0.1\", \"node\": \"access-node1\", \"health_port\": [9700, 9799] }, \"limit\": { \"name_rps\": { \"put\": 100 }, \"reader_mbps\": 1000, \"writer_mbps\": 200 }, \"stream\": { \"idc\": \"idc\", \"maxblobsize\": 4194304, \"mempoolsize_classes\": { \"2048\": 0, \"65536\": 0, \"524288\": 0, \"2097152\": 10240, \"8389632\": 4096, \"16777216\": 1024, \"33554432\": 512, \"67108864\": 64 }, \"encoder_concurrency\": 1000, \"encoder_enableverify\": true, \"minreadshards_x\": 1, \"shardcrcdisabled\": false, \"cluster_config\": { \"region\": \"region\", \"region_magic\": \"region\", \"clusterreloadsecs\": 3, \"servicereloadsecs\": 3, \"clustermgrclientconfig\": { \"clienttimeoutms\": 3000, \"transport_config\": { \"auth\": { \"enable_auth\": true, \"secret\": \"secret key\" }, \"dialtimeoutms\": 2000 } }, \"consulagentaddr\": \"127.0.0.1:8500\" }, \"diskpunishinterval_s\": 60, \"servicepunishinterval_s\": 60, \"blobnode_config\": { \"clienttimeoutms\": 10000 }, \"proxy_config\": { \"clienttimeoutms\": 5000 } } } ```" } ]
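Once Access starts, it registers with the Consul agent given in `service_register` and exposes a health check on a port picked from the `health_port` range. A quick way to confirm the registration, assuming the Consul HTTP API is reachable at the `consul_addr` above (the exact service name shown depends on the deployment):

```shell
# List services registered with the local Consul agent and look for the access node.
curl -s http://127.0.0.1:8500/v1/agent/services | python3 -m json.tool | grep -i -A 3 access

# Inspect the health checks Consul is running against the registered health_port.
curl -s http://127.0.0.1:8500/v1/agent/checks | python3 -m json.tool
```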
{ "category": "Runtime", "file_name": "access.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "slug: /comparison/juicefsvscephfs description: Ceph is a unified system that provides object storage, block storage and file storage. This article compares the similarities and differences between JuiceFS and Ceph. This document offers a comprehensive comparison between JuiceFS and CephFS. You will learn their similarities and differences in their system architectures and features. Both are highly reliable, high-performance, resilient distributed file systems with good POSIX compatibility, suitable for various scenarios. Both JuiceFS and CephFS employ an architecture that separates data and metadata, but they differ greatly in implementations. CephFS is a complete and independent system used mainly for private cloud deployments. Through CephFS, all file metadata and data are persistently stored in Ceph's distributed object store (RADOS). Metadata Metadata Server (MDS): Stateless and theoretically horizontally scalable. There are mature primary-secondary mechanisms, while concerns about performance and stability still exist in multi-primary deployments. Production environments typically adopt one-primary-multiple-secondary or multi-primary static isolation. Persistent: Independent RADOS storage pools, usually used with SSDs or higher-performance hardware storage. Data: Stored in one or more RADOS storage pools, supporting different configurations through Layout, such as chunk size (default 4 MiB) and redundancy (multi-copy, EC). Client: Supports kernel client (`kcephfs`), user-state client (`ceph-fuse`) and libcephfs-based SDKs for C++, Python, etc.; recently the community has also provided a Windows client (`ceph-dokan`). VFS object for Samba and an FSAL module for NFS-Ganesha are also available in the ecosystem. JuiceFS provides a libjfs library, a FUSE client application, Java SDK, etc. It supports various metadata engines and object storages, and can be deployed in public, private, or hybrid cloud environments. Metadata: Supports , including: Redis and various variants of the Redis-compatible protocol (transaction supports are required) SQL family: MySQL, PostgreSQL, SQLite, etc. Distributed K/V storage: TiKV, FoundationDB, etcd A self-developed engine: a JuiceFS fully managed service used on the public cloud. Data: Supports over 30 types of on the public cloud and can also be used with MinIO, Ceph RADOS, Ceph RGW, etc. Clients: Supports Unix user-state mounting, Windows mounting, Java SDK with full HDFS semantic compatibility, , and a built-in S3" }, { "data": "| Comparison basis | CephFS | JuiceFS | | - | | | | File chunking<sup> [1]</sup> | | | | Metadata transactions | | | | Strong consistency | | | | Kubernetes CSI Driver | | | | Hadoop-compatible | | | | Data compression<sup> [2]</sup> | | | | Data encryption<sup> [3]</sup> | | | | Snapshot | | | | Client data caching | | | | Hadoop data locality | | | | S3-compatible | | | | Quota | Directory level quota | Directory level quota | | Languages | C++ | Go | | License | LGPLv2.1 & LGPLv3 | Apache License 2.0 | CephFS splits files by (default 4MiB). Each chunk corresponds to a RADOS object. In contrast, JuiceFS splits files into 64MiB chunks and it further divides each chunk into logical slices during writing according to the actual situation. These slices are then split into logical blocks when writing to the object store, with each block corresponding to an object in the object storage. When handling overwrites, CephFS modifies corresponding objects directly, which is a complicated process. 
Especially, when the redundancy policy is EC or the data compression is enabled, part of the object content needs to be read first, modified in memory, and then written. This leads to great performance overhead. In comparison, JuiceFS handles overwrites by writing the updated data as new objects and modifying the metadata at the same time, which greatly improves the performance. Any redundant data generated during the process will go to garbage collection asynchronously. Strictly speaking, CephFS itself does not provide data compression but relies on the BlueStore compression on the RADOS layer. JuiceFS, on the other hand, has already compressed data once before uploading a block to the object storage to reduce the capacity cost in the object storage. In other words, if you use JuiceFS to interact with RADOS, you compress a block both before and after it enters RADOS, twice in total. Also, as mentioned in File chunking, to guarantee overwrite performance, CephFS usually does not enable the BlueStore compression. On network transport layer, Ceph encrypts data by using Messenger v2, while on data storage layer, the data encryption is done at OSD creation, which is similar to data compression. JuiceFS encrypts objects before uploading and decrypts them after downloading. This is completely transparent to the object storage." } ]
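As the storage-engine notes above indicate, JuiceFS can sit directly on top of Ceph via RADOS or the S3-compatible RGW gateway. A sketch of the RGW route using only standard S3-style options; the endpoint, bucket, keys and Redis address are placeholders:

```shell
# Create a JuiceFS volume backed by Ceph RGW (S3-compatible), with metadata in Redis.
juicefs format --storage s3 \
    --bucket http://rgw.example.com:7480/jfs-data \
    --access-key <rgw-access-key> \
    --secret-key <rgw-secret-key> \
    redis://192.168.1.10:6379/1 \
    jfs-on-ceph

# Mount it on any host that can reach both the RGW endpoint and Redis.
sudo juicefs mount -d redis://192.168.1.10:6379/1 /mnt/jfs-on-ceph
```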
{ "category": "Runtime", "file_name": "juicefs_vs_cephfs.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Snapshot prune is a new snapshot-purge-related operation that helps reclaim some space from the snapshot file that is already marked as Removed but cannot be completely deleted. This kind of snapshot is typically the one directly stands behind the volume head. https://github.com/longhorn/longhorn/issues/3613 Snapshots could store historical data for a volume. This means extra space will be required, and the volume actual size can be much greater than the spec size. To avoid existing volumes using too much space, users can clean up snapshots by marking the snapshots as Removed then waiting for Longhorn purging them. But there is one issue: By design, the snapshot that directly stands behind the volume head, as known as the latest snapshot, cannot be purged by Longhorn after being marked as Removed. The space consumed by it cannot be released any matter if users care about historical data or not. Hence, Longhorn should do something special to reclaim space \"wasted\" by this kind of snapshot. Volume trim/shrink: https://github.com/longhorn/longhorn/issues/836 Deleting a snapshot consists of 2 steps, marking the snapshot as Removed then waiting for Longhorn purging it. And the snapshot purge consists of 3 steps: copy data from the newer snapshot to the old snapshot, replace the new snapshot with the updated old snapshot, remove the new snapshot. This operation is named \"coalesce\" or \"fold\" in Longhorn. As mentioned before, it cannot be applied to the latest snapshot file since the newer one of it is actually the volume head, which cannot be modified by others except for users/workloads. In other words, we cannot use this operation to handle the latest snapshot. ``` +--+ +--+ +--+ | Snapshot A | | Snapshot B | | Volume head | +--+ +--+ +--+ ^ | Marked Snapshot A (the old snapshot) as Removed +--+ +--+ +--+ | Snapshot A | | Snapshot B | | Volume head | +--+ +--+ +--+ ^ | ++ Copy data from the Snapshot B (the newer snapshot) to Snapshot A ++ +--+ | Rename snapshot A to snapshot B | -- | Volume head | ++ +--+ ^ | Delete Snapshot B then rename snapshot A to Snapshot B ``` Longhorn needs to somehow reclaim the space from the latest snapshot without directly deleting the file itself or modifying the volume head. Notice that Longhorn can still read the volume head as well as modify the snapshot once the snapshot itself is marked as Removed. This means we can detect which part of the latest snapshot is overwritten by the volume head. Then punching holes in the overlapping parts of the snapshot would reclaim the space. Here, we call this new operation as" }, { "data": "``` +--+ ++ | Snapshot A | | Volume head | +--+ ++ ^ | ++ Snapshot A is the latest snapshot of the volume. Longhorn will scan the volume head. For each data chunk of the volume head, Longhorn will punch a hole at the same position for snapshot A. ``` Punching holes means modifying the data of the snapshot. Therefore, once the snapshot is marked as Removed and the cleanup happens, Longhorn should not allow users to revert to the snapshot anymore. This is the prerequisite of this enhancement. This snapshot revert issue is handled in https://github.com/longhorn/longhorn/issues/3748. Before the enhancement, users need to create a new snapshot, then remove the target snapshot so that Longhorn will coalesce the target snapshot with the newly created one. But the issue is, the volume head would be filled up later, and users may loop into redoing the operation to reclaim the space occupied by the historical data of the snapshot. 
After the enhancement, as long as there is no newer snapshot created, users can directly reclaim the space from the latest snapshot by simply deleting the snapshot via UI. Assume that there are heavy writing tasks for a volume and the only snapshot is filled up with the historical data (this snapshot may be created by rebuilding or backup). The actual size of the volume is typical twice the spec size. Now users just need to remove the only/latest snapshot via UI, Longhorn would reclaim almost all space used by the snapshot, which is the spec size here. Then as long as users don't create a new snapshot, the actual size of this volume is the space used by the volume head only, which is up to the spec size in total. N/A When the snapshot purge is triggered, replicas will identify if the snapshot being removed is the latest snapshot by checking one child of it is the volume head. If YES, they will start the snapshot pruning operation: Before pruning, replicas will make sure the apparent size of the snapshot is the same as that of the volume head. If No, we will truncate/expand the snapshot first. During pruning, replicas need to iterate the volume head fiemap. Then as long as there is a data chunk found in the volume head file, they will blindly punch a hole at the same position of the snapshot file. If there are multiple snapshots including the latest one being removed simultaneously, we need to make sure the pruning is done only after all the other snapshots have done coalescing and deletion. Allow users to remove the snapshots that are already marked as Removed. And in this case, the frontend just needs to send a `SnapshotPurge` call to the backend. Test this snapshot prune operations with snapshot coalesce, snapshot revert, and volume expansion. N/A" } ]
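The reclaim described above works by punching holes into the sparse snapshot file wherever the volume head already holds newer data. The mechanism itself can be demonstrated on any sparse file with standard tools; this is only an illustration, not Longhorn's actual code path.

```shell
# Create a 1 GiB sparse file and fill its first 256 MiB with data.
truncate -s 1G snapshot.img
dd if=/dev/urandom of=snapshot.img bs=1M count=256 conv=notrunc

# Apparent size vs. blocks actually allocated on disk.
ls -lh snapshot.img   # reports 1.0G
du -h snapshot.img    # reports roughly 256M

# "Prune" the first 128 MiB, as if the volume head had overwritten that range.
fallocate --punch-hole --offset 0 --length $((128 * 1024 * 1024)) snapshot.img

du -h snapshot.img    # drops to roughly 128M; the hole's blocks return to the filesystem
```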
{ "category": "Runtime", "file_name": "20220317-snapshot-prune.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Backup Hooks\" layout: docs Velero supports executing commands in containers in pods during a backup. When performing a backup, you can specify one or more commands to execute in a container in a pod when that pod is being backed up. The commands can be configured to run before any custom action processing (\"pre\" hooks), or after all custom actions have been completed and any additional items specified by custom action have been backed up (\"post\" hooks). Note that hooks are not executed within a shell on the containers. There are two ways to specify hooks: annotations on the pod itself, and in the Backup spec. You can use the following annotations on a pod to make Velero execute a hook when backing up the pod: `pre.hook.backup.velero.io/container` The container where the command should be executed. Defaults to the first container in the pod. Optional. `pre.hook.backup.velero.io/command` The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `. Optional. `pre.hook.backup.velero.io/on-error` What to do if the command returns a non-zero exit code. Defaults is `Fail`. Valid values are Fail and Continue. Optional. `pre.hook.backup.velero.io/timeout` How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults is 30s. Optional. `post.hook.backup.velero.io/container` The container where the command should be executed. Default is the first container in the pod. Optional. `post.hook.backup.velero.io/command` The command to execute. This command is not executed within a shell by default. If a shell is needed to run your command, include a shell command, like `/bin/sh`, that is supported by the container at the beginning of your command. If you need multiple arguments, specify the command as a JSON array, such as `. Optional." }, { "data": "What to do if the command returns a non-zero exit code. Defaults is `Fail`. Valid values are Fail and Continue. Optional. `post.hook.backup.velero.io/timeout` How long to wait for the command to execute. The hook is considered in error if the command exceeds the timeout. Defaults is 30s. Optional. Please see the documentation on the for how to specify hooks in the Backup spec. This examples walks you through using both pre and post hooks for freezing a file system. Freezing the file system is useful to ensure that all pending disk I/O operations have completed prior to taking a snapshot. This example uses . Follow the to setup this example. The Velero serves as an example of adding the pre and post hook annotations directly to your declarative deployment. Below is an example of what updating an object in place might look like. ```shell kubectl annotate pod -n nginx-example -l app=nginx \\ pre.hook.backup.velero.io/command='[\"/sbin/fsfreeze\", \"--freeze\", \"/var/log/nginx\"]' \\ pre.hook.backup.velero.io/container=fsfreeze \\ post.hook.backup.velero.io/command='[\"/sbin/fsfreeze\", \"--unfreeze\", \"/var/log/nginx\"]' \\ post.hook.backup.velero.io/container=fsfreeze ``` Now test the pre and post hooks by creating a backup. You can use the Velero logs to verify that the pre and post hooks are running and exiting without error. 
```shell velero backup create nginx-hook-test velero backup get nginx-hook-test velero backup logs nginx-hook-test | grep hookCommand ``` To use multiple commands, wrap your target command in a shell and separate them with `;`, `&&`, or other shell conditional constructs. ```shell pre.hook.backup.velero.io/command='[\"/bin/bash\", \"-c\", \"echo hello > hello.txt && echo goodbye > goodbye.txt\"]' ``` You are able to use environment variables from your pods in your pre and post hook commands by including a shell command before using the environment variable. For example, `MYSQLROOTPASSWORD` is an environment variable defined in pod called `mysql`. To use `MYSQLROOTPASSWORD` in your pre-hook, you'd include a shell, like `/bin/sh`, before calling your environment variable: ``` pre: exec: container: mysql command: /bin/sh -c mysql --password=$MYSQLROOTPASSWORD -e \"FLUSH TABLES WITH READ LOCK\" onError: Fail ``` Note that the container must support the shell command you use." } ]
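The multiple-command and environment-variable techniques above can be combined in one annotation-based hook. A hedged variant of the earlier `kubectl annotate` example; the `mysql` pod/container, the label selector and the `MYSQL_ROOT_PASSWORD` variable are assumptions carried over from the text, not a prescribed setup.

```shell
# Wrap the hook in a shell so several commands and an env var can be used together.
kubectl annotate pod -n default -l app=mysql \
  pre.hook.backup.velero.io/container=mysql \
  pre.hook.backup.velero.io/command='["/bin/sh", "-c", "mysql --password=$MYSQL_ROOT_PASSWORD -e \"FLUSH TABLES WITH READ LOCK\" && touch /tmp/backup-in-progress"]' \
  pre.hook.backup.velero.io/on-error=Fail \
  pre.hook.backup.velero.io/timeout=90s
```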
{ "category": "Runtime", "file_name": "backup-hooks.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "() This section describes how an operator can maintain a Manta deployment: upgrading components; using Manta's alarming system, \"madtom\" dashboard, and service logs; and some general inspection/debugging tasks. <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - - - - - - - - - - - - - - - - <!-- END doctoc generated TOC please keep comment here to allow auto update --> There are two distinct methods of updating instances: you may deploy additional instances, or you may reprovision existing instances. With the first method (new instances), additional instances are provisioned using a newer image. This approach allows you to add additional capacity without disrupting extant instances, and may prove useful when an operator needs to validate a new version of a service before adding it to the fleet. With the second method (reprovision), this update will swap one image out for a newer image, while preserving any data in the instance's delegated dataset. Any data or customizations in the instance's main dataset, i.e. zones/UUID, will be lost. Services which have persistent state (manatee, mako, redis) must use this method to avoid discarding their data. This update moves the service offline for 15-30 seconds. If the image onto which an image is reprovisioned doesn't work, the instance can be reprovisioned back to its original image. This procedure uses \"manta-adm\" to do the upgrade, which uses the reprovisioning method for all zones. Figure out which image you want to install. You can list available images by running updates-imgadm: headnode$ channel=$(sdcadm channel get) headnode$ updates-imgadm list -C $channel name=mantav2-webapi Replace mantav2-webapi with some other image name, or leave it off to see all images. Typically you'll want the most recent one. Note the uuid of the image in the first column. Figure out which zones you want to reprovision. In the headnode GZ of a given datacenter, you can enumerate the zones and versions for a given manta\\_role using: headnode$ manta-adm show webapi You'll want to note the VM UUIDs for the instances you want to update. Run this in each datacenter: Download updated images. The supported approach is to re-run the `manta-init` command that you used when initially deploying Manta inside the manta-deployment zone. For us-east, use: $ manta-init -e [email protected] -s production -c 10 Do not run `manta-init` concurrently in multiple datacenters. Inside the Manta deployment zone, generate a configuration file representing the current deployment state: $ manta-adm show -s -j > config.json Modify the file as desired. See \"manta-adm configuration\" above for details on the format. In most cases, you just need to change the image uuid for the service that you're updating. You can get the latest image for a service with the following command: $ sdc-sapi \"/services?name=[service name]&include_master=true\" | \\ json -Ha params.image_uuid Pass the updated file to `manta-adm update`: $ manta-adm update config.json Do not run `manta-adm update` concurrently in multiple datacenters. Update the alarm configuration as needed. See \"Amon Alarm Updates\" below for details. Since the Manta deployment zone is actually a Triton component, use `sdcadm` to update it: headnode$ sdcadm self-update --latest headnode$ sdcadm update manta Manta's Amon probes are managed using the `manta-adm alarm` subcommand. 
The set of configured probes and probe groups needs to be updated whenever the set of probes delivered with `manta-adm` itself changes" }, { "data": "if new probes were added, or bugs were fixed in existing probes) or when new components are deployed or old components are removed. In all cases, it's strongly recommended to address and close any open alarms. If the update process removes a probe group, any alarms associated with that probe group will remain, but without much information about the underlying problem. To update the set of probes and probe groups deployed, use: headnode$ sdc-login manta manta$ manta-adm alarm config update This command is idempotent. Triton zones and agents are upgraded exactly as they are in non-Manta installs. Platform updates for compute nodes (including the headnode) are exactly the same as for any other Triton compute node. Use the `sdcadm platform` command to download platform images and assign them to compute nodes. Platform updates require rebooting CNs. Note that: Rebooting the system hosting the ZooKeeper leader will trigger a new leader election. This should have minimal impact on service. Rebooting the primary peer in any Manatee shard will trigger a Manatee takeover. Write service will be lost for a minute or two while this happens. Other than the above constraints, you may reboot any number of nodes within a single AZ at the same time, since Manta survives loss of an entire AZ. If you reboot more than one CN from different AZs at the same time, you may lose availability of some services or objects. The certificates used for the front door TLS terminators can be updated. Verify your PEM file. Your PEM file should contain the private key and the certificate chain, including your leaf certificate. It should be in the format: --BEGIN RSA PRIVATE KEY-- [Base64 Encoded Private Key] --END RSA PRIVATE KEY-- --BEGIN CERTIFICATE-- [Base64 Encoded Certificate] --END CERTIFICATE-- --BEGIN DH PARAMETERS-- [Base64 Encoded dhparams] --END DH PARAMETERS-- You may need to include the certificate chain in the PEM file. The chain should be a series of CERTIFICATE sections, each section having been signed by the next CERTIFICATE. In other words, the PEM file should be ordered by the PRIVATE KEY, the leaf certificate, zero or more intermediate certificates, the root certificate, and then DH parameters as the very last section. To generate the DH parameters section, use the command: $ openssl dhparam <bits> >> ssl_cert.pem Replace `<bits>` with at least the same number of bits as are in your RSA private key (if you are unsure, 2048 is probably safe). Take a backup of your current certificate, just in case anything goes wrong. headnode$ sdc-sapi /services?name=loadbalancer | \\ json -Ha metadata.SSL_CERTIFICATE \\ >/var/tmp/mantasslcert_backup.pem headnode$ mv /var/tmp/mantasslcert_backup.pem \\ /zones/$(vmadm lookup alias=~manta)/root/var/tmp/. Copy your certificate to the Manta zone after getting your certificate on your headnode: headnode$ mv /var/tmp/ssl_cert.pem \\ /zones/$(vmadm lookup alias=~manta)/root/var/tmp/. Replace your certificate in the loadbalancer application. Log into the manta zone: headnode$ sdc-login manta manta$ /opt/smartdc/manta-deployment/cmd/manta-replace-cert.js \\ /var/tmp/ssl_cert.pem HAProxy should automatically pick up the new certificate. 
To confirm: headnode$ manta-oneach -s loadbalancer 'cat /opt/smartdc/muppet/etc/ssl.pem` headnode$ manta-oneach -s loadbalancer \\ 'echo QUIT | openssl s_client -host 127.0.0.1 -port 443 -showcerts' The contacts that are notified for new alarm events are configured using SAPI metadata on the \"manta\" service within the \"sdc\" application (not the \"manta\" application). This metadata identifies one or more contacts already configured within Amon. See the Amon docs for how to configure these contacts. For historical reasons, high-severity notifications are delivered to the list of contacts called \"MANTAMON\\_ALERT\". Other notifications are delivered to the list of contacts called" }, { "data": "Here is an example update to send \"alert\" level notifications to both an email address and an XMPP endpoint and have \"info\" level notifications sent just to XMPP: headnode$ echo '{ \"metadata\": { \"MANTAMON_ALERT\": [ { \"contact\": \"email\" }, { \"contact\": \"mantaxmpp\", \"last\": true } ], \"MANTAMON_INFO\": [ { \"contact\": \"mantaxmpp\", \"last\": true } ] } }' | sapiadm update $(sdc-sapi /services?name=manta | json -Ha uuid) Note that the last object of the list must have the `\"last\": true` key/value. You will need to update the alarm configuration for this change to take effect. See \"Amon Alarm Updates\". Manta integrates with Amon, the Triton alarming and monitoring system, to notify operators when something is wrong with a Manta deployment. It's recommended to review Amon basics in the [Amon documentation](https://github.com/TritonDataCenter/sdc-amon/blob/master/docs/index.md). The `manta-adm` tool ships with configuration files that specify Amon probes and probe groups, referred to elsewhere as the \"Amon configuration\" for Manta. This configuration specifies which checks to run, on what period, how failures should be processed to open alarms (which generate notifications), and how these alarms should be organized. Manta includes built-in checks for events like components dumping core, logging serious errors, and other kinds of known issues. Typically, the only step that operators need to take to manage the Amon configuration is to run: manta-adm alarm config update after initial setup and after other deployment operations. See \"Amon Alarm Updates\" for more information. With alarms configured, you can use the `manta-adm alarm show` subcommand and related subcommands to view information about open alarms. When a problem is resolved, you can use `manta-adm alarm close` to close it. You can also disable notifications for alarms using `manta-adm alarm notify` (e.g., when you do not need more notifications about a known issue). See the `manta-adm` manual page for more information. Madtom is a dashboard that presents the state of all Manta components in a region-wide deployment. You can access the Madtom dashboard by pointing a web browser to port 80 at the IP address of the \"madtom\" zone that's deployed with Manta. For the JPC deployment, this is usually accessed through an ssh tunnel to the corresponding headnode. Historical logs for all components are uploaded to Manta hourly at `/poseidon/stor/logs/COMPONENT/YYYY/MM/DD/HH`. This works by rotating them hourly into /var/log/manta/upload inside each zone, and then uploading the files in that directory to Manta. The most commonly searched logs are the muskie logs, since these contain logs for all requests to the public API. 
There's one object in each `/poseidon/stor/logs/muskie/YYYY/MM/DD/HH/` directory per muskie server instance. If you need to look at the live logs (because you're debugging a problem within the hour that it happened, or because Manta is currently down), see \"real-time logs\" below. Either way, if you have the x-server-name from a request, that will tell you which muskie instance handled the request so that you don't need to search all of them. If Manta is not up, then the first priority is generally to get Manta up, and you'll have to use the real-time logs to do that. Unfortunately, logging is not standardized across all Manta components. There are three common patterns: Services log to their SMF log file (usually in the format, though startup scripts tend to log with bash(1) xtrace output). Services log to a service-specific log file in bunyan format (e.g., `/var/log/$service.log`). Services log to an application-specific log file (e.g., haproxy," }, { "data": "Most custom services use the bunyan format. The \"bunyan\" tool is installed in /usr/bin to view these logs. You can also [snoop logs of running services in more detail using bunyan's built-in DTrace probes](http://www.joyent.com/blog/node-js-in-production-runtime-log-snooping). If you find yourself needing to look at the current log file for a component (i.e., can't wait for the next hourly upload into Manta), here's a reference for the service's that don't use the SMF log file: | Service | Path | Format | | - | -- | | | muskie | /var/svc/log/\\muskie\\.log | bunyan | | moray | /var/log/moray.log | bunyan | | mbackup<br />(the log file uploader itself) | /var/log/mbackup.log | bash xtrace | | haproxy | /var/log/haproxy.log | haproxy-specific | | zookeeper | /var/log/zookeeper/zookeeper.log | zookeeper-specific | | redis | /var/log/redis/redis.log | redis-specific | Most of the remaining components log in bunyan format to their service log file (including binder, config-agent, electric-moray, manatee-sitter, and others). Manta provides a coarse request throttle intended to be used when the system is under extreme load and is suffering availability problems that cannot be isolated to a single Manta component. When the throttle is enabled and muskie has reached its configured capacity, the throttle will cause muskie to drop new requests and notify clients their requests have been throttled by sending a response with HTTP status code 503. The throttle is disabled by default. Inbound request activity and throttle statistics can be observed by running $ /opt/smartdc/muskie/bin/throttlestat.d in a webapi zone in which the muskie processes have the throttle enabled. The script will output rows of a table with the following columns every second: `THROTTLED-PER-SEC` - The number of requests throttled in the last second. `AVG-LATENCY-MS` - The average number of milliseconds that requests which completed in the last second spent in the request queue. `MAX-QLEN` - The maximum number of queued requests observed in the last second. `MAX-RUNNING` - The maximum number of concurrent dispatched request handlers observed in the last second. If the throttle is not enabled, this script will print an error message indicating a missing DTrace provider. 
The message will look like this: dtrace: failed to compile script ./throttlestat.d: line 16: probe description muskie-throttle*:::queue_enter does not match any probes If the script is run when the throttle is enabled, and it continues running as the throttle is disabled, it will subsequently appear to indicate no request activity. This is neither an error nor a sign of service availability lapse. It just indicates the fact that the DTrace probes being used by the script are not firing. Care should be taken to ensure that this script is used to collect metrics only when the throttle is enabled. The throttle is \"coarse\" because its capacity is a function of all requests to the system, regardless of their originating user, IP address, or API operation. Any type of request can be throttled. The request throttle is implemented on a per-process level, with each \"muskie\" process in a \"webapi\" zone having its own throttle. The throttle exposes three tunables: | Tunable Name | Default Value | Description | | -- | - | - | | `MUSKIETHROTTLEENABLED` | false | whether the throttle enabled | | `MUSKIETHROTTLECONCURRENCY` | 50 | number of allowed concurrent requests | | `MUSKIETHROTTLEQUEUE_TOLERANCE` | 25 | number of allowed queued requests | These tunables can be modified with commands of the following form: $ sapiadm update $(sdc-sapi /services?name=webapi | json -Ha uuid) \\" }, { "data": "Muskies must be restarted to use the new configuration: $ manta-oneach -s webapi 'svcadm restart \"muskie-\"' Requests are throttled when the muskie process has exhausted all slots available for concurrent requests and reached its queue threshold. In general, higher concurrency values will result in a busier muskie process that handles more requests at once. Lower concurrency values will limit the number of requests the muskie will handle at once. Lower concurrency values should be set to limit the CPU load on Manta. Higher queue tolerance values will decrease the likelihood of requests being rejected when Manta is under high load but may increase the average latency of queued requests. This latency increase can be the result of longer queues inducing longer delays before dispatch. Lower queue tolerance values will make requests more likely to be throttled quickly under load. Lower queue tolerance values should be used when high latency is not acceptable and the application is likely to retry on receipt of a 503. Low queue tolerance values are also desirable if the zone is under memory pressure. The Manta multipart upload API (MPU) stores the part directories of an account's ongoing multipart uploads under the directory tree `/$MANTA_USER/uploads`. Within the top-level directory, part directories are stored in subdirectories based on some number of the first characters of the multipart upload's UUID. The number of characters used to split multipart uploads is referred to as the \"prefix length\". For example, in a Manta deployment for which the prefix length is set to 3, a multipart upload would have an upload directory that looks like this: /$MANTA_USER/uploads/f00/f00e51d2-7e47-4732-8edf-eb871296b343 Note that the parent directory of the parts directory, also referred to as its \"prefix directory\", has 3 characters, the same as the prefix length. 
The following multipart upload would have been created in a Manta deployment with a prefix length of 1: /$MANTA_USER/uploads/d/d77feb78-cd7f-481f-a6c7-f653c80c7331 The prefix length is configurable in SAPI, represented as the `MUSKIEMPUPREFIXDIRLEN` SAPI variable under the \"webapi\" service. For example, to change the prefix length of a deployment to 2, you could run: $ sapiadm update $(sdc-sapi /services?name=webapi | json -Ha uuid) \\ metadata.\"MUSKIEMPUPREFIXDIRLEN\"=2 As with other configuration changes to the \"webapi\" service, you must restart the \"webapi\" zones to see the configuration change. Multipart uploads created with a different prefix length within the same Manta deployment will continue to work after the prefix length is changed. The prefix length dictates the number of subdirectories allowed in the top-level `/$MANTA_USER/uploads` directory. Because the number of entries in a Manta directory should be limited, this affects how many ongoing multipart uploads are available for a given account. Increasing the prefix length also increases the number of requests required to list all multipart uploads under a given account. Consequently, a smaller prefix length allows for fewer ongoing multipart uploads for a single account, but less work to list them all; larger prefix directories allow more ongoing multipart uploads, but require more work to list them. For example, in a Manta deployment with a prefix length of 3, a given account may have up to 4096 prefix directories, allowing for about 4 billion ongoing multipart uploads for a given account. Listing all of the multipart uploads ongoing requires a maximum of 4096 directory listings" }, { "data": "Compare this to a deployment with a prefix length of 1, which has a maximum of 256 prefix directories and allows for about 256 million multipart uploads, but only up to 256 directory listings are required to list all multipart uploads under an account. There are two options for webapi to obtain storage node information - \"picker\" and \"storinfo\". Both of them query the moray shard that maintains the storage node `statvfs` data, keep a local cache and periodically refresh it, and select storage nodes for object write requests. Storinfo is an optional service which is separate from webapi. If storinfo is not deployed (because rebalancer and buckets API components are not in use), you should configure webapi to use the local picker function by setting the `WEBAPIUSEPICKER` SAPI variable to `true` under the \"webapi\" service: $ sdc-sapi /services/$(sdc-sapi /services?name=webapi | json -Ha uuid) \\ -X PUT -d '{\"action\": \"update\", \"metadata\": {\"WEBAPIUSEPICKER\": true}}' All Triton compute nodes have at least two unique identifiers: the server UUID, provided by the system firmware and used by Triton the hostname, provided by operators The global zone's \"admin\" network IP address should also be unique. The `manta-adm cn` command shows information about the Triton compute nodes in the current datacenter on which Manta components are deployed. For example, to fetch the server uuid and global zone IP for RM08218, use: HOST SERVER UUID ADMIN IP RM08218 00000000-0000-0000-0000-00259094c058 10.10.0.34 See the `manta-adm(1)` manual page for details. Manta Storage CNs have additional identifiers known as storage IDs. The one or more manta storage IDs are used for object metadata. 
There's one storage ID per storage zone deployed on a server, so there can be more than one storage ID per CN, although this is usually only the case in development environments. You can generate a table that maps hostnames to storage IDs for the current datacenter: HOST STORAGE IDS RM08213 2.stor.us-east.joyent.us RM08211 1.stor.us-east.joyent.us RM08216 3.stor.us-east.joyent.us RM08219 4.stor.us-east.joyent.us Note that the column name is \"storage\\_ids\" (with a trailing \"s\") since there may be more than one. See the `manta-adm(1)` manual page for details. To find a particular manta zone, log into one of the headnodes and run `manta-adm show` to list all Manta-related zones in the current datacenter. You can list all of the zones in all datacenters with `manta-adm show -a`. `manta-adm show` supports a number of other features, including summary output, filtering by service name, grouping results by compute node, and printing various other properties about a zone. For more information and examples, see the `manta-adm(1)` manual page. | To access ... | do this... | | | -- | | a&nbsp;headnode | ssh directly to the headnode. | | a&nbsp;compute&nbsp;node | ssh to the headnode for that datacenter, then ssh to the CN's GZ ip<br />(see \"manta-adm cn\" above) | | a&nbsp;compute&nbsp;zone | ssh to the headnode for that datacenter, then use `manta-login ZONETYPE` or `manta-login ZONENAME`, where ZONENAME can actually be any unique part of the zone's name. | | a&nbsp;compute&nbsp;node's&nbsp;console | ssh to the headnode for that datacenter, find the compute node's service processor IP, then:<br/>`ipmitool -I lanplus -H SERVICEPROCESSIP -U ADMIN -P ADMIN sol activate`<br />To exit the console, press enter, then `~.`, prefixed with as many \"~\"s as you have ssh sessions. (If ssh'd to the headnode, use enter, then `~~.`) If you don't use the prefix `~`s, you'll kill your ssh connection too. | | a&nbsp;headnode's&nbsp;console | ssh to the headnode of one of the other datacenters, then \"sdc-login\" to the \"manta\" zone. From there, use the above \"ipmitool\" command in the usual way with the headnode's SP" }, { "data": "| This section explains how to locate persisted object data throughout Manta. There are only two places where data is persisted: In `postgres` zones: Object metadata, in a Postgres database. In `storage` zones: Object contents, as a file on disk The \"mlocate\" tool takes a Manta object name (like \"/dap/stor/cmd.tgz\"), figures out which shard it's stored on, and prints out the internal metadata for it. 
You run this inside any \"muskie\" (webapi) zone: [root@204ac483 (webapi) ~]$ /opt/smartdc/muskie/bin/mlocate /dap/stor/cmd.tgz | json { \"dirname\": \"/bc8cd146-fecb-11e1-bd8a-bb6f54b49808/stor\", \"key\": \"/bc8cd146-fecb-11e1-bd8a-bb6f54b49808/stor/cmd.tgz\", \"headers\": {}, \"mtime\": 1396994173363, \"name\": \"cmd.tgz\", \"creator\": \"bc8cd146-fecb-11e1-bd8a-bb6f54b49808\", \"owner\": \"bc8cd146-fecb-11e1-bd8a-bb6f54b49808\", \"type\": \"object\", \"contentLength\": 17062152, \"contentMD5\": \"vVRjo74mJquDRsoW2HJM/g==\", \"contentType\": \"application/octet-stream\", \"etag\": \"cb1036e4-3b57-c118-cd46-961f6ebe12d0\", \"objectId\": \"cb1036e4-3b57-c118-cd46-961f6ebe12d0\", \"sharks\": [ { \"datacenter\": \"staging-2\", \"mantastorageid\": \"2.stor.staging.joyent.us\" }, { \"datacenter\": \"staging-1\", \"mantastorageid\": \"1.stor.staging.joyent.us\" } ], \"_moray\": \"tcp://electric-moray.staging.joyent.us:2020\", \"_node\": { \"pnode\": \"tcp://3.moray.staging.joyent.us:2020\", \"vnode\": 7336153, \"data\": 1 } } All of these implementation details are subject to change, but for reference, these are the pieces you need to locate the object: \"sharks\": indicate which backend storage servers contain copies of this object \"creator\": uuid of the user who created the object \"objectId\": uuid for this object in the system. Note that an objectid is allocated when an object is first created. You won't need the following fields to locate the object, but they may be useful to know about: \"key\": the internal name of this object (same as the public name, but the login is replaced with the user's uuid) \"owner\": uuid of the user being billed for this link. \"\\_node\".\"pnode\": indicates which metadata shard stores information about this object. \"type\": indicates whether something refers to an object or directory \"contentLength\", \"contentMD5\", \"contentType\": see corresponding HTTP headers Now that you know what sharks the object is on you can pull the object contents directly from the ops box by creating a URL with the format: http://[mantastorageid]/[creator]/[objectId] You can use \"curl\" to fetch this from the \"ops\" zone, for example. More commonly, you'll want to look at the actual file on disk. For that, first map the \"manta\\storage\\id\" to a specific storage zone, using a command like this to print out the full mapping: STORAGE ID DATACENTER ZONENAME 1.stor.staging.joyent.us staging-1 f7954cad-7e23-434f-be98-f077ca7bc4c0 2.stor.staging.joyent.us staging-2 12fa9eea-ba7a-4d55-abd9-d32c64ae1965 3.stor.staging.joyent.us staging-3 6dbfb615-b1ac-4f9a-8006-2cb45b87e4cb Then use \"manta-login\" to log into the corresponding storage zone: [Connected to zone '12fa9eea-ba7a-4d55-abd9-d32c64ae1965' pts/2] [root@12fa9eea (storage) ~]$ The object's data will be stored at /manta/$creator\\_uuid/$objectid: [root@12fa9eea (storage) ~]$ ls -l /manta/bc8cd146-fecb-11e1-bd8a-bb6f54b49808/cb1036e4-3b57-c118-cd46-961f6ebe12d0 -rw-r--r-- 1 nobody nobody 17062152 Apr 8 2014 /manta/bc8cd146-fecb-11e1-bd8a-bb6f54b49808/cb1036e4-3b57-c118-cd46-961f6ebe12d0 There will be a copy of the object at that path in each of the `sharks` listed in the metadata record. When debugging their own programs, in effort to rule out Manta as the cause (or when they suspect Manta as the cause), users sometimes ask if there was a Manta outage at a given time. In rare cases when there was a major Manta-wide outage at that time, the answer to the user's question may be \"yes\". 
More often, though, there may have been very transient issues that went unnoticed or that only affected some of that user's requests. First, it's important to understand what an \"outage\" actually means. Manta provides two basic services: an HTTP-based request API and a compute service managed by the same API. As a result, an \"outage\" usually translates to elevated error rates from either the HTTP API or the compute" }, { "data": "To check for a major event affecting the API, locate the muskie logs for the hour in question (see \"Logs\" above) and look for elevated server-side error rates. An easy first cut is to count requests and group by HTTP status code. In HTTP, codes under 300 are normal. Codes from 400 to 500 (including 400, not 500) are generally client problems. Codes over 500 indicate server problems. Some number of 500 errors don't necessarily indicate a problem with the service -- it could be a bug or a transient problem -- but if the number is high (particularly compared to normal hours), then that may indicate a serious Manta issue at the time in question. If the number of 500-level requests is not particularly high, then that may indicate a problem specific to this user or even just a few of their requests. See \"Debugging API failures\" below. Users often report problems with their own programs acting as Manta clients (possibly using our Node client library). This may manifest as an error message from the program or an error reported in the program's log. Users may ask simply: \"was there a Manta outage at such-and-such time?\" To answer that question, see \"was there an outage?\" above. If you've established that there wasn't an outage, here's how you can get more information about what happened. For most problems that are caused by the Manta service itself (as opposed to client-side problems), there will be information in the Manta server logs that will help explain the root cause. The best way to locate the corresponding log entries is for clients to log the request id of failed requests and for users to provide the request ids when seeking support. The request id is reported by a server HTTP header, and it's normally logged by the Node client library. While it's possible to search for log entries by timestamp, account name, Manta path, or status code, not only is it much slower, but it's also not sufficient for many client applications that end up performing a lot of similar operations on similar paths for the same user around the same time (e.g., creating a directory tree). For requests within the last hour, it's very helpful to get the x-server-name header as well. To find the logs you need, see \"Logs\" above. Once you've found the logs (either in Manta or inside the muskie zones, depending on whether you're looking at a historical or very recent request): If you have a request id and server name, pick the log for that server name and grep for the request id. If you have a request id, grep all the logs for that hour for that request id. If you don't have either of these, you can try grepping for the user's account uuid (which you can retrieve by searching adminui for the user's account login) or other relevant parameters of the request. This process is specific to whatever information you have. The logs are just plaintext JSON. You should find exactly one request matching a given request id. The log entry for the request itself should include a lot of details about the request and response, including internal error messages. 
For 500-level errors (server errors), there will be additional log entries for all the debug-level logging for that request. Obviously, most of this requires Manta to be operating. If it's not, that's generally the top priority, and you can use the local log files on muskie servers to debug that. Please see the docs included in the mahi repository." } ]
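Tying the request-id workflow above together, here is a rough sketch of the two searches. The request id, the date path, and the log locations are placeholders to adapt to your deployment, and the `mfind`/`mget` commands assume the node-manta CLI tools are available in the ops zone:

```
# recent request: search the live log in the muskie (webapi) zone named by x-server-name
webapi$ grep "$REQUEST_ID" /var/svc/log/*muskie*.log | bunyan

# older request: search the hourly uploads in Manta from the ops zone
ops$ mfind -t o /poseidon/stor/logs/muskie/2024/01/05/03 | \
      while read obj; do mget -q "$obj" | grep "$REQUEST_ID"; done | bunyan
```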
{ "category": "Runtime", "file_name": "maintenance.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "This directory contains examples on how to use together with a to add NATS to the list of bucket notifications endpoints. To test your setup: Install and start a nats-server. Subscribe to the NATS server using a , choosing the topic to be 'Bucket_Notification' (as defined in the ) ```bash nats-sub \"Bucket_Notification\" ``` . Alternatively, configure the script to point to an existing NATS broker by editing the following part in the script to match the parameters of your existing nats server. ``` nats_host = '{host}', nats_port = {port}, ``` Upload the : ```bash radosgw-admin script put --infile=nats_adapter.lua --context=postRequest ``` Add the packages used in the script: ```bash radosgw-admin script-package add --package=nats --allow-compilation radosgw-admin script-package add --package=lunajson --allow-compilation radosgw-admin script-package add --package='lua-cjson 2.1.0-1' --allow-compilation ``` Restart radosgw. create a bucket: ``` s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" mb s3://mybucket ``` upload a file to the bucket and make sure that the nats server received the notification ``` s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" put hello.txt s3://mybucket ``` Expected output: ``` Received on [Bucket_Notification]: {\"Records\":[ { \"eventVersion\":\"2.1\", \"eventSource\":\"ceph:s3\", \"awsRegion\":\"default\", \"eventTime\":\"2019-11-22T13:47:35.124724Z\", \"eventName\":\"ObjectCreated:Put\", \"userIdentity\":{ \"principalId\":\"tester\" }, \"requestParameters\":{ \"sourceIPAddress\":\"\" }, \"responseElements\":{ \"x-amz-request-id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595\", \"x-amz-id-2\":\"14d2-zone1-zonegroup1\" }, \"s3\":{ \"s3SchemaVersion\":\"1.0\", \"configurationId\":\"mynotif1\", \"bucket\":{ \"name\":\"mybucket\", \"ownerIdentity\":{ \"principalId\":\"tester\" }, \"arn\":\"arn:aws:s3:us-east-1::mybucket1\", \"id\":\"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38\" }, \"object\":{ \"key\":\"hello.txt\", \"size\":\"1024\", \"eTag\":\"\", \"versionId\":\"\", \"sequencer\": \"F7E6D75DC742D108\", \"metadata\":[], \"tags\":[] } }, \"eventId\":\"\", \"opaqueData\":\"[email protected]\" } ]} ``` Lua 5.3 (or higher) Luarocks" } ]
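If you want to post-process the notification payload shown above (for example in a test script), a small `jq` filter can pull out the interesting fields. This assumes `jq` is installed and that the payload was saved to a file named `notification.json`:

```bash
jq -r '.Records[] | "\(.eventName) \(.s3.bucket.name)/\(.s3.object.key)"' notification.json
```

For the example payload above this prints `ObjectCreated:Put mybucket/hello.txt`.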
{ "category": "Runtime", "file_name": "nats_adapter.md", "project_name": "Ceph", "subcategory": "Cloud Native Storage" }
[ { "data": "Rook is composed of a golang project that can be built directly with standard `golang` tools, plus storage software (like Ceph) that is built inside containers. We currently support these platforms for building: Linux: most modern distributions should work although most testing has been done on Ubuntu Mac: macOS 10.6+ is supported We recommend 2+ cores, 8+ GB of memory and 128GB of SSD. Inside your build environment (Docker for Mac or a VM), 2+ GB memory is also recommended. The following tools are needed on the host: curl docker (1.12+) or Docker for Mac (17+) git make golang helm You can build the Rook binaries and all container images for the host platform by simply running the command below. Building in parallel with the `-j` option is recommended. ```console make -j4 build ``` Run `make help` for more options. Every PR and every merge to master triggers the CI process in GitHub Actions. On every commit to a PR and to master, the CI will build, run unit tests, and run integration tests. If the build is for master or a release, the build will also be published to . You can also run the build for all supported platforms: ```console make -j4 build.all ```" } ]
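A quick, optional sanity check of the host tooling before starting a build — the version floors are the ones listed above, and this loop is only a convenience sketch:

```console
for tool in curl docker git make go helm; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
docker --version   # expect 1.12+ (or Docker for Mac 17+)
go version
```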
{ "category": "Runtime", "file_name": "INSTALL.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "A non-root user can execute containerd by using . For example can be used for setting up a user namespace (along with mount namespace and optionally network namespace). Please refer to RootlessKit documentation for further information. See also https://rootlesscontaine.rs/ . The easiest way is to use `containerd-rootless-setuptool.sh` included in . ```console $ containerd-rootless-setuptool.sh install $ nerdctl run -d --restart=always --name nginx -p 8080:80 nginx:alpine ``` See for further information. <details> <summary>Click here to show the \"hard way\"</summary> <p> ```console $ rootlesskit --net=slirp4netns --copy-up=/etc --copy-up=/run \\ --state-dir=/run/user/1001/rootlesskit-containerd \\ sh -c \"rm -f /run/containerd; exec containerd -c config.toml\" ``` `--net=slirp4netns --copy-up=/etc` is only required when you want to unshare network namespaces. See for further information about the network drivers. `--copy-up=/DIR` mounts a writable tmpfs on `/DIR` with symbolic links to the files under the `/DIR` on the parent namespace so that the user can add/remove files under `/DIR` in the mount namespace. `--copy-up=/etc` and `--copy-up=/run` are needed on typical setup. Depending on the containerd plugin configuration, you may also need to add more `--copy-up` options. `rm -f /run/containerd` removes the \"copied-up\" symbolic link to `/run/containerd` on the parent namespace (if exists), which cannot be accessed by non-root users. The actual `/run/containerd` directory on the host is not affected. `--state-dir` is set to a random directory under `/tmp` if unset. RootlessKit writes the PID to a file named `child_pid` under this directory. You need to provide `config.toml` with your own path configuration. e.g. ```toml version = 2 root = \"/home/penguin/.local/share/containerd\" state = \"/run/user/1001/containerd\" [grpc] address = \"/run/user/1001/containerd/containerd.sock\" ``` A client program such as `ctr` also needs to be executed inside the daemon namespaces. ```console $ nsenter -U --preserve-credentials -m -n -t $(cat /run/user/1001/rootlesskit-containerd/child_pid) $ export CONTAINERD_ADDRESS=/run/user/1001/containerd/containerd.sock $ export CONTAINERD_SNAPSHOTTER=native $ ctr images pull docker.io/library/ubuntu:latest $ ctr run -t --rm --fifo-dir /tmp/foo-fifo --cgroup \"\" docker.io/library/ubuntu:latest foo ``` The `overlayfs` snapshotter does not work inside user namespaces before kernel 5.11, except on Ubuntu and Debian kernels. However, can be used instead if running kernel >= 4.18. Enabling cgroup requires cgroup v2 and systemd, e.g. `ctr run --cgroup \"user.slice:foo:bar\" --runc-systemd-cgroup ...` . See also . </p> </details>" } ]
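`containerd-rootless-setuptool.sh install` generates a systemd user unit for you. If you are wiring up the "hard way" yourself, a minimal user unit might look roughly like the sketch below — the rootlesskit path, flags, and config location are assumptions to adjust to your setup:

```ini
# ~/.config/systemd/user/containerd-rootless.service (illustrative sketch)
[Unit]
Description=containerd (rootless)

[Service]
# %t expands to XDG_RUNTIME_DIR, %h to the user's home directory
ExecStart=/usr/local/bin/rootlesskit --net=slirp4netns --copy-up=/etc --copy-up=/run \
  --state-dir=%t/rootlesskit-containerd \
  sh -c "rm -f /run/containerd; exec containerd -c %h/.config/containerd/config.toml"
Delegate=yes
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user daemon-reload && systemctl --user enable --now containerd-rootless`, and run `loginctl enable-linger $(whoami)` if the daemon should keep running after you log out.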
{ "category": "Runtime", "file_name": "rootless.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "To run all tests, including the Go tests, run from repository root: sudo -E make check To run only the integration tests, run from the test directory: sudo -E ./main.sh Name | Default | Description :-- | : | :- `INCUS_BACKEND` | dir | What backend to test against (btrfs, ceph, dir, lvm, zfs, or random) `INCUSCEPHCLUSTER` | ceph | The name of the ceph cluster to create osd pools in `INCUSCEPHCEPHFS` | \"\" | Enables the CephFS tests using the specified cephfs filesystem for `cephfs` pools `INCUSCEPHCEPHOBJECT_RADOSGW` | \"\" | Enables the Ceph Object tests using the specified radosgw HTTP endpoint for `cephobject` pools `INCUS_CONCURRENT` | 0 | Run concurrency tests, very CPU intensive `INCUS_DEBUG` | 0 | Run incusd, incus and the shell in debug mode (very verbose) `INCUS_INSPECT` | 0 | Don't teardown the test environment on failure `INCUS_LOGS ` | \"\" | Path to a directory to copy all the Incus logs to `INCUS_OFFLINE` | 0 | Skip anything that requires network access `INCUSSKIPTESTS` | \"\" | Space-delimited list of test names to skip `INCUSTESTIMAGE` | \"\" (busybox test image) | Path to an image tarball to use instead of the default busybox image `INCUS_TMPFS` | 0 | Sets up a tmpfs for the whole testsuite to run on (fast but needs memory) `INCUSNICSRIOV_PARENT` | \"\" | Enables SR-IOV NIC tests using the specified parent device `INCUSIBPHYSICAL_PARENT` | \"\" | Enables Infiniband physical tests using the specified parent device `INCUSIBSRIOV_PARENT` | \"\" | Enables Infiniband SR-IOV tests using the specified parent device `INCUSNICBRIDGED_DRIVER` | \"\" | Specifies bridged NIC driver for tests (either native or openvswitch, defaults to native) `INCUSREQUIREDTESTS` | \"\" | Space-delimited list of test names that must not be skipped if their prerequisites are not met `INCUS_VERBOSE` | 0 | Run incusd, incus and the shell in verbose mode ger:meta` (global file/package level documentation), `swagger:route` (API endpoints), `swagger:params` (function parameters), `swagger:operation` (method documentation), `swagger:response` (API response content documentation), `swagger:model` (struct documentation)" }, { "data": "In our use case, we would want a config variable spec generator that can bundle any key-value data pairs alongside metadata to build a sense of hierarchy and identity (we want to associate a unique key to each gendoc comment group that will also be displayed in the generated documentation) In a swagger fashion, `incus-doc` can associate metadata key-value pairs (here for example, `group` and `key`) to data key-value pairs. As a result, it can generate a YAML tree out of the code documentation and also a Markdown document. Here is the JSON output of the example shown above: ```json { \"configs\": { \"cluster\": [ { \"scheduler.instance\": { \"condition\": \"container\", \"defaultdesc\": \"`all`\", \"liveupdate\": \"`yes`\", \"longdesc\": \"<Possibly a very long documentation on multiple lines with Markdown tables, etc.>\", \"shortdesc\": \" Possible values are all, manual and group. See Automatic placement of instances for more\", \"type\": \"integer\" } }, { \"user.*\": { \"condition\": \"container\", \"defaultdesc\": \"-\", \"liveupdate\": \"`yes`\", \"longdesc\": \" This is the real long desc. With two paragraphs. 
And a list: Item Item Item And a table: Key | Type | Scope | Default | Description :-- | : | :- | : | :- `acme.agree_tos` | bool | global | `false` | Agree to ACME terms of service `acme.ca_url` | string | global | `https://acme-v02.api.letsencrypt.org/directory` | URL to the directory resource of the ACME service `acme.domain` | string | global | - | Domain for which the certificate is issued `acme.email` | string | global | - | Email address used for the account registration \", \"shortdesc\": \"Free form user key/value storage (can be used in search).\", \"type\": \"string\" } } ], } } ``` Here is the `.txt` output of the example shown above: ``` <!-- config group cluster start --> \\`\\`\\`{config:option} user.* cluster :type: string :liveupdate: `yes` :shortdesc: Free form user key/value storage (can be used in search). :condition: container :default: - This is the real long desc. With two paragraphs. And a list: Item Item Item example of a table: Key | Type | Scope | Default | Description :-- | : | :- | : | :- `acme.agree_tos` | bool | global | `false` | Agree to ACME terms of service `acme.ca_url` | string | global | `https://acme-v02.api.letsencrypt.org/directory` | URL to the directory resource of the ACME service `acme.domain` | string | global | - | Domain for which the certificate is issued `acme.email` | string | global | - | Email address used for the account registration \\`\\`\\` \\`\\`\\`{config:option} scheduler.instance cluster :liveupdate: `yes` :shortdesc: Possible values are all, manual and group. See Automatic placement of instances for more information. :condition: container :default: `all` :type: integer \\`\\`\\` <!-- config group cluster end --> ```" } ]
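For orientation, a comment group in the Go sources that would produce the `user.*` entry above might look roughly like this. This is a hypothetical sketch — the `gendoc:generate` directive name and the exact field layout are assumptions about the generator's syntax, not a specification of it:

```go
// gendoc:generate(entity=cluster, group=cluster, key=user.*)
// This is the real long desc.
//
// With two paragraphs.
// ---
//  type: string
//  liveupdate: `yes`
//  shortdesc: Free form user key/value storage (can be used in search).
//  condition: container
//  defaultdesc: -
```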
{ "category": "Runtime", "file_name": "README.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark schedule\" layout: docs Work with schedules Work with schedules ``` -h, --help help for schedule ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Create a schedule - Delete a schedule - Describe schedules - Get schedules" } ]
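For context, creating and then inspecting a schedule typically looks like the sketch below; the cron expression and TTL are illustrative, and the exact flag set should be confirmed with `ark schedule create --help`:

```
ark schedule create daily-backup --schedule="0 1 * * *" --ttl 168h0m0s
ark schedule get
```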
{ "category": "Runtime", "file_name": "ark_schedule.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: adding bucket policy for ceph object store target-version: release-1.4 Adding bucket policy support for ceph object store The bucket policy is the feature in which permissions for specific user can be set on s3 bucket. Read more about it from Currently can be consumed either via and . As of now there is no direct way for ceph object user to access the OBC. The idea behind this feature to allow that functionality via Rook. Refer bucket policy examples from . Please note it is different from . The following settings are needed to add for defining policies: bucketPolicy in the `Spec` section of `CephObjectStoreUser` CR bucketPolicy in the `parameters` section of `StorageClass` for `ObjectBucketClaim` Policies need to be provided in generic `json` format. A policy can have multiple `statements`. Rook must perform the following checks to verify whether the `bucketpolicy` applicable to OBC. for ceph object user, `Principal` value should have username and `Resource` should have specific bucket names. It can be defined for buckets which are not part of the OBC as well, the `bucketname` of an OBC can be fetched from its `configmap`. In `StorageClass`, `Principal` value should be `*`(applicable to all users) and `Resource` should have the bucket name can be empty since it can be generated name from Rook as well and will be attached before setting the policy. Examples: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-delete-bucket provisioner: rook-ceph.ceph.rook.io/bucket reclaimPolicy: Delete parameters: objectStoreName: my-store objectStoreNamespace: rook-ceph region: us-east-1 bucketName: ceph-bkt bucketPolicy: \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"listobjs\", \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam:::*\"]}, \"Action\": \"s3:ListObject\", \"Resource\": \"arn:aws:s3:::/*\" } ] apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: my-user namespace: rook-ceph spec: store: my-store displayName: \"my display name\" bucketPolicy: \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"putobjs\" \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam:::my-user\"]}, \"Action\": \"s3:PutObject\", \"Resource\": \"arn:aws:s3:::ceph-bkt-1/*\" }, { \"Sid\": \"getobjs\" \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam:::my-user\"]}, \"Action\": \"s3:GettObject\", \"Resource\": \"arn:aws:s3:::ceph-bkt-2/*\" } ] ``` In the above examples, the `bucket policy` mentioned in the `storage class` will be inherited to all the OBCs created from it. And this policy needs to be for the anonymous users(all users in the ceph object store), it will be attached to the bucket during the OBC creation. In the case of `ceph object store user` the policy can have multiple statements and each represents a policy for the existing buckets in the `ceph object store` for the user `my-user`. During the creation of the user, the `bucketPolicy` CRD will convert into and divide into different bucket policy statement, then fetch each bucket info, and using the credentials of bucket owner this policy will be set via s3 API. The `bucketPolicy` defined on CRD won't override any existing policies on that bucket, will just append. But this can be easily overwritten with help of S3 client since does not have much control over there. 
The following field will be added to `ObjectStoreUserSpec` and this need to reflected on the existing API's for `CephObjectStoreUser` ``` type ObjectStoreUserSpec struct { Store string `json:\"store,omitempty\"` //The display name for the ceph users DisplayName string `json:\"displayName,omitempty\"` //The list of bucket policy for this user BucketPolicy string `json:\"bucketPolicy,omitempty\"` } ``` The `bucket policy` feature is consumed by the brownfield use case of `OBC`, so supporting apis and structures already exists in . Still few more api's are need to read the policy from CRD, validate it and then convert it into `bucketpolicy`, so that can be consumed by existing api's" } ]
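Once a policy has been applied through the CRD, an operator could verify it with any S3 client using the bucket owner's credentials. The endpoint below is an assumption based on a typical in-cluster Rook RGW service name — substitute your own object store endpoint:

```console
aws --endpoint-url http://rook-ceph-rgw-my-store.rook-ceph.svc \
    s3api get-bucket-policy --bucket ceph-bkt-1
```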
{ "category": "Runtime", "file_name": "bucketpolicy.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Weave Net Tasks menu_order: 50 search_type: Documentation This section describes the configuration options available in Weave Net. It is divided into: Managing Containerized Applications, Administering IP Addresses, Managing WeaveDNS, and finally attaching Docker Containers via the Weave API Proxy." } ]
{ "category": "Runtime", "file_name": "tasks.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: Dynamically Attaching and Detaching Applications menu_order: 10 search_type: Documentation When containers may not know the network to which they will be attached, Weave Net enables you to dynamically attach and detach containers to and from a given network, even when a container is already running. To illustrate... host1$ C=$(docker run -e WEAVE_CIDR=none -dti weaveworks/ubuntu) host1$ weave attach $C 10.2.1.3 where, `C=$(docker run -e WEAVE_CIDR=none -dti weaveworks/ubuntu)` starts a container and assigns its ID to a variable `weave attach` the Weave Net command to attach to the specified container `10.2.1.3` - the allocated IP address output by `weave attach`, in this case in the default subnet Note If you are using the Weave Docker API proxy, it will have modified `DOCKER_HOST` to point to the proxy and therefore you will have to pass `-e WEAVE_CIDR=none` to start a container that doesn't get automatically attached to the weave network for the purposes of this example. If `weave attach` sees the container has a hostname with a domain-name, it will add those into WeaveDNS (unless you turn this off with the `--without-dns` argument). host1$ docker run -dti --name=c1 --hostname=c1.weave.local weaveworks/ubuntu host1$ weave attach c1 10.32.0.1 host1$ weave dns-lookup c1 10.32.0.1 If you would like `/etc/hosts` to contain the Weave Net address (the same way ), specify `--rewrite-hosts` when running `weave attach`: host1$ weave attach --rewrite-hosts c1 A container can be detached from a subnet by using the `weave detach` command: host1$ weave detach $C 10.2.1.3 You can also detach a container from one network and then attach it to a different one: host1$ weave detach net:default $C 10.2.1.3 host1$ weave attach net:10.2.2.0/24 $C 10.2.2.3 or, attach a container to multiple application networks, effectively sharing the same container between applications: host1$ weave attach net:default $C 10.2.1.3 host1$ weave attach net:10.2.2.0/24 $C 10.2.2.3 Finally, multiple addresses can be attached or detached using a single command: host1$ weave attach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C 10.2.1.3 10.2.2.3 10.2.3.1 host1$ weave detach net:default net:10.2.2.0/24 net:10.2.3.0/24 $C 10.2.1.3 10.2.2.3 10.2.3.1 Important! Any addresses that were dynamically attached will not be re-attached if the container restarts. See Also" } ]
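After a series of attach and detach operations it can be useful to confirm exactly which addresses a container currently holds. `weave ps` lists a container's MAC and all attached addresses; the second command is an optional comparison and may not be present in every Weave Net version:

```
host1$ weave ps $C             # the container's MAC plus every currently attached address
host1$ weave ps weave:expose   # addresses exposed on the host itself, for comparison
```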
{ "category": "Runtime", "file_name": "dynamically-attach-containers.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "English This chapter introduces how POD access network with the infiniband interface of the host. Different from RoCE, Infiniband network cards are proprietary devices for the Infiniband network, and the Spiderpool offers two CNI options: provides SR-IOV network card with the RDMA device. It is suitable for workloads requiring RDMA communication. It offers two RDMA modes: Shared mode: Pod will have a SR-IOV network interface with RDMA feature, but all RDMA devices cloud be seen by all PODs running in the same node. POD may be confused for which RDMA device it should use. Exclusive mode: Pod will have a SR-IOV network interface with RDMA feature, and POD just enable to see its own RDMA device. For isolated RDMA network cards, at least one of the following conditions must be met: (1) Kernel based on 5.3.0 or newer, RDMA modules loaded in the system. rdma-core package provides means to automatically load relevant modules on system start (2) Mellanox OFED version 4.7 or newer is required. In this case it is not required to use a Kernel based on 5.3.0 or newer. provides an IPoIB network card for POD, without RDMA device. It is suitable for conventional applications that require TCP/IP communication, as it does not require an SRIOV network card, allowing more PODs to run on the host The following steps demonstrate how to use on a cluster with 2 nodes. It enables POD to own SR-IOV network card and the RDMA devices of isolated network namespace Ensure that the host machine has an Infiniband card installed and the driver is properly installed. In our demo environment, the host machine is equipped with a Mellanox ConnectX-5 VPI NIC. Follow to install the latest OFED driver. For Mellanox's VPI series network cards, you can refer to the official to ensure that the network card is working in Infiniband mode. To confirm the presence of Inifiniband devices, use the following command: ~# lspci -nn | grep Infiniband 86:00.0 Infiniband controller [0207]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] ~# rdma link link mlx50/1 subnetprefix fe80:0000:0000:0000 lid 2 smlid 2 lmc 0 state ACTIVE physicalstate LINK_UP ~# ibstat mlx5_0 | grep \"Link layer\" Link layer: InfiniBand Make sure that the RDMA subsystem of the host is in exclusive mode. If not, switch to shared mode. ~# rdma system set netns exclusive ~# echo \"options ibcore netnsmode=0\" >> /etc/modprobe.d/ib_core.conf ~# reboot ~# rdma system netns exclusive copy-on-fork on > if it is expected to work under shared mode, `rm /etc/modprobe.d/ib_core.conf && reboot` (Optional) In an SR-IOV scenario, applications can enable NVIDIA's GPUDirect RDMA feature. For instructions on installing the kernel module, please refer to . Install Spiderpool, and notice the helm options: helm upgrade spiderpool spiderpool/spiderpool --namespace kube-system --reuse-values --set sriov.install=true If you are a user from China, you can specify the parameter `--set" }, { "data": "to pull image from China registry. Once the installation is complete, the following components will be installed: ~# kubectl get pod -n kube-system spiderpool-agent-9sllh 1/1 Running 0 1m spiderpool-agent-h92bv 1/1 Running 0 1m spiderpool-controller-7df784cdb7-bsfwv 1/1 Running 0 1m spiderpool-sriov-operator-65b59cd75d-89wtg 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m configure SR-IOV operator. 
Use the following commands to look up the device information of infiniband card ~# lspci -nn | grep Infiniband 86:00.0 Infiniband controller [0207]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] The number of VFs determines how many SR-IOV network cards can be provided for PODs on a host. The network card from different manufacturers have different amount limit of VFs. For example, the Mellanox connectx5 used in this example can create up to 127 VFs. Apply the following configuration, and the VFs will be created on the host. Notice, this may cause the nodes to reboot, owing to taking effect the new configuration in the network card driver. cat <<EOF | kubectl apply -f - apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: ib-sriov namespace: kube-system spec: nodeSelector: kubernetes.io/os: \"linux\" resourceName: mellanoxibsriov priority: 99 numVfs: 12 nicSelector: deviceID: \"1017\" rootDevices: 0000:86:00.0 vendor: \"15b3\" deviceType: netdevice isRdma: true EOF View the available resources on a node, including the reported RDMA device resources: ~# kubectl get no -o json | jq -r '[.items[] | {name:.metadata.name, allocable:.status.allocatable}]' [ { \"name\": \"10-20-1-10\", \"allocable\": { \"cpu\": \"40\", \"pods\": \"110\", \"spidernet.io/mellanoxibsriov\": \"12\", ... } }, ... ] Create the CNI configuration of IB-SRIOV, and the ippool resource cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: v4-91 spec: gateway: 172.91.0.1 ips: 172.91.0.100-172.91.0.120 subnet: 172.91.0.0/16 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ib-sriov namespace: kube-system spec: cniType: ib-sriov ibsriov: resourceName: spidernet.io/mellanoxibsriov ippools: ipv4: [\"v4-91\"] EOF Following the configurations from the previous step, create a DaemonSet application that spans across nodes for testing ANNOTATION_MULTUS=\"v1.multus-cni.io/default-network: kube-system/ib-sriov\" RESOURCE=\"spidernet.io/mellanoxibsriov\" NAME=ib-sriov cat <<EOF | kubectl apply -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: ${NAME} labels: app: $NAME spec: selector: matchLabels: app: $NAME template: metadata: name: $NAME labels: app: $NAME annotations: ${ANNOTATION_MULTUS} spec: containers: image: docker.io/mellanox/rping-test imagePullPolicy: IfNotPresent name: mofed-test securityContext: capabilities: add: [ \"IPC_LOCK\" ] resources: limits: ${RESOURCE}: 1 command: sh -c | ls -l /dev/infiniband /sys/class/net sleep 1000000 EOF Verify that RDMA data transmission is working correctly between the Pods across nodes. Open a terminal and access one Pod to launch a service: ~# rdma link link mlx54/1 subnetprefix fe80:0000:0000:0000 lid 8 smlid 1 lmc 0 state ACTIVE physicalstate LINK_UP ~# ibreadlat Open a terminal and access another Pod to launch a service: ~# rdma link link mlx58/1 subnetprefix fe80:0000:0000:0000 lid 7 smlid 1 lmc 0 state ACTIVE physicalstate LINK_UP ~# ibreadlat 172.91.0.115 The following steps demonstrate how to use on a cluster with 2 nodes, it enables Pod to own a regular TCP/IP network cards without RDMA" }, { "data": "Ensure that the host machine has an Infiniband card installed and the driver is properly installed. In our demo environment, the host machine is equipped with a Mellanox ConnectX-5 VPI NIC. Follow to install the latest OFED driver. 
For Mellanox's VPI series network cards, you can refer to the official to ensure that the network card is working in Infiniband mode. To confirm the presence of Inifiniband devices, use the following command: ~# lspci -nn | grep Infiniband 86:00.0 Infiniband controller [0207]: Mellanox Technologies MT27800 Family [ConnectX-5] [15b3:1017] ~# rdma link link mlx50/1 subnetprefix fe80:0000:0000:0000 lid 2 smlid 2 lmc 0 state ACTIVE physicalstate LINK_UP ~# ibstat mlx5_0 | grep \"Link layer\" Link layer: InfiniBand Check the ipoib interface of the Inifiniband device ~# ip a show ibs5f0 9: ibs5f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc mq state UP group default qlen 256 link/infiniband 00:00:10:49:fe:80:00:00:00:00:00:00:e8:eb:d3:03:00:93:ae:10 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff altname ibp134s0f0 inet 172.91.0.10/16 brd 172.91.255.255 scope global ibs5f0 validlft forever preferredlft forever inet6 fd00:91::172:91:0:10/64 scope global validlft forever preferredlft forever inet6 fe80::eaeb:d303:93:ae10/64 scope link validlft forever preferredlft forever Install Spiderpool If you are a user from China, you can specify the parameter `--set global.imageRegistryOverride=ghcr.m.daocloud.io` to pull image from China registry. Once the installation is complete, the following components will be installed: ~# kubectl get pod -n kube-system spiderpool-agent-9sllh 1/1 Running 0 1m spiderpool-agent-h92bv 1/1 Running 0 1m spiderpool-controller-7df784cdb7-bsfwv 1/1 Running 0 1m spiderpool-init 0/1 Completed 0 1m Create the CNI configuration of ipoib, and the ippool. The `spec.ipoib.master` of SpiderMultusConfig should be set to the infiniband interface of the node. cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: v4-91 spec: gateway: 172.91.0.1 ips: 172.91.0.100-172.91.0.120 subnet: 172.91.0.0/16 apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: ipoib namespace: kube-system spec: cniType: ipoib ipoib: master: \"ibs5f0\" ippools: ipv4: [\"v4-91\"] EOF Following the configurations from the previous step, create a DaemonSet application that spans across nodes for testing ANNOTATION_MULTUS=\"v1.multus-cni.io/default-network: kube-system/ipoib\" NAME=ipoib cat <<EOF | kubectl apply -f - apiVersion: apps/v1 kind: DaemonSet metadata: name: ${NAME} labels: app: $NAME spec: selector: matchLabels: app: $NAME template: metadata: name: $NAME labels: app: $NAME annotations: ${ANNOTATION_MULTUS} spec: containers: image: docker.io/mellanox/rping-test imagePullPolicy: IfNotPresent name: mofed-test securityContext: capabilities: add: [ \"IPC_LOCK\" ] command: sh -c | ls -l /dev/infiniband /sys/class/net sleep 1000000 EOF Verify that the network communication is correct between the PODs across nodes. ~# kubectl get pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ipoib-psf4q 1/1 Running 0 34s 172.91.0.112 10-20-1-20 <none> <none> ipoib-t9hm7 1/1 Running 0 34s 172.91.0.116 10-20-1-10 <none> <none> Succeed to access each other ~# kubectl exec -it ipoib-psf4q bash kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. root@ipoib-psf4q:/# ping 172.91.0.116 PING 172.91.0.116 (172.91.0.116) 56(84) bytes of data. 64 bytes from 172.91.0.116: icmp_seq=1 ttl=64 time=1.10 ms 64 bytes from 172.91.0.116: icmp_seq=2 ttl=64 time=0.235 ms" } ]
{ "category": "Runtime", "file_name": "rdma-ib.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "(database)= Incus uses a distributed database to store the server configuration and state, which allows for quicker queries than if the configuration was stored inside each instance's directory (as it is done by LXC, for example). To understand the advantages, consider a query against the configuration of all instances, like \"what instances are using `br0`?\". To answer that question without a database, you would have to iterate through every single instance, load and parse its configuration, and then check which network devices are defined in there. With a database, you can run a simple query on the database to retrieve this information. In an Incus cluster, all members of the cluster must share the same database state. Therefore, Incus uses , a distributed version of SQLite. Cowsql provides replication, fault-tolerance, and automatic failover without the need of external database processes. When using Incus as a single machine and not as a cluster, the Cowsql database effectively behaves like a regular SQLite database. The database files are stored in the `database` sub-directory of your Incus data directory (`/var/lib/incus/database/`). Upgrading Incus to a newer version might require updating the database schema. In this case, Incus automatically stores a backup of the database and then runs the update. See {ref}`installing-upgrade` for more information. See {ref}`backup-database` for instructions on how to back up the contents of the Incus database." } ]
{ "category": "Runtime", "file_name": "database.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "The results of the small file performance test by are as follows: ``` bash set -e TARGET_PATH=\"/home/service/chubaofs/adls/mnt-perform-test\" # mount point of CubeFS volume for FILE_SIZE in 1024 2048 4096 8192 16384 32768 65536 131072 # file size do CMD=\"/usr/lib64/openmpi/bin/mpirun --allow-run-as-root -mca plmrshargs '-p 18822' -np 512 --hostfile hfile64 mdtest -n 1000 -w $FILESIZE -e $FILESIZE -y -u -i 3 -N 1 -F -R -d $TARGET_PATH\" echo echo $CMD eval $CMD | tee -a ${LOGPREFIX}.txt echo \"start to sleep 5s\" sleep 5 done ``` | File Size (KB) | 1 | 2 | 4 | 8 | 16 | 32 | 64 | 128 | |-|--|--|--|--|--|--|--|--| | Creation (TPS) | 49808 | 37726 | 42296 | 44826 | 41481 | 35699 | 31609 | 35622 | | Read (TPS) | 76743 | 81085 | 84831 | 75397 | 73165 | 69665 | 62135 | 53658 | | Deletion (TPS) | 72522 | 67749 | 70919 | 68689 | 69819 | 71671 | 71568 | 71647 | | Stat (TPS) | 188609 | 185945 | 188542 | 180602 | 188274 | 174771 | 171100 | 183334 |" } ]
{ "category": "Runtime", "file_name": "tiny.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "The sys/unix package provides access to the raw system call interface of the underlying operating system. See: https://godoc.org/golang.org/x/sys/unix Porting Go to a new architecture/OS combination or adding syscalls, types, or constants to an existing architecture/OS pair requires some manual effort; however, there are tools that automate much of the process. There are currently two ways we generate the necessary files. We are currently migrating the build system to use containers so the builds are reproducible. This is being done on an OS-by-OS basis. Please update this documentation as components of the build system change. The old build system generates the Go files based on the C header files present on your system. This means that files for a given GOOS/GOARCH pair must be generated on a system with that OS and architecture. This also means that the generated code can differ from system to system, based on differences in the header files. To avoid this, if you are using the old build system, only generate the Go files on an installation with unmodified header files. It is also important to keep track of which version of the OS the files were generated from (ex. Darwin 14 vs Darwin 15). This makes it easier to track the progress of changes and have each OS upgrade correspond to a single change. To build the files for your current OS and architecture, make sure GOOS and GOARCH are set correctly and run `mkall.sh`. This will generate the files for your specific system. Running `mkall.sh -n` shows the commands that will be run. Requirements: bash, go The new build system uses a Docker container to generate the go files directly from source checkouts of the kernel and various system libraries. This means that on any platform that supports Docker, all the files using the new build system can be generated at once, and generated files will not change based on what the person running the scripts has installed on their computer. The OS specific files for the new build system are located in the `${GOOS}` directory, and the build is coordinated by the `${GOOS}/mkall.go` program. When the kernel or system library updates, modify the Dockerfile at `${GOOS}/Dockerfile` to checkout the new release of the source. To build all the files under the new build system, you must be on an amd64/Linux system and have your GOOS and GOARCH set accordingly. Running `mkall.sh` will then generate all of the files for all of the GOOS/GOARCH pairs in the new build system. Running `mkall.sh -n` shows the commands that will be" }, { "data": "Requirements: bash, go, docker This section describes the various files used in the code generation process. It also contains instructions on how to modify these files to add a new architecture/OS or to add additional syscalls, types, or constants. Note that if you are using the new build system, the scripts/programs cannot be called normally. They must be called from within the docker container. The hand-written assembly file at `asm${GOOS}${GOARCH}.s` implements system call dispatch. There are three entry points: ``` func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr) func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) ``` The first and second are the standard ones; they differ only in how many arguments can be passed to the kernel. The third is for low-level use by the ForkExec wrapper. 
Unlike the first two, it does not call into the scheduler to let it know that a system call is running. When porting Go to an new architecture/OS, this file must be implemented for each GOOS/GOARCH pair. Mksysnum is a Go program located at `${GOOS}/mksysnum.go` (or `mksysnum_${GOOS}.go` for the old system). This program takes in a list of header files containing the syscall number declarations and parses them to produce the corresponding list of Go numeric constants. See `zsysnum${GOOS}${GOARCH}.go` for the generated constants. Adding new syscall numbers is mostly done by running the build on a sufficiently new installation of the target OS (or updating the source checkouts for the new build system). However, depending on the OS, you may need to update the parsing in mksysnum. The `syscall.go`, `syscall${GOOS}.go`, `syscall${GOOS}_${GOARCH}.go` are hand-written Go files which implement system calls (for unix, the specific OS, or the specific OS/Architecture pair respectively) that need special handling and list `//sys` comments giving prototypes for ones that can be generated. The mksyscall.go program takes the `//sys` and `//sysnb` comments and converts them into syscalls. This requires the name of the prototype in the comment to match a syscall number in the `zsysnum${GOOS}${GOARCH}.go` file. The function prototype can be exported (capitalized) or not. Adding a new syscall often just requires adding a new `//sys` function prototype with the desired arguments and a capitalized name so it is exported. However, if you want the interface to the syscall to be different, often one will make an unexported `//sys` prototype, an then write a custom wrapper in `syscall_${GOOS}.go`. For each OS, there is a hand-written Go file at `${GOOS}/types.go` (or `types_${GOOS}.go` on the old system). This file includes standard C headers and creates Go type aliases to the corresponding C" }, { "data": "The file is then fed through godef to get the Go compatible definitions. Finally, the generated code is fed though mkpost.go to format the code correctly and remove any hidden or private identifiers. This cleaned-up code is written to `ztypes${GOOS}${GOARCH}.go`. The hardest part about preparing this file is figuring out which headers to include and which symbols need to be `#define`d to get the actual data structures that pass through to the kernel system calls. Some C libraries preset alternate versions for binary compatibility and translate them on the way in and out of system calls, but there is almost always a `#define` that can get the real ones. See `types_darwin.go` and `linux/types.go` for examples. To add a new type, add in the necessary include statement at the top of the file (if it is not already there) and add in a type alias line. Note that if your type is significantly different on different architectures, you may need some `#if/#elif` macros in your include statements. This script is used to generate the system's various constants. This doesn't just include the error numbers and error strings, but also the signal numbers an a wide variety of miscellaneous constants. The constants come from the list of include files in the `includes_${uname}` variable. A regex then picks out the desired `#define` statements, and generates the corresponding Go constants. The error numbers and strings are generated from `#include <errno.h>`, and the signal numbers and strings are generated from `#include <signal.h>`. 
All of these constants are written to `zerrors${GOOS}${GOARCH}.go` via a C program, `_errors.c`, which prints out all the constants. To add a constant, add the header that includes it to the appropriate variable. Then, edit the regex (if necessary) to match the desired constant. Avoid making the regex too broad to avoid matching unintended constants. This program is used to extract duplicate const, func, and type declarations from the generated architecture-specific files listed below, and merge these into a common file for each OS. The merge is performed in the following steps: Construct the set of common code that is idential in all architecture-specific files. Write this common code to the merged file. Remove the common code from all architecture-specific files. A file containing all of the system's generated error numbers, error strings, signal numbers, and constants. Generated by `mkerrors.sh` (see above). A file containing all the generated syscalls for a specific GOOS and GOARCH. Generated by `mksyscall.go` (see above). A list of numeric constants for all the syscall number of the specific GOOS and GOARCH. Generated by mksysnum (see above). A file containing Go types for passing into (or returning from) syscalls. Generated by godefs and the types file (see above)." } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "title: Troubleshooting Methods sidebar_position: 5 slug: /faultdiagnosisand_analysis description: This article introduces troubleshooting methods for JuiceFS mount point, CSI Driver, Hadoop Java SDK, S3 Gateway, and other clients. import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; JuiceFS client will output logs for troubleshooting while running. The level of logs in terms of fatality follows DEBUG < INFO < WARNING < ERROR < FATAL. Since DEBUG logs are not printed by default, you need to explicitly enable it if needed, e.g. by adding the `--debug` option when running the JuiceFS client. Different JuiceFS clients print logs in different ways, which are described as follows. When a JuiceFS file system is mounted with the (indicating running in the background), it will print logs to the system log file and local log file simultaneously. Depending on which user is running when mounting the file system, the paths of the local log files are slightly different. For root, the local log file locates at `/var/log/juicefs.log`, while it locates at `$HOME/.juicefs/juicefs.log` for non-root users. Please refer to for details. Depending on the operating system, there are different commands to retrieve system logs or read local log files directly. <Tabs> <TabItem value=\"local-log-file\" label=\"Local log file\"> ```bash tail -n 100 /var/log/juicefs.log ``` </TabItem> <TabItem value=\"macos-syslog\" label=\"macOS system log\"> ```bash syslog | grep 'juicefs' ``` </TabItem> <TabItem value=\"debian-syslog\" label=\"Debian system log\"> ```bash cat /var/log/syslog | grep 'juicefs' ``` </TabItem> <TabItem value=\"centos-syslog\" label=\"CentOS system log\"> ```bash cat /var/log/messages | grep 'juicefs' ``` </TabItem> </Tabs> You can use the `grep` command to filter different levels of logs for performance analysis or troubleshooting: ```shell cat /var/log/syslog | grep 'juicefs' | grep '<ERROR>' ``` Depending on the version of the JuiceFS CSI Driver, there are different ways to retrieve logs. Please refer to for details. The S3 gateway can only run in the foreground, so client logs are output directly to the terminal. If you deploys the S3 gateway in Kubernetes, you can get logs from the corresponding pods. The JuiceFS client logs will be mixed into the logs of processes using JuiceFS Hadoop Java SDK, e.g. Spark executor. Thus, you need to use keywords, e.g. `juicefs` (case-insensitive), to filter out the logs you do not want. Each JuiceFS client has an access log that records all operations on the file system in detail, such as operation type, user ID, group ID, file inodes and time cost. Access logs can be used for various purposes such as performance analysis, auditing, and troubleshooting. An example format of an access log is as follows: ``` 2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010> ``` The meaning of each column is: `2021.01.15 08:26:11.003330`: The time of the current operation `[uid:0,gid:0,pid:4403]`: User ID, group ID, process ID of the current operation `write`: Operation type `(17669,8666,4993160)`: The input parameters of the current operation type. For example, the input parameters of the `write` operation in the example are the inode of the written file, the size of the written data, and the offset of the written" }, { "data": "Different operation types have different parameters. For details, please refer to the file. `OK`: Indicate the current operation is successful or not. 
If it is unsuccessful, specific failure information will be output. `<0.000010>`: The time (in seconds) that the current operation takes. Access logs tend to get very large and difficult for human to process directly, use to quickly visualize performance data based on these logs. Different JuiceFS clients obtain access log in different ways, which are described below. There is a virtual file named `.accesslog` in the root directory of the JuiceFS file system mount point, the contents of which can be viewed by the `cat` command (the command will not exit), for example (assuming the root directory of the mount point is `/jfs`): ```bash cat /jfs/.accesslog ``` ```output 2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010> 2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014> 2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006> ``` Please refer to to find the mount pod or CSI Driver pod depending on the version of JuiceFS CSI Driver you are using, and the `.accesslog` file can be viewed in the root directory of the JuiceFS file system mount point in the pod. The mount point path in the pod is `/jfs/<pvvolumeHandle>`. Assuming there is a mount pod named as `juicefs-1.2.3.4-pvc-d4b8fb4f-2c0b-48e8-a2dc-530799435373`, in which `pvc-d4b8fb4f-2c0b-48e8-a2dc-530799435373` is `<pvvolumeHandle>`, you can then use the following command to view the `.accesslog` file: ```bash kubectl -n kube-system exec juicefs-chaos-k8s-002-pvc-d4b8fb4f-2c0b-48e8-a2dc-530799435373 -- cat /jfs/pvc-d4b8fb4f-2c0b-48e8-a2dc-530799435373/.accesslog ``` You need to add the when starting the S3 gateway to specify the path to output the access log. By default, the S3 gateway does not output the access log. You need to add the `juicefs.access-log` configuration item in the of the JuiceFS Hadoop Java SDK to specify the path of the access log output, and the access log is not output by default. The `juicefs debug` subcommand can help you automatically collect various information about a specified mount point, facilitating troubleshooting and diagnosis. ```shell juicefs debug <mountpoint> ``` This command collects the following information: JuiceFS version Operating system version and kernel version Contents of the JuiceFS `.config` internal file Contents of the `.stat` internal file in JuiceFS and recorded again after 5 seconds Command-line parameters used for mounting Go pprof information JuiceFS logs (defaulting to the last 5000 lines) By default, a `debug` directory is created in the current directory, and the collected information is saved in that directory. 
Here's an example: ```shell $ juicefs debug /tmp/mountpoint $ tree ./debug ./debug tmp-test1-20230609104324 config.txt juicefs.log pprof juicefs.allocs.pb.gz juicefs.block.pb.gz juicefs.cmdline.txt juicefs.goroutine.pb.gz juicefs.goroutine.stack.txt juicefs.heap.pb.gz juicefs.mutex.pb.gz juicefs.profile.30s.pb.gz juicefs.threadcreate.pb.gz juicefs.trace.5s.pb.gz stats.5s.txt stats.txt system-info.log tmp-test1-20230609104324.zip ``` JuiceFS provides the `profile` and `stats` subcommands to visualize real-time performance data, the `profile` command is based on the , while the `stats` command uses" }, { "data": "will collect data from , run the `juicefs profile MOUNTPOINT` command, you can see the real-time statistics of each file system operation based on the latest access log: Apart from real-time mode, this command also provides a play-back mode, which performs the same visualization on existing access log files: ```shell cat /jfs/.accesslog > /tmp/juicefs.accesslog juicefs profile -f /tmp/juicefs.accesslog ``` If the replay speed is too fast, pause anytime using <kbd>Enter/Return</kbd>, and continue by pressing it again. If too slow, use `--interval 0` and it will replay the whole log file as fast as possible, and directly show the final result. If you're only interested in a certain user or process, you can set filters: ```bash juicefs profile /tmp/juicefs.accesslog --uid 12345 ``` The command reads JuiceFS Client internal metrics data, and output performance data in a format similar to `dstat`: Metrics description: `cpu`: CPU usage of the process. `mem`: Physical memory used by the process. `buf`: Current , if this value is constantly close to (or even exceeds) the configured , you should increase buffer size or decrease application workload. `cache`: Internal metric, ignore this. `ops`/`lat`: Operations processed by FUSE per second, and their average latency (in milliseconds). `read`/`write`: Read/write bandwidth usage of FUSE. `ops`/`lat`: Metadata operations processed per second, and their average latency (in milliseconds). Please note that, operations returned directly from cache are not counted in, in order to show a more accurate latency of clients actually interacting with metadata engine. `txn`/`lat`: Write transactions per second processed by the metadata engine and their average latency (in milliseconds). Read-only requests such as `getattr` are only counted as `ops` but not `txn`. `retry`: Write transactions per second that the metadata engine retries. The `blockcache` stands for local cache data, if read requests are already handled by kernel page cache, they won't be counted into the `blockcache` read metric. If there's consistent `blockcache` read traffic while you are conducting repeated read on a fixed file, this means read requests never enter page cache, and you should probably troubleshoot in this direction (e.g. not enough memory). `read`/`write`: Read/write bandwidth of client local data cache The `object` stands for object storage related metrics, when cache is enabled, penetration to object storage will significantly hinder read performance, use these metrics to check if data has been fully cached. On the other hand, you can also compare `object.get` and `fuse.read` traffic to get a rough idea of the current status. `get`/`get_c`/`lat`: Bandwidth, requests per second, and their average latency (in milliseconds) for object storage processing read requests. 
`put`/`put_c`/`lat`: Bandwidth, requests per second, and their average latency (in milliseconds) for object storage processing write requests. `del_c`/`lat`: Delete requests per second the object storage can process, and the average latency (in milliseconds). By default, JuiceFS clients will listen to a TCP port locally via to get runtime information such as Goroutine stack information, CPU performance statistics, memory allocation statistics. You can see the specific port number that the current JuiceFS client is listening on by using the system command (e.g. `lsof`): :::tip If you mount JuiceFS as the root user, you need to add `sudo` before the `lsof`" }, { "data": "::: ```bash lsof -i -nP | grep LISTEN | grep juicefs ``` ```shell juicefs 19371 user 6u IPv4 0xa2f1748ad05b5427 0t0 TCP 127.0.0.1:6061 (LISTEN) juicefs 19371 user 11u IPv4 0xa2f1748ad05cbde7 0t0 TCP 127.0.0.1:9567 (LISTEN) ``` By default, pprof listens on port numbers ranging from 6060 to 6099. That's why the actual port number in the above example is 6061. Once you get the listening port number, you can view all the available runtime information by accessing `http://localhost:<port>/debug/pprof`, and some important runtime information will be shown as follows: Goroutine stack information: `http://localhost:<port>/debug/pprof/goroutine?debug=1` CPU performance statistics: `http://localhost:<port>/debug/pprof/profile?seconds=30` Memory allocation statistics: `http://localhost:<port>/debug/pprof/heap` To make it easier to analyze this runtime information, you can save it locally, e.g.: ```bash curl 'http://localhost:<port>/debug/pprof/goroutine?debug=1' > juicefs.goroutine.txt ``` ```bash curl 'http://localhost:<port>/debug/pprof/profile?seconds=30' > juicefs.cpu.pb.gz ``` ```bash curl 'http://localhost:<port>/debug/pprof/heap' > juicefs.heap.pb.gz ``` :::tip You can also use the `juicefs debug` command to automatically collect these runtime information and save it locally. By default, it is saved to the `debug` directory under the current directory, for example: ```bash juicefs debug /mnt/jfs ``` For more information about the `juicefs debug` command, see . ::: If you have the `go` command installed, you can analyze it directly with the `go tool pprof` command. For example to analyze CPU performance statistics: ```bash $ go tool pprof 'http://localhost:<port>/debug/pprof/profile?seconds=30' Fetching profile over HTTP from http://localhost:<port>/debug/pprof/profile?seconds=30 Saved profile in /Users/xxx/pprof/pprof.samples.cpu.001.pb.gz Type: cpu Time: Dec 17, 2021 at 1:41pm (CST) Duration: 30.12s, Total samples = 32.06s (106.42%) Entering interactive mode (type \"help\" for commands, \"o\" for options) (pprof) top Showing nodes accounting for 30.57s, 95.35% of 32.06s total Dropped 285 nodes (cum <= 0.16s) Showing top 10 nodes out of 192 flat flat% sum% cum cum% 14.73s 45.95% 45.95% 14.74s 45.98% runtime.cgocall 7.39s 23.05% 69.00% 7.41s 23.11% syscall.syscall 2.92s 9.11% 78.10% 2.92s 9.11% runtime.pthreadcondwait 2.35s 7.33% 85.43% 2.35s 7.33% runtime.pthreadcondsignal 1.13s 3.52% 88.96% 1.14s 3.56% runtime.nanotime1 0.77s 2.40% 91.36% 0.77s 2.40% syscall.Syscall 0.49s 1.53% 92.89% 0.49s 1.53% runtime.memmove 0.31s 0.97% 93.86% 0.31s 0.97% runtime.kevent 0.27s 0.84% 94.70% 0.27s 0.84% runtime.usleep 0.21s 0.66% 95.35% 0.21s 0.66% runtime.madvise ``` Runtime information can also be exported to visual charts for a more intuitive analysis. The visual charts can be exported to various formats such as HTML, PDF, SVG, PNG, etc. 
For example, the command to export memory allocation statistics as a PDF file is as follows: :::note The export to visual chart function relies on Graphviz, so please install it first. ::: ```bash go tool pprof -pdf 'http://localhost:<port>/debug/pprof/heap' > juicefs.heap.pdf ``` For more information about pprof, please see the official documentation. Pyroscope is an open source continuous profiling platform. It will help you: Find performance issues and bottlenecks in your code Resolve issues of high CPU utilization Understand the call tree of your application Track changes over time JuiceFS supports using the `--pyroscope` option to pass in the Pyroscope server address, and metrics are pushed to the server every 10 seconds. If permission verification is enabled on the server, the verification information (API Key) can be passed in via the environment variable `PYROSCOPE_AUTH_TOKEN`: ```bash export PYROSCOPE_AUTH_TOKEN=xxxxxxxxxxxxxxxx juicefs mount --pyroscope http://localhost:4040 redis://localhost /mnt/jfs juicefs dump --pyroscope http://localhost:4040 redis://localhost dump.json ```" } ]
{ "category": "Runtime", "file_name": "fault_diagnosis_and_analysis.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Initialize enclave runtime with specific attributes. ```c struct palattrt { const char *args; const char *log_level; }; int palinit(const struct palattr_t *attr); ``` ``` @args: the enclave runtime specific argument string. @log_level: the output log level of enclave runtime. ``` ``` 0: Success -ENOENT: Invalid instance path of enclave runtime Others: Enclave runtime specific error codes ``` Pass the path of the application to be executed, and synchronously wait for the end of the application to run and return the result. ```c struct palstdiofds { int stdin, stdout, stderr; }; int palexec(char *path, char *argv[], struct palstdiofds *stdio, int *exitcode); ``` ``` @path: The path of the application to be run. @argv: The array of argument strings passed to the application, terminated by a NULL pointer. @stdio: The stdio fds consumed by the application. @exit_code: Return the exit code of an application. ``` ``` 0: success -ENOENT: The path does not exist -EACCES: Permission denied -ENOEXEC: The path is not an executable file -ENOMEM: No Memory -EINVAL: Invalid argument ``` Destroy the enclave runtime instance ```c int pal_destroy(); ``` ``` N/A ``` ``` 0: Success -ENOSYS: The function is not supported ```" } ]
{ "category": "Runtime", "file_name": "spec_v1.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "GlusterFS Coding Standards ========================== Before you get started Before starting with other part of coding standard, install `clang-format` On Fedora: ``` $ dnf install clang ``` On debian/Ubuntu: ``` $ apt-get install clang ``` Once you are done with all the local changes, you need to run below set of commands, before submitting the patch for review. ``` $ git add $file # if any $ git commit -a -s -m \"commit message\" $ git show --pretty=\"format:\" --name-only | grep -v \"contrib/\" | egrep \"*\\.[ch]$\" | xargs clang-format -i $ git diff # see if there are any changes $ git commit -a --amend # get the format changes done $ ./submit-for-review.sh ``` Structure definitions should have a comment per member Every member in a structure definition must have a comment about its purpose. The comment should be descriptive without being overly verbose. For pointer members, lifecycle concerns for the pointed-to object should be noted. For lock members, the relationship between the lock member and the other members it protects should be explicit. Bad: ``` gflockt lock; / lock / ``` Good: ``` DBTYPE access_mode; /* access mode for accessing the databases, can be DBHASH, DBBTREE (option access-mode <mode>) */ ``` Structure members should be aligned based on the padding requirements The compiler will make sure that structure members have optimum alignment, but at the expense of suboptimal padding. More important is to optimize the padding. The compiler won't do that for you. This also will help utilize the memory better Bad: ``` struct bad { bool b; / 0 / / 1..7 pad / void p; / 8..15 */ char c; / 16 / char a[16]; / 17..33 / / 34..39 pad / int64_t ii; / 40..47 / int32_t i; / 48..51 / / 52..55 pad / int64_t iii; / 56..63 / }; ``` Good: ``` struct good { int64_t ii; / explicit 64-bit types / void p; / may be 64- or 32-bit */ long l; / may be 64- or 32-bit / int i; / 32-bit / short s; / 16-bit / char c; / 8-bit / bool b; / 8-bit / char a[1024]; ); ``` Make sure the items with the most stringent alignment requirements will need to come earliest (ie, pointers and perhaps uint64_t etc), and those with less stringent alignment requirements at the end (uint16/uint8 and char). Also note that the long array (if any) should be at the end of the structure, regardless of the type. Also note, if your structure's overall size is crossing 1k-4k limit, it is recommended to mention the reason why the particular structure needs so much memory as a comment at the top. Use \\typename for struct tags and typename\\t for typedefs Being consistent here makes it possible to automate navigation from use of a type to its true definition (not just the typedef). Bad: ``` struct thing {...}; struct thing_t" }, { "data": "typedef struct _thing thing; ``` Good: ``` typedef struct thing {...} thingt; ``` No double underscores Identifiers beginning with double underscores are supposed to reserved for the compiler. http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf When you need to define inner/outer functions, use a different prefix/suffix. Bad: ``` void do_something (void); void do_something (void) { LOCK (); do_something (); UNLOCK (); } ``` Good: ``` void dosomethinglocked (void); ``` Only use safe pointers in initializers Some pointers, such as `this` in a fop function, can be assumed to be non-NULL. However, other parameters and further-derived values might be NULL. 
Good: ``` pid_t pid = frame->root->pid; ``` Bad: ``` datat *mydata = dict_get (xdata, \"fubar\"); ``` No giant stack allocations -- Synctasks have small finite stacks. To avoid overflowing these stacks, avoid allocating any large data structures on the stack. Use dynamic allocation instead. Bad: ``` gfbooleant port_inuse[65536]; / 256KB, this actually happened / ``` NOTE: Ideal is to limit the stack array to less than 256 bytes. Character array initializing It is recommended to keep the character array initializing to empty string. Good: ``` char msg[1024] = \"\"; ``` Not so much recommended, even though it means the same. ``` char msg[1024] = {0,}; ``` We recommend above to structure initialization. Validate all arguments to a function All pointer arguments to a function must be checked for `NULL`. A macro named `GFVALIDATEOR_GOTO` (in `common-utils.h`) takes two arguments; if the first is `NULL`, it writes a log message and jumps to a label specified by the second aergument after setting errno appropriately. There are several variants of this function for more specific purposes, and their use is recommended. Bad: ``` / top of function / ret = dict_get (xdata, ...) ``` Good: ``` / top of function / GFVALIDATEOR_GOTO(xdata,out); ret = dict_get (xdata, ...) ``` Never rely on precedence of operators Never write code that relies on the precedence of operators to execute correctly. Such code can be hard to read and someone else might not know the precedence of operators as accurately as you do. This includes precedence of increment/decrement vs. field/subscript. The only exceptions are arithmetic operators (which have had defined precedence since before computers even existed) and boolean negation. Bad: ``` if (op_ret == -1 && errno != ENOENT) ++foo->bar / incrementing foo, or incrementing foo->bar? / a && b || !c ``` Good: ``` if ((op_ret == -1) && (errno != ENOENT)) (++foo)->bar ++(foo->bar) (a && b) || !c a && (b || !c) ``` Use exactly matching types -- Use a variable of the exact type declared in the manual to hold the return value of a function. Do not use an 'equivalent' type. Bad: ``` int len = strlen (path); ``` Good: ``` size_t len = strlen (path); ``` Avoid code such as `foo->bar->baz`; check every pointer Do not write code that blindly follows a chain of pointer" }, { "data": "Any pointer in the chain may be `NULL` and thus cause a crash. Verify that each pointer is non-null before following it. Even if `foo->bar` has been checked and is known safe, repeating it can make code more verbose and less clear. This rule includes `[]` as well as `->` because both dereference pointers. Bad: ``` foo->bar->field1 = value1; xyz = foo->bar->field2 + foo->bar->field3 * foo->bar->field4; foo->bar[5].baz ``` Good: ``` my_bar = foo->bar; if (!my_bar) ... return; my_bar->field1 = value1; xyz = mybar->field2 + mybar->field3 * my_bar->field4; ``` Document unchecked return values In general, return values should be checked. If a function is being called for its side effects and the return value really doesn't matter, an explicit cast to void is required (to keep static analyzers happy) and a comment is recommended. Bad: ``` close (fd); doimportantthing (); ``` Good (or at least OK): ``` (void) sleep (1); ``` Gracefully handle failure of malloc (and other allocation functions) -- GlusterFS should never crash or exit due to lack of memory. If a memory allocation fails, the call should be unwound and an error returned to the user. 
Use result args and reserve the return value to indicate success or failure: The return value of every functions must indicate success or failure (unless it is impossible for the function to fail e.g., boolean functions). If the function needs to return additional data, it must be returned using a result (pointer) argument. Bad: ``` int32t dictgetint32 (dictt this, char key); ``` Good: ``` int dictgetint32 (dictt *this, char *key, int32t *val); ``` Always use the 'n' versions of string functions -- Unless impossible, use the length-limited versions of the string functions. Bad: ``` strcpy (entrypath, realpath); ``` Good: ``` strncpy (entrypath, realpath, entrypathlen); ``` Do not use memset prior to sprintf/snprintf/vsnprintf etc... snprintf(and other similar string functions) terminates the buffer with a '\\0'(null character). Hence, there is no need to do a memset before using snprintf. (Of course you need to account one extra byte for the null character in your allocation). Note: Similarly if you are doing pre-memory allocation for the buffer, use GFMALLOC instead of GFCALLOC, since the later is bit costlier. Bad: ``` char buffer[x]; memset (buffer, 0, x); bytes_read = snprintf (buffer, sizeof buffer, \"bad standard\"); ``` Good: ``` char buffer[x]; bytes_read = snprintf (buffer, sizeof (buffer), \"good standard\"); ``` And it is always to good initialize the char array if the string is static. E.g. ``` char buffer[] = \"good standard\"; ``` No dead or commented code There must be no dead code (code to which control can never be passed) or commented out code in the codebase. Function length or Keep functions small We live in the UNIX-world where modules do one thing and do it well. This rule should apply to our functions also. If a function is very long, try splitting it into many little helper" }, { "data": "The question is, in a coding spree, how do we know a function is long and unreadable. One rule of thumb given by Linus Torvalds is that, a function should be broken-up if you have 4 or more levels of indentation going on for more than 3-4 lines. Example for a helper function: ``` static int sameowner (posixlockt *l1, posixlock_t *l2) { return ((l1->clientpid == l2->clientpid) && (l1->transport == l2->transport)); } ``` Define functions as static -- Declare functions as static unless they're exposed via a module-level API for use from other modules. No nested functions Nested functions have proven unreliable, e.g. as callbacks in code that uses ucontext (green) threads, Use inline functions instead of macros whenever possible -- Inline functions enforce type safety; macros do not. Use macros only for things that explicitly need to be type-agnostic (e.g. cases where one might use generics or templates in other languages), or that use other preprocessor features such as `#` for stringification or `##` for token pasting. In general, \"static inline\" is the preferred form. Avoid copypasta Code that is copied and then pasted into multiple functions often creates maintenance problems later, e.g. updating all but one instance for a subsequent change. If you find yourself copying the same \"boilerplate\" many places, consider refactoring to use helper functions (including inline) or macros, or code generation. Ensure function calls wrap around after 80-columns -- Place remaining arguments on the next line if needed. Functions arguments and function definition Place all the arguments of a function definition on the same line until the line goes beyond 80-cols. 
Arguments that extend beyind 80-cols should be placed on the next line. Style issues Use K&R/Linux style of brace placement for blocks. Good: ``` int some_function (...) { if (...) { / ... / } else if (...) { / ... / } else { / ... / } do { / ... / } while (cond); } ``` Use eight spaces for indenting blocks. Ensure that your file contains only spaces and not tab characters. You can do this in Emacs by selecting the entire file (`C-x h`) and running `M-x untabify`. To make Emacs indent lines automatically by eight spaces, add this line to your `.emacs`: ``` (add-hook 'c-mode-hook (lambda () (c-set-style \"linux\"))) ``` Write a comment before every function describing its purpose (one-line), its arguments, and its return value. Mention whether it is an internal function or an exported function. Write a comment before every structure describing its purpose, and write comments about each of its members. Follow the style shown below for comments, since such comments can then be automatically extracted by doxygen to generate documentation. Good: ``` / hash_name -hash function for filenames @par: parent inode number @name: basename of inode @mod: number of buckets in the hashtable @return: success: bucket number failure: -1 Not for external" }, { "data": "*/ ``` To clearly show regions of code which execute with locks held, use the following format: ``` pthreadmutexlock (&mutex); { / code / } pthreadmutexunlock (&mutex); ``` Even around single statements. Bad: ``` if (condition) action (); if (condition) action (); ``` Good: ``` if (condition) { action (); } ``` These can be hard to read and even harder to modify later. Predicate functions and helper variables are always better for maintainability. Bad: ``` if ((thing1 && othercomplexcondition (thing1, lots, of, args)) || (!thing2 || evenmorecomplex_condition (thing2)) || allsortsofstuffwith_thing3) { return; } ``` Better: ``` thing1_ok = predicate1 (thing1, lots, of, args thing2_ok = predicate2 (thing2); thing3_ok = predicate3 (thing3); if (!thing1ok || !thing2ok || !thing3_ok) { return; } ``` Best: ``` if (thing1 && othercomplexcondition (thing1, lots, of, args)) { return; } if (!thing2 || evenmorecomplex_condition (thing2)) { / Note potential for a different message here. / return; } if (allsortsofstuffwith_thing3) { / And here too. / return; } ``` If a value isn't supposed/expected to change, there's no cost to adding a 'const' keyword and it will help prevent violation of expectations. Almost all state in Gluster is contextual and should be contained in the appropriate structure reflecting its scope (e.g. `call\\frame\\t`, `call\\stack\\t`, `xlator\\t`, `glusterfs\\ctx\\_t`). With dynamic loading and graph switches in play, each global requires careful consideration of when it should be initialized or reinitialized, when it might accidentally be reinitialized, when its value might become stale, and so on. A few global variables are needed to serve as 'anchor points' for these structures, and more exceptions to the rule might be approved in the future, but new globals should not be added to the codebase without explicit approval. This is the recommended template for any fop. In the beginning come the initializations. After that, the 'success' control flow should be linear. Any error conditions should cause a `goto` to a label at the end. By convention this is 'out' if there is only one such label, but a cascade of such labels is allowable to support multi-stage cleanup. 
At that point, the code should detect the error that has occurred and do appropriate cleanup. ``` int32_t samplefop (callframet *frame, xlatort *this, ...) { char * var1 = NULL; int32t opret = -1; int32t operrno = 0; DIR * dir = NULL; struct posix_fd * pfd = NULL; VALIDATEORGOTO (frame, out); VALIDATEORGOTO (this, out); / other validations / dir = opendir (...); if (dir == NULL) { op_errno = errno; gflog (this->name, GFLOG_ERROR, \"opendir failed on %s (%s)\", loc->path, strerror (op_errno)); goto out; } / another system call / if (...) { op_errno = ENOMEM; gflog (this->name, GFLOG_ERROR, \"out of memory :(\"); goto out; } / ... / out: if (op_ret == -1) { /* check for all the cleanup that needs to be done */ if (dir) { closedir (dir); dir = NULL; } if (pfd) { FREE (pfd->path); FREE (pfd); pfd = NULL; } } STACKUNWIND (frame, opret, op_errno, fd); return 0; } ```" } ]
{ "category": "Runtime", "file_name": "coding-standard.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete a key ``` cilium-dbg kvstore delete [options] <key> [flags] ``` ``` cilium kvstore delete --recursive foo ``` ``` -h, --help help for delete --recursive Recursive lookup ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API --kvstore string Key-Value Store type --kvstore-opt map Key-Value Store options ``` - Direct access to the kvstore" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_kvstore_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display the current encryption state ``` cilium-dbg encrypt status [flags] ``` ``` -h, --help help for status -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage transparent encryption" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_encrypt_status.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "CRI-O is a lightweight runtime for Kubernetes that provides the CRI (Container Runtime Interface) socket required for automating the deployment, scaling, and management of containerized applications. It serves as an alternative to Docker within Kubernetes environments. However, it's important to note that CRI-O is not a drop-in replacement for Docker, and there are some differences in functionality and tooling. The CRI is a standardized interface between Kubernetes and container runtimes, allowing Kubernetes to manage and interact with containers. It defines a set of operations and APIs that Kubernetes uses to create, start, stop, and delete containers. CRI-O implements this interface and provides the necessary functionalities for Kubernetes to work seamlessly. When transitioning from Docker to CRI-O, it's crucial to understand that many traditional Docker commands and tools may not directly apply to CRI-O. While some equivalents exist, such as `crictl` (a command-line utility that serves as a client for the Container Runtime Interface (CRI)), they are primarily focused on fulfilling the requirements of the Kubernetes CRI. For operational tasks and troubleshooting within a Kubernetes environment, it is recommended to leverage additional tools like . These tools offer a feature-rich set of commands that can address various operational needs. However, it's important to note that direct interaction with CRI-O containers using Podman commands is not possible. While images can be shared between Podman and CRI-O, containers themselves cannot be directly managed or interacted with across these tools. To interact with CRI-O containers, you should use tools that interface with the CRI, such as `crictl`. Many traditional tools will still be useful, such as `pstree`, `nsenter` and `lsns`. As well as some systemd helpers like `systemd-cgls` and `systemd-cgtop` are still just as applicable. If you are primarily interested in debugging containers and require a tool that offers extensive command-line capabilities, Podman is a viable alternative. Podman is a daemonless container engine that provides a command-line interface similar to Docker. It can run containers, manage container images, and perform various container-related operations. While Podman and CRI-O are separate projects with different purposes, Podman offers a more comprehensive set of commands that can facilitate debugging and troubleshooting tasks within a containerized environment. You can use Podman commands to perform actions like executing commands within a container (`podman exec`), inspecting container metadata (`podman inspect`), viewing container logs (`podman logs`), and many others. It's important to note that Podman and CRI-O are not interchangeable. Podman is a standalone container engine that operates independently of Kubernetes, while CRI-O is specifically designed for Kubernetes environments. However, Podman can be a valuable tool when it comes to container debugging and development workflows. For many troubleshooting and information collection steps, there may be an existing pattern. Following provides equivalent with CRI-O tools for gathering information or jumping into containers, for operational use. 
| Existing Step | CRI-O (and friends) | |:-:|:--:| | `docker exec` | `crictl exec` | | `docker inspect` | `podman inspect` | | `docker logs` | `podman logs` | | `docker ps` | `crictl ps` or `podman ps` | | `docker stats` | `podman stats` | If you were already using steps like `kubectl exec` (or `oc exec` on OpenShift), they will continue to function the same way. In summary, CRI-O is a lightweight runtime that implements the CRI interface for Kubernetes, providing container management capabilities within a Kubernetes environment. While it's not a direct replacement for Docker, it offers compatibility and integration with Kubernetes. For operational tasks, it is recommended to utilize additional tools like Podman, which provides a more extensive command-line interface for container debugging and troubleshooting." } ]
{ "category": "Runtime", "file_name": "transfer.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "Progress represents a followers progress in the view of the leader. Leader maintains progresses of all followers, and sends `replication message` to the follower based on its progress. `replication message` is a `msgApp` with log entries. A progress has two attribute: `match` and `next`. `match` is the index of the highest known matched entry. If leader knows nothing about followers replication status, `match` is set to zero. `next` is the index of the first entry that will be replicated to the follower. Leader puts entries from `next` to its latest one in next `replication message`. A progress is in one of the three state: `probe`, `replicate`, `snapshot`. ``` +--+ | send snapshot | | | ++-+ +-v+ +> probe | | snapshot | | | max inflight = 1 <-+ max inflight = 0 | | ++-+ +--+ | | 1. snapshot success | | (next=snapshot.index + 1) | | 2. snapshot failure | | (no change) | | 3. receives msgAppResp(rej=false&&index>lastsnap.index) | | (match=m.index,next=match+1) receives msgAppResp(rej=true) (next=match+1)| | | | | | | | receives msgAppResp(rej=false&&index>match) | | (match=m.index,next=match+1) | | | | | | | +v-+ | | replicate | ++ max inflight = n | +--+ ``` When the progress of a follower is in `probe` state, leader sends at most one `replication message` per heartbeat interval. The leader sends `replication message` slowly and probing the actual progress of the follower. A `msgHeartbeatResp` or a `msgAppResp` with reject might trigger the sending of the next `replication message`. When the progress of a follower is in `replicate` state, leader sends `replication message`, then optimistically increases `next` to the latest entry sent. This is an optimized state for fast replicating log entries to the follower. When the progress of a follower is in `snapshot` state, leader stops sending any `replication message`. A newly elected leader sets the progresses of all the followers to `probe` state with `match` = 0 and `next` = last index. The leader slowly (at most once per heartbeat) sends `replication message` to the follower and probes its progress. A progress changes to `replicate` when the follower replies with a non-rejection `msgAppResp`, which implies that it has matched the index sent. At this point, leader starts to stream log entries to the follower fast. The progress will fall back to `probe` when the follower replies a rejection `msgAppResp` or the link layer reports the follower is unreachable. We aggressively reset `next` to `match`+1 since if we receive any `msgAppResp` soon, both `match` and `next` will increase directly to the `index` in `msgAppResp`. (We might end up with sending some duplicate entries when aggressively reset `next` too low. see open question) A progress changes from `probe` to `snapshot` when the follower falls very far behind and requires a snapshot. After sending `msgSnap`, the leader waits until the success, failure or abortion of the previous snapshot sent. The progress will go back to `probe` after the sending result is applied. limit the max size of message sent per message. Max should be configurable. Lower the cost at probing state as we limit the size per message; lower the penalty when aggressively decreased to a too low `next` limit the # of in flight messages < N when in `replicate` state. N should be configurable. Most implementation will have a sending buffer on top of its actual network transport layer (not blocking raft node). 
We want to make sure raft does not overflow that buffer, which could cause messages to be dropped and trigger a lot of unnecessary resending." } ]
{ "category": "Runtime", "file_name": "design.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Cobra supports native Zsh completion generated from the root `cobra.Command`. The generated completion script should be put somewhere in your `$fpath` named `_<YOUR COMMAND>`. Completion for all non-hidden subcommands using their `.Short` description. Completion for all non-hidden flags using the following rules: Filename completion works by marking the flag with `cmd.MarkFlagFilename...` family of commands. The requirement for argument to the flag is decided by the `.NoOptDefVal` flag value - if it's empty then completion will expect an argument. Flags of one of the various `Array` and `*Slice` types supports multiple specifications (with or without argument depending on the specific type). Completion of positional arguments using the following rules: Argument position for all options below starts at `1`. If argument position `0` is requested it will raise an error. Use `command.MarkZshCompPositionalArgumentFile` to complete filenames. Glob patterns (e.g. `\"*.log\"`) are optional - if not specified it will offer to complete all file types. Use `command.MarkZshCompPositionalArgumentWords` to offer specific words for completion. At least one word is required. It's possible to specify completion for some arguments and leave some unspecified (e.g. offer words for second argument but nothing for first argument). This will cause no completion for first argument but words completion for second argument. If no argument completion was specified for 1st argument (but optionally was specified for 2nd) and the command has `ValidArgs` it will be used as completion options for 1st argument. Argument completions only offered for commands with no subcommands. Custom completion scripts are not supported yet (We should probably create zsh specific one, doesn't make sense to re-use the bash one as the functions will be different). Whatever other feature you're looking for and doesn't exist :)" } ]
{ "category": "Runtime", "file_name": "zsh_completions.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-operator-azure completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-operator-azure completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-azure_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Container deployments utilize explicit or implicit file sharing between host filesystem and containers. From a trust perspective, avoiding a shared file-system between the trusted host and untrusted container is recommended. This is not always feasible. In Kata Containers, block-based volumes are preferred as they allow usage of either device pass through or `virtio-blk` for access within the virtual machine. As of the 2.0 release of Kata Containers, is the default filesystem sharing mechanism. virtio-fs support works out of the box for `cloud-hypervisor` and `qemu`, when Kata Containers is deployed using `kata-deploy`. Learn more about `kata-deploy` and how to use `kata-deploy` in Kubernetes ." } ]
{ "category": "Runtime", "file_name": "how-to-use-virtio-fs-with-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Run cilium-operator ``` cilium-operator [flags] ``` ``` --alibaba-cloud-vpc-id string Specific VPC ID for AlibabaCloud ENI. If not set use same VPC as operator --auto-create-cilium-pod-ip-pools map Automatically create CiliumPodIPPool resources on startup. Specify pools in the form of <pool>=ipv4-cidrs:<cidr>,[<cidr>...];ipv4-mask-size:<size> (multiple pools can also be passed by repeating the CLI flag) --aws-enable-prefix-delegation Allows operator to allocate prefixes to ENIs instead of individual IP addresses --aws-instance-limit-mapping map Add or overwrite mappings of AWS instance limit in the form of {\"AWS instance type\": \"Maximum Network Interfaces\",\"IPv4 Addresses per Interface\",\"IPv6 Addresses per Interface\"}. cli example: --aws-instance-limit-mapping=a1.medium=2,4,4 --aws-instance-limit-mapping=a2.somecustomflavor=4,5,6 configmap example: {\"a1.medium\": \"2,4,4\", \"a2.somecustomflavor\": \"4,5,6\"} --aws-release-excess-ips Enable releasing excess free IP addresses from AWS ENI. --aws-use-primary-address Allows for using primary address of the ENI for allocations on the node --azure-resource-group string Resource group to use for Azure IPAM --azure-subscription-id string Subscription ID to access Azure API --azure-use-primary-address Use Azure IP address from interface's primary IPConfigurations --azure-user-assigned-identity-id string ID of the user assigned identity used to auth with the Azure API --bgp-announce-lb-ip Announces service IPs of type LoadBalancer via BGP --bgp-config-path string Path to file containing the BGP configuration (default \"/var/lib/cilium/bgp/config.yaml\") --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cilium-endpoint-gc-interval duration GC interval for cilium endpoints (default 5m0s) --cilium-pod-labels string Cilium Pod's labels. Used to detect if a Cilium pod is running to remove the node taints where its running and set NetworkUnavailable to false (default \"k8s-app=cilium\") --cilium-pod-namespace string Name of the Kubernetes namespace in which Cilium is deployed in. Defaults to the same namespace defined in k8s-namespace --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --cluster-pool-ipv4-cidr strings IPv4 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv4=true' --cluster-pool-ipv4-mask-size int Mask size for each IPv4 podCIDR per node. 
Requires 'ipam=cluster-pool' and 'enable-ipv4=true' (default 24) --cluster-pool-ipv6-cidr strings IPv6 CIDR Range for Pods in cluster. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' --cluster-pool-ipv6-mask-size int Mask size for each IPv6 podCIDR per node. Requires 'ipam=cluster-pool' and 'enable-ipv6=true' (default 112) --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network)" }, { "data": "(default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --cnp-status-cleanup-burst int Maximum burst of requests to clean up status nodes updates in CNPs (default 20) --cnp-status-cleanup-qps float Rate used for limiting the clean up of the status nodes updates in CNP, expressed as qps (default 10) --config string Configuration file (default \"$HOME/ciliumd.yaml\") --config-dir string Configuration directory that contains a file for each option --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. -D, --debug Enable debugging mode --ec2-api-endpoint string AWS API endpoint for the EC2 service --enable-cilium-endpoint-slice If set to true, the CiliumEndpointSlice feature is enabled. If any CiliumEndpoints resources are created, updated, or deleted in the cluster, all those changes are broadcast as CiliumEndpointSlice updates to all of the Cilium agents. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. 
--enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-ipv4 Enable IPv4 support (default true) --enable-ipv6 Enable IPv6 support (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-metrics Enable Prometheus metrics --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --eni-gc-interval duration Interval for garbage collection of unattached ENIs. Set to 0 to disable (default 5m0s) --eni-gc-tags map Additional tags attached to ENIs created by" }, { "data": "Dangling ENIs with this tag will be garbage collected --eni-tags map ENI tags in the form of k1=v1 (multiple k/v pairs can be passed by repeating the CLI flag) --excess-ip-release-delay int Number of seconds operator would wait before it releases an IP previously marked as excess (default 180) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host network. --gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for cilium-operator --identity-allocation-mode string Method to use for identity allocation (default \"kvstore\") --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 
'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. (default \"cilium-ingress\") --instance-tags-filter map EC2 Instance tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --ipam string Backend to use for IPAM (default \"cluster-pool\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-namespace string Name of the Kubernetes namespace in which Cilium Operator is deployed in --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without" }, { "data": "label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --kvstore string Key-value store type --kvstore-opt map Key-value store options e.g. etcd.address=127.0.0.1:4001 --leader-election-lease-duration duration Duration that non-leader operator candidates will wait before forcing to acquire leadership (default 15s) --leader-election-renew-deadline duration Duration that current acting master will retry refreshing leadership in before giving up the lock (default 10s) --leader-election-retry-period duration Duration that LeaderElector clients should wait between retries of the actions (default 2s) --limit-ipam-api-burst int Upper burst limit when accessing external APIs (default 20) --limit-ipam-api-qps float Queries per second limit when accessing external IPAM APIs (default 4) --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --log-driver strings Logging endpoints to use for example syslog --log-opt map Log driver options for cilium-operator, configmap example for syslog driver: {\"syslog.level\":\"info\",\"syslog.facility\":\"local4\"} --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. 
(default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --nodes-gc-interval duration GC interval for CiliumNodes (default 5m0s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --parallel-alloc-workers int Maximum number of parallel IPAM workers (default 50) --pod-restart-selector string cilium-operator will delete/restart any pods with these labels if the pod is not managed by Cilium. If this option is empty, then all pods may be restarted (default \"k8s-app=kube-dns\") --remove-cilium-node-taints Remove node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes once Cilium is up and running (default true) --set-cilium-is-up-condition Set CiliumIsUp Node condition to mark a Kubernetes Node that a Cilium pod is up and running in that node (default true) --set-cilium-node-taints Set node taint \"node.cilium.io/agent-not-ready\" from Kubernetes nodes if Cilium is scheduled but not up and running --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created --subnet-ids-filter strings Subnets IDs (separated by commas) --subnet-tags-filter map Subnets tags in the form of k1=v1,k2=v2 (multiple k/v pairs can also be passed by repeating the CLI flag --synchronize-k8s-nodes Synchronize Kubernetes nodes to kvstore and perform CNP GC (default true) --synchronize-k8s-services Synchronize Kubernetes services to kvstore (default true) --unmanaged-pod-watcher-interval int Interval to check for unmanaged kube-dns pods (0 to disable) (default 15) --update-ec2-adapter-limit-via-api Use the EC2 API to update the instance type to adapter limits (default true) --version Print version information ``` - Generate the autocompletion script for the specified shell - Inspect the hive - Access metric status of the operator - Display status of operator - Run troubleshooting utilities to check control-plane connectivity" } ]
{ "category": "Runtime", "file_name": "cilium-operator.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | ||--|-|-|--|-| | A00001 | It fails to run a pod with different VLANs for IPv4 and IPv6 IPPools | p3 | | done | | | A00002 | Added fields such as `\"dist\":\"1.0.0.0/16\"`, `\"gw\":\"1.0.0.1\"`, and `nics` and the pod was running successfully | p2 | | done | | | A00003 | Failed to run a pod with invalid annotations | p3 | | done | | | A00004 | Take a test with the Priority: pod annotation > namespace annotation > specified in a CNI profile | p1 | | done | | | A00005 | The \"IPPools\" annotation has the higher Priority over the \"IPPool\" annotation | p1 | | done | | | A00006 | The namespace annotation has precedence over global default IPPool | p1 | true | done | | | A00007 | Use wildcard for namespace annotation to specify IPPools | p1 | true | done | | | A00008 | Successfully run an annotated multi-container pod | p2 | | done | | | A00009 | Modify the annotated IPPool for a specified Deployment pod<br />Modify the annotated IPPool for a specified StatefulSet pod | p2 | | done | | | A00010 | Modify the annotated IPPool for a pod running on multiple NICs | p3 | | done | | | A00011 | Use the ippool route with `cleanGateway=false` in the pod annotation as a default route | p3 | | done | | | A00012 | Specify the default route NIC through Pod annotation: `ipam.spidernet.io/default-route-nic` | p2 | | done | | | A00013 | It's invalid to specify one NIC corresponding IPPool in IPPools annotation with multiple NICs | p2 | | done | | | A00014 | It's invalid to specify same NIC name for IPPools annotation with multiple NICs | p2 | | done | | | A00015 | Use wildcard for 'ipam.spidernet.io/ippools' annotation to specify IPPools | p2 | | done | |" } ]
{ "category": "Runtime", "file_name": "annotation.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "The `footprint_data.sh` script runs a number of identical containers sequentially via ctr and takes a number of memory related measurements after each launch. The script is generally not used in a CI type environment, but is intended to be run and analyzed manually. You can configure the script by setting a number of environment variables. The following sections list details of the configurable variables, along with a small example invocation script. Environment variables can take effect in two ways. Some variables affect how the payload is executed. The `RUNTIME` and `PAYLOAD` arguments directly affect the payload execution with the following line in the script: `$ ctr run --memory-limit $PAYLOADRUNTIMEARGS --rm --runtime=$CONTAINERDRUNTIME $PAYLOAD $NAME sh -c $PAYLOADARGS` Other settings affect how memory footprint is measured and the test termination conditions. | Variable | Function | -- | -- | `PAYLOAD` | The ctr image to run | `PAYLOAD_ARGS` | Any arguments passed into the ctr image | `PAYLOADRUNTIMEARGS` | Any extra arguments passed into the ctr `run` command | `PAYLOAD_SLEEP` | Seconds to sleep between launch and measurement, to allow settling | `MAXNUMCONTAINERS` | The maximum number of containers to run before terminating | `MAXMEMORYCONSUMED` | The maximum amount of memory to be consumed before terminating | `MINMEMORYFREE` | The minimum amount of memory allowed to be free before terminating | `DUMP_CACHES` | A flag to note if the system caches should be dumped before capturing stats | `DATAFILE` | Can be set to over-ride the default JSON results filename The names of the JSON files generated by the test are dictated by some of the parameters the test is utilising. The default filename is generated in the form of: `footprint-${PAYLOAD}[-ksm].json` The test measures, calculates, and stores a number of data items: | Item | Description | - | -- | `uss` | USS for all the VM runtime components | `pss` | PSS for all the VM runtime components | `all_pss` | PSS of all of userspace - to monitor if we had other impact on the system | `user_smem` | `smem` \"userspace\" consumption value | `avail` | \"available\" memory from `free` | `avail_decr` | \"available\" memory decrease since start of test | `cached` | \"Cached\" memory from `/proc/meminfo` | `smem_free` | Free memory as reported by `smem` | `free_decr` | Decrease in Free memory reported by `smem` since start of test | `anon` | `AnonPages` as reported from `/proc/meminfo` | `mapped` | Mapped pages as reported from `/proc/meminfo` | `cached` | Cached pages as reported from `/proc/meminfo` | `slab` | Slab as reported from `/proc/meminfo` The following script is an example of how to configure the environment variables and invoke the test script to run a number of different container tests. ``` set -e set -x export MAXNUMCONTAINERS=10 export MAXMEMORYCONSUMED=610241024*1024 function run() { export PAYLOAD=\"quay.io/prometheus/busybox:latest\" export PAYLOAD_ARGS=\"tail -f /dev/null\" export PAYLOAD_SLEEP=10 export PAYLOADRUNTIMEARGS=\"5120\" sudo -E bash $(pwd)/density/footprint_data.sh } export CONTAINERD_RUNTIME=io.containerd.kata.v2 run ```" } ]
{ "category": "Runtime", "file_name": "footprint_data.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "English | This upgrade guide is intended for Spiderpool running on Kubernetes. If you have questions, feel free to ping us on . Read the full upgrade guide to understand all the necessary steps before performing them. When rolling out an upgrade with Kubernetes, Kubernetes will first terminate the pod followed by pulling the new image version and then finally spin up the new image. In order to reduce the downtime of the agent and to prevent ErrImagePull errors during upgrade. You can refer to the following command to pull the corresponding version of the image in advance. ```bash docker pull ghcr.io/spidernet-io/spiderpool/spiderpool-agent:[upgraded-version] docker pull ghcr.io/spidernet-io/spiderpool/spiderpool-controller:[upgraded-version] docker pull ghcr.m.daocloud.io/spidernet-io/spiderpool/spiderpool-agent:[upgraded-version] docker pull ghcr.m.daocloud.io/spidernet-io/spiderpool/spiderpool-controller:[upgraded-version] ``` It is recommended to always upgrade to the latest and maintained patch version of Spiderpool. Check to learn about the latest supported patch versions. Make sure you have installed. Setup Helm repository and update ```bash helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool ``` Remove spiderpool-init Pod `spiderpool-init` Pod will help initialize environment information, and it will be in `complete` state after each run. During `helm upgrade`, since `spiderpool-init` is essentially a Pod, patching some resources will fail. So delete it via `kubectl delete spiderpool-init` before upgrading. ```bash Error: UPGRADE FAILED: cannot patch \"spiderpool-init\" with kind Pod: Pod \"spiderpool-init\" is invalid: spec: Forbidden: pod updates may not change fields other than `spec.containers[].image`,`spec.initContainers[].image`,`spec.activeDeadlineSeconds`,`spec.tolerations` (only additions to existing tolerations),`spec.terminationGracePeriodSeconds` (allow it to be set to 1 if it was previously negative) ``` Upgrade via `helm upgrade` ```bash helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] ``` You can use `--set` to update the Spiderpool configuration when upgrading. For available values parameters, please see the documentation. The following example shows how to enable Spiderpool's ```bash helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] --set ipam.spidersubnet.enable=true ``` You can also use `--reuse-values` to reuse the values from the previous release and merge any overrides from the command line. However, it is only safe to use the `--reuse-values` flag if the Spiderpool chart version remains unchanged, e.g. when using helm upgrade to change the Spiderpool configuration without upgrading the Spiderpool components. For `--reuse-values` usage, see the following example: ```bash helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] --set ipam.spidersubnet.enable=true --reuse-values ``` Conversely, if the Spiderpool chart version has changed and you want to reuse the values from the existing installation, save the old values in a values file, check that file for any renamed or deprecated values, and pass it to helm upgrade command, you can retrieve and save values from existing installations using. 
```bash helm get values spiderpool --namespace=kube-system -o yaml > old-values.yaml helm upgrade spiderpool spiderpool/spiderpool -n kube-system --version [upgraded-version] -f old-values.yaml ``` Occasionally, it may be necessary to undo the rollout because a step was missed or something went wrong during upgrade. To undo the rollout run: ```bash helm history spiderpool --namespace=kube-system helm rollback spiderpool [REVISION] --namespace=kube-system ``` The following upgrade notes will be updated on a rolling basis with the release of new versions. They will have a priority relationship (from old to new). If your current version meets any one of them, when upgrading, you need to check in order from that item to Latest on every note. In versions lower than 0.3.6, `-` is used as a separator for delimiter for autopool names. It was ultimately difficult to extract it to trace the namespace and name of the application to which the autopool" }, { "data": "The SpiderSubnet functionality in these releases was flawed by design, and has been modified and optimised in the latest patch releases, as well as supporting multiple network interfaces for the SpiderSubnet functionality in releases from 0.3.6 onwards. As mentioned above, the names of the new auto pools created in the new release have been changed, e.g., the IPv4 auto pool corresponding to application `kube-system/test-app` is `auto4-test-app-eth0-40371`. At the same time, the auto pool is marked with some labels as follows. ```bash metadata: labels: ipam.spidernet.io/interface: eth0 ipam.spidernet.io/ip-version: IPv4 ipam.spidernet.io/ippool-cidr: 172-100-0-0-16 ipam.spidernet.io/ippool-reclaim: \"true\" ipam.spidernet.io/owner-application-gv: apps_v1 ipam.spidernet.io/owner-application-kind: DaemonSet ipam.spidernet.io/owner-application-name: test-app ipam.spidernet.io/owner-application-namespace: kube-system ipam.spidernet.io/owner-application-uid: 2f78ccdd-398e-49e6-a85b-40371db6fdbd ipam.spidernet.io/owner-spider-subnet: vlan100-v4 spec: podAffinity: matchLabels: ipam.spidernet.io/app-api-group: apps ipam.spidernet.io/app-api-version: v1 ipam.spidernet.io/app-kind: DaemonSet ipam.spidernet.io/app-name: test-app ipam.spidernet.io/app-namespace: kube-system ``` Upgrading below 0.3.6 to the latest patch version is an incompatible upgrade. If the SpiderSubnet feature is enabled, you will need to add a series of tags as described above to the stock auto pool in order to make it available to the stock auto pool, as follows: ```bash kubectl patch sp ${auto-pool} --type merge --patch '{\"metadata\": {\"labels\": {\"ipam.spidernet.io/owner-application-name\": \"test-app\"}}}' kubectl patch sp ${auto-pool} --type merge --patch '{\"metadata\": {\"labels\": {\"ipam.spidernet.io/owner-application-namespace\": \"kube-system\"}}}' ... ``` SpiderSubnet supports multiple network interfaces, you need to add the corresponding network interface `label` for the auto pool as follows: ```bash kubectl patch sp ${auto-pool} --type merge --patch '{\"metadata\": {\"labels\": {\"ipam.spidernet.io/interface\": \"eth0\"}}}}' ``` Due to architecture adjustment, `SpiderEndpoint.Status.OwnerControllerType` property is changed from `None` to `Pod`. Therefore, find all SpiderEndpoint objects with `Status.OwnerControllerType` of `None` and replace the `SpiderEndpoint.Status.OwnerControllerType` property from `None` to `Pod`. In versions higher than 0.5.0, the and functions are added. 
However, due to helm upgrade, the corresponding CRDs cannot be automatically installed: `spidercoordinators.spiderpool.spidernet.io` and `spidermultusconfigs.spiderpool.spidernet.io`. Therefore, before upgrading, you can obtain the latest stable version through the following commands, decompress the chart package and apply all CRDs. ```bash ~# helm search repo spiderpool --versions ~# helm fetch spiderpool/spiderpool --version [upgraded-version] ~# tar -xvf spiderpool-[upgraded-version].tgz && cd spiderpool/crds ~# ls | grep '\\.yaml$' | xargs -I {} kubectl apply -f {} ``` In versions below 0.7.3, Spiderpool will enable a set of DaemonSet: `spiderpool-multus` to manage Multus related configurations. In later versions, the DaemonSet was deprecated, and the Muluts configuration was moved to `spiderpool-agent` for management. At the same time, the function of `automatically cleaning up the Muluts configuration during uninstallation` was added, which is enabled by default. Disable it by `--set multus.multusCNI.uninstall=false` when upgrading to avoid CNI configuration files, CRDs, etc. being deleted during the upgrade phase, causing Pod creation to fail. Due to the addition of the `txQueueLen` field to the in version 0.9.0, you need to manually update the CRD before upgrading as Helm does not support upgrading or deleting CRDs during the upgrade process.(We suggest skipping version 0.9.0 and upgrading directly to version 0.9.1) TODO. Due to your high availability requirements for Spiderpool, you may set multiple replicas of the spiderpool-controller Pod through `--set spiderpoolController.replicas=5` during installation. The Pod of spiderpool-controller will occupy some port addresses of the node by default. The default port Please refer to for occupancy. If your number of replicas is exactly the same as the number of nodes, then the Pod will fail to start because the node has no available ports during the upgrade. You can refer to the following Modifications can be made in two ways. When executing the upgrade command, you can change the port by appending the helm parameter `--set spiderpoolController.httpPort`, and you can change the port through and to check the ports that need to be modified. The type of spiderpool-controller is `Deployment`. You can reduce the number of replicas and restore the number" } ]
{ "category": "Runtime", "file_name": "upgrade.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "An object store is a collection of resources and services that work together to serve HTTP requests to PUT and GET objects. Rook will automate the configuration of the Ceph resources and services that are necessary to start and maintain a highly available, durable, and performant object store. The Ceph object store supports S3 and Swift APIs and a multitude of features such as replication of object stores between different zones. The Rook object store is designed to support all of these features, though will take some time to implement them. We welcome contributions! In the meantime, features that are not yet implemented can be configured by using the to run the `radosgw-admin` and other tools for advanced configuration. A Rook storage cluster must be configured and running in Kubernetes. In this example, it is assumed the cluster is in the `rook` namespace. When the storage admin is ready to create an object storage, the admin will specify his desired configuration settings in a yaml file such as the following `object-store.yaml`. This example is a simple object store with metadata that is replicated across different hosts, and the data is erasure coded across multiple devices in the cluster. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: CephObjectStore metadata: name: my-store namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 gateway: port: 80 securePort: 443 instances: 3 hosting: dnsNames: \"my-ingress.mydomain.com\" \"rook-ceph-rgw-my-store.rook-ceph.svc\" ``` Now create the object store. ```bash kubectl create -f object-store.yaml ``` At this point the Rook operator recognizes that a new object store resource needs to be configured. The operator will create all of the resources to start the object store. Metadata pools are created (`.rgw.root`, `my-store.rgw.control`, `my-store.rgw.meta`, `my-store.rgw.log`, `my-store.rgw.buckets.index`) The data pool is created (`my-store.rgw.buckets.data`) A Kubernetes service is created to provide load balancing for the RGW pod(s) A Kubernetes deployment is created to start the RGW pod(s) with the settings for the new zone The zone is modified to add the RGW pod endpoint(s) if zone is mentioned in the configuration When the RGW pods start, the object store is ready to receive the http or https requests as configured. The object store settings are exposed to Rook as a Custom Resource Definition (CRD). The CRD is the Kubernetes-native means by which the Rook operator can watch for new resources. The operator stays in a control loop to watch for a new object store, changes to an existing object store, or requests to delete an object store. The pools are the backing data store for the object store and are created with specific names to be private to an object store. Pools can be configured with all of the settings that can be specified in the . The underlying schema for pools defined by a pool CRD is the same as the schema under the `metadataPool` and `dataPool` elements of the object store CRD. All metadata pools are created with the same settings, while the data pool can be created with independent settings. The metadata pools must use replication, while the data pool can use replication or erasure" }, { "data": "If `preservePoolsOnDelete` is set to 'true' the pools used to support the object store will remain when the object store will be deleted. This is a security measure to avoid accidental loss of data. 
It is set to 'false' by default. If not specified is also deemed as 'false'. ```yaml spec: metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 preservePoolsOnDelete: true ``` If user want to use existing pools for metadata and data, the pools must be created before the object store is created. This will be useful if multiple objectstore can share same pools. The detail of pools need to shared in `sharedPools` settings in object-store CRD. Now the object stores can consume same pool isolated with different namespaces. Usually RGW server itself create different on the pools. User can create via , this is need to present before the object store is created. Similar to `preservePoolsOnDelete` setting, `preserveRadosNamespaceDataOnDelete` is used to preserve the data in the rados namespace when the object store is deleted. It is set to 'false' by default. ```yaml spec: sharedPools: metadataPoolName: rgw-meta-pool dataPoolName: rgw-data-pool preserveRadosNamespaceDataOnDelete: true ``` To create the pools that will be shared by multiple object stores, create the following CephBlockPool CRs: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: rgw-meta-pool spec: failureDomain: host replicated: size: 3 parameters: pg_num: 8 application: rgw apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: rgw-data-pool spec: failureDomain: osd erasureCoded: dataChunks: 6 codingChunks: 2 application: rgw apiVersion: ceph.rook.io/v1 kind: CephBlockPool metadata: name: .rgw.root spec: name: .rgw.root failureDomain: host replicated: size: 3 parameters: pg_num: 8 application: rgw ``` The pools for this configuration will be created as below: ```bash .rgw.root rgw-meta-pool rgw-data-pool ``` And the pool configuration in zone is as below: ```json { \"id\": \"2220eb5f-2751-4a51-9c7d-da4ce1b0e4e1\", \"name\": \"my-store\", \"domain_root\": \"rgw-meta-pool:my-store.meta.root\", \"control_pool\": \"rgw-meta-pool:my-store.control\", \"gc_pool\": \"rgw-meta-pool:my-store.log.gc\", \"lc_pool\": \"rgw-meta-pool:my-store.log.lc\", \"log_pool\": \"rgw-meta-pool:my-store.log\", \"intentlogpool\": \"rgw-meta-pool:my-store.log.intent\", \"usagelogpool\": \"rgw-meta-pool:my-store.log.usage\", \"roles_pool\": \"rgw-meta-pool:my-store.meta.roles\", \"reshard_pool\": \"rgw-meta-pool:my-store.log.reshard\", \"userkeyspool\": \"rgw-meta-pool:my-store.meta.users.keys\", \"useremailpool\": \"rgw-meta-pool:my-store.meta.users.email\", \"userswiftpool\": \"rgw-meta-pool:my-store.meta.users.swift\", \"useruidpool\": \"rgw-meta-pool:my-store.meta.users.uid\", \"otp_pool\": \"rgw-meta-pool:my-store.otp\", \"system_key\": { \"access_key\": \"\", \"secret_key\": \"\" }, \"placement_pools\": [ { \"key\": \"default-placement\", \"val\": { \"index_pool\": \"rgw-metadata-pool:my-store.buckets.index\", \"storage_classes\": { \"STANDARD\": { \"data_pool\": \"rgw-data-pool:my-store.buckets.data\" } }, \"dataextrapool\": \"rgw-data-pool:my-store.buckets.non-ec\", #only pool is not erasure coded, otherwise use different pool \"index_type\": 0, \"inline_data\": \"true\" } } ], \"realm_id\": \"65a7bf34-42d3-4344-ac40-035e160d7f9e\", \"notif_pool\": \"rgw-meta-pool:my-store.notif\" } ``` The following steps need to implement internally in Rook Operator to add this feature assuming zone and pool are already created: ```bash ``` After deleting the object store the data in the rados namespace can deleted by Rook Operator using following commands: 
```bash ``` If there is a `zone` section in object-store configuration, then the pool creation will configured by the . The `CephObjectStore` CR will include below section to specify the zone name. ```yaml spec: zone: name: zone1 ``` The gateway settings correspond to the RGW service. `type`: Can be `s3`. In the future support for `swift` can be added. `sslCertificateRef`: If specified, this is the name of the Kubernetes secret that contains the SSL certificate to be used for secure connections to the object store. The secret must be in the same namespace as the Rook cluster. If it is an opaque Kubernetes Secret, Rook will look in the secret provided at the `cert` key" }, { "data": "The value of the `cert` key must be in the format expected by the [RGW service](https://docs.ceph.com/docs/master/install/ceph-deploy/install-ceph-gateway/#using-ssl-with-civetweb): \"The server key, server certificate, and any other CA or intermediate certificates be supplied in one file. Each of these items must be in pem form.\" If the certificate is not specified, SSL will not be configured. `caBundleRef`: If specified, this is the name of the Kubernetes secret (type `opaque`) that contains ca-bundle to use. The secret must be in the same namespace as the Rook cluster. Rook will look in the secret provided at the `cabundle` key name. `port`: The service port where the RGW service will be listening (http) `securePort`: The service port where the RGW service will be listening (https) `instances`: The number of RGW pods that will be started for this object store `placement`: The rgw pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, `podAntiAffinity`, and `topologySpreadConstraints` similar to placement defined for daemons configured by the . The RGW service can be configured to listen on both http and https by specifying both `port` and `securePort`. ```yaml gateway: sslCertificateRef: my-ssl-cert-secret securePort: 443 instances: 1 ``` By default, the object store will be created independently from any other object stores and replication to another object store will not be configured. This done by creating a new Ceph realm, zone group, and zone all with the name of the new object store. If desired to configure the object store to replicate and sync data amongst object-store or Ceph clusters, the `zone` section would be required. This section enables the object store to be part of a specified ceph-object-zone. Specifying this section also ensures that the pool section in the ceph-object-zone is used for the object-store. If pools are specified for the object-store they are neither created nor deleted. `name`: name of the the object store is in. This name must be of a ceph-object-zone resource not just of a zone that has been already created. ```yaml zone: name: \"name\" ``` The Ceph Object Gateway supports integrating with LDAP for authenticating and creating users, please refer . This means that the `rgw backend user` is also required to be part of groups in the LDAP server otherwise, authentication will fail. The `rgw backend user` can be generated from `CephObjectStoreUser` or `ObjectBucketClaim` CRDs. 
For the both resources credentials are saved in Kubernetes Secrets which may not be valid with `LDAP Server`, user need to follow the steps mentioned .The following settings need to be configured in the RGW server: ``` rgw ldap binddn = rgw ldap secret = /etc/ceph/ldap/bindpass.secret rgw ldap uri = rgw ldap searchdn = rgw ldap dnattr = rgw ldap searchfilter = rgw s3 auth use ldap = true ``` So the CRD for the Ceph Object Store will be modified to include the above changes: ```yaml spec: security ldap: config: uri: ldaps://ldap-server:636 binddn: \"uid=ceph,cn=users,cn=accounts,dc=example,dc=com\" searchdn: \"cn=users,cn=accounts,dc=example,dc=com\" dnattr: \"uid\" searchfilter: \"memberof=cn=s3,cn=groups,cn=accounts,dc=example,dc=com\" credential: volumeSource: secret: secretName: object-my-store-ldap-creds defaultMode: 0600 #required ``` The `config` section includes options used for RGW wrt LDAP server. These options are strongly typed rather than string map approach since very less chance to modify in future. `uri`: It specifies the address of LDAP server to" }, { "data": "`binddn`: The bind domain for the service account used by RGW server. `searchdn`: The search domain where can it look for the user details. `dnattr`: The attribute being used in the constructed search filter to match a username, this can either be `uid` or `cn`. `searchfilter`: A generic search filter. If `dnattr` is set, this filter is `&()`'d together with the automatically constructed filter. The `credential` defines where the password for accessing ldap server should be sourced from `volumeSource`: this is a standard Kubernetes VolumeSource for the Kerberos keytab file like what is normally used to configure Volumes for a Pod. For example, a Secret or HostPath. There are two requirements for the source's content: The config file must be mountable via `subPath: password`. For example, in a Secret, the data item must be named `password`, or `items` must be defined to select the key and give it path `password`. A HostPath directory must have the `password` file. The volume or config file must have mode 0600. The CA bundle for ldap can be added to the `caBundleRef` option in `Gateway` settings: ```yaml spec: gateway: caBundleRef: #ldaps-cabundle ``` The Ceph Object Gateway supports accessing buckets using which allows accessing buckets using the bucket name as a subdomain in the endpoint. The user can configure this option manually like below: ```sh ``` Multiple hostnames can be added to the list separated by comma. Each entry must be a valid RFC-1123 hostname, and Rook Operator will perform input validation using k8s apimachinery `IsDNS1123Subdomain()`. When `rgwdnsname` is changed for an RGW cluster, all RGWs need to be restarted. To enforce this in Rook, we can apply the `--rgw-dns-name` flag, which will restart RGWs with no user action needed. This is supported from Ceph Reef release(v18.0) onwards. This is different from `customEndpoints` which is used for configuring the object store to replicate and sync data amongst Multisite. The default service endpoint for the object store `rook-ceph-rgw-my-store.rook-ceph.svc` and `customEndpoints` in `CephObjectZone` need to be added automatically by the Rook operator otherwise existing object store may impacted. Also check for deduplication in the `rgwdnsname` list if user manually add the default service endpoint. For accessing the bucket point user need to configure wildcard dns in the cluster using or in openshift cluster use . 
Same for TLS certificate, user need to configure the TLS certificate for the wildcard dns for the RGW endpoint. This option won't be enabled by default, user need to enable it by adding the `hosting` section in the `Gateway` settings: ```yaml spec: hosting: dnsNames: \"my-ingress.mydomain.com\" ``` A list of hostnames to use for accessing the bucket directly like a subdomain in the endpoint. For example if the ingress service endpoint `my-ingress.mydomain.com` added and the object store contains a bucket named `sample`, then the s3 would be `http://sample.my-ingress.mydomain.com`. More details about the feature can be found in . When this feature is enabled the endpoint in OBCs and COSI Bucket Access can be clubbed with bucket name and host name. For OBC the `BUCKETNAME` and the `BUCKETHOST` from the config map combine to `http://$BUCKETNAME.$BUCKETHOST`. For COSI Bucket Access the `bucketName` and the `endpoint` in the `BucketInfo` to `http://bucketName.endpoint`." } ]
{ "category": "Runtime", "file_name": "store.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "If the --kube-subnet-mgr argument is true, flannel reads its configuration from `/etc/kube-flannel/net-conf.json`. If the --kube-subnet-mgr argument is false, flannel reads its configuration from etcd. By default, it will read the configuration from `/coreos.com/network/config` (which can be overridden using `--etcd-prefix`). Use the `etcdctl` utility to set values in etcd. The value of the config is a JSON dictionary with the following keys: `Network` (string): IPv4 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv4 is true) `IPv6Network` (string): IPv6 network in CIDR format to use for the entire flannel network. (Mandatory if EnableIPv6 is true) `EnableIPv4` (bool): Enables ipv4 support Defaults to `true` `EnableIPv6` (bool): Enables ipv6 support Defaults to `false` `EnableNFTables` (bool): (EXPERIMENTAL) If set to true, flannel uses nftables instead of iptables to masquerade the traffic. Default to `false` `SubnetLen` (integer): The size of the subnet allocated to each host. Defaults to 24 (i.e. /24) unless `Network` was configured to be smaller than a /22 in which case it is two less than the network. `SubnetMin` (string): The beginning of IP range which the subnet allocation should start with. Defaults to the second subnet of `Network`. `SubnetMax` (string): The end of the IP range at which the subnet allocation should end with. Defaults to the last subnet of `Network`. `IPv6SubnetLen` (integer): The size of the ipv6 subnet allocated to each host. Defaults to 64 (i.e. /64) unless `Ipv6Network` was configured to be smaller than a /62 in which case it is two less than the network. `IPv6SubnetMin` (string): The beginning of IPv6 range which the subnet allocation should start with. Defaults to the second subnet of `Ipv6Network`. `IPv6SubnetMax` (string): The end of the IPv6 range at which the subnet allocation should end with. Defaults to the last subnet of `Ipv6Network`. `Backend` (dictionary): Type of backend to use and specific configurations for that backend. The list of available backends and the keys that can be put into the this dictionary are listed in . Defaults to `vxlan` backend. Subnet leases have a duration of 24 hours. Leases are renewed within 1 hour of their expiration, unless a different renewal margin is set with the ``--subnet-lease-renew-margin`` option. The following configuration illustrates the use of most options with `udp` backend. ```json { \"Network\": \"10.0.0.0/8\", \"SubnetLen\": 20, \"SubnetMin\": \"10.10.0.0\", \"SubnetMax\": \"10.99.0.0\", \"Backend\": { \"Type\": \"udp\", \"Port\": 7890 } } ``` ```bash --public-ip=\"\": IP accessible by other nodes for inter-host communication. Defaults to the IP of the interface being used for communication. --etcd-endpoints=http://127.0.0.1:4001: a comma-delimited list of etcd endpoints. --etcd-prefix=/coreos.com/network: etcd prefix. --etcd-keyfile=\"\": SSL key file used to secure etcd communication. --etcd-certfile=\"\": SSL certification file used to secure etcd communication. --etcd-cafile=\"\": SSL Certificate Authority file used to secure etcd communication. --kube-subnet-mgr: Contact the Kubernetes API for subnet assignment instead of etcd. --iface=\"\": interface to use (IP or name) for inter-host communication. Defaults to the interface for the default route on the machine. This can be specified multiple times to check each option in order. Returns the first match found. 
--iface-regex=\"\": regex expression to match the first interface to use (IP or name) for inter-host communication. If unspecified, will default to the interface for the default route on the machine. This can be specified multiple times to check each regex in" }, { "data": "Returns the first match found. This option is superseded by the iface option and will only be used if nothing matches any option specified in the iface options. --iface-can-reach=\"\": detect interface to use (IP or name) for inter-host communication based on which will be used for provided IP. This is exactly the interface to use of command \"ip route get <ip-address>\" (example: --iface-can-reach=192.168.1.1 results the interface can be reached to 192.168.1.1 will be selected) --iptables-resync=5: resync period for iptables rules, in seconds. Defaults to 5 seconds, if you see a large amount of contention for the iptables lock increasing this will probably help. --subnet-file=/run/flannel/subnet.env: filename where env variables (subnet and MTU values) will be written to. --net-config-path=/etc/kube-flannel/net-conf.json: path to the network configuration file to use --subnet-lease-renew-margin=60: subnet lease renewal margin, in minutes. --ip-masq=false: setup IP masquerade for traffic destined for outside the flannel network. Flannel assumes that the default policy is ACCEPT in the NAT POSTROUTING chain. -v=0: log level for V logs. Set to 1 to see messages related to data path. --healthz-ip=\"0.0.0.0\": The IP address for healthz server to listen (default \"0.0.0.0\") --healthz-port=0: The port for healthz server to listen(0 to disable) --version: print version and exit ``` MTU is calculated and set automatically by flannel. It then reports that value in `subnet.env`. This value can be changed as config. The command line options outlined above can also be specified via environment variables. For example `--etcd-endpoints=http://10.0.0.2:2379` is equivalent to `FLANNELDETCDENDPOINTS=http://10.0.0.2:2379` environment variable. Any command line option can be turned into an environment variable by prefixing it with `FLANNELD_`, stripping leading dashes, converting to uppercase and replacing all other dashes to underscores. `EVENTQUEUEDEPTH` is another environment variable to indicate the kubernetes scale. Set `EVENTQUEUEDEPTH` to adapter your cluster node numbers. If not set, default value is 5000. Flannel provides a health check http endpoint `healthz`. Currently this endpoint will blindly return http status ok(i.e. 200) when flannel is running. This feature is by default disabled. Set `healthz-port` to a non-zero value will enable a healthz server for flannel. Flannel supports dual-stack mode. This means pods and services could use ipv4 and ipv6 at the same time. Currently, dual-stack is only supported for vxlan, wireguard or host-gw(linux) backends. Requirements: v1.0.1 of flannel binary from Nodes must have an ipv4 and ipv6 address in the main interface Nodes must have an ipv4 and ipv6 address default route vxlan support ipv6 tunnel require kernel version >= 3.12 Configuration: Set \"EnableIPv6\": true and the \"IPv6Network\", for example \"IPv6Network\": \"2001:cafe:42:0::/56\" in the net-conf.json of the kube-flannel-cfg ConfigMap or in `/coreos.com/network/config` for etcd If everything works as expected, flanneld should generate a `/run/flannel/subnet.env` file with IPV6 subnet and network. 
For example: ```bash FLANNEL_NETWORK=10.42.0.0/16 FLANNEL_SUBNET=10.42.0.1/24 FLANNELIPV6NETWORK=2001:cafe:42::/56 FLANNELIPV6SUBNET=2001:cafe:42::1/64 FLANNEL_MTU=1450 FLANNEL_IPMASQ=true ``` To use an IPv6-only environment use the same configuration of the Dual-stack section to enable IPv6 and add \"EnableIPv4\": false in the net-conf.json of the kube-flannel-cfg ConfigMap. In case of IPv6-only setup, please use the docker.io IPv6-only endpoint as described in the following link: https://www.docker.com/blog/beta-ipv6-support-on-docker-hub-registry/ To enable `nftables` mode in flannel, set `EnableNFTables` to true in flannel configuration. Note: to test with kube-proxy, use kubeadm with the following configuration: ```yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: ClusterConfiguration kubernetesVersion: v1.29.0 controllerManager: extraArgs: feature-gates: NFTablesProxyMode=true apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: \"nftables\" featureGates: NFTablesProxyMode: true ```" } ]
{ "category": "Runtime", "file_name": "configuration.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "This tutorial will show you how to set up networking for a gVisor sandbox using the . First you will need to install the CNI plugins. CNI plugins are used to set up a network namespace that `runsc` can use with the sandbox. Start by creating the directories for CNI plugin binaries: ``` sudo mkdir -p /opt/cni/bin ``` Download the CNI plugins: ``` wget https://github.com/containernetworking/plugins/releases/download/v0.8.3/cni-plugins-linux-amd64-v0.8.3.tgz ``` Next, unpack the plugins into the CNI binary directory: ``` sudo tar -xvf cni-plugins-linux-amd64-v0.8.3.tgz -C /opt/cni/bin/ ``` This section will show you how to configure CNI plugins. This tutorial will use the \"bridge\" and \"loopback\" plugins which will create the necessary bridge and loopback devices in our network namespace. However, you should be able to use any CNI compatible plugin to set up networking for gVisor sandboxes. The bridge plugin configuration specifies the IP address subnet range for IP addresses that will be assigned to sandboxes as well as the network routing configuration. This tutorial will assign IP addresses from the `10.22.0.0/16` range and allow all outbound traffic, however you can modify this configuration to suit your use case. Create the bridge and loopback plugin configurations: ``` sudo mkdir -p /etc/cni/net.d sudo sh -c 'cat > /etc/cni/net.d/10-bridge.conf << EOF { \"cniVersion\": \"0.3.1\", \"name\": \"mynet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"isGateway\": true, \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"10.22.0.0/16\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } } EOF' sudo sh -c 'cat > /etc/cni/net.d/99-loopback.conf << EOF { \"cniVersion\": \"0.3.1\", \"name\": \"lo\", \"type\": \"loopback\" } EOF' ``` For each gVisor sandbox you will create a network namespace and configure it using CNI. First, create a random network namespace name and then create the namespace. The network namespace path will then be" }, { "data": "``` export CNI_PATH=/opt/cni/bin export CNI_CONTAINERID=$(printf '%x%x%x%x' $RANDOM $RANDOM $RANDOM $RANDOM) export CNI_COMMAND=ADD export CNINETNS=/var/run/netns/${CNICONTAINERID} sudo ip netns add ${CNI_CONTAINERID} ``` Next, run the bridge and loopback plugins to apply the configuration that was created earlier to the namespace. Each plugin outputs some JSON indicating the results of executing the plugin. For example, The bridge plugin's response includes the IP address assigned to the ethernet device created in the network namespace. Take note of the IP address for use later. ``` export CNI_IFNAME=\"eth0\" sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf export CNI_IFNAME=\"lo\" sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf ``` Get the IP address assigned to our sandbox: ``` PODIP=$(sudo ip netns exec ${CNICONTAINERID} ip -4 addr show eth0 | grep -oP '(?<=inet\\s)\\d+(\\.\\d+){3}') ``` Now that our network namespace is created and configured, we can create the OCI bundle for our container. As part of the bundle's `config.json` we will specify that the container use the network namespace that we created. The container will run a simple python webserver that we will be able to connect to via the IP address assigned to it via the bridge CNI plugin. 
Create the bundle and root filesystem directories: ``` sudo mkdir -p bundle cd bundle sudo mkdir rootfs sudo docker export $(docker create python) | sudo tar --same-owner -pxf - -C rootfs sudo mkdir -p rootfs/var/www/html sudo sh -c 'echo \"Hello World!\" > rootfs/var/www/html/index.html' ``` Next create the `config.json` specifying the network namespace. ``` sudo runsc spec \\ --cwd /var/www/html \\ --netns /var/run/netns/${CNI_CONTAINERID} \\ -- python -m http.server ``` Now we can run and connect to the webserver. Run the container in gVisor. Use the same ID used for the network namespace to be consistent: ``` sudo runsc run -detach ${CNI_CONTAINERID} ``` Connect to the server via the sandbox's IP address: ``` curl http://${POD_IP}:8000/ ``` You should see the server returning `Hello World!`. After you are finished running the container, you can clean up the network namespace . ``` sudo runsc kill ${CNI_CONTAINERID} sudo runsc delete ${CNI_CONTAINERID} export CNI_COMMAND=DEL export CNI_IFNAME=\"lo\" sudo -E /opt/cni/bin/loopback < /etc/cni/net.d/99-loopback.conf export CNI_IFNAME=\"eth0\" sudo -E /opt/cni/bin/bridge < /etc/cni/net.d/10-bridge.conf sudo ip netns delete ${CNI_CONTAINERID} ```" } ]
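For reference, the `--netns` flag used above ends up as a network namespace entry in the generated `config.json`. A trimmed sketch of that section, following the OCI runtime spec; the other namespaces listed here are typical, but the exact list in the file `runsc spec` generates may differ:

```json
"namespaces": [
    { "type": "pid" },
    { "type": "ipc" },
    { "type": "uts" },
    { "type": "mount" },
    {
        "type": "network",
        "path": "/var/run/netns/<CNI_CONTAINERID>"
    }
]
```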
{ "category": "Runtime", "file_name": "cni.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 6 sidebar_label: \"Migrate Volumes\" Volume Migration is an important operation and maintenance management function of HwameiStor. Application-mounted data volumes can be unmounted and migrated from a node with errors, or with an alert indicating an impending error, to a healthy node. After the data volume is successfully migrated, the Pods of related applications are also rescheduled to the new node and the new data volume is bound and mounted. `LocalVolumeGroup(LVG)` management is an important function in HwameiStor. When an application Pod applies for multiple data volume PVCs, in order to ensure the correct operation of the Pod, these data volumes must have certain attributes, such as the number of copies of the data volume and the node where the copies are located. Properly managing these associated data volumes through the data volume group management function is a very important capability in HwameiStor. `LocalVolumeMigrate` needs to be deployed in the Kubernetes system, and the deployed application needs to meet the following conditions: Support `lvm` type volumes When migrating based on `LocalVolume` granularity, the data volumes belonging to the same `LocalVolumeGroup` by default will not be migrated together (if they are migrated together, you need to configure the switch `MigrateAllVols: true`) ```console $ cd ../../deploy/ $ kubectl apply -f storageclass-convertible-lvm.yaml ``` ```console $ kubectl apply -f pvc-multiple-lvm.yaml ``` ```console $ kubectl apply -f nginx-multiple-lvm.yaml ``` ```console $ kubectl -n hwameistor scale --current-replicas=1 --replicas=0 deployment/nginx-local-storage-lvm ``` ```console $ cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: namespace: hwameistor name: <localVolumeMigrateName> spec: sourceNode: <sourceNodeName> targetNodesSuggested: <targetNodesName1> <targetNodesName2> volumeName: <volName> migrateAllVols: <true/false> EOF ``` Notes: 1) HwameiStor will select a target node from targetNodesSuggested to migrate to. If none of the candidates has enough storage space, the migration will fail. 2) If targetNodesSuggested is empty or not set, HwameiStor will automatically select an appropriate node for the migration. If there is no valid candidate, the migration will fail. 
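For illustration, a filled-in `LocalVolumeMigrate` built from the template above; the CR name is hypothetical, the node names reuse the ones from the example output further down, and the block after this sketch shows the variant with an empty `targetNodesSuggested` list:

```yaml
apiVersion: hwameistor.io/v1alpha1
kind: LocalVolumeMigrate
metadata:
  namespace: hwameistor
  name: migrate-nginx-data                 # hypothetical CR name
spec:
  sourceNode: k8s-172-30-40-61             # node to evacuate
  targetNodesSuggested:
  - k8s-172-30-45-223                      # candidate HwameiStor may pick
  volumeName: pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4
  migrateAllVols: true                     # migrate all volumes in the same LVG
```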
```console $ cat << EOF | kubectl apply -f - apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: namespace: hwameistor name: <localVolumeMigrateName> spec: sourceNode: <sourceNodeName> targetNodesSuggested: [] volumeName: <volName> migrateAllVols: <true/false> EOF ``` ```console $ kubectl get LocalVolumeMigrate localvolumemigrate-1 -o yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalVolumeMigrate metadata: generation: 1 name: localvolumemigrate-1 namespace: hwameistor resourceVersion: \"12828637\" uid: 78af7f1b-d701-4b03-84de-27fafca58764 spec: abort: false migrateAllVols: true sourceNode: k8s-172-30-40-61 targetNodesSuggested: k8s-172-30-45-223 volumeName: pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4 status: originalReplicaNumber: 1 targetNode: k8s-172-30-45-223 state: Completed message: ``` ```console $ kubectl get lvr NAME CAPACITY NODE STATE SYNCED DEVICE AGE pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4-9cdkkn 1073741824 k8s-172-30-45-223 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-1a0913ac-32b9-46fe-8258-39b4e3b696a4 77s pvc-d9d3ae9f-64af-44de-baad-4c69b9e0744a-7ppmrx 1073741824 k8s-172-30-45-223 Ready true /dev/LocalStorage_PoolHDD-HA/pvc-d9d3ae9f-64af-44de-baad-4c69b9e0744a 77s ``` ```console $ kubectl -n hwameistor scale --current-replicas=0 --replicas=1 deployment/nginx-local-storage-lvm ```" } ]
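A simple way to follow the migration while it runs, using only resources already shown in this document (`localvolumemigrate` and `lvr`) and kubectl's standard watch flag:

```bash
# Watch the migrate CR until status.state becomes Completed
kubectl get localvolumemigrate -w

# Watch the volume replicas appear on the target node
kubectl get lvr -w
```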
{ "category": "Runtime", "file_name": "migrate.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "ISC License Copyright 2015, John Chadwick <[email protected]> Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies. THE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE." } ]
{ "category": "Runtime", "file_name": "LICENSE.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Display cgroup metadata maintained by Cilium ``` cilium-dbg cgroups list [flags] ``` ``` -h, --help help for list --no-headers Do not print headers -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Cgroup metadata" } ]
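A few hedged invocations using only the flags listed above; the exact columns and JSON fields depend on the Cilium version:

```bash
# Default table output
cilium-dbg cgroups list

# JSON output, convenient for scripting
cilium-dbg cgroups list -o json

# Table without the header row
cilium-dbg cgroups list --no-headers
```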
{ "category": "Runtime", "file_name": "cilium-dbg_cgroups_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- Github issues are used for bug reports. For support questions, please use . Please fill the template below as it will greatly help us track down your issue and reproduce it on our side. Feel free to remove anything which doesn't apply to you and add more information where it makes sense. --> Distribution: Distribution version: The output of \"incus info\" or if that fails: Kernel version: LXC version: Incus version: Storage backend in use: A brief description of the problem. Should include what you were attempting to do, what you did, what happened and what you expected to see happen. Step one Step two Step three [ ] Any relevant kernel output (`dmesg`) [ ] Container log (`incus info NAME --show-log`) [ ] Container configuration (`incus config show NAME --expanded`) [ ] Main daemon log (at /var/log/incus/incusd.log) [ ] Output of the client with --debug [ ] Output of the daemon with --debug (alternatively output of `incus monitor --pretty` while reproducing the issue)" } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | ||--|-|-|--|-| | G00001 | Related IP resource recorded in IPPool will be reclaimed after the namespace is deleted | p1 | true | done | | | G00002 | The IP of a running pod should not be reclaimed after a same-name pod within a different namespace is deleted | p1 | | done | | | G00003 | The IP can be reclaimed after its deployment, statefulset, daemonset, replica, or job is deleted, even when CNI binary is gone on the host | p1 | | done | | | G00004 | The IP should be reclaimed when deleting the pod with 0 second of grace period | p2 | | done | | | G00005 | A dirty IP record (pod name is wrong) in the IPPool should be auto clean by Spiderpool | p2 | | done | | | G00006 | The IP should be reclaimed for the job pod finished with success or failure Status | p2 | | done | | | G00007 | A dirty IP record (pod name is right but container ID is wrong) in the IPPool should be auto clean by Spiderpool | p3 | | done | | | G00008 | The Spiderpool component recovery from repeated reboot, and could correctly reclaim IP | p3 | | done | | | G00009 | stateless workload IP could be released with node not ready | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "reclaim.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project" }, { "data": "This Code of Conduct also applies outside the project spaces when the Project Steward has a reasonable belief that an individual's behavior may have a negative impact on the project or its community. We do not believe that all conflict is bad; healthy debate and disagreement often yield positive results. However, it is never okay to be disrespectful or to engage in behavior that violates the projects code of conduct. If you see someone violating the code of conduct, you are encouraged to address the behavior directly with those involved. Many issues can be resolved quickly and easily, and this gives people more control over the outcome of their dispute. If you are unable to resolve the matter for any reason, or if the behavior is threatening or harassing, report it. We are dedicated to providing an environment where participants feel welcome and safe. Reports should be directed to Jaice Singer DuMars, jaice at google dot com, the Project Steward for gVisor. It is the Project Stewards duty to receive and address reported violations of the code of conduct. They will then work with a committee consisting of representatives from the Open Source Programs Office and the Google Open Source Strategy team. If for any reason you are uncomfortable reaching out the Project Steward, please email [email protected]. 
We will investigate every complaint, but you may not receive a direct response. We will use our discretion in determining when and how to follow up on reported incidents, which may range from not taking action to permanent expulsion from the project and project-sponsored spaces. We will notify the accused of the report and provide them an opportunity to discuss it before any action is taken. The identity of the reporter will be omitted from the details of the report supplied to the accused. In potentially harmful situations, such as ongoing harassment or threats to anyone's safety, we may take action without notice. This Code of Conduct is adapted from the ." } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "title: Alibaba Cloud link: https://github.com/AliyunContainerService/velero-plugin objectStorage: true volumesnapshotter: true Used for backup and restore on Alibaba Cloud through Velero. You need to install and configure velero and the velero-plugin for alibabacloud." } ]
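A sketch of how the plugin might be wired into a Velero installation. The generic `velero install` flags below are standard Velero CLI, but the provider string, plugin image and credential file format are assumptions; confirm them against the plugin repository linked above:

```bash
velero install \
    --provider alibabacloud \
    --plugins <velero-plugin-for-alibabacloud-image> \
    --bucket <OSS_BUCKET> \
    --secret-file ./credentials-velero \
    --backup-location-config region=<OSS_REGION>
```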
{ "category": "Runtime", "file_name": "05-alibaba-cloud.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English | Spiderpool provides a solution for assigning static IP addresses in underlay networks. In this page, we'll demonstrate how to build a complete underlay network solution using , and , which meets the following kinds of requirements: Applications can be assigned static Underlay IP addresses through simple operations. Pods with multiple Underlay NICs connect to multiple Underlay subnets. Pods can communicate in various ways, such as Pod IP, clusterIP, and nodePort. Make sure a Kubernetes cluster is ready. has been already installed. If your OS is such as Fedora and CentOS and uses NetworkManager to manage network configurations, you need to configure NetworkManager in the following scenarios: If you are using Underlay mode, the plugin `coordinator` will create veth interfaces on the host. To prevent interference from NetworkManager with the veth interface. It is strongly recommended that you configure NetworkManager. If you want to create VLAN and Bond interfaces through , NetworkManager may interfere with these interfaces, leading to abnormal pod access. It is strongly recommended that you configure NetworkManager. ```shell ~# IFACER_INTERFACE=\"<NAME>\" ~# cat > /etc/NetworkManager/conf.d/spidernet.conf <<EOF [keyfile] unmanaged-devices=interface-name:^veth*;interface-name:${IFACER_INTERFACE} EOF ~# systemctl restart NetworkManager ``` Install Spiderpool. ```bash helm repo add spiderpool https://spidernet-io.github.io/spiderpool helm repo update spiderpool helm install spiderpool spiderpool/spiderpool --namespace kube-system --set multus.multusCNI.defaultCniCRName=\"macvlan-conf\" ``` > If Macvlan is not installed in your cluster, you can specify the Helm parameter `--set plugins.installCNI=true` to install Macvlan in your cluster. > > If you are a mainland user who is not available to access ghcr.io, you can specify the parameter `-set global.imageRegistryOverride=ghcr.m.daocloud.io` to avoid image pulling failures for Spiderpool. > > Specify the name of the NetworkAttachmentDefinition instance for the default CNI used by Multus via `multus.multusCNI.defaultCniCRName`. If the `multus.multusCNI.defaultCniCRName` option is provided, an empty NetworkAttachmentDefinition instance will be automatically generated upon installation. Otherwise, Multus will attempt to create a NetworkAttachmentDefinition instance based on the first CNI configuration found in the /etc/cni/net.d directory. If no suitable configuration is found, a NetworkAttachmentDefinition instance named `default` will be created to complete the installation of Multus. Please check if `Spidercoordinator.status.phase` is `Synced`: ```shell ~# kubectl get spidercoordinators.spiderpool.spidernet.io default -o yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderCoordinator metadata: finalizers: spiderpool.spidernet.io name: default spec: detectGateway: false detectIPConflict: false hijackCIDR: 169.254.0.0/16 hostRPFilter: 0 hostRuleTable: 500 mode: auto podCIDRType: calico podDefaultRouteNIC: \"\" podMACPrefix: \"\" tunePodRoutes: true status: overlayPodCIDR: 10.244.64.0/18 phase: Synced serviceCIDR: 10.233.0.0/18 ``` At present: Spiderpool prioritizes obtaining the cluster's Pod and Service subnets by querying the kube-system/kubeadm-config ConfigMap. If the kubeadm-config does not exist, causing the failure to obtain the cluster subnet, Spiderpool will attempt to retrieve the cluster Pod and Service subnets from the kube-controller-manager Pod. 
If the kube-controller-manager component in your cluster runs in systemd mode instead of as a static Pod, Spiderpool still cannot retrieve the cluster's subnet information. If both of the above methods fail, Spiderpool will synchronize the status.phase as NotReady, preventing Pod creation. To address such abnormal situations, we can manually create the kubeadm-config ConfigMap and correctly configure the cluster's subnet information: ```shell export PODSUBNET=<YOURPOD_SUBNET> export SERVICESUBNET=<YOURSERVICE_SUBNET> cat << EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap metadata: name: kubeadm-config namespace: kube-system data: ClusterConfiguration: | networking: podSubnet: ${POD_SUBNET} serviceSubnet: ${SERVICE_SUBNET} EOF ``` Create a SpiderIPPool instance. Create an IP Pool in the same subnet as the network interface `eth0` for Pods to use, the following is an example of creating a related SpiderIPPool: ```bash cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ippool-test spec: ips: \"172.18.30.131-172.18.30.140\" subnet: 172.18.0.0/16 gateway:" }, { "data": "multusName: kube-system/macvlan-conf EOF ``` Verify installation ```shell ~# kubectl get po -n kube-system | grep spiderpool spiderpool-agent-7hhkz 1/1 Running 0 13m spiderpool-agent-kxf27 1/1 Running 0 13m spiderpool-controller-76798dbb68-xnktr 1/1 Running 0 13m spiderpool-init 0/1 Completed 0 13m ~# kubectl get sp NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DISABLE ippool-test 4 172.18.0.0/16 0 10 false ``` To simplify writing Multus CNI configuration in JSON format, Spiderpool provides SpiderMultusConfig CR to automatically manage Multus NetworkAttachmentDefinition CR. Here is an example of creating a Macvlan SpiderMultusConfig configuration: Verify the required host parent interface for Macvlan. In this case, a Macvlan sub-interface will be created for Pods from the host parent interface --eth0. > * If there is a VLAN requirement, you can specify the VLAN ID in the `spec.vlanID` field. We will create the corresponding VLAN sub-interface for the network card. > * We also provide support for network card bonding. Just specify the name of the bond network card and its mode in the `spec.bond.name` and `spec.bond.mode` respectively. We will automatically combine multiple network cards into one bonded network card for you. ```shell MACVLANMASTERINTERFACE=\"eth0\" cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-conf namespace: kube-system spec: cniType: macvlan macvlan: master: ${MACVLANMASTERINTERFACE} EOF ``` In the example of this article, use the above configuration to create the following Macvlan SpiderMultusConfig, which will automatically generate Multus NetworkAttachmentDefinition CR based on it, which corresponds to the eth0 network card of the host. 
```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -n kube-system NAME AGE macvlan-conf 10m ~# kubectl get network-attachment-definitions.k8s.cni.cncf.io -n kube-system NAME AGE macvlan-conf 10m ``` Create test Pods and service via the command below ```bash cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 2 selector: matchLabels: app: test-app template: metadata: annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"ippool-test\"] } v1.multus-cni.io/default-network: kube-system/macvlan-conf labels: app: test-app spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: test-app-svc labels: app: test-app spec: type: ClusterIP ports: port: 80 protocol: TCP targetPort: 80 selector: app: test-app EOF ``` Check the status of Pods: ```bash ~# kubectl get po -l app=test-app -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-f9f94688-2srj7 1/1 Running 0 2m13s 172.18.30.139 ipv4-worker <none> <none> test-app-f9f94688-8982v 1/1 Running 0 2m13s 172.18.30.138 ipv4-control-plane <none> <none> ``` Spiderpool has created fixed IP pools for applications, ensuring that the applications' IPs are automatically fixed within the defined ranges. ```bash ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT ippool-test 4 172.18.0.0/16 2 10 false ~# kubectl get spiderendpoints NAME INTERFACE IPV4POOL IPV4 IPV6POOL IPV6 NODE CREATETION TIME test-app-f9f94688-2srj7 eth0 ippool-test 172.18.30.139/16 ipv4-worker 3m5s test-app-f9f94688-8982v eth0 ippool-test 172.18.30.138/16 ipv4-control-plane 3m5s ``` Test the communication between Pods: ```shell ~# kubectl exec -ti test-app-f9f94688-2srj7 -- ping 172.18.30.138 -c 2 PING 172.18.30.138 (172.18.30.138): 56 data bytes 64 bytes from 172.18.30.138: seq=0 ttl=64 time=1.524 ms 64 bytes from 172.18.30.138: seq=1 ttl=64 time=0.194 ms 172.18.30.138 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.194/0.859/1.524 ms ``` Test the communication between Pods and service IP: ```shell ~# kubectl get service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h test-app-svc ClusterIP 10.96.190.4 <none> 80/TCP 109m ~# kubectl exec -ti test-app-85cf87dc9c-7dm7m -- curl 10.96.190.4:80 -I HTTP/1.1 200 OK Server: nginx/1.23.1 Date: Thu, 23 Mar 2023 05:01:04 GMT Content-Type: text/html Content-Length: 4055 Last-Modified: Fri, 23 Sep 2022 02:53:30 GMT Connection: keep-alive ETag: \"632d1faa-fd7\" Accept-Ranges: bytes ```" } ]
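For the VLAN and bonding options mentioned earlier (`vlanID`, `bond.name`, `bond.mode`), a sketch of how the SpiderMultusConfig could be extended. The nesting under `macvlan` mirrors the example above, but the exact field paths should be verified against the SpiderMultusConfig CRD of your Spiderpool version; the interface names, VLAN ID and bond mode are illustrative:

```yaml
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderMultusConfig
metadata:
  name: macvlan-conf-vlan100
  namespace: kube-system
spec:
  cniType: macvlan
  macvlan:
    master:
    - eth0
    - eth1            # two parent NICs combined into a bond
    vlanID: 100       # VLAN sub-interface created on top of the bond
    bond:
      name: bond0
      mode: 1         # active-backup
```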
{ "category": "Runtime", "file_name": "get-started-macvlan.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "The following issues were encountered when testing Antrea on different OSes, or reported by Antrea users. When possible we try to provide a workaround. | Issues | | | | | CoreOS Container Linux has reached its on May 26, 2020 and no longer receives updates. It is recommended to migrate to another Operating System as soon as possible. CoreOS uses networkd for network configuration. By default, all interfaces are managed by networkd because of the [configuration files](https://github.com/coreos/init/tree/master/systemd/network) that ship with CoreOS. Unfortunately, that includes the gateway interface created by Antrea (`antrea-gw0` by default). Most of the time, this is not an issue, but if networkd is restarted for any reason, it will cause the interface to lose its IP configuration, and all the routes associated with the interface will be deleted. To avoid this issue, we recommend that you create the following configuration files: ```text [Match] Name=antrea-gw0 ovs-system Driver=openvswitch [Link] Unmanaged=yes ``` ```text [Match] Driver=veth [Link] Unmanaged=yes ``` ```text [Match] Name=genevsys vxlan_sys_ gresys sttsys_* [Link] Unmanaged=yes ``` Note that this fix requires a version of CoreOS `>= 1262.0.0` (Dec 2016), as the networkd `Unmanaged` option was not supported before that. | Issues | | | | | | | If your K8s Nodes are running Photon OS 3.0, you may see error messages in the antrea-agent logs like this one: `\"Received bundle error msg: [...]\"`. These messages indicate that some flow entries could not be added to the OVS bridge. This usually indicates that the Kernel was not compiled with the `CONFIGNFCONNTRACK_ZONES` option, as this option was only enabled recently in Photon OS. This option is required by the Antrea OVS datapath. To confirm that this is indeed the issue, you can run the following command on one of your Nodes: ```bash grep CONFIGNFCONNTRACK_ZONES= /boot/config-`uname -r` ``` If you do not see the following output, then it confirms that your Kernel is indeed missing this option: ```text CONFIGNFCONNTRACK_ZONES=y ``` To fix this issue and be able to run Antrea on your Photon OS Nodes, you will need to upgrade to a more recent version: `>= 4.19.87-4` (Jan 2020). You can achieve this by running `tdnf upgrade linux-esx` on all your Nodes. After this fix, all the Antrea Agents should be running correctly. If you still experience connectivity issues, it may be because of Photon's default firewall rules, which are quite strict by . The easiest workaround is to accept all traffic on the gateway interface created by Antrea (`antrea-gw0` by default), which enables traffic to flow between the Node and the Pod network: ```bash iptables -A INPUT -i antrea-gw0 -j ACCEPT ``` Antrea provides support for Pod by leveraging the open-source maintained by the CNI project. This plugin requires the following Kernel modules: `ifb`, `schtbf` and `schingress`. It seems that at the moment Photon OS 3.0 is built without the `ifb` Kernel module, which you can confirm by running `modprobe --dry-run ifb`: an error would indicate that the module is indeed missing. Without this module, Pods with the `kubernetes.io/egress-bandwidth` annotation cannot be created successfully. Pods with no traffic shaping annotation, or which only use the `kubernetes.io/ingress-bandwidth` annotation, can still be created successfully as they do not require the creation of an `ifb` device. 
If Photon OS is patched to enable `ifb`, we will update this documentation to reflect this change, and include information about which Photon OS version can support egress traffic shaping." } ]
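Relating to the systemd-networkd workaround for CoreOS above, a quick hedged check that the drop-in files took effect; this assumes `networkctl` is available (it ships wherever networkd runs), and the Antrea-related links should appear as "unmanaged" in the SETUP column:

```bash
sudo systemctl restart systemd-networkd
networkctl list | grep -E 'antrea-gw0|ovs-system|genev|vxlan'
```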
{ "category": "Runtime", "file_name": "os-issues.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Repository Maintenance\" layout: docs From v1.14 on, Velero decouples repository maintenance from the Velero server by launching a k8s job to do maintenance when needed, to mitigate the impact on the Velero server during backups. Before v1.14.0, Velero performed periodic maintenance on the repository within the Velero server pod; this operation could consume significant CPU and memory resources in some cases, leading to the Velero server being killed by OOM. Now Velero launches independent k8s jobs to do the maintenance in the Velero installation namespace. For repository maintenance jobs, there is no resource limit by default. You can customize the maintenance job resource requests and limits via the CLI, based on the amount of data to be backed up. The maintenance job inherits the log level and log format settings from the Velero server, so if the Velero server has debug logging enabled, the maintenance job will also emit debug-level logs. Velero keeps a specific number of the latest maintenance jobs for each repository. By default, only the 3 latest maintenance jobs are kept for each repository, and Velero supports configuring this setting with the command below when Velero is installed: ```bash velero install --keep-latest-maintenance-jobs <NUM> ``` The frequency of running maintenance jobs can be set with the command below when Velero is installed: ```bash velero install --default-repo-maintain-frequency <DURATION> ``` For Kopia the default maintenance frequency is 1 hour, and for Restic it is 7 * 24 hours. Maintenance jobs inherit the labels, annotations, tolerations, affinity, nodeSelector, service account, image, environment variables, cloud-credentials etc. from the Velero deployment." } ]
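A small example combining the two flags documented above; the values are illustrative and would be combined with your usual provider, bucket and credential flags at install time, with `<DURATION>` using Go duration syntax:

```bash
# Keep the 5 most recent maintenance jobs per repository
# and run maintenance once a day
velero install \
    --keep-latest-maintenance-jobs 5 \
    --default-repo-maintain-frequency 24h
```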
{ "category": "Runtime", "file_name": "repository-maintenance.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Ceph added support for the bucket notifications feature from Nautilus onwards. It allows sending messages to various endpoints when a new event occurs on a bucket Setup of those notifications are normally done by sending HTTP requests to the RGW, either to create/delete topics pointing to specific endpoints, or create/delete bucket notifications based on those topics. This functionality eases this process by avoiding to use external tools or scripts. It is replaced by creation of CR definitions that contain all the information necessary to create topics and/or notifications, which the rook operator processes. Creates a CRD for topics and a CRD for notifications, defining all the necessary and optional information for the various endpoints. Extends the rook operator to handle the CRs that would be submitted by users. The CR for a topic configuration takes this form: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBucketTopic metadata: name: # name of the topic namespace: # namespace where topic belongs spec: opaqueData: #(optional) opaque data is set in the topic configuration persistent: false #(optional) indication whether notifications to this endpoint are persistent or not (`false` by default) endpoint: #(mandatory) must contain exactly one of the following options http: uri: #(mandatory) URI of an endpoint to send push notification to disableVerifySSL: false #(optional) indicate whether the server certificate is validated by the client or not (`false` by default) amqp: uri: #(mandatory) URI of an endpoint to send push notification to disableVerifySSL: false #(optional) indicate whether the server certificate is validated by the client or not (`false` by default) caLocation: <filepath in rgw pod> #(optional) this specified CA will be used, instead of the default one, to authenticate the broker ackLevel: broker #(optional) none/routable/broker, optional (default - broker) amqpExchange: direct #(mandatory) exchanges must exist and be able to route messages based on topics kafka: uri: #(mandatory) URI of an endpoint to send push notification to disableVerifySSL: false #(optional) indicate whether the server certificate is validated by the client or not (`false` by default) useSSL: true #(optional) secure connection will be used for connecting with the broker (`false` by default) caLocation: <filepath in rgw pod> #(optional) this specified CA will be used, instead of the default one, to authenticate the broker ackLevel: broker #(optional) none/broker, optional (default - broker) ``` P.S : URI can be of different format depends on the server http -> `http` amqp -> `amqp://` kafka -> `kafka://[<user>:<password>@]<fqdn>[:<port]` The CR for bucket notification takes this form: ```yaml apiVersion: ceph.rook.io/v1 kind: CephBucketNotification metadata: name: # name of the notification namespace: # namespace where notification belongs spec: topic: #(mandatory) reference to the topic, topic_arn filter: #(optional) Prefix/Suffix/Regex, optional (default - {}) stringMatch: name: prefix value: hello name: suffix value: .png name: regex value: [a-z]* events: # applicable values , (default all) ``` The information about bucket notification can passed to OBC/BAR(from ) as labels. It can be set using `kubectl` commands, so the name of bucket notifications need to satisfy the . 
For OBC it will look like the following: ```yaml apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: ceph-bucket labels: bucket-notification: ignored # no name is appended bucket-notification-name-1: name-1 bucket-notification-name-2: name-2 bucket-notification-foo: foo spec: bucketName: mybucket storageClassName: rook-ceph-delete-bucket ``` Usually a bucket notification will be created by the user for consumption by their applications, so it needs to be created in the application's namespace, similar to the OBC/BAR." } ]
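Putting the proposed CRs together, a minimal sketch of a topic plus a notification as they might be filled in once this design is implemented. The names, namespace, endpoint URI and event identifier below are assumptions for illustration; the field layout follows the schemas shown above:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBucketTopic
metadata:
  name: image-events                # hypothetical topic name
  namespace: app-ns                 # application namespace
spec:
  persistent: true
  endpoint:
    http:
      uri: http://notification-receiver.app-ns.svc:8080/events
      disableVerifySSL: false
---
apiVersion: ceph.rook.io/v1
kind: CephBucketNotification
metadata:
  name: new-png-uploads
  namespace: app-ns
spec:
  topic: image-events               # reference to the topic above
  filter:
    stringMatch:
    - name: suffix
      value: .png
  events:
  - s3:ObjectCreated:Put            # assumed event identifier
```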
{ "category": "Runtime", "file_name": "ceph-bucket-notification-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_label: PostgreSQL sidebar_position: 2 slug: /postgresqlbestpractices For distributed file systems where data and metadata are stored separately, the read and write performance of metadata directly affects the efficiency of the whole system, and the security of metadata directly affects its data safety. In the production environment, it is recommended to prefer hosted cloud databases provided by cloud computing platforms, combined with an appropriate high availability architecture. Please always pay attention to the integrity and security of metadata when using JuiceFS, no matter whether the database is self-hosted or runs in the cloud. By default, JuiceFS clients will use SSL encryption to connect to PostgreSQL. If SSL encryption is not enabled on the database, you need to append the `sslmode=disable` parameter to the metadata URL. It is recommended to configure and keep SSL encryption enabled on the database server side all the time. Database password can be set directly through the metadata URL. Although it is easy and convenient, the password may leak through logs and process output. For the sake of security, it's better to pass the database password through an environment variable. `META_PASSWORD` is a predefined environment variable for the database password: ```shell export META_PASSWORD=mypassword juicefs mount -d \"postgres://[email protected]:5432/juicefs\" /mnt/jfs ``` PostgreSQL supports the md5 authentication method. The following entry can be adapted in the pg_hba.conf of your PostgreSQL instance. ``` host juicefs juicefsuser 192.168.1.0/24 md5 ``` Please refer to the official manual to learn how to back up and restore databases. It is recommended to make a plan for regularly backing up your database, and at the same time, do some tests to restore the data in an experimental environment to confirm that the backup is valid. A connection pooler is a middleware that works between the client and the database and reuses earlier connections from the pool, which improves connection efficiency and reduces the overhead of short connections. Commonly used connection poolers are and . The official PostgreSQL document compares several common high availability solutions. Please choose the appropriate one according to your needs. :::note JuiceFS uses to ensure atomicity of metadata operations. Since PostgreSQL does not yet support Multi-Shard (Distributed) transactions, do not use a multi-server distributed architecture for the JuiceFS metadata. :::" } ]
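Combining the two recommendations above (password via environment variable, and `sslmode=disable` only when the server really has SSL turned off), a mount invocation might look like the following; the host, user and mount point are placeholders:

```shell
export META_PASSWORD=mypassword
# sslmode=disable is appended as a query parameter of the metadata URL
juicefs mount -d \
    "postgres://[email protected]:5432/juicefs?sslmode=disable" \
    /mnt/jfs
```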
{ "category": "Runtime", "file_name": "postgresql_best_practices.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "The website https://metrics.longhorn.io/ offers valuable insights into how Longhorn is being utilized, which can be accessed by the public. This information serves as a useful reference for user who are new to Longhorn, as well as those considering upgrading Longhorn or the underlying Kubernetes version. Additionally, it is useful for the Longhorn team to understand how it is being used in the real world. To gain a deeper understanding of usage patterns, it would be beneficial to gather additional information on volumes, host systems, and features. This data would not only offer insights into how to further improve Longhorn but also provide valuable ideas on how to steer Longhorn development in the right direction. This proposal aims to enhance Longhorn's upgrade checker `extraInfo` by collecting additional information includes node and cluster information, and some Longhorn settings. This proposal introduces a new setting, `Allow Collecting Longhorn Usage Metrics`, to allow users to enable or disable the collection. https://github.com/longhorn/longhorn/issues/5235 Extend collections of user cluster info during upgrade check. Have a new setting to provide user with option to enable or disable the collection. `None` Collect and sends through upgrade responder request. Node info: Kernel release OS distro Disk types (HDD, SSD, NVMe) Node provider Cluster info: Longhorn namespace UID for adaption rate Number of nodes Longhorn components CPU and memory usage Volumes info; such as access mode, frontend, average snapshot per volume, etc. Some Longhorn settings Introduce new `Allow Collecting Longhorn Usage Metrics` setting. Users can view how Longhorn is being utilized on https://metrics.longhorn.io/. Additionally, users have the ability to disable the collection by Longhorn. Users can find a list of items that Longhorn collects as extra information in the Longhorn documentation. Users can enable or disable the collection through the `Allow Collecting Longhorn Usage Metrics` setting. This setting can be configured using the UI or through kubectl, similar to other settings. `None` If this value is set to false, extra information will not be collected. Setting definition: ``` DisplayName: \"Allow Collecting Longhorn Usage Metrics\" Description: \"Enabling this setting will allow Longhorn to provide additional usage metrics to https://metrics.longhorn.io. This information will help us better understand how Longhorn is being used, which will ultimately contribute to future improvements.\" Category: SettingCategoryGeneral Type: SettingTypeBool Required: true ReadOnly: false Default: \"true\" ``` The following information is sent from each cluster node: Number of disks of different device (`longhornnodedisk<hdd/ssd/nvme/unknown>count`). > Note: this value may not be accurate if the cluster node is a virtual" }, { "data": "Host kernel release (`hostkernelrelease`) Host Os distro (`hostosdistro`) Kubernetest node provider (`kubernetesnodeprovider`) The following information is sent from one of the cluster node: Longhorn namespace UID (`longhornnamespaceuid`). Number of nodes (`longhornnodecount`). Number of volumes of different access mode (`longhornvolumeaccessmode<rwo/rwx/unknown>_count`). Number of volumes of different data locality (`longhornvolumedatalocality<disabled/besteffort/strictlocal/unknown>_count`). Number of volumes of different frontend (`longhornvolumefrontend<blockdev/iscsi>count`). Average volume size (`longhornvolumeaverage_size`). 
Average volume actual size (`longhornvolumeaverageactualsize`). Average number of snapshots per volume (`longhornvolumeaveragesnapshotcount`). Average number of replicas per volume (`longhornvolumeaveragenumberof_replicas`). Average Longhorn component CPU usage (`longhorn<engineimage/instancemanager/manager/ui>averagecpuusage_core`) Average Longhorn component CPU usage (`longhorn<engineimage/instancemanager/manager/ui>averagememoryusage_mib`) Settings (`longhornsetting<name>`): Settings to exclude: SettingNameBackupTargetCredentialSecret SettingNameDefaultEngineImage SettingNameDefaultInstanceManagerImage SettingNameDefaultShareManagerImage SettingNameDefaultBackingImageManagerImage SettingNameSupportBundleManagerImage SettingNameCurrentLonghornVersion SettingNameLatestLonghornVersion SettingNameStableLonghornVersions SettingNameDefaultLonghornStaticStorageClass SettingNameDeletingConfirmationFlag SettingNameDefaultDataPath SettingNameUpgradeChecker SettingNameAllowCollectingLonghornUsage SettingNameDisableReplicaRebuild (deprecated) SettingNameGuaranteedEngineCPU (deprecated) Settings that requires processing to identify their general purpose: SettingNameBackupTarget (the backup target type/protocol, ex: cifs, nfs, s3) Settings that should be collected as boolean (true if configured, false if not): SettingNameTaintToleration SettingNameSystemManagedComponentsNodeSelector SettingNameRegistrySecret SettingNamePriorityClass SettingNameStorageNetwork Other settings that should be collected as it is. Example: ``` name: upgrade_request time appversion hostkernelrelease hostosdistro kubernetesnodeprovider kubernetesversion longhornengineimageaveragecpuusagecore longhornengineimageaveragememoryusagemib longhorninstancemanageraveragecpuusagecore longhorninstancemanageraveragememoryusagemib longhornmanageraveragecpuusagecore longhornmanageraveragememoryusagemib longhornnamespaceuid longhornnodecount longhornnodedisknvmecount longhornsettingallownodedrainwithlasthealthyreplica longhornsettingallowrecurringjobwhilevolumedetached longhornsettingallowvolumecreationwithdegradedavailability longhornsettingautocleanupsystemgeneratedsnapshot longhornsettingautodeletepodwhenvolumedetachedunexpectedly longhornsettingautosalvage longhornsettingbackingimagecleanupwaitinterval longhornsettingbackingimagerecoverywaitinterval longhornsettingbackupcompressionmethod longhornsettingbackupconcurrentlimit longhornsettingbackuptarget longhornsettingbackupstorepollinterval longhornsettingconcurrentautomaticengineupgradepernodelimit longhornsettingconcurrentreplicarebuildpernodelimit longhornsettingconcurrentvolumebackuprestorepernodelimit longhornsettingcrdapiversion longhornsettingcreatedefaultdisklabelednodes longhornsettingdefaultdatalocality longhornsettingdefaultreplicacount longhornsettingdisablerevisioncounter longhornsettingdisableschedulingoncordonednode longhornsettingenginereplicatimeout longhornsettingfailedbackupttl longhornsettingfastreplicarebuildenabled longhornsettingguaranteedenginemanagercpu longhornsettingguaranteedinstancemanagercpu longhornsettingguaranteedreplicamanagercpu longhornsettingkubernetesclusterautoscalerenabled longhornsettingnodedownpoddeletionpolicy longhornsettingnodedrainpolicy longhornsettingorphanautodeletion longhornsettingpriorityclass longhornsettingrecurringfailedjobshistorylimit longhornsettingrecurringsuccessfuljobshistorylimit longhornsettingregistrysecret longhornsettingremovesnapshotsduringfilesystemtrim longhornsettingreplicaautobalance 
longhornsettingreplicafilesynchttpclienttimeout longhornsettingreplicareplenishmentwaitinterval longhornsettingreplicasoftantiaffinity longhornsettingreplicazonesoftantiaffinity longhornsettingrestoreconcurrentlimit longhornsettingrestorevolumerecurringjobs longhornsettingsnapshotdataintegrity longhornsettingsnapshotdataintegritycronjob longhornsettingsnapshotdataintegrityimmediatecheckaftersnapshotcreation longhornsettingstorageminimalavailablepercentage longhornsettingstoragenetwork longhornsettingstorageoverprovisioningpercentage longhornsettingstoragereservedpercentagefordefaultdisk longhornsettingsupportbundlefailedhistorylimit longhornsettingsystemmanagedcomponentsnodeselector longhornsettingsystemmanagedpodsimagepullpolicy longhornsettingtainttoleration longhornuiaveragecpuusagecore longhornuiaveragememoryusagemib longhornvolumeaccessmoderwocount longhornvolumeaverageactualsize longhornvolumeaveragenumberofreplicas longhornvolumeaveragesize longhornvolumeaveragesnapshotcount longhornvolumedatalocalitydisabledcount longhornvolumefrontendblockdevcount value -- - -- -- - -- -- - - -- -- - -- - -- - -- - - - - -- -- -- -- -- - - - -- - - - - - -- -- -- - - - -- - - -- -- -- -- - - -- - - -- - -- -- -- 1683598256887331729 v1.5.0-dev 5.3.18-59.37-default \"sles\" k3s v1.23.15+k3s1 5m 11 4m 83 22m 85 1b96b299-b785-468b-ab80-b5b5b12fbe00 3 1 false false true true true true 60 300 lz4 5 none 300 0 5 5 longhorn.io/v1beta2 false disabled 3 false true 8 1440 true 12 12 12 false do-nothing block-if-contains-last-replica false false 1 1 false false disabled 30 600 false true 5 false fast-check 0 0 /7 * false 25 false 200 30 1 false if-not-present false 0 4 3 79816021 2 8589934592 0 3 3 1 1683598257082240493 v1.5.0-dev 5.3.18-59.37-default \"sles\" k3s v1.23.15+k3s1 1 1 1683598257825718008 v1.5.0-dev 5.3.18-59.37-default \"sles\" k3s v1.23.15+k3s1 1 1 ``` Set up the upgrade responder server. Verify the database when the `Allow Collecting Longhorn Usage Metrics` setting is enabled or disabled. `None` `None`" } ]
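A hedged sketch of flipping the proposed setting with kubectl once it exists; the resource name below is an assumption derived from `SettingNameAllowCollectingLonghornUsage`, so list the settings first to confirm it in your cluster:

```bash
# Find the exact setting name
kubectl -n longhorn-system get settings.longhorn.io | grep -i collect

# Opt out of the extra usage metrics (assumed setting name)
kubectl -n longhorn-system patch settings.longhorn.io \
    allow-collecting-longhorn-usage-metrics \
    --type merge -p '{"value":"false"}'
```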
{ "category": "Runtime", "file_name": "20230420-upgrade-checker-info-collection.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- markdownlint-disable-next-line first-line-heading --> ](https://github.com/cloudnativelabs/kube-router/actions/workflows/ci.yml?query=branch%3Amaster) ](https://goreportcard.com/report/github.com/cloudnativelabs/kube-router) ](https://kubernetes.slack.com/messages/C8DCQGTSB/) ](https://hub.docker.com/r/cloudnativelabs/kube-router/) ](https://microbadger.com/images/cloudnativelabs/kube-router) ](https://github.com/cloudnativelabs/kube-router/releases) ](https://opensource.org/licenses/Apache-2.0) Kube-router is a turnkey solution for Kubernetes networking with aim to provide operational simplicity and high performance. kube-router does it all. With all features enabled, kube-router is a lean yet powerful alternative to several network components used in typical Kubernetes clusters. All this from a single DaemonSet/Binary. It doesn't get any easier. kube-router uses the Linux kernel's LVS/IPVS features to implement its K8s Services Proxy. Kube-router fully leverages power of LVS/IPVS to provide a rich set of scheduling options and unique features like DSR (Direct Server Return), L3 load balancing with ECMP for deployments where high throughput, minimal latency and high-availability are crucial. Read more about the advantages of IPVS for container load balancing: kube-router handles Pod networking efficiently with direct routing thanks to the BGP protocol and the GoBGP Go library. It uses the native Kubernetes API to maintain distributed pod networking state. That means no dependency on a separate datastore to maintain in your cluster. kube-router's elegant design also means there is no dependency on another CNI plugin. The provided by the CNI project is all you need. While it is likely that you already have this plugin on your file system if you've installed Kubernetes, kube-router will install the plugins it needs for you in `/opt/cni/bin` if it sees you're missing them. Read more about the advantages and potential of BGP with Kubernetes: Enabling Kubernetes is easy with kube-router -- just add a flag to kube-router. It uses ipsets with iptables to ensure your firewall rules have as little performance impact on your cluster as possible. Kube-router supports the networking.k8s.io/NetworkPolicy API or network policy V1/GA and also network policy beta semantics. Read more about kube-router's approach to Kubernetes Network Policies: If you have other networking devices or SDN systems that talk BGP, kube-router will fit in perfectly. From a simple full node-to-node mesh to per-node peering configurations, most routing needs can be" }, { "data": "The configuration is Kubernetes native (annotations) just like the rest of kube-router, so use the tools you already know! Since kube-router uses GoBGP, you have access to a modern BGP API platform as well right out of the box. Kube-router also provides a way to expose services outside the cluster by advertising ClusterIP and externalIPs to configured BGP peers. Kube-routesalso support MD5 password based authentication and uses strict export policies so you can be assured routes are advertised to the underlay only as you intended. For more details please refer to the . A key design tenet of Kube-router is to use standard Linux networking stack and toolset. There is no overlays or SDN pixie dust, but just plain good old networking. You can use standard Linux networking tools like iptables, ipvsadm, ipset, iproute, traceroute, tcpdump etc. to troubleshoot or observe data path. 
When kube-router is run as a daemonset, the official kube-router image also ships with these automatically configured for your cluster. Although it does the work of several of its peers in one binary, kube-router does it all with a relatively , partly because IPVS is already there on your Kubernetes nodes waiting to help you do amazing things. A primary motivation for kube-router is performance. The combination of BGP for inter-node Pod networking and IPVS for load balanced proxy Services is a perfect recipe for high-performance cluster networking at scale. BGP ensures that the data path is dynamic and efficient, and IPVS provides in-kernel load balancing that has been thoroughly tested and optimized. Kube-router is being used in several production clusters by a diverse set of users, ranging from financial firms and gaming companies to universities. For years we have listened to users and incorporated feedback. The core functionality is now very stable. We encourage all kinds of contributions, be they documentation, code, fixing typos, tests - anything at all. Please read the . If you experience any problems, please reach us on kube-router for quick help. Feel free to leave feedback or raise questions by opening an issue . Kube-router builds upon the following libraries:" } ]
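Since the README points at standard Linux tooling for observing the data path, a couple of hedged inspection commands one might run on a node (or inside the kube-router pod, which ships these tools); the output layout will vary with your cluster:

```bash
# IPVS virtual services programmed for ClusterIPs / NodePorts
sudo ipvsadm -Ln

# ipsets maintained for network policies
sudo ipset list | head

# BGP peering state via GoBGP's CLI (available in the kube-router image)
gobgp neighbor
```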
{ "category": "Runtime", "file_name": "README.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage multicast BPF programs ``` -h, --help help for multicast ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Manage the multicast groups. - Manage the multicast subscribers." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document provides an overview on how to run Kata Containers with the AWS Firecracker hypervisor. AWS Firecracker is an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant container and function-based services that provide serverless operational models. AWS Firecracker runs workloads in lightweight virtual machines, called `microVMs`, which combine the security and isolation properties provided by hardware virtualization technology with the speed and flexibility of Containers. Please refer to AWS Firecracker for more details. This document requires the presence of Kata Containers on your system. Install using the instructions available through the following links: Kata Containers Kata Containers manual installation: Automated installation does not seem to be supported for Clear Linux, so please use steps. Note: Create rootfs image and not initrd image. For information about the supported version of Firecracker, see the Kata Containers . To install Firecracker we need to get the `firecracker` and `jailer` binaries: ```bash $ release_url=\"https://github.com/firecracker-microvm/firecracker/releases\" $ version=$(yq read <kata-repository>/versions.yaml assets.hypervisor.firecracker.version) $ arch=`uname -m` $ curl ${release_url}/download/${version}/firecracker-${version}-${arch} -o firecracker $ curl ${release_url}/download/${version}/jailer-${version}-${arch} -o jailer $ chmod +x jailer firecracker ``` To make the binaries available from the default system `PATH` it is recommended to move them to `/usr/local/bin` or add a symbolic link: ```bash $ sudo ln -s $(pwd)/firecracker /usr/local/bin $ sudo ln -s $(pwd)/jailer /usr/local/bin ``` More details can be found in In order to run Kata with AWS Firecracker a block device as the backing store for a VM is required. To interact with `containerd` and Kata we use the `devmapper` `snapshotter`. To check support for your `containerd` installation, you can run: ``` $ ctr plugins ls |grep devmapper ``` if the output of the above command is: ``` io.containerd.snapshotter.v1 devmapper linux/amd64 ok ``` then you can skip this section and move on to `Configure Kata Containers with AWS Firecracker` If the output of the above command is: ``` io.containerd.snapshotter.v1 devmapper linux/amd64 error ``` then we need to setup `devmapper` `snapshotter`. Based on a [very useful guide](https://docs.docker.com/storage/storagedriver/device-mapper-driver/) from docker, we can set it up using the following scripts: Note: The following scripts assume a 100G sparse file for storing container images, a 10G sparse file for the thin-provisioning pool and 10G base image files for any sandboxed container created. This means that we will need at least 10GB free space. 
``` set -ex DATA_DIR=/var/lib/containerd/devmapper POOL_NAME=devpool mkdir -p ${DATA_DIR} sudo touch \"${DATA_DIR}/data\" sudo truncate -s 100G \"${DATA_DIR}/data\" sudo touch \"${DATA_DIR}/meta\" sudo truncate -s 10G \"${DATA_DIR}/meta\" DATADEV=$(sudo losetup --find --show \"${DATADIR}/data\") METADEV=$(sudo losetup --find --show \"${DATADIR}/meta\") SECTOR_SIZE=512 DATASIZE=\"$(sudo blockdev --getsize64 -q ${DATADEV})\" LENGTHINSECTORS=$(bc <<< \"${DATASIZE}/${SECTORSIZE}\") DATABLOCKSIZE=128 LOWWATERMARK=32768 sudo dmsetup create \"${POOL_NAME}\" \\ --table \"0 ${LENGTHINSECTORS} thin-pool ${METADEV} ${DATADEV} ${DATABLOCKSIZE} ${LOWWATERMARK}\" cat << EOF [plugins]" }, { "data": "poolname = \"${POOLNAME}\" rootpath = \"${DATADIR}\" baseimagesize = \"10GB\" discard_blocks = true EOF ``` Make it executable and run it: ```bash $ sudo chmod +x ~/scripts/devmapper/create.sh $ cd ~/scripts/devmapper/ $ sudo ./create.sh ``` Now, we can add the `devmapper` configuration provided from the script to `/etc/containerd/config.toml`. Note: If you are using the default `containerd` configuration (`containerd config default >> /etc/containerd/config.toml`), you may need to edit the existing `[plugins.\"io.containerd.snapshotter.v1.devmapper\"]`configuration. Save and restart `containerd`: ```bash $ sudo systemctl restart containerd ``` We can use `dmsetup` to verify that the thin-pool was created successfully. ```bash $ sudo dmsetup ls ``` We should also check that `devmapper` is registered and running: ```bash $ sudo ctr plugins ls | grep devmapper ``` This script needs to be run only once, while setting up the `devmapper` `snapshotter` for `containerd`. Afterwards, make sure that on each reboot, the thin-pool is initialized from the same data directory. Otherwise, all the fetched containers (or the ones that you have created) will be re-initialized. A simple script that re-creates the thin-pool from the same data directory is shown below: ``` set -ex DATA_DIR=/var/lib/containerd/devmapper POOL_NAME=devpool DATADEV=$(sudo losetup --find --show \"${DATADIR}/data\") METADEV=$(sudo losetup --find --show \"${DATADIR}/meta\") SECTOR_SIZE=512 DATASIZE=\"$(sudo blockdev --getsize64 -q ${DATADEV})\" LENGTHINSECTORS=$(bc <<< \"${DATASIZE}/${SECTORSIZE}\") DATABLOCKSIZE=128 LOWWATERMARK=32768 sudo dmsetup create \"${POOL_NAME}\" \\ --table \"0 ${LENGTHINSECTORS} thin-pool ${METADEV} ${DATADEV} ${DATABLOCKSIZE} ${LOWWATERMARK}\" ``` We can create a systemd service to run the above script on each reboot: ```bash $ sudo nano /lib/systemd/system/devmapper_reload.service ``` The service file: ``` [Unit] Description=Devmapper reload script [Service] ExecStart=/path/to/script/reload.sh [Install] WantedBy=multi-user.target ``` Enable the newly created service: ```bash $ sudo systemctl daemon-reload $ sudo systemctl enable devmapper_reload.service $ sudo systemctl start devmapper_reload.service ``` To configure Kata Containers with AWS Firecracker, copy the generated `configuration-fc.toml` file when building the `kata-runtime` to either `/etc/kata-containers/configuration-fc.toml` or `/usr/share/defaults/kata-containers/configuration-fc.toml`. The following command shows full paths to the `configuration.toml` files that the runtime loads. It will use the first path that exists. (Please make sure the kernel and image paths are set correctly in the `configuration.toml` file) ```bash $ sudo kata-runtime --show-default-config-paths ``` Next, we need to configure containerd. Add a file in your path (e.g. 
`/usr/local/bin/containerd-shim-kata-fc-v2`) with the following contents: ``` #!/bin/bash KATA_CONF_FILE=/etc/kata-containers/configuration-fc.toml /usr/local/bin/containerd-shim-kata-v2 $@ ``` Note: You may need to edit the paths of the configuration file and the `containerd-shim-kata-v2` to correspond to your setup. Make it executable: ```bash $ sudo chmod +x /usr/local/bin/containerd-shim-kata-fc-v2 ``` Add the relevant section in `containerd`'s `config.toml` file (`/etc/containerd/config.toml`): ``` [plugins.cri.containerd.runtimes] [plugins.cri.containerd.runtimes.kata-fc] runtime_type = \"io.containerd.kata-fc.v2\" ``` Note: If you are using the default `containerd` configuration (`containerd config default >> /etc/containerd/config.toml`), the configuration should change to: ``` [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.kata-fc] runtime_type = \"io.containerd.kata-fc.v2\" ``` Restart `containerd`: ```bash $ sudo systemctl restart containerd ``` We are now ready to launch a container using Kata with Firecracker to verify that everything worked: ```bash $ sudo ctr images pull --snapshotter devmapper docker.io/library/ubuntu:latest $ sudo ctr run --snapshotter devmapper --runtime io.containerd.run.kata-fc.v2 -t --rm docker.io/library/ubuntu:latest kata-fc-test ```" } ]
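If the container fails to start, it can help to ask the Kata runtime itself whether the host and the Firecracker configuration look sane — a rough troubleshooting sketch, assuming the default paths used above (depending on your Kata version the subcommands are spelled `check`/`env` or `kata-check`/`kata-env`):

```bash
# Run Kata's host capability check against the Firecracker configuration
sudo KATA_CONF_FILE=/etc/kata-containers/configuration-fc.toml kata-runtime check
# Dump the environment Kata resolves (hypervisor, kernel and image paths, ...)
sudo KATA_CONF_FILE=/etc/kata-containers/configuration-fc.toml kata-runtime env
```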
{ "category": "Runtime", "file_name": "how-to-use-kata-containers-with-firecracker.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "If you are developing an application similar to AI model training and need to repeatedly read the same batch of data to iterate the model, it is highly recommended to use the CubeFS cache acceleration mechanism. This mechanism can significantly reduce read and write latency and improve throughput, making model training more efficient. Using the local disk of the computing node as a data block cache can significantly improve the efficiency of data reading. The read requests from the client will first access the data cache area of the local disk. If the cache is hit, the required data will be obtained directly from the local disk. Otherwise, the data will be read from the backend replicaction subsystem or erasure coding subsystem, and the cached data will be asynchronously written to the local disk to improve the access performance of subsequent requests. To enable local disk caching, you need to start the local cache service first. ``` bash ./cfs-bcache -c bcache.json ``` Here is a table describing the meanings of various parameters in the configuration file: | Parameter | Type | Meaning | Required | |--|--|--|--| | cacheDir | string | Local storage path for cached data: allocated space (in bytes)| Yes | | logDir | string | Path for log files| Yes | | logLevel | string slice | Log levels| Yes | Then you just need to add the \"bcacheDir\" item in the client's configuration file: ``` bash { ... \"bcacheDir\": \"path/to/data\" //paths of the directories that you want to cache locally. } ``` CubeFS ensures the eventual consistency of local cache through the following strategies. Disable cache based on file extension: For example, checkpoint files generated during training tasks will be repeatedly updated during task execution, so it is not recommended to do local caching for them. You can disable caching for such files by adding \"pt\" to \"bcacheFilterFiles\" in the client's configuration file. ``` bash { ... \"bcacheFilterFiles\": \"pt\" //disable caching for files with the \".pt\" extension } ``` Periodic checkThe client periodically compares the metadata of cached data to detect any changes and removes the local cache data if there are any modifications. Proactively invalidating. In the scenario of a single mount point, after the user updates the data, the local cache data will be deleted; while in the scenario of multiple mount points, other mount points can only wait for the cache data to expire after the lifecycle expires. If the amount of data is small and you want to further improve the read cache latency, you can use the memory of the compute node as local cache. /dev/shm is a Linux shared memory filesystem that supports dynamically adjusting its capacity size. Here, we will adjust /dev/shm to 15GB, indicating that up to 15GB of memory can be used to cache data. ``` bash ``` Then you can set the \"cacheDir\" item in the configuration file of the bache service to a subdirectory of /dev/shm. For reference: ``` bash { ... \"cacheDir\":\"/dev/shm/cubefs-cache:16106127360\" //Using 15GB of memory as the data cache. } ``` The client-side local cache is exclusively owned by the node where it is" }, { "data": "When a large number of different clients need to repeatedly read the same batch of data sets, caching the data in the replica subsystem (distributed caching) can improve cache efficiency. Assuming that training data is stored in a lower-cost erasure coding subsystem, by enabling pre-loading, the data can be cached in advance into the replica subsystem. 
The client will prioritize reading data from the replica subsystem. If the cache is hit, it will be directly returned; if not, the data will be read from the erasure coding subsystem, while asynchronously caching the data to the replica subsystem. If both local caching and distributed caching are enabled, data will be read in the order of local cache, replica subsystem, and erasure coding subsystem. When there is a cache miss, data will be read from the backend, and then asynchronously cached to each level of cache to ensure subsequent access performance. To use distributed caching, you need to enable the cache switch and set the cache capacity by setting the `cacheAction` and `cacheCap` properties when creating an erasure-coded volume or through the volume management interface. For example, you can use the following command to configure a 100GB distributed cache for an erasure-coded volume. ``` bash curl -v \"http://127.0.0.1:17010/vol/update?name=test&cacheCap=100&cacheAction=1&authKey=md5(owner)\" ``` In hybrid cloud ML scenarios, to ensure data security and consistency, training data is usually stored in private cloud, and the computing nodes in public cloud access the data on the private cloud through dedicated lines or public networks. Such a cross-cloud data reading and writing approach is prone to high latency and large bandwidth overhead, while longer training time can also lead to wasted computational resources. By using CubeFS's local and distributed cache mechanisms, training data can be cached on public cloud nodes, reducing cross-cloud data transmission and improving training iteration efficiency. If the storage path of the training data set is fixed, the training data can be preloaded into the replica subsystem through warm-up to improve the training performance. ``` bash ./cfs-preload -c config.json ``` The meanings of each parameter in the configuration file are as shown in the following table: | Parameter | Type | Meaning | Required | |--|--|--|--| | volumeName | string | Name of the volume where preload data exists| Yes | | masterAddr | string | Master address of the cluster where preload data resides| Yes | | target | string | Storage path of the preload data within the volume | Yes | | logDir | string | Directory for storing logs | Yes | | logLevel | string | Log level| Yes | | ttl | string | TTL for preload data in seconds | Yes | | action | string | Preload action: \"preload\" for preloading operation; \"clear\" for preload data clearing | Yes | | action | string | Maximum concurrency for traversing the preload data directory | No | | action | string | Maximum file size for preheating, files smaller than this size will not be preloaded | No | | action | string | Maximum concurrency for preloading data files | No | In addition, computing nodes can enable local caching to further improve data access efficiency by caching the preloaded data from the replica subsystem to local disk/memory." } ]
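As a concrete illustration of the table above, a warm-up configuration could look like the following sketch; it only uses the parameters documented in the table, and the volume name, master address and target path are placeholders:

```bash
# Illustrative config.json for cfs-preload
cat > config.json <<'EOF'
{
    "volumeName": "test",
    "masterAddr": "127.0.0.1:17010",
    "target": "/dataset/train",
    "logDir": "/var/log/cfs-preload",
    "logLevel": "info",
    "ttl": "86400",
    "action": "preload"
}
EOF
# Preload the target directory into the replica subsystem
./cfs-preload -c config.json
```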
{ "category": "Runtime", "file_name": "cache.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "(devices-infiniband)= ```{note} The `infiniband` device type is supported for both containers and VMs. It supports hotplugging only for containers, not for VMs. ``` Incus supports two different kinds of network types for InfiniBand devices: `physical`: Passes a physical device from the host through to the instance. The targeted device will vanish from the host and appear in the instance. `sriov`: Passes a virtual function of an SR-IOV-enabled physical network device into the instance. ```{note} InfiniBand devices support SR-IOV, but in contrast to other SR-IOV-enabled devices, InfiniBand does not support dynamic device creation in SR-IOV mode. Therefore, you must pre-configure the number of virtual functions by configuring the corresponding kernel module. ``` To create a `physical` `infiniband` device, use the following command: incus config device add <instancename> <devicename> infiniband nictype=physical parent=<device> To create an `sriov` `infiniband` device, use the following command: incus config device add <instancename> <devicename> infiniband nictype=sriov parent=<sriovenableddevice> `infiniband` devices have the following device options: Key | Type | Default | Required | Description :-- | :-- | :-- | :-- | :-- `hwaddr` | string | randomly assigned | no | The MAC address of the new interface (can be either the full 20-byte variant or the short 8-byte variant, which will only modify the last 8 bytes of the parent device) `mtu` | integer | parent MTU | no | The MTU of the new interface `name` | string | kernel assigned | no | The name of the interface inside the instance `nictype` | string | - | yes | The device type (one of `physical` or `sriov`) `parent` | string | - | yes | The name of the host device or bridge" } ]
{ "category": "Runtime", "file_name": "devices_infiniband.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "English | Why Spiderpool need a multi-cluster network connectivity solution? Spiderpool requires a multi-cluster network connectivity solution for the following reasons: if our different clusters are distributed in the same data center, their networks are naturally interconnected. However, if they are spread across different data centers, the cluster subnets are isolated from each other and cannot communicate directly across data centers. Therefore, Spiderpool needs a multi-cluster network connectivity solution to address the issue of cross-data center multi-cluster network access. is an open-source multi-cluster network connectivity solution that uses tunneling technology to establish direct communication between Pods and Services in different Kubernetes clusters (running locally or in the public cloud). For more information, please refer to the . We can leverage Submariner to assist Spiderpool in addressing cross-data center multi-cluster network access issues. we will provide a detailed explanation of this feature. At least two Kubernetes clusters without CNI installed. and tools are already installed. This network topology diagram provides the following information: Clusters ClusterA and ClusterB are distributed across different data centers, and their respective cluster underlay subnets (172.110.0.0/16 and 172.111.0.0/16) cannot communicate directly due to network isolation in different data center networks. The gateway nodes can communicate with each other through the ens192 interface (10.6.0.0/16). The two clusters are connected through an IPSec tunnel established by Submariner. The tunnel is based on the ens192 interface of the gateway nodes, and it also accesses the Submariner Broker component through the ens192 interface. You can refer to the to install Spiderpool. Configure SpiderIPPool Since Submariner currently does not support multiple subnets, you can split the PodCIDR of each cluster into smaller subnets. Specify MacVlan Pods to get IP addresses from their respective smaller subnets for Underlay communication. Note: Ensure that these smaller subnets correspond to the connected Underlay subnets. 
For example, the PodCIDR of cluster-a is 172.110.0.0/16, you can create multiple smaller subnets (e.g., 172.110.1.0/24) within this larger subnet for Pod usage: ```shell ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: cluster-a spec: default: true ips: \"172.110.1.1-172.110.1.200\" subnet: 172.110.1.0/24 gateway: 172.110.0.1 EOF ``` the PodCIDR of cluster-b is 172.111.0.0/16, you can create multiple smaller subnets (e.g., 172.111.1.0/24) within this larger subnet for Pod usage: ```shell ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: cluster-b spec: default: true ips: \"172.111.1.1-172.111.1.200\" subnet: 172.111.1.0/24 gateway: 172.111.0.1 EOF ``` Configure SpiderMultusConfig Configure an SpiderMultusConfig in Cluster-a: ```shell ~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-conf namespace: kube-system spec: cniType: macvlan macvlan: master: ens224 ippools: ipv4: cluster-a coordinator: hijackCIDR: 10.243.0.0/18 172.111.0.0/16 EOF ``` Configure an SpiderMultusConfig in Cluster-b:" }, { "data": "~# cat <<EOF | kubectl apply -f - apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderMultusConfig metadata: name: macvlan-conf namespace: kube-system spec: cniType: macvlan macvlan: master: ens224 ippools: ipv4: cluster-b coordinator: hijackCIDR: 10.233.0.0/18 172.110.0.0/16 EOF ``` > * Configuration of the host interface ens224 as the parent interface for Macvlan. Macvlan will create sub-interfaces on this network card for Pod use. > * Configuration of coordinator.hijackCIDR to specify the subnet information for the Service and Pods in the remote cluster. When a Pod is started, the coordinator will insert routes for these subnets into the Pod, enabling traffic to these destinations to be forwarded from the node. This facilitates better collaboration with Submariner. To install Submariner using the Subctl tool, you can refer to the official . However, when executing `subctl join`, make sure to manually specify the subnet for the MacVlan Underlay Pods mentioned in the previous steps. ```shell subctl join --kubeconfig cluster-a.config broker-info.subm --clusterid=cluster-a --clustercidr=172.110.0.0/16 subctl join --kubeconfig cluster-b.config broker-info.subm --clusterid=cluster-b --clustercidr=172.111.0.0/16 ``` Currently, Submariner only supports specifying a single Pod subnet and does not support multiple subnets. After the installation is complete, check the status of the Submariner components: ```shell [root@controller-node-1 ~]# subctl show all Cluster \"cluster.local\" Detecting broker(s) NAMESPACE NAME COMPONENTS GLOBALNET GLOBALNET CIDR DEFAULT GLOBALNET SIZE DEFAULT DOMAINS submariner-k8s-broker submariner-broker service-discovery, connectivity no 242.0.0.0/8 65536 Showing Connections GATEWAY CLUSTER REMOTE IP NAT CABLE DRIVER SUBNETS STATUS RTT avg. 
controller-node-1 cluster-b 10.6.168.74 no libreswan 10.243.0.0/18, 172.111.0.0/16 connected 661.938s Showing Endpoints CLUSTER ENDPOINT IP PUBLIC IP CABLE DRIVER TYPE cluster01 10.6.168.73 140.207.201.152 libreswan local cluster02 10.6.168.74 140.207.201.152 libreswan remote Showing Gateways NODE HA STATUS SUMMARY controller-node-1 active All connections (1) are established Showing Network details Discovered network details via Submariner: Network plugin: \"\" Service CIDRs: [10.233.0.0/18] Cluster CIDRs: [172.110.0.0/16] Showing versions COMPONENT REPOSITORY CONFIGURED RUNNING submariner-gateway quay.io/submariner 0.16.0 release-0.16-d1b6c9e194f8 submariner-routeagent quay.io/submariner 0.16.0 release-0.16-d1b6c9e194f8 submariner-metrics-proxy quay.io/submariner 0.16.0 release-0.16-d48224e08e06 submariner-operator quay.io/submariner 0.16.0 release-0.16-0807883713b0 submariner-lighthouse-agent quay.io/submariner 0.16.0 release-0.16-6f1d3f22e806 submariner-lighthouse-coredns quay.io/submariner 0.16.0 release-0.16-6f1d3f22e806 ``` As shown above, the Submariner components are running normally, and the tunnels have been successfully established. If you encounter issues with the tunnel not being established and the submariner-gateway pod remains in a CrashLoopBackOff state, potential reasons could include: Select suitable nodes as gateway nodes, ensuring they can communicate with each other. Otherwise, the tunnel will not be established. If the pod logs show: \"Error creating local endpoint object error=\"error getting CNI Interface IP address: unable to find CNI Interface on the host which has IP from [\\\"172.100.0.0/16\\\"].Please disable the health check if your CNI does not expose a pod IP on the nodes\", Please check if the gateway nodes are configured with addresses in the \"172.100.0.0/16\" subnet. 
If not, configure" }, { "data": "Alternatively, when executing subctl join, you can disable the health-check feature for the gateways: `subctl join --health-check=false ...` Use the following commands to create test Pods and Services in clusters cluster-a and cluster-b: ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 2 selector: matchLabels: app: test-app template: metadata: annotations: v1.multus-cni.io/default-network: kube-system/macvlan-confg labels: app: test-app spec: containers: name: test-app image: nginx imagePullPolicy: IfNotPresent ports: name: http containerPort: 80 protocol: TCP apiVersion: v1 kind: Service metadata: name: test-app-svc labels: app: test-app spec: type: ClusterIP ports: port: 80 protocol: TCP targetPort: 80 selector: app: test-app EOF ``` Check the running status of the Pods: View in Cluster-a: ```shell [root@controller-node-1 ~]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-696bf7cf7d-bkstk 1/1 Running 0 20m 172.110.1.131 controller-node-1 <none> <none> [root@controller-node-1 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test-app-svc ClusterIP 10.233.62.51 <none> 80/TCP 20m ``` View in Cluster-b: ```shell [root@controller-node-1 ~]# kubectl get po -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-8f5cdd468-5zr8n 1/1 Running 0 21m 172.111.1.136 controller-node-1 <none> <none> [root@controller-node-1 ~]# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE test-app-svc ClusterIP 10.243.2.135 <none> 80/TCP 21m ``` Test communication between Pods across clusters: First, enter the Pod and check the routing information to ensure that when accessing the remote Pod and Service, traffic goes through the host's network protocol stack for forwarding: ```shell [root@controller-node-1 ~]# kubectl exec -it test-app-696bf7cf7d-bkstk -- ip route 10.7.168.73 dev veth0 src 172.110.168.131 10.233.0.0/18 via 10.7.168.73 dev veth0 src 172.110.168.131 10.233.64.0/18 via 10.7.168.73 dev veth0 src 172.110.168.131 10.233.74.89 dev veth0 src 172.110.168.131 10.243.0.0/18 via 10.7.168.73 dev veth0 src 172.110.168.131 172.110.1.0/24 dev eth0 src 172.110.168.131 172.110.168.73 dev veth0 src 172.110.168.131 172.111.0.0/16 via 10.7.168.73 dev veth0 src 172.110.168.131 ``` Confirm from the routing information that 10.243.0.0/18 and 172.111.0.0/16 are forwarded through veth0. Test access from a Pod in Cluster-A to a Pod in the remote Cluster-b: ```shell [root@controller-node-1 ~]# kubectl exec -it test-app-696bf7cf7d-bkstk -- ping -c2 172.111.168.136 PING 172.111.168.136 (172.111.168.136): 56 data bytes 64 bytes from 172.111.168.136: seq=0 ttl=62 time=0.900 ms 64 bytes from 172.111.168.136: seq=1 ttl=62 time=0.796 ms 172.111.168.136 ping statistics 2 packets transmitted, 2 packets received, 0% packet loss round-trip min/avg/max = 0.796/0.848/0.900 ms ``` Test access from a Pod in Cluster-a to a Service in the remote Cluster-b: ```shell [root@controller-node-1 ~]# kubectl exec -it test-app-696bf7cf7d-bkstk -- curl -I 10.243.2.135 HTTP/1.1 200 OK Server: nginx/1.23.1 Date: Fri, 08 Dec 2023 03:32:04 GMT Content-Type: text/html Content-Length: 4055 Last-Modified: Fri, 08 Dec 2023 03:32:04 GMT Connection: keep-alive ETag: \"632d1faa-fd7\" Accept-Ranges: bytes ``` Spiderpool can address the challenge of cross-datacenter multi-cluster network connectivity with the assistance of Submariner." } ]
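Beyond the manual ping and curl tests above, Submariner ships its own diagnostics, which can be a quicker way to confirm the tunnels and datapath — a sketch, run from the host that holds the kubeconfig files used earlier:

```bash
# Run Submariner's built-in health checks against each cluster
subctl diagnose all --kubeconfig cluster-a.config
subctl diagnose all --kubeconfig cluster-b.config
```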
{ "category": "Runtime", "file_name": "submariner.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "(images-copy)= To add images to an image store, you can either copy them from another server or import them from files (either local files or files on a web server). To copy an image from one server to another, enter the following command: incus image copy [<sourceremote>:]<image> <targetremote>: ```{note} To copy the image to your local image store, specify `local:` as the target remote. ``` See for a list of all available flags. The most relevant ones are: `--alias` : Assign an alias to the copy of the image. `--copy-aliases` : Copy the aliases that the source image has. `--auto-update` : Keep the copy up-to-date with the original image. `--vm` : When copying from an alias, copy the image that can be used to create virtual machines. If you have image files that use the required {ref}`image-format`, you can import them into your image store. There are several ways of obtaining such image files: Exporting an existing image (see {ref}`images-manage-export`) Building your own image using `distrobuilder` (see {ref}`images-create-build`) Downloading image files from a {ref}`remote image server <image-servers>` (note that it is usually easier to {ref}`use the remote image <images-remote>` directly instead of downloading it to a file and importing it) To import an image from the local file system, use the command. This command supports both {ref}`unified images <image-format-unified>` (compressed file or directory) and {ref}`split images <image-format-split>` (two files). To import a unified image from one file or directory, enter the following command: incus image import <imagefileordirectorypath> [<target_remote>:] To import a split image, enter the following command: incus image import <metadatatarballpath> <rootfstarballpath> [<target_remote>:] In both cases, you can assign an alias with the `--alias` flag. See for all available flags. You can import image files from a remote web server by URL. This method is an alternative to running an Incus server for the sole purpose of distributing an image to users. It only requires a basic web server with support for custom headers (see {ref}`images-copy-http-headers`). The image files must be provided as unified images (see {ref}`image-format-unified`). To import an image file from a remote web server, enter the following command: incus image import <URL> You can assign an alias to the local image with the `--alias` flag. (images-copy-http-headers)= Incus requires the following custom HTTP headers to be set by the web server: `Incus-Image-Hash` : The SHA256 of the image that is being downloaded. `Incus-Image-URL` : The URL from which to download the image. Incus sets the following headers when querying the server: `Incus-Server-Architectures` : A comma-separated list of architectures that the client supports. `Incus-Server-Version` : The version of Incus in use." } ]
{ "category": "Runtime", "file_name": "images_copy.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "name: Feature Request about: Suggest an idea for this project <!-- Make sure to include as much information as possible so we can add it as quickly as possible. --> <!-- If you know how to add this feature, please open a pull request on https://github.com/openebs/openebs/compare/?template=features.md --> <!-- If you can't answer some sections, please delete them --> <!-- Provide a description of this change or addition --> <!-- Why is this change important to you? How would you use it? How can it benefit other users? --> <!-- Suggest an idea for implementing this change or addition --> <!-- Add optional screenshots of this change or addition -->" } ]
{ "category": "Runtime", "file_name": "feature-request.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Piraeus Datastore uses to mirror your volume data to one or more nodes. This document aims to describe the way to install the necessary files for common Linux distributions. To check if your nodes already have the necessary files installed, try running: ``` $ test -d /lib/modules/$(uname -r)/build/ && echo found headers ``` If the command prints `found headers`, your nodes are good to go. Installation of Linux Kernel headers on Ubuntu can be done through the `apt` package manager: ``` $ sudo apt-get update $ sudo apt-get install -y linux-headers-$(uname -r) ``` In addition, you can install the `linux-headers-virtual` package. This causes `apt upgrade` to install the headers matching any newly installed kernel versions. ``` $ sudo apt-get update $ sudo apt-get install -y linux-headers-virtual ``` Installation of Linux Kernel headers on Debian can be done through the `apt` package manager: ``` $ sudo apt-get update $ sudo apt-get install -y linux-headers-$(uname -r) ``` In addition, you can install an additional package, that causes `apt` to also install Kernel headers on upgrade: ``` $ sudo apt-get update $ sudo apt-get install -y linux-headers-$(dpkg --print-architecture) ``` Installing on RedHat Enterprise Linux or compatible distributions, such as AlmaLinux or Rocky Linux can be done through the `dnf` package manager: ``` $ sudo dnf install -y kernel-devel-$(uname -r) ``` In addition, you can install an additional package, that causes `yum` to also install Kernel headers on upgrade: ``` $ sudo dnf install -y kernel-devel ```" } ]
{ "category": "Runtime", "file_name": "install-kernel-headers.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "Longhorn is capable of backing up or restoring volume in multiple threads and using more efficient compression methods for improving Recovery Time Objective (RTO). Support multi-threaded volume backup and restore. Support efficient compression algorithm (`lz4`) and disable compression. Support backward compatibility of existing backups compressed by `gzip`. Larger backup block size helps improve the backup efficiency more and decrease the block lookup operations. In the enhancement, the adaptive large backup block size is not supported and will be handled in https://github.com/longhorn/longhorn/issues/5215. Introduce multi-threaded volume backup and restore. Number of backup and restore threads are configurable by uses. Introduce efficient compression methods. By default, the compression method is `lz4`, and user can globally change it to `none` or `gzip`. Additionally, the per-volume compression method can be customized. Existing backups compressed by `gzip` will not be impacted. Longhorn supports the backup and restore of volumes. Although the underlying computing and storage are powerful, the single thread implementation and low efficiency `gzip` compression method lead to slower backup and restore times and poor RTO.The enhancement aims to increase backup and restore efficiency through the use of multiple threads and efficient compression methods. The new parameters can be configured to accommodate a variety of applications and platforms, such as limiting the number of threads in an edge device or disabling compression for multimedia data. For existing volumes that already have backups, the compression method remains `gzip` for backward compatibility. Multi-threaded backups and restores are supported for subsequent backups. By default, the global backup compression method is set to `lz4`. By editing the global setting `backup-compression-method`, users can configure the compression method to `none` or `gzip`. The backup compression method can be customized per volume by editing `volume.spec.backupCompressionMethod` for different data format in the volume. Number of backup threads per backup is configurable by the global setting `backup-concurrent-limit`. Number of restore threads per backup is configurable by the global setting `restore-concurrent-limit`. Changing the compression method of a volume having backups is not supported. Add `compression-method` to longhorn-engine binary `backup create` command. Add `concurrent-limit` to longhorn-engine binary `backup create` command. Add `concurrent-limit` to longhorn0engine binary `backup restore` command. engine-proxy Add `compressionMethod` and `concurrentLimit` to EngineSnapshotBackup method. Add `concurrentLimit` to `EngineBackupRestore` method. syncagent Add `compressionMethod` and `concurrentLimit` to syncagent `BackupCreate` method. Add `concurrentLimit` to syncagent `BackupRestore` method. backup-compression-method This setting allows users to specify backup compression method. Options: `none`: Disable the compression method. Suitable for multimedia data such as encoded images and videos. `lz4`: Suitable for text files. `gzip`: A bit of higher compression ratio but relatively slow. Not recommended. Default: lz4 backup-concurrent-limit This setting controls how many worker threads per backup job concurrently. Default: 5 restore-concurrent-limit This setting controls how many worker threads per restore job concurrently. 
Default: 5 Introduce `volume.spec.backupCompressionMethod` Introduce `backup.status.compressionMethod` A producer-consumer pattern is used to achieve multi-threaded backups. In this implementation, there are one producer and multiple consumers which is controlled by the global setting `backup-concurrent-limit`. Producer Open the disk file to be backed up and create a `Block` channel. Iterate the blocks in the disk" }, { "data": "Skip sparse blocks. Send the data blocks information including offset and size to `Block` channel. Close the `Block` channel after finishing the iteration. Consumers Block handling goroutines (consumers) are created and consume blocks from the `Block` channel. Processing blocks Calculate the checksum of the incoming block. Check the in-memory `processingBlocks` map to determine whether the block is being processed. If YES, end up appending the block to `Blocks` that record the blocks processed in the backup. If NO, check the remote backupstore to determine whether the block exists. If YES, append the block to `Blocks`. If NO, compress the block, upload the block, and end up appending it to the `Blocks`. After the blocks have been consumed and the `Block` channel has been closed, the goroutines are terminated. Then, update the volume and backup metadata files in remote backupstore. A producer-consumer pattern is used to achieve multi-threaded restores. In this implementation, there are one producer and multiple consumers which is controlled by the global setting `restore-concurrent-limit`. Producer Create a `Block` channel. Open the backup metadata file and get the information, offset, size and checksum, of the blocks. Iterate the blocks and send the block information to the `Block` channel. Close the `Block` channel after finishing the iteration. Consumers Block handling goroutines (consumers) are created and consume blocks from the `Block` channel. It is necessary for each consumer to open the disk file in order to avoid race conditions between the seek and write operations. Read the block data from the backupstore, verify the data integrity and write to the disk file. After the blocks have been consumed and the `Block` channel has been closed, the goroutines are terminated. In summary, the backup throughput is increased by 15X when using `lz4` and `10` concurrent threads in comparison with the backup in Longhorn v1.4.0. The restore (to a volume with 3 replica) throughput is increased by 140%, and the throughput is limited by the IO bound of the backupstore server. | | | ||| | Platform | Equinix | | Host | Japan-Tokyo/m3.small.x86 | | CPU | Intel(R) Xeon(R) E-2378G CPU @ 2.80GHz | | RAM | 64 GiB | | Disk | Micron5300MTFD | | OS | Ubuntu 22.04.1 LTS(kernel 5.15.0-53-generic) | | Kubernetes | v1.23.6+rke2r2 | | Longhorn | master-branch + backup improvement | | Nodes | 3 nodes | | Backupstore target | external MinIO S3 (m3.small.x86) | | Volume | 50 GiB containing 1GB filesystem metadata and 10 GiB random data (3 replicas) | Single-Threaded Backup and Restore by Different Compression Methods Multi-Threaded Backup Multi-Threaded Restore to One Volume with 3 Replicas Restore hit the IO bound of the backupstore server, because the throughput is saturated from 5 worker threads. Multi-Threaded Restore to One Volume with 1 Replica Create a volumes and then create backups using the compression method, `none`, `lz4` or `gzip` and different number of backup threads. The backups should succeed. Restore the backups created in step 1 by different number of restore threads. 
Verify the data integrity of the disk files." } ]
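For reference, a rough sketch of how the new global settings could be adjusted from the command line instead of the UI — this assumes the Longhorn `Setting` custom resources in the `longhorn-system` namespace and uses only the setting names introduced above:

```bash
# Switch the default backup compression method and raise the backup worker count
kubectl -n longhorn-system patch settings.longhorn.io backup-compression-method \
    --type merge -p '{"value":"lz4"}'
kubectl -n longhorn-system patch settings.longhorn.io backup-concurrent-limit \
    --type merge -p '{"value":"10"}'
```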
{ "category": "Runtime", "file_name": "20230108-improve-backup-and-restore-efficiency-using-multiple-threads-and-compression-methods.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Second State Functions, powered by WasmEdge, supports the Rust language as a first class citizen. It could Check out the website for more tutorials." } ]
{ "category": "Runtime", "file_name": "secondstate.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "title: \"ark restore logs\" layout: docs Get restore logs Get restore logs ``` ark restore logs RESTORE [flags] ``` ``` -h, --help help for logs --timeout duration how long to wait to receive logs (default 1m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with restores" } ]
{ "category": "Runtime", "file_name": "ark_restore_logs.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Ceph Configuration These examples show how to perform advanced configuration tasks on your Rook storage cluster. Most of the examples make use of the `ceph` client command. A quick way to use the Ceph client suite is from a . The Kubernetes based examples assume Rook OSD pods are in the `rook-ceph` namespace. If you run them in a different namespace, modify `kubectl -n rook-ceph [...]` to fit your situation. If you wish to deploy the Rook Operator and/or Ceph clusters to namespaces other than the default `rook-ceph`, the manifests are commented to allow for easy `sed` replacements. Change `ROOKCLUSTERNAMESPACE` to tailor the manifests for additional Ceph clusters. You can choose to also change `ROOKOPERATORNAMESPACE` to create a new Rook Operator for each Ceph cluster (don't forget to set `ROOKCURRENTNAMESPACE_ONLY`), or you can leave it at the same value for every Ceph cluster if you only wish to have one Operator manage all Ceph clusters. If the operator namespace is different from the cluster namespace, the operator namespace must be created before running the steps below. The cluster namespace does not need to be created first, as it will be created by `common.yaml` in the script below. ```console kubectl create namespace $ROOKOPERATORNAMESPACE ``` This will help you manage namespaces more easily, but you should still make sure the resources are configured to your liking. ```console cd deploy/examples export ROOKOPERATORNAMESPACE=\"rook-ceph\" export ROOKCLUSTERNAMESPACE=\"rook-ceph\" sed -i.bak \\ -e \"s/\\(.\\):.# namespace:operator/\\1: $ROOKOPERATORNAMESPACE # namespace:operator/g\" \\ -e \"s/\\(.\\):.# namespace:cluster/\\1: $ROOKCLUSTERNAMESPACE # namespace:cluster/g\" \\ -e \"s/\\(.serviceaccount\\):.:\\(.*\\) # serviceaccount:namespace:operator/\\1:$ROOKOPERATORNAMESPACE:\\2 # serviceaccount:namespace:operator/g\" \\ -e \"s/\\(.serviceaccount\\):.:\\(.*\\) # serviceaccount:namespace:cluster/\\1:$ROOKCLUSTERNAMESPACE:\\2 # serviceaccount:namespace:cluster/g\" \\ -e \"s/\\(.\\): [-_A-Za-z0-9]\\.\\(.*\\) # driver:namespace:cluster/\\1: $ROOKCLUSTERNAMESPACE.\\2 # driver:namespace:cluster/g\" \\ common.yaml operator.yaml cluster.yaml # add other files or change these as desired for your config kubectl apply -f common.yaml -f operator.yaml -f cluster.yaml # add other files as desired for yourconfig ``` Also see the CSI driver to update the csi provisioner names in the storageclass and volumesnapshotclass. If you wish to create a new CephCluster in a separate namespace, you can easily do so by modifying the `ROOKOPERATORNAMESPACE` and `SECONDROOKCLUSTER_NAMESPACE` values in the below instructions. The default configuration in `common-second-cluster.yaml` is already set up to utilize `rook-ceph` for the operator and `rook-ceph-secondary` for the cluster. There's no need to run the `sed` command if you prefer to use these default values. ```console cd deploy/examples export ROOKOPERATORNAMESPACE=\"rook-ceph\" export SECONDROOKCLUSTER_NAMESPACE=\"rook-ceph-secondary\" sed -i.bak \\ -e \"s/\\(.\\):.# namespace:operator/\\1: $ROOKOPERATORNAMESPACE # namespace:operator/g\" \\ -e \"s/\\(.\\):.# namespace:cluster/\\1: $SECONDROOKCLUSTER_NAMESPACE # namespace:cluster/g\" \\ common-second-cluster.yaml kubectl create -f common-second-cluster.yaml ``` This will create all the necessary RBACs as well as the new namespace. The script assumes that `common.yaml` was already created. 
When you create the second CephCluster CR, use the same `NAMESPACE` and the operator will configure the second cluster. All Rook logs can be collected in a Kubernetes environment with the following command: ```console for p in $(kubectl -n rook-ceph get pods -o jsonpath='{.items[*].metadata.name}') do for c in $(kubectl -n rook-ceph get pod ${p} -o jsonpath='{.spec.containers[*].name}') do echo \"BEGIN logs from pod: ${p} ${c}\" kubectl -n rook-ceph logs -c ${c} ${p} echo \"END logs from pod: ${p} ${c}\" done done ``` This gets the logs for every container in every Rook pod and then compresses them into a `.gz` archive for easy" }, { "data": "Note that instead of `gzip`, you could instead pipe to `less` or to a single text file. Keeping track of OSDs and their underlying storage devices can be difficult. The following scripts will clear things up quickly. ```console OSD_PODS=$(kubectl get pods --all-namespaces -l \\ app=rook-ceph-osd,rook_cluster=rook-ceph -o jsonpath='{.items[*].metadata.name}') for pod in $(echo ${OSD_PODS}) do echo \"Pod: ${pod}\" echo \"Node: $(kubectl -n rook-ceph get pod ${pod} -o jsonpath='{.spec.nodeName}')\" kubectl -n rook-ceph exec ${pod} -- sh -c '\\ for i in /var/lib/ceph/osd/ceph-*; do [ -f ${i}/ready ] || continue echo -ne \"-$(basename ${i}) \" echo $(lsblk -n -o NAME,SIZE ${i}/block 2> /dev/null || \\ findmnt -n -v -o SOURCE,SIZE -T ${i}) $(cat ${i}/type) done | sort -V echo' done ``` The output should look something like this. ```console Pod: osd-m2fz2 Node: node1.zbrbdl -osd0 sda3 557.3G bluestore -osd1 sdf3 110.2G bluestore -osd2 sdd3 277.8G bluestore -osd3 sdb3 557.3G bluestore -osd4 sde3 464.2G bluestore -osd5 sdc3 557.3G bluestore Pod: osd-nxxnq Node: node3.zbrbdl -osd6 sda3 110.7G bluestore -osd17 sdd3 1.8T bluestore -osd18 sdb3 231.8G bluestore -osd19 sdc3 231.8G bluestore Pod: osd-tww1h Node: node2.zbrbdl -osd7 sdc3 464.2G bluestore -osd8 sdj3 557.3G bluestore -osd9 sdf3 66.7G bluestore -osd10 sdd3 464.2G bluestore -osd11 sdb3 147.4G bluestore -osd12 sdi3 557.3G bluestore -osd13 sdk3 557.3G bluestore -osd14 sde3 66.7G bluestore -osd15 sda3 110.2G bluestore -osd16 sdh3 135.1G bluestore ``` !!! attention It is deprecated to manually need to set this, the `deviceClass` property can be used on Pool structures in `CephBlockPool`, `CephFilesystem` and `CephObjectStore` CRD objects. By default Rook/Ceph puts all storage under one replication rule in the CRUSH Map which provides the maximum amount of storage capacity for a cluster. If you would like to use different storage endpoints for different purposes, you'll have to create separate storage groups. In the following example we will separate SSD drives from spindle-based drives, a common practice for those looking to target certain workloads onto faster (database) or slower (file archive) storage. !!! note Since Ceph Nautilus (v14.x), you can use the Ceph MGR `pg_autoscaler` module to auto scale the PGs as needed. It is highly advisable to configure default pg_num value on per-pool basis, If you want to enable this feature, please refer to [Default PG and PGP counts](configuration.md#default-pg-and-pgp-counts). The general rules for deciding how many PGs your pool(s) should contain is: Fewer than 5 OSDs set `pg_num` to 128 Between 5 and 10 OSDs set `pg_num` to 512 Between 10 and 50 OSDs set `pg_num` to 1024 If you have more than 50 OSDs, you need to understand the tradeoffs and how to calculate the pgnum value by yourself. For calculating pgnum yourself please make use of . 
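Since Ceph Nautilus, the simpler route mentioned in the note above is to let the `pg_autoscaler` module manage placement groups instead of sizing them by hand — a minimal sketch, assuming the `ceph` CLI (for example from the toolbox pod) is available:

```console
ceph mgr module enable pg_autoscaler
ceph osd pool set rbd pg_autoscale_mode on
ceph osd pool autoscale-status
```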
Be sure to read the section before changing the number of PGs. ```console ceph osd pool set rbd pg_num 512 ``` !!! warning The advised method for controlling Ceph configuration is to use the in the `CephCluster` CRD. <br><br>It is highly recommended that this only be used when absolutely necessary and that the `config` be reset to an empty string if/when the configurations are no longer necessary. Configurations in the config file will make the Ceph cluster less configurable from the CLI and dashboard and may make future tuning or debugging" }, { "data": "Setting configs via Ceph's CLI requires that at least one mon be available for the configs to be set, and setting configs via dashboard requires at least one mgr to be available. Ceph also has a number of very advanced settings that cannot be modified easily via the CLI or dashboard. In order to set configurations before monitors are available or to set advanced configuration settings, the `rook-config-override` ConfigMap exists, and the `config` field can be set with the contents of a `ceph.conf` file. The contents will be propagated to all mon, mgr, OSD, MDS, and RGW daemons as an `/etc/ceph/ceph.conf` file. !!! warning Rook performs no validation on the config, so the validity of the settings is the user's responsibility. If the `rook-config-override` ConfigMap is created before the cluster is started, the Ceph daemons will automatically pick up the settings. If you add the settings to the ConfigMap after the cluster has been initialized, each daemon will need to be restarted where you want the settings applied: mons: ensure all three mons are online and healthy before restarting each mon pod, one at a time. mgrs: the pods are stateless and can be restarted as needed, but note that this will disrupt the Ceph dashboard during restart. OSDs: restart your the pods by deleting them, one at a time, and running `ceph -s` between each restart to ensure the cluster goes back to \"active/clean\" state. RGW: the pods are stateless and can be restarted as needed. MDS: the pods are stateless and can be restarted as needed. After the pod restart, the new settings should be in effect. Note that if the ConfigMap in the Ceph cluster's namespace is created before the cluster is created, the daemons will pick up the settings at first launch. To automate the restart of the Ceph daemon pods, you will need to trigger an update to the pod specs. The simplest way to trigger the update is to add to the CephCluster CR for the daemons you want to restart. The operator will then proceed with a rolling update, similar to any other update to the cluster. In this example we will set the default pool `size` to two, and tell OSD daemons not to change the weight of OSDs on startup. !!! warning Modify Ceph settings carefully. You are leaving the sandbox tested by Rook. Changing the settings could result in unhealthy daemons or even data loss if used incorrectly. When the Rook Operator creates a cluster, a placeholder ConfigMap is created that will allow you to override Ceph configuration settings. When the daemon pods are started, the settings specified in this ConfigMap will be merged with the default settings generated by Rook. The default override settings are blank. 
Cutting out the extraneous properties, we would see the following defaults after creating a cluster: ```console kubectl -n rook-ceph get ConfigMap rook-config-override -o yaml ``` ```yaml kind: ConfigMap apiVersion: v1 metadata: name: rook-config-override namespace: rook-ceph data: config: \"\" ``` To apply your desired configuration, you will need to update this ConfigMap. The next time the daemon pod(s) start, they will use the updated configs. ```console kubectl -n rook-ceph edit configmap rook-config-override ``` Modify the settings and" }, { "data": "Each line you add should be indented from the `config` property as such: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: rook-config-override namespace: rook-ceph data: config: | [global] osd crush update on start = false osd pool default size = 2 ``` !!! warning It is highly recommended to use the default setting that comes with CephCSI and this can only be used when absolutely necessary. The `ceph.conf` should be reset back to default values if/when the configurations are no longer necessary. If the `csi-ceph-conf-override` ConfigMap is created before the cluster is started, the CephCSI pods will automatically pick up the settings. If you add the settings to the ConfigMap after the cluster has been initialized, you can restart the Rook operator pod and wait for Rook to recreate CSI pods to take immediate effect. After the CSI pods are restarted, the new settings should be in effect. In this we will set the `rbdvalidatepool` to `false` to skip rbd pool validation. !!! warning Modify Ceph settings carefully to avoid modifying the default configuration. Changing the settings could result in unexpected results if used incorrectly. ```console kubectl create -f csi-ceph-conf-override.yaml ``` Restart the Rook operator pod and wait for CSI pods to be recreated. A useful view of the is generated with the following command: ```console ceph osd tree ``` In this section we will be tweaking some of the values seen in the output. The CRUSH weight controls the ratio of data that should be distributed to each OSD. This also means a higher or lower amount of disk I/O operations for an OSD with higher/lower weight, respectively. By default OSDs get a weight relative to their storage capacity, which maximizes overall cluster capacity by filling all drives at the same rate, even if drive sizes vary. This should work for most use-cases, but the following situations could warrant weight changes: Your cluster has some relatively slow OSDs or nodes. Lowering their weight can reduce the impact of this bottleneck. You're using bluestore drives provisioned with Rook v0.3.1 or older. In this case you may notice OSD weights did not get set relative to their storage capacity. Changing the weight can fix this and maximize cluster capacity. This example sets the weight of osd.0 which is 600GiB ```console ceph osd crush reweight osd.0 .600 ``` When pools are set with a size setting greater than one, data is replicated between nodes and OSDs. For every chunk of data a Primary OSD is selected to be used for reading that data to be sent to clients. You can control how likely it is for an OSD to become a Primary using the Primary Affinity setting. This is similar to the OSD weight setting, except it only affects reads on the storage device, not capacity or writes. In this example we will ensure that `osd.0` is only selected as Primary if all other OSDs holding data replicas are unavailable: ```console ceph osd primary-affinity osd.0 0 ``` !!! 
tip This documentation is left for historical purposes. It is still valid, but Rook offers native support for this feature via the . It is possible to configure ceph to leverage a dedicated network for the OSDs to communicate across. A useful overview is the section of the Ceph documentation. If you declare a cluster network, OSDs will route heartbeat, object replication, and recovery traffic over the cluster" }, { "data": "This may improve performance compared to using a single network, especially when slower network technologies are used. The tradeoff is additional expense and subtle failure modes. Two changes are necessary to the configuration to enable this capability: Enable the `hostNetwork` setting in the . For example, ```yaml network: provider: host ``` !!! important Changing this setting is not supported in a running Rook cluster. Host networking should be configured when the cluster is first created. Edit the `rook-config-override` configmap to define the custom network configuration: ```console kubectl -n rook-ceph edit configmap rook-config-override ``` In the editor, add a custom configuration to instruct ceph which subnet is the public network and which subnet is the private network. For example: ```yaml apiVersion: v1 data: config: | [global] public network = 10.0.7.0/24 cluster network = 10.0.10.0/24 public addr = \"\" cluster addr = \"\" ``` After applying the updated rook-config-override configmap, it will be necessary to restart the OSDs by deleting the OSD pods in order to apply the change. Restart the OSD pods by deleting them, one at a time, and running ceph -s between each restart to ensure the cluster goes back to \"active/clean\" state. If you have OSDs in which are not showing any disks, you can remove those \"Phantom OSDs\" by following the instructions below. To check for \"Phantom OSDs\", you can run (example output): ```console $ ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 57.38062 root default -13 7.17258 host node1.example.com 2 hdd 3.61859 osd.2 up 1.00000 1.00000 -7 0 host node2.example.com down 0 1.00000 ``` The host `node2.example.com` in the output has no disks, so it is most likely a \"Phantom OSD\". Now to remove it, use the ID in the first column of the output and replace `<ID>` with it. In the example output above the ID would be `-7`. The commands are: ```console ceph osd out <ID> ceph osd crush remove osd.<ID> ceph auth del osd.<ID> ceph osd rm <ID> ``` To recheck that the Phantom OSD was removed, re-run the following command and check if the OSD with the ID doesn't show up anymore: ```console ceph osd tree ``` 1) A deployed in dynamic provisioning environment with a `storageClassDeviceSet`. 2) Create the Rook . !!! note and [Prometheus ../Monitoring/ceph-monitoring.mdnitoring.md#prometheus-instances) are Prerequisites that are created by the auto-grow-storage script. Run the following script to auto-grow the size of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold. ```console tests/scripts/auto-grow-storage.sh size --max maxSize --growth-rate percent ``` `growth-rate` percentage represents the percent increase you want in the OSD capacity and maxSize represent the maximum disk size. 
For example, if you need to increase the size of OSD by 30% and max disk size is 1Ti ```console ./auto-grow-storage.sh size --max 1Ti --growth-rate 30 ``` Run the following script to auto-grow the number of OSDs on a PVC-based Rook cluster whenever the OSDs have reached the storage near-full threshold. ```console tests/scripts/auto-grow-storage.sh count --max maxCount --count rate ``` Count of OSD represents the number of OSDs you need to add and maxCount represents the number of disks a storage cluster will support. For example, if you need to increase the number of OSDs by 3 and maxCount is 10 ```console ./auto-grow-storage.sh count --max 10 --count 3 ```" } ]
{ "category": "Runtime", "file_name": "ceph-configuration.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Kata Containers 3.3.0 introduces the guest image management feature, which enables the guest VM to directly pull images using `nydus snapshotter`. This feature is designed to protect the integrity of container images and guard against any tampering by the host, which is used for confidential containers. Please refer to for details. The k8s cluster with Kata 3.3.0+ is ready to use. `yq` is installed in the host and it's directory is included in the `PATH` environment variable. (optional, for DaemonSet only) To pull images in the guest, we need to do the following steps: Delete images used for pulling in the guest (optional, for containerd only) Install `nydus snapshotter`: Install `nydus snapshotter` by k8s DaemonSet (recommended) Install `nydus snapshotter` manually Though the `CRI Runtime Specific Snapshotter` is still an in containerd, which containerd is not supported to manage the same image in different `snapshotters`(The default `snapshotter` in containerd is `overlayfs`). To avoid errors caused by this, it is recommended to delete images (including the pause image) in containerd that needs to be pulled in guest later before configuring `nydus snapshotter` in containerd. To use DaemonSet to install `nydus snapshotter`, we need to ensure that `yq` exists in the host. Download `nydus snapshotter` repo ```bash $ nydussnapshotterinstall_dir=\"/tmp/nydus-snapshotter\" $ nydussnapshotterurl=https://github.com/containerd/nydus-snapshotter $ nydussnapshotterversion=\"v0.13.11\" $ git clone -b \"${nydussnapshotterversion}\" \"${nydussnapshotterurl}\" \"${nydussnapshotterinstall_dir}\" ``` Configure DaemonSet file ```bash $ pushd \"$nydussnapshotterinstall_dir\" $ yq write -i \\ misc/snapshotter/base/nydus-snapshotter.yaml \\ 'data.FS_DRIVER' \\ \"proxy\" --style=double $ yq write -i \\ misc/snapshotter/base/nydus-snapshotter.yaml \\ 'data.ENABLECONFIGFROM_VOLUME' \\ \"false\" --style=double $ yq write -i \\ misc/snapshotter/base/nydus-snapshotter.yaml \\ 'data.ENABLESYSTEMDSERVICE' \\ \"true\" --style=double $ yq write -i \\ misc/snapshotter/base/nydus-snapshotter.yaml \\ 'data.ENABLERUNTIMESPECIFIC_SNAPSHOTTER' \\ \"true\" --style=double ``` Install `nydus snapshotter` as a DaemonSet ```bash $ kubectl create -f \"misc/snapshotter/nydus-snapshotter-rbac.yaml\" $ kubectl apply -f \"misc/snapshotter/base/nydus-snapshotter.yaml\" ``` Wait 5 minutes until the DaemonSet is running ```bash $ kubectl rollout status DaemonSet nydus-snapshotter -n nydus-system --timeout 5m ``` Verify whether `nydus snapshotter` is running as a DaemonSet ```bash $ pods_name=$(kubectl get pods --selector=app=nydus-snapshotter -n nydus-system -o=jsonpath='{.items[*].metadata.name}') $ kubectl logs \"${pods_name}\" -n nydus-system deploying snapshotter install nydus snapshotter artifacts configuring snapshotter Not found nydus proxy plugin! 
running snapshotter as systemd service Created symlink /etc/systemd/system/multi-user.target.wants/nydus-snapshotter.service" }, { "data": "``` Download `nydus snapshotter` binary from release ```bash $ ARCH=$(uname -m) $ golang_arch=$(case \"$ARCH\" in aarch64) echo \"arm64\" ;; ppc64le) echo \"ppc64le\" ;; x86_64) echo \"amd64\" ;; s390x) echo \"s390x\" ;; esac) $ releasetarball=\"nydus-snapshotter-${nydussnapshotterversion}-linux-${golangarch}.tar.gz\" $ curl -OL ${nydussnapshotterurl}/releases/download/${nydussnapshotterversion}/${release_tarball} $ sudo tar -xfz ${release_tarball} -C /usr/local/bin --strip-components=1 ``` Download `nydus snapshotter` configuration file for pulling images in the guest ```bash $ curl -OL https://github.com/containerd/nydus-snapshotter/blob/main/misc/snapshotter/config-proxy.toml $ sudo install -D -m 644 config-proxy.toml /etc/nydus/config-proxy.toml ``` Run `nydus snapshotter` as a standalone process ```bash $ /usr/local/bin/containerd-nydus-grpc --config /etc/nydus/config-proxy.toml --log-to-stdout level=info msg=\"Start nydus-snapshotter. Version: v0.13.11-308-g106a6cb, PID: 1100169, FsDriver: proxy, DaemonMode: none\" level=info msg=\"Run daemons monitor...\" ``` Configure containerd for `nydus snapshotter` Configure `nydus snapshotter` to enable `CRI Runtime Specific Snapshotter` in containerd. This ensures run kata containers with `nydus snapshotter`. Below, the steps are illustrated using `kata-qemu` as an example. ```toml [plugins.\"io.containerd.grpc.v1.cri\".containerd] disablesnapshotannotations = false discardunpackedlayers = false [proxy_plugins.nydus] type = \"snapshot\" address = \"/run/containerd-nydus/containerd-nydus-grpc.sock\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.kata-qemu] snapshotter = \"nydus\" ``` Notes: The `CRI Runtime Specific Snapshotter` feature only works for containerd v1.7.0 and above. So for Containerd v1.7.0 below, in addition to the above settings, we need to set the global `snapshotter` to `nydus` in containerd config. For example: ```toml [plugins.\"io.containerd.grpc.v1.cri\".containerd] snapshotter = \"nydus\" ``` Restart containerd service ```bash $ sudo systemctl restart containerd ``` To verify pulling images in a guest VM, please refer to the following commands: Run a kata container ```bash $ cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: busybox annotations: io.containerd.cri.runtime-handler: kata-qemu spec: runtimeClassName: kata-qemu containers: name: busybox image: quay.io/prometheus/busybox:latest imagePullPolicy: Always EOF pod/busybox created $ kubectl get pods NAME READY STATUS RESTARTS AGE busybox 1/1 Running 0 10s ``` Notes: The `CRI Runtime Specific Snapshotter` is still an experimental feature. To pull images in the guest under the specific kata runtime (such as `kata-qemu`), we need to add the following annotation in metadata to each pod yaml: `io.containerd.cri.runtime-handler: kata-qemu`. By adding the annotation, we can ensure that the feature works as expected. Verify that the pod's images have been successfully downloaded in the guest. If images intended for deployment are deleted prior to deploying with `nydus snapshotter`, the root filesystems required for the pod's images (including the pause image and the container image) should not be present on the host. 
```bash $ sandbox_id=$(ps -ef| grep containerd-shim-kata-v2| grep -oP '(?<=-id\\s)[a-f0-9]+'| tail -1) $ rootfs_count=$(find /run/kata-containers/shared/sandboxes/$sandbox_id -name rootfs -type d| grep -o \"rootfs\" | wc -l) $ echo $rootfs_count 0 ```" } ]
{ "category": "Runtime", "file_name": "how-to-pull-images-in-guest-with-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "List of all the awesome people working to make Gin the best Web Framework in Go. Gin Core Team: Bo-Yi Wu (@appleboy), (@thinkerou), Javier Provecho (@javierprovecho) Maintainers: Manu Martinez-Almeida (@manucorporat), Javier Provecho (@javierprovecho) People and companies, who have contributed, in alphabetical order. @858806258 () Fix typo in example @achedeuzot (Klemen Sever) Fix newline debug printing @adammck (Adam Mckaig) Add MIT license @AlexanderChen1989 (Alexander) Typos in README @alexanderdidenko (Aleksandr Didenko) Add support multipart/form-data @alexandernyquist (Alexander Nyquist) Using template.Must to fix multiple return issue Added support for OPTIONS verb Setting response headers before calling WriteHeader Improved documentation for model binding Added Content.Redirect() Added tons of Unit tests @austinheap (Austin Heap) Added travis CI integration @andredublin (Andre Dublin) Fix typo in comment @bredov (Ludwig Valda Vasquez) Fix html templating in debug mode @bluele (Jun Kimura) Fixes code examples in README @chad-russell Support for serializing gin.H into XML @dickeyxxx (Jeff Dickey) Typos in README Add example about serving static files @donileo (Adonis) Add NoMethod handler @dutchcoders (DutchCoders) Fix security bug that allows client to spoof ip Fix typo. r.HTMLTemplates -> SetHTMLTemplate @el3ctro- (Joshua Loper) Fix typo in example @ethankan (Ethan Kan) Unsigned integers in binding (Evgeny Persienko) Validate sub structures @frankbille (Frank Bille) Add support for HTTP Realm Auth @fmd (Fareed Dudhia) Fix typo. SetHTTPTemplate -> SetHTMLTemplate @ironiridis (Christopher Harrington) Remove old reference @jammie-stackhouse (Jamie Stackhouse) Add more shortcuts for router methods @jasonrhansen Fix spelling and grammar errors in documentation @JasonSoft (Jason Lee) Fix typo in comment @joiggama (Ignacio Galindo) Add utf-8 charset header on renders @julienschmidt (Julien Schmidt) gofmt the code examples @kelcecil (Kel Cecil) Fix readme typo @kyledinh (Kyle Dinh) Adds RunTLS() @LinusU (Linus Unnebck) Small fixes in README @loongmxbt (Saint Asky) Fix typo in example @lucas-clemente (Lucas Clemente) work around path.Join removing trailing slashes from routes @mattn (Yasuhiro Matsumoto) Improve color logger @mdigger (Dmitry Sedykh) Fixes Form binding when content-type is x-www-form-urlencoded No repeat call c.Writer.Status() in gin.Logger Fixes Content-Type for json render @mirzac (Mirza Ceric) Fix debug printing @mopemope (Yutaka Matsubara) Adds Godep support (Dependencies Manager) Fix variadic parameter in the flexible render API Fix Corrupted plain render Add Pluggable View Renderer Example @msemenistyi (Mykyta Semenistyi) update Readme.md. Add code to String method @msoedov (Sasha Myasoedov) Adds tons of unit tests. @ngerakines (Nick Gerakines) Improves API, c.GET() doesn't panic Adds MustGet() method @r8k (Rajiv Kilaparti) Fix Port usage in README. 
@rayrod2030 (Ray Rodriguez) Fix typo in example @rns Fix typo in example @RobAWilkinson (Robert Wilkinson) Add example of forms and params @rogierlommers (Rogier Lommers) Add updated static serve example @rw-access (Ross Wolf) Added support to mix exact and param routes @se77en (Damon Zhao) Improve color logging @silasb (Silas Baronda) Fixing quotes in README @SkuliOskarsson (Skuli Oskarsson) Fixes some texts in README II @slimmy (Jimmy Pettersson) Added messages for required bindings @smira (Andrey Smirnov) Add support for ignored/unexported fields in binding @superalsrk (SRK.Lyu) Update httprouter godeps @tebeka (Miki Tebeka) Use net/http constants instead of numeric values @techjanitor Update context.go reserved IPs @yosssi (Keiji Yoshida) Fix link in README @yuyabee Fixed README" } ]
{ "category": "Runtime", "file_name": "AUTHORS.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Access metric status of the operator ``` -h, --help help for metrics ``` - Run cilium-operator-aws - List all metrics for the operator" } ]
{ "category": "Runtime", "file_name": "cilium-operator-aws_metrics.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete vtep entries Delete vtep entries using vtep CIDR. ``` cilium-dbg bpf vtep delete [flags] ``` ``` -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the VTEP mappings for IP/CIDR <-> VTEP MAC/IP" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_vtep_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "(storage)= ```{toctree} :maxdepth: 1 About storage </explanation/storage> Manage pools <howto/storage_pools> Create an instance in a pool <howto/storagecreateinstance> Manage volumes <howto/storage_volumes> Move or copy a volume <howto/storagemovevolume> Back up a volume <howto/storagebackupvolume> Manage buckets <howto/storage_buckets> reference/storage_drivers ```" } ]
{ "category": "Runtime", "file_name": "storage.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "This document briefly describes some aspects of Sysbox's design. Sysbox is made up of the following components: sysbox-runc sysbox-fs sysbox-mgr sysbox-runc is a container runtime, the program that does the low level kernel setup for execution of system containers. It's the \"front-end\" of Sysbox: higher layers (e.g., Docker & containerd) invoke sysbox-runc to launch system containers. It's mostly (but not 100%) compatible with the OCI runtime specification (more on this ). sysbox-fs is a file-system-in-user-space (FUSE) daemon that emulates portions of the system container's filesystem, in particular portions of procfs and sysfs mounts inside the system container. It's purpose is to make the system container closely resemble a virtual host while ensuring proper isolation from the rest of the system. sysbox-mgr is a daemon that provides services to sysbox-runc and sysbox-fs. For example, it manages assignment user-ID and group-ID mappings to system containers, manages some special mounts that Sysbox adds to system containers, etc. Together, sysbox-fs and sysbox-mgr are the \"back-ends\" for sysbox. Communication between the sysbox components is done via gRPC. Users don't normally interact with the Sysbox components directly. Instead, they use higher level apps (e.g., Docker) that interact with Sysbox to deploy system containers. The Linux kernel >= 5.12 includes a feature called \"ID-Mapped mounts\" that allows remapping of the user and group IDs of files. It was developed primarily by Christian Brauner at Canonical (to whom we owe a large debt of gratitude). Starting with version 0.5.0, Sysbox leverages this feature to perform filesystem user-ID and group-ID mapping between the container's Linux user namespace and the host's initial user namespace. For example, inside a Sysbox container, user-ID range 0->65535 is always mapped to unprivileged user-ID range at host level chosen by Sysbox (e.g., 100000->165535) via the Linux user-namespace. This way, container processes are fully unprivileged at host level. However, this mapping implies that if a host file with user-ID 1000 is mounted into the container, it will show up as `nobody:nogroup` inside the container given that user-ID 1000 is outside of the range 100000->165536. The kernel's \"ID-mapped mounts\" feature solves this problem. It allows Sysbox to ask the kernel to remap the user-IDs (and group-IDs) for host files mounted into the container. Following the example above, a host file with user-ID 1000 will now show up inside the container with user-ID 1000 too, as the kernel will map user-ID 1000->101000 (and vice-versa). This is beneficial, because from a user's perspective you don't need to worry about what user-namespace mappings have been assigned by Sysbox to the container. It also means you can now share files between the host and the container, or between containers without problem, while enjoying the extra isolation & security provided by the Linux user-namespace. Refer to the for more" }, { "data": "ID-mapped mounts is a fairly recent kernel feature and therefore has some functional limitations as this time. One such limitation is that ID-mapped mounts can't be mounted on top file or directories backed by specialized filesystems at this time (e.g., device files). Sysbox understands these limitations and takes appropriate action to overcome them, such as using the shiftfs kernel module (when available) as described in the next section. 
Note that as of kernel 5.19+, ID-mapped mounts provide an almost full replacement for shiftfs. Ubuntu kernels carry a module called `shiftfs` that has a similar purpose to ID-mapped mounts (see prior section) but predates it. However, shiftfs is not a standard mechanism and therefore not available in most Linux distros. It's only included with Ubuntu, but can also be [installed manually](install-package.md#installing-shiftfs) on Debian and Flatcar. Sysbox detects the presence of the shiftfs module and uses it when appropriate (e.g., when ID-mapped mounts are not available or can't be used on top of a particular filesystem). Sysbox's requirement for shiftfs is as follows: | Linux Kernel Version | Shiftfs Required by Sysbox | | -- | :: | | < 5.12 | Yes | | 5.12 to 5.18 | No (but recommended) | | >= 5.19 | No | In kernels 5.12 to 5.18, shiftfs is not required but having it causes Sysbox to setup the container's filesystem more efficiently, so it's recommended. To verify the shiftfs module is loaded in your host, type: ```console $ sudo modprobe shiftfs $ lsmod | grep shiftfs shiftfs 24576 0 ``` If shiftfs is not present and you have a Ubuntu, Debian, or Flatcar host, see the for info on how to install it. It is common to run containers (e.g., Docker) inside a Sysbox container. These inner containers often use overlayfs, which means that overlayfs mounts will be set up inside the Sysbox container (which is itself on an overlayfs mount). Since it's not possible to stack overlayfs mounts, Sysbox works around this by creating implicit host mounts into the Sysbox container on specific directories where overlayfs mounts are known to take place, such as inside the container's `/var/lib/docker`, `/var/lib/kubelet`, and other similar directories. These mounts are managed by Sysbox, which understands when to create and destroy them. Sysbox is a fork of the . It is mostly (but not 100%) compatible with the OCI runtime specification. The incompatibilities arise from our desire to make deployment of system containers possible with Docker (to save users the trouble of having to learn yet another tool). We believe these incompatibilities won't negatively affect users of Sysbox and should mostly be transparent to them. Here is a list of OCI runtime incompatibilities: Sysbox requires that the system container's" }, { "data": "file have a namespace array field with at least the following namespaces: pid ipc uts mount network This is normally the case for Docker containers. Sysbox adds the following namespaces to all system containers: user cgroup By default, Sysbox assigns process capabilities in the container as follows: Enables all process capabilities for the system container's init process when owned by the root user. Disables all process capabilities for the system container's init process when owned by a non-root user. This mimics the way capabilities are assigned to processes on a physical host or VM. See the for more on this. Note that starting with Sysbox v0.5.0, it's possible to modify this behavior to have Sysbox honor the capabilities passed to it by the higher level container manager via the OCI spec. See the for more on this. Sysbox always mounts `/proc` and `/sys` read-write inside the system container. Note that by virtue of enabling the Linux user namespace, only namespaced resources under `/proc` and `/sys` will be writable from within the system container. 
Non-namespaced resources (e.g., those under `/proc/sys/kernel`) won't be writable from within the system container, unless they are virtualized by Sysbox. See and for more info. Sysbox always mounts the cgroupfs as read-write inside the system container (under `/sys/fs/cgroup`). This allows programs inside the system container (e.g., Docker) to assign cgroup resources to child containers. The assigned resources are always a subset of the cgroup resources assigned to the system container itself. Sysbox ensures that programs inside the system container can't modify the cgroup resources assigned to the container itself, or cgroup resources associated with the rest of the system. Sysbox modifies the system container's seccomp configuration to whitelist syscalls such as: mount, unmount, pivot_root, and a few others. This allows execution of system level programs within the system container, such as Docker. Sysbox currently ignores the Docker AppArmor profile, as it's too restrictive (e.g., prevents mounts inside the container, prevents write access to `/proc/sys`, etc.) See for more on this. Sysbox honors read-only paths in the system container's `config.json`, with the exception of paths at or under `/proc` or under `/sys`. The same applies to masked paths. Sysbox honors the mounts specified in the system container's `config.json` file, with a few exceptions such as: Mounts over /sys and some of it's sub-directories. Mounts over /proc and some of it's sub-directories. In addition, Sysbox creates some mounts of its own (i.e., implicitly) within the system container. For example: Read-only bind mount of the host's `/lib/modules/<kernel-release>` into a corresponding path within the system container. Read-only bind mount of the host's kernel header files into the corresponding path within the system container. Select mounts under the system container's `/sys` and `/proc` directories. Mounts that enable Systemd, Docker and Kubernetes to operate correctly within the system container." } ]
{ "category": "Runtime", "file_name": "design.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "name: Submariner Committer Request about: Request Submariner Committer rights on some files title: 'REQUEST: New Committer rights request for <your-GH-handle>' labels: committer-request assignees: '' e.g. (at)example_user e.g. submariner-operator/\\ or submariner-operator/scripts/\\ [ ] I have reviewed the contributor roles guidelines (https://submariner.io/community/contributor-roles/) [ ] I have enabled 2FA on my GitHub account (https://github.com/settings/security) [ ] I have subscribed to the submariner-dev e-mail list (https://groups.google.com/forum/#!forum/submariner-dev) [ ] I have been actively contributing to Submariner for at least 3 months [ ] I have two sponsors that meet the sponsor requirements listed in the community membership guidelines [ ] I have spoken to my sponsors ahead of this application, and they have agreed to sponsor my application (at)sponsor-1 (at)sponsor-2 At least 20 PRs to the relevant codebase that you reviewed, and at least 5 for which you were the primary reviewer `https://github.com/submariner-io/<repo>/pulls?q=is%3Apr+reviewed-by%3A<your-GH-handle>+-author%3A<your-GH-handle>`" } ]
{ "category": "Runtime", "file_name": "committership.md", "project_name": "Submariner", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark backup create\" layout: docs Create a backup Create a backup ``` ark backup create NAME [flags] ``` ``` --exclude-namespaces stringArray namespaces to exclude from the backup --exclude-resources stringArray resources to exclude from the backup, formatted as resource.group, such as storageclasses.storage.k8s.io -h, --help help for create --include-cluster-resources optionalBool[=true] include cluster-scoped resources in the backup --include-namespaces stringArray namespaces to include in the backup (use '' for all namespaces) (default ) --include-resources stringArray resources to include in the backup, formatted as resource.group, such as storageclasses.storage.k8s.io (use '*' for all resources) --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the backup -o, --output string Output display format. For create commands, display the object but do not send it to the server. Valid formats are 'table', 'json', and 'yaml'. -l, --selector labelSelector only back up resources matching this label selector (default <none>) --show-labels show labels in the last column --snapshot-volumes optionalBool[=true] take snapshots of PersistentVolumes as part of the backup --ttl duration how long before the backup can be garbage collected (default 720h0m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups" } ]
{ "category": "Runtime", "file_name": "ark_backup_create.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English Spiderpool supports the configuration of routing information for Pods. When setting the gateway address (`spec.gateway`) for a SpiderIPPool resource, a default route will be generated for Pods based on that gateway address: ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ipv4-ippool-route spec: subnet: 172.18.41.0/24 ips: 172.18.41.51-172.18.41.60 gateway: 172.18.41.0 ``` SpiderIPPool resources also support configuring routes (`spec.routes`), which will be inherited by Pods during their creation process: If a gateway address is configured for the SpiderIPPool resource, avoid setting default routes in the routes field. Both `dst` and `gw` fields are required. ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: ipv4-ippool-route spec: subnet: 172.18.41.0/24 ips: 172.18.41.51-172.18.41.60 gateway: 172.18.41.0 routes: dst: 172.18.42.0/24 gw: 172.18.41.1 ``` You can customize routes for Pods by adding the annotation `ipam.spidernet.io/routes`: When a gateway address or default route is configured in the SpiderIPPool resource, avoid configuring default routes for Pods. Both `dst` and `gw` fields are required. ```yaml ipam.spidernet.io/routes: |- [{ \"dst\": \"10.0.0.0/16\", \"gw\": \"192.168.1.1\" },{ \"dst\": \"172.10.40.0/24\", \"gw\": \"172.18.40.1\" }] ```" } ]
{ "category": "Runtime", "file_name": "route.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Troubleshooting Weave Net menu_order: 110 search_type: Documentation * - - * * Check the version of Weave Net you are running using: weave version If it is not the latest version, as shown in the list of , then it is recommended you upgrade using the . To check the Weave Net container logs: docker logs weave A reasonable amount of information, and all errors, get logged there. The log verbosity may be increased by using the `--log-level=debug` option during `weave launch`. To log information on a per-packet basis use `--pktdebug` - but be warned, as this can produce a lot of output. Another useful debugging technique is to attach standard packet capture and analysis tools, such as tcpdump and wireshark, to the `weave` network bridge on the host. A status summary can be obtained using `weave status`: ``` $ weave status Version: 1.1.0 (up to date; next check at 2016/04/06 12:30:00) Service: router Protocol: weave 1..2 Name: 4a:0f:f6:ec:1c:93(host1) Encryption: disabled PeerDiscovery: enabled Targets: [192.168.48.14 192.168.48.15] Connections: 5 (1 established, 1 pending, 1 retrying, 1 failed, 1 connecting) Peers: 3 (with 5 established, 1 pending connections) TrustedSubnets: none Service: ipam Status: ready Range: 10.32.0.0-10.47.255.255 DefaultSubnet: 10.32.0.0/12 Service: dns Domain: weave.local. TTL: 1 Entries: 9 Service: proxy Address: tcp://127.0.0.1:12375 Service: plugin (legacy) DriverName: weave ``` The terms used here are explained further at . Version* - shows the Weave Net version. If checkpoint is enabled (i.e. `CHECKPOINT_DISABLE` is not set), information about existence of a new version will be shown. Protocol*- indicates the Weave Router inter-peer communication protocol name and supported versions (min..max). Name* - identifies the local Weave Router as a peer on the Weave network. The nickname shown in parentheses defaults to the name of the host on which the Weave container was launched. It can be overridden by using the `--nickname` argument at `weave launch`. Encryption* - indicates whether is in use for communication between peers. PeerDiscovery* - indicates whether is enabled (which is the default). Targets* - are the number of hosts that the local Weave Router has been asked to connect to at `weave launch` and `weave connect`. The complete list can be obtained using `weave status targets`. Connections* - show the total number connections between the local Weave Router and other peers, and a break down of that figure by connection state. Further details are available with . Peers* - show the total number of peers in the network, and the total number of connections peers have to other peers. Further details are available with . TrustedSubnets* - show subnets which the router trusts as specified by the `--trusted-subnets` option at `weave launch`. Connections between Weave Net peers carry control traffic over TCP and data traffic over UDP. For a connection to be fully established, the TCP connection and UDP datapath must be able to transmit information in both" }, { "data": "Weave Net routers check this regularly with heartbeats. Failed connections are automatically retried, with an exponential back-off. 
To view detailed information on the local Weave Net router's type `weave status connections`: ``` $ weave status connections <- 192.168.48.12:33866 established unencrypted fastdp 7e:21:4a:70:2f:45(host2) mtu=1410 <- 192.168.48.13:60773 pending encrypted fastdp 7e:ae:cd:d5:23:8d(host3) -> 192.168.48.14:6783 retrying dial tcp4 192.168.48.14:6783: no route to host -> 192.168.48.15:6783 failed dial tcp4 192.168.48.15:6783: no route to host, retry: 2015-08-06 18:55:38.246910357 +0000 UTC -> 192.168.48.16:6783 connecting ``` The columns are as follows: Connection origination direction (`->` for outbound, `<-` for inbound) Remote TCP address Status `connecting` - first connection attempt in progress `failed` - TCP connection or UDP heartbeat failed `retrying` - retry of a previously failed connection attempt in progress; reason for previous failure follows `pending` - TCP connection up, waiting for confirmation of UDP heartbeat `established` - TCP connection and corresponding UDP path are up Info - the failure reason for failed and retrying connections, or the encryption mode, data transport method, remote peer name and nickname for pending and established connections, mtu if known Specific error messages: `IP allocation was seeded by different peers` - Detailed information on peers can be obtained with `weave status peers`: ``` $ weave status peers ce:31:e0:06:45:1a(host1) <- 192.168.48.12:39634 ea:2d:b2:e6:e4:f5(host2) established <- 192.168.48.13:49619 ee:38:33:a7:d9:71(host3) established ea:2d:b2:e6:e4:f5(host2) -> 192.168.48.11:6783 ce:31:e0:06:45:1a(host1) established <- 192.168.48.13:58181 ee:38:33:a7:d9:71(host3) established ee:38:33:a7:d9:71(host3) -> 192.168.48.12:6783 ea:2d:b2:e6:e4:f5(host2) established -> 192.168.48.11:6783 ce:31:e0:06:45:1a(host1) established ``` This lists all peers known to this router, including itself. Each peer is shown with its name and nickname, then each line thereafter shows another peer that it is connected to, with the direction, IP address and port number of the connection. In the above example, `host3` has connected to `host1` at `192.168.48.11:6783`; `host1` sees the `host3` end of the same connection as `192.168.48.13:49619`. Detailed information on DNS registrations can be obtained with `weave status dns`: ``` $ weave status dns one 10.32.0.1 eebd81120ee4 4a:0f:f6:ec:1c:93 one 10.43.255.255 4fcec78d2a9b 66:c4:47:c6:65:bf one 10.40.0.0 bab69d305cba ba:98:d0:37:4f:1c three 10.32.0.3 7615b6537f74 4a:0f:f6:ec:1c:93 three 10.44.0.1 c0b39dc52f8d 66:c4:47:c6:65:bf three 10.40.0.2 8a9c2e2ef00f ba:98:d0:37:4f:1c two 10.32.0.2 83689b8f34e0 4a:0f:f6:ec:1c:93 two 10.44.0.0 7edc306cb668 66:c4:47:c6:65:bf two 10.40.0.1 68a5e9c2641b ba:98:d0:37:4f:1c ``` The columns are as follows: Unqualified hostname IPv4 address Registering entity identifier (typically a container ID) Name of peer from which the registration originates weave report Produces a comprehensive dump of the internal state of the router, IPAM and DNS services in JSON format, including all the information available from the `weave status` commands. You can also supply a Golang text template to `weave report` in a similar fashion to `docker inspect`: $ weave report -f '{{.DNS.Domain}}' weave.local. Weave Net adds a template function, `json`, which can be applied to get results in JSON format. 
$ weave report -f '{{json .DNS}}'" }, { "data": "weave ps Produces a list of all containers running on this host that are connected to the Weave network, like this: weave:expose 7a:c4:8b:a1:e6:ad 10.2.5.2/24 b07565b06c53 ae:e3:07:9c:8c:d4 5245643870f1 ce:15:34:a9:b5:6d 10.2.5.1/24 e32a7d37a93a 7a:61:a2:49:4b:91 10.2.8.3/24 caa1d4ee2570 ba:8c:b9:dc:e1:c9 10.2.1.1/24 10.2.2.1/24 On each line are the container ID, its MAC address, then the list of IP address/routing prefix length ([CIDR notation](http://en.wikipedia.org/wiki/ClasslessInter-DomainRouting)) assigned on the Weave network. The special container name `weave:expose` displays the Weave bridge MAC and any IP addresses added to it via the `weave expose` command. You can also supply a list of container IDs/names to `weave ps`, like this: $ weave ps able baker able ce:15:34:a9:b5:6d 10.2.5.1/24 baker 7a:61:a2:49:4b:91 10.2.8.3/24 To stop Weave Net, if you have configured your environment to use the Weave Docker API Proxy, e.g. by running `eval $(weave env)` in your shell, you must first restore the environment using: eval $(weave env --restore) Then run: weave stop Note that this leaves the local application container network intact. Containers on the local host can continue to communicate, whereas communication with containers on different hosts, as well as service export/import, is disrupted but resumes once Weave is relaunched. To stop Weave Net and to completely remove all traces of the Weave network on the local host, run: weave reset Any running application containers permanently lose connectivity with the Weave network and will have to be restarted in order to re-connect. All the containers started by `weave launch` are configured with the Docker restart policy `--restart=always`, so they will come back again on reboot. This can be disabled via: weave launch --no-restart Note that the Weave Net router creates the `weave` network bridge if necessary when it restarts. The [Weave Net Docker API Proxy](/site/tasks/weave-docker-api/weave-docker-api.md) then re-attaches any application containers that it originally attached to the Weave network when they restart. If Weave Net is installed via `docker plugin install`, download the `weave` script to run `weave status`, `weave ps` or `weave report` as above. Install Weave Net by following the [install instructions](/site/install/installing-weave.md). Docker \"v2\" plugins do run as containers, but at a lower level within the Docker environment. Because of this, you cannot view them with `docker ps`, `docker inspect`, etc. Do not run `weave launch`, `weave stop` or similar commands when using this plugin; use the `docker plugin` commands instead. You can run `weave reset`, but only after disabling the plugin via `docker plugin disable`. Diagnostic logs from the plugin go to the same place as the Docker daemon, which will depend on your Linux install. For example, if it uses `systemd`, then do this to view the Docker and plugin logs: sudo journalctl -u docker Snapshot releases are published at times to provide previews of new features, assist in the validation of bug fixes, etc. One can install the latest snapshot release using: sudo curl -L git.io/weave-snapshot -o /usr/local/bin/weave sudo chmod a+x /usr/local/bin/weave weave setup Snapshot releases report the script version as \"unreleased\", and the container image versions as git hashes. See Also * *" } ]
{ "category": "Runtime", "file_name": "troubleshooting.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "[TOC] gVisor was created in order to provide additional defense against the exploitation of kernel bugs by untrusted userspace code. In order to understand how gVisor achieves this goal, it is first necessary to understand the basic threat model. An exploit takes advantage of a software or hardware bug in order to escalate privileges, gain access to privileged data, or disrupt services. All of the possible interactions that a malicious application can have with the rest of the system (attack vectors) define the attack surface. We categorize these attack vectors into several common classes. An operating system or hypervisor exposes an abstract System API in the form of system calls and traps. This API may be documented and stable, as with Linux, or it may be abstracted behind a library, as with Windows (i.e. win32.dll or ntdll.dll). The System API includes all standard interfaces that application code uses to interact with the system. This includes high-level abstractions that are derived from low-level system calls, such as system files, sockets and namespaces. Although the System API is exposed to applications by design, bugs and race conditions within the kernel or hypervisor may occasionally be exploitable via the API. This is common in part due to the fact that most kernels and hypervisors are written in , which is well-suited to interfacing with hardware but often prone to security issues. In order to exploit these issues, a typical attack might involve some combination of the following: Opening or creating some combination of files, sockets or other descriptors. Passing crafted, malicious arguments, structures or packets. Racing with multiple threads in order to hit specific code paths. For example, for the privilege escalation bug, an application would open a specific file in `/proc` or use a specific `ptrace` system call, and use multiple threads in order to trigger a race condition when touching a fresh page of memory. The attacker then gains control over a page of memory belonging to the system. With additional privileges or access to privileged data in the kernel, an attacker will often be able to employ additional techniques to gain full access to the rest of the system. While bugs in the implementation of the System API are readily fixed, they are also the most common form of exploit. The exposure created by this class of exploit is what gVisor aims to minimize and control, described in detail below. Hardware and software exploits occasionally exist in execution paths that are not part of an intended System API. In this case, exploits may be found as part of implicit actions the hardware or privileged system code takes in response to certain events, such as traps or interrupts. For example, the recent flaw required only native code execution (no specific system call or file access). In that case, the Xen hypervisor was similarly vulnerable, highlighting that hypervisors are not immune to this vector. Hardware side channels may be exploitable by any code running on a system: native, sandboxed, or virtualized. However, many host-level mitigations against hardware side channels are still effective with a" }, { "data": "For example, kernels built with retpoline protect against some speculative execution attacks (Spectre) and frame poisoning may protect against L1 terminal fault (L1TF) attacks. 
Hypervisors may introduce additional complications in this regard, as there is no mitigation against an application in a normally functioning Virtual Machine (VM) exploiting the L1TF vulnerability for another VM on the sibling hyperthread. The above categories in no way represent an exhaustive list of exploits, as we focus only on running untrusted code from within the operating system or hypervisor. We do not consider other ways that a more generic adversary may interact with a system, such as inserting a portable storage device with a malicious filesystem image, using a combination of crafted keyboard or touch inputs, or saturating a network device with ill-formed packets. Furthermore, high-level systems may contain exploitable components. An attacker need not escalate privileges within a container if theres an exploitable network-accessible service on the host or some other API path. *A sandbox is not a substitute for a secure architecture*. gVisors primary design goal is to minimize the System API attack vector through multiple layers of defense, while still providing a process model. There are two primary security principles that inform this design. First, the applications direct interactions with the host System API are intercepted by the Sentry, which implements the System API instead. Second, the System API accessible to the Sentry itself is minimized to a safer, restricted set. The first principle minimizes the possibility of direct exploitation of the host System API by applications, and the second principle minimizes indirect exploitability, which is the exploitation by an exploited or buggy Sentry (e.g. chaining an exploit). The first principle is similar to the security basis for a Virtual Machine (VM). With a VM, an applications interactions with the host are replaced by interactions with a guest operating system and a set of virtualized hardware devices. These hardware devices are then implemented via the host System API by a Virtual Machine Monitor (VMM). The Sentry similarly prevents direct interactions by providing its own implementation of the System API that the application must interact with. Applications are not able to directly craft specific arguments or flags for the host System API, or interact directly with host primitives. For both the Sentry and a VMM, its worth noting that while direct interactions are not possible, indirect interactions are still possible. For example, a read on a host-backed file in the Sentry may ultimately result in a host read system call (made by the Sentry, not by passing through arguments from the application), similar to how a read on a block device in a VM may result in the VMM issuing a corresponding host read system call from a backing file. An important distinction from a VM is that the Sentry implements a System API based directly on host System API primitives instead of relying on virtualized hardware and a guest operating system. This selects a distinct set of trade-offs, largely in the performance, efficiency and compatibility domains. Since transitions in and out of the sandbox are relatively expensive, a guest operating system will typically take ownership of resources. For example, in the above case, the guest operating system may read the block device data in a local page cache, to avoid subsequent" }, { "data": "This may lead to better performance but lower efficiency, since memory may be wasted or duplicated. 
The Sentry opts instead to defer to the host for many operations during runtime, for improved efficiency but lower performance in some use cases. An application in a gVisor sandbox is permitted to do most things a standard container can do: for example, applications can read and write files mapped within the container, make network connections, etc. As described above, gVisor's primary goal is to limit exposure to bugs and exploits while still allowing most applications to run. Even so, gVisor will limit some operations that might be permitted with a standard container. Even with appropriate capabilities, a user in a gVisor sandbox will only be able to manipulate virtualized system resources (e.g. the system time, kernel settings or filesystem attributes) and not underlying host system resources. While the sandbox virtualizes many operations for the application, we limit the sandbox's own interactions with the host to the following high-level operations: Communicate with a Gofer process via a connected socket. The sandbox may receive new file descriptors from the Gofer process, corresponding to opened files. These files can then be read from and written to by the sandbox. Make a minimal set of host system calls. The calls do not include the creation of new sockets (unless host networking mode is enabled) or opening files. The calls include duplication and closing of file descriptors, synchronization, timers and signal management. Read and write packets to a virtual ethernet device. This is not required if host networking is enabled (or networking is disabled). gVisor relies on the host operating system and the platform for defense against hardware-based attacks. Given the nature of these vulnerabilities, there is little defense that gVisor can provide (theres no guarantee that additional hardware measures, such as virtualization, memory encryption, etc. would actually decrease the attack surface). Note that this is true even when using hardware virtualization for acceleration, as the host kernel or hypervisor is ultimately responsible for defending against attacks from within malicious guests. gVisor similarly relies on the host resource mechanisms (cgroups) for defense against resource exhaustion and denial of service attacks. Network policy controls should be applied at the container level to ensure appropriate network policy enforcement. Note that the sandbox itself is not capable of altering or configuring these mechanisms, and the sandbox itself should make an attacker less likely to exploit or override these controls through other means. For gVisor development, there are several engineering principles that are employed in order to ensure that the system meets its design goals. No system call is passed through directly to the host. Every supported call has an independent implementation in the Sentry, that is unlikely to suffer from identical vulnerabilities that may appear in the host. This has the consequence that all kernel features used by applications require an implementation within the Sentry. Only common, universal functionality is implemented. Some filesystems, network devices or modules may expose specialized functionality to user space applications via mechanisms such as extended attributes, raw sockets or ioctls. Since the Sentry is responsible for implementing the full system call surface, we do not implement or pass through these specialized" }, { "data": "The host surface exposed to the Sentry is minimized. 
While the system call surface is not trivial, it is explicitly enumerated and controlled. The Sentry is not permitted to open new files, create new sockets or do many other interesting things on the host. Additionally, we have practical restrictions that are imposed on the project to minimize the risk of Sentry exploitability. For example: Unsafe code is carefully controlled. All unsafe code is isolated in files that end with \"unsafe.go\", in order to facilitate validation and auditing. No file without the unsafe suffix may import the unsafe package. No CGo is allowed. The Sentry must be a pure Go binary. External imports are not generally allowed within the core packages. Only limited external imports are used within the setup code. The code available inside the Sentry is carefully controlled, to ensure that the above rules are effective. Finally, we recognize that security is a process, and that vigilance is critical. Beyond our security disclosure process, the Sentry is fuzzed continuously to identify potential bugs and races proactively, and production crashes are recorded and triaged to similarly identify material issues. The security of a VM depends to a large extent on what is exposed from the host kernel and userspace support code. For example, device emulation code in the host kernel (e.g. APIC) or optimizations (e.g. vhost) can be more complex than a simple system call, and exploits carry the same risks. Similarly, the userspace support code is frequently unsandboxed, and exploits, while rare, may allow unfettered access to the system. Some platforms leverage the same virtualization hardware as VMs in order to provide better system call interception performance. However, gVisor does not implement any device emulation, and instead opts to use a sandboxed host System API directly. Both approaches significantly reduce the original attack surface. Ultimately, since gVisor is capable of using the same hardware mechanism, one should not assume that the mere use of virtualization hardware makes a system more or less secure, just as it would be a mistake to make the claim that the use of a unibody alone makes a car safe. In general, gVisor does not provide protection against hardware side channels, although it may make exploits that rely on direct access to the host System API more difficult to use. To minimize exposure, you should follow relevant guidance from vendors and keep your host kernel and firmware up-to-date. No: the term ptrace sandbox generally refers to software that uses the Linux ptrace facility to inspect and authorize system calls made by applications, enforcing a specific policy. These commonly suffer from two issues. First, vulnerable system calls may be authorized by the sandbox, as the application still has direct access to some System API. Second, its impossible to avoid time-of-check, time-of-use race conditions without disabling multi-threading. In gVisor, the platforms that use ptrace operate differently. The stubs that are traced are never allowed to continue execution into the host kernel and complete a call directly. Instead, all system calls are interpreted and handled by the Sentry itself, who reflects resulting register state back into the tracee before continuing execution in userspace. This is very similar to the mechanism used by User-Mode Linux (UML)." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-operator-azure for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Run cilium-operator-azure - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh" } ]
{ "category": "Runtime", "file_name": "cilium-operator-azure_completion.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Please ensure your pull request adheres to the following guidelines: [ ] All code is covered by unit and/or runtime tests where feasible. [ ] All commits contain a well written commit description including a title, description and a `Fixes: #XXX` line if the commit addresses a particular GitHub issue. [ ] If your commit description contains a `Fixes: <commit-id>` tag, then please add the commit author[s] as reviewer[s] to this issue. [ ] Provide a title or release-note blurb suitable for the release notes. [ ] Thanks for contributing! <!-- Description of change --> Fixes: #issue-number ```release-note <!-- Enter the release note text here if needed or remove this section! --> ```" } ]
{ "category": "Runtime", "file_name": "pull_request_template.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "(server)= The Incus server can be configured through a set of key/value configuration options. The key/value configuration is namespaced. The following options are available: {ref}`server-options-core` {ref}`server-options-acme` {ref}`server-options-cluster` {ref}`server-options-images` {ref}`server-options-loki` {ref}`server-options-misc` {ref}`server-options-oidc` {ref}`server-options-openfga` See {ref}`server-configure` for instructions on how to set the configuration options. ```{note} Options marked with a `global` scope are immediately applied to all cluster members. Options with a `local` scope must be set on a per-member basis. ``` (server-options-core)= The following server options control the core daemon configuration: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-core start --> :end-before: <!-- config group server-core end --> ``` (server-options-acme)= The following server options control the {ref}`ACME <authentication-server-certificate>` configuration: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-acme start --> :end-before: <!-- config group server-acme end --> ``` (server-options-oidc)= The following server options configure external user authentication through {ref}`authentication-openid`: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-oidc start --> :end-before: <!-- config group server-oidc end --> ``` (server-options-openfga)= The following server options configure external user authorization through {ref}`authorization-openfga`: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-openfga start --> :end-before: <!-- config group server-openfga end --> ``` (server-options-cluster)= The following server options control {ref}`clustering`: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-cluster start --> :end-before: <!-- config group server-cluster end --> ``` (server-options-images)= The following server options configure how to handle {ref}`images`: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-images start --> :end-before: <!-- config group server-images end --> ``` (server-options-loki)= The following server options configure the external log aggregation system: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-loki start --> :end-before: <!-- config group server-loki end --> ``` (server-options-misc)= The following server options configure server-specific settings for {ref}`instances`, {ref}`OVN <network-ovn>` integration, {ref}`Backups <backups>` and {ref}`storage`: % Include content from ```{include} config_options.txt :start-after: <!-- config group server-miscellaneous start --> :end-before: <!-- config group server-miscellaneous end --> ```" } ]
{ "category": "Runtime", "file_name": "server_config.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- toc --> - - - - - - - - - - - - - - - - - - - - - - - <!-- /toc --> Antrea supports standard K8s NetworkPolicies to secure ingress/egress traffic for Pods. These NetworkPolicies are written from an application developer's perspective, hence they lack the ability to gain a finer-grained control over the security policies that a cluster administrator would require. This document describes a few new CRDs supported by Antrea to provide the administrator with more control over security within the cluster, and which are meant to co-exist with and complement the K8s NetworkPolicy. Starting with Antrea v1.0, Antrea-native policies are enabled by default, which means that no additional configuration is required in order to use the Antrea-native policy CRDs. Antrea supports grouping Antrea-native policy CRDs together in a tiered fashion to provide a hierarchy of security policies. This is achieved by setting the `tier` field when defining an Antrea-native policy CRD (e.g. an Antrea ClusterNetworkPolicy object) to the appropriate Tier name. Each Tier has a priority associated with it, which determines its relative order among other Tiers. Note: K8s NetworkPolicies will be enforced once all policies in all Tiers (except for the baseline Tier) have been enforced. For more information, refer to the following Creating Tiers as CRDs allows users the flexibility to create and delete Tiers as per their preference i.e. not be bound to 5 static tiering options as was the case initially. An example Tier might look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Tier metadata: name: mytier spec: priority: 10 description: \"my custom tier\" ``` Tiers have the following characteristics: Policies can associate themselves with an existing Tier by setting the `tier` field in an Antrea NetworkPolicy CRD spec to the Tier's name. A Tier must exist before an Antrea-native policy can reference it. Policies associated with higher ordered (low `priority` value) Tiers are enforced first. No two Tiers can be created with the same priority. Updating the Tier's `priority` field is unsupported. Deleting Tier with existing references from policies is not allowed. On startup, antrea-controller will create 5 static, read-only Tier CRD resources corresponding to the static tiers for default consumption, as well as a \"baseline\" Tier CRD object, that will be enforced after developer-created K8s NetworkPolicies. The details for these Tiers are shown below: ```text Emergency -> Tier name \"emergency\" with priority \"50\" SecurityOps -> Tier name \"securityops\" with priority \"100\" NetworkOps -> Tier name \"networkops\" with priority \"150\" Platform -> Tier name \"platform\" with priority \"200\" Application -> Tier name \"application\" with priority \"250\" Baseline -> Tier name \"baseline\" with priority \"253\" ``` Any Antrea-native policy CRD referencing a static tier in its spec will now internally reference the corresponding Tier resource, thus maintaining the order of enforcement. The static Tier CRD Resources are created as follows in the relative order of precedence compared to K8s NetworkPolicies: ```text Emergency > SecurityOps > NetworkOps > Platform > Application > K8s NetworkPolicy > Baseline ``` Thus, all Antrea-native Policy resources associated with the \"emergency\" Tier will be enforced before any Antrea-native Policy resource associated with any other Tiers, until a match occurs, in which case the policy rule's `action` will be applied. 
Any Antrea-native Policy resource without a `tier` name set in its spec will be associated with the \"application\"" }, { "data": "Policies associated with the first 5 static, read-only Tiers, as well as with all the custom Tiers created with a priority value lower than 250 (priority values greater than or equal to 250 are not allowed for custom Tiers), will be enforced before K8s NetworkPolicies. Policies created in the \"baseline\" Tier, on the other hand, will have lower precedence than developer-created K8s NetworkPolicies, which comes in handy when administrators want to enforce baseline policies like \"default-deny inter-namespace traffic\" for some specific Namespace, while still allowing individual developers to lift the restriction if needed using K8s NetworkPolicies. Note that baseline policies cannot counteract the isolated Pod behavior provided by K8s NetworkPolicies. To read more about this Pod isolation behavior, refer to [this document](https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-two-sorts-of-pod-isolation). If a Pod becomes isolated because a K8s NetworkPolicy is applied to it, and the policy does not explicitly allow communications with another Pod, this behavior cannot be changed by creating an Antrea-native policy with an \"allow\" action in the \"baseline\" Tier. For this reason, it generally does not make sense to create policies in the \"baseline\" Tier with the \"allow\" action. The following `kubectl` commands can be used to retrieve Tier resources: ```bash kubectl get tiers kubectl get tiers.crd.antrea.io kubectl get tr kubectl get tr.crd.antrea.io kubectl get tiers --sort-by=.spec.priority ``` All the above commands produce output similar to what is shown below: ```text NAME PRIORITY AGE emergency 50 27h securityops 100 27h networkops 150 27h platform 200 27h application 250 27h ``` Antrea ClusterNetworkPolicy (ACNP), one of the two Antrea-native policy CRDs introduced, is a specification of how workloads within a cluster communicate with each other and other external endpoints. The ClusterNetworkPolicy is supposed to aid cluster admins to configure the security policy for the cluster, unlike K8s NetworkPolicy, which is aimed towards developers to secure their apps and affects Pods within the Namespace in which the K8s NetworkPolicy is created. Rules belonging to ClusterNetworkPolicies are enforced before any rule belonging to a K8s NetworkPolicy. 
Example ClusterNetworkPolicies might look like these: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-with-stand-alone-selectors spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: role: db namespaceSelector: matchLabels: env: prod ingress: action: Allow from: podSelector: matchLabels: role: frontend podSelector: matchLabels: role: nondb namespaceSelector: matchLabels: role: db ports: protocol: TCP port: 8080 endPort: 9000 protocol: TCP port: 6379 name: AllowFromFrontend egress: action: Drop to: ipBlock: cidr: 10.0.10.0/24 ports: protocol: TCP port: 5978 name: DropToThirdParty ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-with-cluster-groups spec: priority: 8 tier: securityops appliedTo: group: \"test-cg-with-db-selector\" # defined separately with a ClusterGroup resource ingress: action: Allow from: group: \"test-cg-with-frontend-selector\" # defined separately with a ClusterGroup resource ports: protocol: TCP port: 8080 endPort: 9000 protocol: TCP port: 6379 name: AllowFromFrontend egress: action: Drop to: group: \"test-cg-with-ip-block\" # defined separately with a ClusterGroup resource ports: protocol: TCP port: 5978 name: DropToThirdParty ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: isolate-all-pods-in-namespace spec: priority: 1 tier: securityops appliedTo: namespaceSelector: matchLabels: app: no-network-access-required ingress: action: Drop # For all Pods in those Namespaces, drop and log all ingress traffic from anywhere name: drop-all-ingress egress: action: Drop # For all Pods in those Namespaces, drop and log all egress traffic towards anywhere name: drop-all-egress ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: strict-ns-isolation spec: priority: 5 tier: securityops appliedTo: namespaceSelector: # Selects all non-system Namespaces in the cluster matchExpressions: {key: kubernetes.io/metadata.name, operator: NotIn, values: [kube-system]} ingress: action: Pass from: namespaces: match: Self # Skip ACNP evaluation for traffic from Pods in the same Namespace name: PassFromSameNS action: Drop from: namespaceSelector:" }, { "data": "# Drop from Pods from all other Namespaces name: DropFromAllOtherNS egress: action: Pass to: namespaces: match: Self # Skip ACNP evaluation for traffic to Pods in the same Namespace name: PassToSameNS action: Drop to: namespaceSelector: {} # Drop to Pods from all other Namespaces name: DropToAllOtherNS ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: default-cluster-deny spec: priority: 1 tier: baseline appliedTo: namespaceSelector: {} # Selects all Namespaces in the cluster ingress: action: Drop egress: action: Drop ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-drop-to-services spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: role: client namespaceSelector: matchLabels: env: prod egress: action: Drop toServices: name: svcName namespace: svcNamespace name: DropToServices ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-reject-ping-request spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: role: server namespaceSelector: matchLabels: env: prod egress: action: Reject protocols: icmp: icmpType: 8 icmpCode: 0 name: DropPingRequest ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: 
ClusterNetworkPolicy metadata: name: acnp-with-igmp-drop spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: app: mcjoin6 ingress: action: Drop protocols: igmp: igmpType: 0x11 groupAddress: 224.0.0.1 name: dropIGMPQuery egress: action: Drop protocols: igmp: igmpType: 0x16 groupAddress: 225.1.2.3 name: dropIGMPReport ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-with-multicast-traffic-drop spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: app: mcjoin6 egress: action: Drop to: ipBlock: cidr: 225.1.2.3/32 name: dropMcastUDPTraffic ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: ingress-allow-http-request-to-api-v2 spec: priority: 5 tier: application appliedTo: podSelector: matchLabels: app: web ingress: name: allow-http # Allow inbound HTTP GET requests to \"/api/v2\" from Pods with app=client label. action: Allow # All other traffic from these Pods will be automatically dropped, and subsequent rules will not be considered. from: podSelector: matchLabels: app: client l7Protocols: http: path: \"/api/v2/*\" host: \"foo.bar.com\" method: \"GET\" name: drop-other # Drop all other inbound traffic (i.e., from Pods without the app=client label or from external clients). action: Drop ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: allow-web-access-to-internal-domain spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: egress-restriction: internal-domain-only egress: name: allow-dns # Allow outbound DNS requests. action: Allow ports: protocol: TCP port: 53 protocol: UDP port: 53 name: allow-http-only # Allow outbound HTTP requests towards foo.bar.com. action: Allow # As the rule's \"to\" and \"ports\" are empty, which means it selects traffic to any network l7Protocols: # peer's any port using any transport protocol, all outbound HTTP requests towards other http: # domains and non-HTTP requests will be automatically dropped, and subsequent rules will host: \"*.bar.com\" # not be considered. ``` Please refer to for extra information. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-node-egress-traffic-drop spec: priority: 5 tier: securityops appliedTo: nodeSelector: matchLabels: kubernetes.io/os: linux egress: action: Drop to: ipBlock: cidr: 192.168.1.0/24 ports: protocol: TCP port: 80 name: dropHTTPTrafficToCIDR ``` ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-node-ingress-traffic-drop spec: priority: 5 tier: securityops appliedTo: nodeSelector: matchLabels: kubernetes.io/os: linux ingress: action: Drop from: ipBlock: cidr: 192.168.1.0/24 ports: protocol: TCP port: 22 name: dropSSHTrafficFromCIDR ``` Please refer to for more information. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-with-log-setting spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: role: db namespaceSelector: matchLabels: env: prod ingress: action: Allow from: podSelector: matchLabels: role: frontend namespaceSelector: matchLabels: role: db name: AllowFromFrontend enableLogging: true logLabel: \"frontend-allowed\" ``` spec: The ClusterNetworkPolicy `spec` has all the information needed to define a cluster-wide security policy. appliedTo: The `appliedTo` field at the policy level specifies the grouping criteria of Pods to which the policy applies to. 
Pods can be selected cluster-wide using" }, { "data": "If set with a `namespaceSelector`, all Pods from Namespaces selected by the namespaceSelector will be selected. Specific Pods from specific Namespaces can be selected by providing both a `podSelector` and a `namespaceSelector` in the same `appliedTo` entry. The `appliedTo` field can also reference a ClusterGroup resource by setting the ClusterGroup's name in `group` field in place of the stand-alone selectors. The `appliedTo` field can also reference a Service by setting the Service's name and Namespace in `service` field in place of the stand-alone selectors. Only a NodePort Service can be referred by this field. More details can be found in the section. IPBlock cannot be set in the `appliedTo` field. An IPBlock ClusterGroup referenced in an `appliedTo` field will be ignored, and the policy will have no effect. This `appliedTo` field must not be set, if `appliedTo` per rule is used. In the , the policy applies to Pods, which either match the labels \"role=db\" in all the Namespaces, or are from Namespaces which match the labels \"env=prod\". The policy applies to all network endpoints selected by the \"test-cg-with-db-selector\" ClusterGroup. The policy applies to all Pods in the Namespaces that matches label \"app=no-network-access-required\". `appliedTo' also supports ServiceAccount based selection. This allows users using ServiceAccount to select Pods. More details can be found in the section. priority: The `priority` field determines the relative priority of the policy among all ClusterNetworkPolicies in the given cluster. This field is mandatory. A lower priority value indicates higher precedence. Priority values can range from 1.0 to 10000.0. Note: Policies with the same priorities will be enforced indeterministically. Users should therefore take care to use priorities to ensure the behavior they expect. tier: The `tier` field associates an ACNP to an existing Tier. The `tier` field can be set with the name of the Tier CRD to which this policy must be associated with. If not set, the ACNP is associated with the lowest priority default tier i.e. the \"application\" Tier. action: Each ingress or egress rule of a ClusterNetworkPolicy must have the `action` field set. As of now, the available actions are [\"Allow\", \"Drop\", \"Reject\", \"Pass\"]. When the rule action is \"Allow\" or \"Drop\", Antrea will allow or drop traffic which matches both `from/to`, `ports` and `protocols` sections of that rule, given that traffic does not match a higher precedence rule in the cluster (ACNP rules created in higher order Tiers or policy instances in the same Tier with lower priority number). If a \"Reject\" rule is matched, the client initiating the traffic will receive `ICMP host administratively prohibited` code for ICMP, UDP and SCTP request, or an explicit reject response for TCP request, instead of timeout. A \"Pass\" rule, on the other hand, skips this packet for further Antrea-native policy rule evaluations in regular Tiers, and delegates the decision to K8s namespaced NetworkPolicies (in networking.k8s.io API group). All ACNP/ANNP rules that have lower priority than the current \"Pass\" rule will be skipped (except for the Baseline Tier rules). If no K8s NetworkPolicy matches this traffic, then all Antrea-native policy Baseline Tier rules will be tested for a match. Note that the \"Pass\" action does not make sense when configured in Baseline Tier ACNP rules, and such configurations will be rejected by the admission controller. 
Also, \"Pass\" and \"Reject\" actions are not supported for rules applied to multicast traffic. ingress: Each ClusterNetworkPolicy may consist of zero or more ordered set of ingress" }, { "data": "Under `ports`, the optional field `endPort` can only be set when a numerical `port` is set to represent a range of ports from `port` to `endPort` inclusive. `protocols` defines additional protocols that are not supported by `ports`. Currently only ICMP protocol and IGMP protocol are under `protocols`. For `ICMP` protocol, `icmpType` and `icmpCode` could be used to specify the ICMP traffic that this rule matches. And for `IGMP` protocol, `igmpType` and `groupAddress` can be used to specify the IGMP traffic that this rule matches. Currently, only IGMP query is supported in ingress rules. Other IGMP types and multicast data traffic are not supported for ingress rules. Valid `igmpType` is: message type | value -- | -- Membership Query | 0x11 The group address in IGMP query packets can only be 224.0.0.1. As for Group-Specific IGMP query, which encodes the target group in the IGMP message, it is not supported yet because OVS can not recognize the address. Protocol `IGMP` can not be used with `ICMP` or properties like `from`, `to`, `ports` and `toServices`. Also, each rule has an optional `name` field, which should be unique within the policy describing the intention of this rule. If `name` is not provided for a rule, it will be auto-generated by Antrea. The auto-generated name will be of format `[ingress/egress]-[action]-[uid]`, e.g. ingress-allow-2f0ed6e, where [uid] is the first 7 bits of hash value of the rule based on sha1 algorithm. If a policy contains duplicate rules, or if a rule name is same as the auto-generated name of some other rules in the same policy, it will cause a conflict, and the policy will be rejected. A ClusterGroup name can be set in the `group` field of an ingress `from` section in place of stand-alone selectors to allow traffic from workloads/ipBlocks set in the ClusterGroup. The policy contains a single rule, which allows matched traffic on a single port, from one of two sources: the first specified by a `podSelector` and the second specified by a combination of a `podSelector` and a `namespaceSelector`. The policy contains a single rule, which allows matched traffic on multiple TCP ports (8000 through 9000 included, plus 6379) from all network endpoints selected by the \"test-cg-with-frontend-selector\" ClusterGroup. The policy contains a single rule, which drops all ingress traffic towards any Pod in Namespaces that have label `app` set to `no-network-access-required`. Note that an empty `From` in the ingress rule means that this rule matches all ingress sources. Ingress `From` section also supports ServiceAccount based selection. This allows users to use ServiceAccount to select Pods. More details can be found in the section. Note: The order in which the ingress rules are specified matters, i.e., rules will be enforced in the order in which they are written. egress: Each ClusterNetworkPolicy may consist of zero or more ordered set of egress rules. Each rule, depending on the `action` field of the rule, allows or drops traffic which matches all `from`, `ports` sections. Under `ports`, the optional field `endPort` can only be set when a numerical `port` is set to represent a range of ports from `port` to `endPort` inclusive. `protocols` defines additional protocols that are not supported by `ports`. Currently, only ICMP protocol and IGMP protocol are under `protocols`. 
For `ICMP` protocol, `icmpType` and `icmpCode` could be used to specify the ICMP traffic that this rule matches. And for `IGMP` protocol, `igmpType` and `groupAddress` can be used to specify the IGMP traffic that this rule matches. If `igmpType` is not set, all reports will be" }, { "data": "If `groupAddress` is empty, then all multicast group addresses will be matched here. Only IGMP reports are supported in egress rules. Protocol `IGMP` can not be used with `ICMP` or properties like `from`, `to`, `ports` and `toServices`. Valid `igmpType` are: message type | value -- | -- IGMPv1 Membership Report | 0x12 IGMPv2 Membership Report | 0x16 IGMPv3 Membership Report | 0x22 Also, each rule has an optional `name` field, which should be unique within the policy describing the intention of this rule. If `name` is not provided for a rule, it will be auto-generated by Antrea. The rule name auto-generation process is the same as ingress rules. A ClusterGroup name can be set in the `group` field of a egress `to` section in place of stand-alone selectors to allow traffic to workloads/ipBlocks set in the ClusterGroup. `toServices` field contains a list of combinations of Service Namespace and Service Name to match traffic to this Service. More details can be found in the section. The policy contains a single rule, which drops matched traffic on a single port, to the 10.0.10.0/24 subnet specified by the `ipBlock` field. The policy contains a single rule, which drops matched traffic on TCP port 5978 to all network endpoints selected by the \"test-cg-with-ip-block\" ClusterGroup. The policy contains a single rule, which drops all egress traffic initiated by any Pod in Namespaces that have `app` set to `no-network-access-required`. The policy contains a single rule, which drops traffic from \"role: client\" labeled Pods from \"env: prod\" labeled Namespaces to Service svcNamespace/svcName via ClusterIP. Note that an empty `to` + an empty `toServices` in the egress rule means that this rule matches all egress destinations. Egress `To` section also supports FQDN based filtering. This can be applied to exact FQDNs or wildcard expressions. More details can be found in the section. Egress `To` section also supports ServiceAccount based selection. This allows users to use ServiceAccount to select Pods. More details can be found in the section. Note: The order in which the egress rules are specified matters, i.e., rules will be enforced in the order in which they are written. enableLogging and logLabel: Antrea-native policy ingress or egress rules can be audited by setting its logging fields. When the `enableLogging` field is set to `true`, the first packet of any traffic flow that matches this rule will be logged to a file (`/var/log/antrea/networkpolicy/np.log`) on the Node on which the rule is enforced. The log files can then be used for further analysis. If `logLabel` is provided, the label will be added in the log. For example, in the , traffic that hits the \"AllowFromFrontend\" rule will be logged with log label \"frontend-allowed\". The logging feature is best-effort, and as such there is no guarantee that all the flows which match the policy rule will be logged. Additionally, we do not recommend enabling policy logging for older Antrea versions (all versions prior to v1.12, as well as v1.12.0 and v1.12.1). See this for more information. For drop and reject rules, deduplication is applied to reduce duplicated log messages, and the duplication buffer length is set to 1 second. 
When a rule does not have a name, an identifiable name will be generated for the rule and added to the" }, { "data": "For rules in layer 7 NetworkPolicy, packets are logged with action `Redirect` prior to analysis by the layer 7 engine, and the layer 7 engine can log more information in its own logs. The rules are logged in the following format: ```text <yyyy/mm/dd> <time> <ovs-table-name> <antrea-native-policy-reference> <rule-name> <direction> <action> <openflow-priority> <applied-to-reference> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> <log-label> Deduplication: <yyyy/mm/dd> <time> <ovs-table-name> <antrea-native-policy-reference> <rule-name> <direction> <action> <openflow-priority> <applied-to-reference> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> <log-label> [<num of packets> packets in <duplicate duration>] Examples: 2023/07/04 12:45:21.804416 IngressDefaultRule AntreaNetworkPolicy:default/reject-tcp-policy RejectTCPRequest Ingress Reject 16 default/nettoolv3 10.10.1.7 53646 10.10.1.14 80 TCP 60 tcp-log-label 2023/07/03 23:24:36.422233 AntreaPolicyEgressRule AntreaNetworkPolicy:default/reject-icmp-policy RejectICMPRequest Egress Reject 14500 default/nettool 10.10.1.7 <nil> 10.10.2.3 <nil> ICMP 84 icmp-log-label 2023/07/03 23:24:37.424024 AntreaPolicyEgressRule AntreaNetworkPolicy:default/reject-icmp-policy RejectICMPRequest Egress Reject 14500 default/nettool 10.10.1.7 <nil> 10.10.2.3 <nil> ICMP 84 icmp-log-label [2 packets in 1.000855539s] ``` Kubernetes NetworkPolicies can also be audited using Antrea logging to the same file (`/var/log/antrea/networkpolicy/np.log`). Add Annotation `networkpolicy.antrea.io/enable-logging: \"true\"` on a Namespace to enable logging for all NetworkPolicies in the Namespace. Packets of any network flow that match a NetworkPolicy rule will be logged with a reference to the NetworkPolicy name, but packets dropped by the implicit \"default drop\" (not allowed by any NetworkPolicy) will only be logged with consistent name `K8sNetworkPolicy` for reference. When using Antrea logging for Kubernetes NetworkPolicies, the rule name field is not set and defaults to `<nil>` value. The rules are logged in the following format: ```text <yyyy/mm/dd> <time> <ovs-table-name> <k8s-network-policy-reference> <nil> <direction> Allow <openflow-priority> <applied-to-reference> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> <log-label> Default dropped traffic: <yyyy/mm/dd> <time> <ovs-table-name> K8sNetworkPolicy <nil> <direction> Drop <nil> <applied-to-reference> <source-ip> <source-port> <destination-ip> <destination-port> <protocol> <packet-length> <log-label> [<num of packets> packets in <duplicate duration>] Examples: 2023/07/04 12:31:02.801442 IngressRule K8sNetworkPolicy:default/allow-tcp-80 <nil> Ingress Allow 190 default/nettool 10.10.1.13 57050 10.10.1.7 80 TCP 60 <nil> 2023/07/04 12:33:26.221413 IngressDefaultRule K8sNetworkPolicy <nil> Ingress Drop <nil> default/nettool 10.10.1.13 <nil> 10.10.1.7 <nil> ICMP 84 <nil> ``` Fluentd can be used to assist with collecting and analyzing the logs. Refer to the for documentation. appliedTo per rule: A ClusterNetworkPolicy ingress or egress rule may optionally contain the `appliedTo` field. 
Semantically, the `appliedTo` field per rule is similar to the `appliedTo` field at the policy level, except that the scope of the `appliedTo` is rule itself, as opposed to all rules in the policy, as is the case for `appliedTo` in policy spec. If used, the `appliedTo` field must be set for all the rules existing in the policy and cannot be set along with `appliedTo` at the policy level. Below is an example of appliedTo-per-rule ACNP usage: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-appliedto-per-rule spec: priority: 1 ingress: action: Drop appliedTo: podSelector: matchLabels: app: db-restricted-west from: podSelector: matchLabels: app: client-east action: Drop appliedTo: podSelector: matchLabels: app: db-restricted-east from: podSelector: matchLabels: app: client-west ``` Note: In a given ClusterNetworkPolicy, all rules/`appliedTo` fields must either contain stand-alone selectors or references to ClusterGroup. Usage of ClusterGroups along with stand-alone selectors is not allowed. The following selectors can be specified in an ingress `from` section or egress `to` section when defining networking peers for policy rules: podSelector: This selects particular Pods from all Namespaces as \"sources\", if set in `ingress` section, or as \"destinations\", if set in `egress` section. namespaceSelector: This selects particular Namespaces for which all Pods are grouped as `ingress` \"sources\" or `egress` \"destinations\". Cannot be set with `namespaces` field. podSelector and namespaceSelector: A single to/from entry that specifies both namespaceSelector and podSelector selects particular Pods within particular Namespaces. nodeSelector: This selects particular Nodes in" }, { "data": "The selected Node's IPs will be set as \"sources\" if `nodeSelector` set in `ingress` section, or as \"destinations\" if is set in the `egress` section. For more information on its usage, refer to . namespaces: The `namespaces` field allows users to perform advanced matching on Namespaces which cannot be done via label selectors. Refer to for more details, and for usage. group: A `group` refers to a ClusterGroup to which an ingress/egress peer, or an `appliedTo` must resolve to. More information on ClusterGroups can be found in . serviceAccount: This selects all the Pods which have been assigned a specific ServiceAccount. For more information on its usage, refer to . ipBlock: This selects particular IP CIDR ranges to allow as `ingress` \"sources\" or `egress` \"destinations\". These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable. fqdn: This selector is applicable only to the `to` section in an `egress` block. It is used to select Fully Qualified Domain Names (FQDNs), specified either by exact name or wildcard expressions, when defining `egress` rules. For more information on its usage, refer to . ClusterNetworkPolicy is at the cluster scope, hence a `podSelector` without any `namespaceSelector` selects Pods from all Namespaces. There is no automatic isolation of Pods on being selected in appliedTo. Ingress/Egress rules in ClusterNetworkPolicy has an `action` field which specifies whether the matched rule allows or drops the traffic. IPBlock field in the ClusterNetworkPolicy rules do not have the `except` field. A higher priority rule can be written to deny the specific CIDR range to simulate the behavior of IPBlock field with `cidr` and `except` set. Rules assume the priority in which they are written. i.e. 
rule set at top takes precedence over a rule set below it. The following `kubectl` commands can be used to retrieve ACNP resources: ```bash kubectl get clusternetworkpolicies kubectl get clusternetworkpolicies.crd.antrea.io kubectl get acnp kubectl get acnp.crd.antrea.io ``` All the above commands produce output similar to what is shown below: ```text NAME TIER PRIORITY AGE test-cnp emergency 5 54s ``` Antrea NetworkPolicy (ANNP) is another policy CRD, which is similar to the ClusterNetworkPolicy CRD, however its scope is limited to a Namespace. The purpose of introducing this CRD is to allow admins to take advantage of advanced NetworkPolicy features and apply them within a Namespace to complement the K8s NetworkPolicies. Similar to the ClusterNetworkPolicy resource, Antrea NetworkPolicy can also be associated with Tiers. An example Antrea NetworkPolicy might look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: test-annp namespace: default spec: priority: 5 tier: securityops appliedTo: podSelector: matchLabels: role: db ingress: action: Allow from: podSelector: matchLabels: role: frontend podSelector: matchLabels: role: nondb namespaceSelector: matchLabels: role: db ports: protocol: TCP port: 8080 endPort: 9000 name: AllowFromFrontend egress: action: Drop to: ipBlock: cidr: 10.0.10.0/24 ports: protocol: TCP port: 5978 name: DropToThirdParty ``` Antrea NetworkPolicy shares its spec with ClusterNetworkPolicy. However, the following documents some of the key differences between the two Antrea policy CRDs. Antrea NetworkPolicy is Namespaced while ClusterNetworkPolicy operates at cluster scope. Unlike the `appliedTo` in a ClusterNetworkPolicy, setting a `namespaceSelector` in the `appliedTo` field is forbidden. `podSelector` without a `namespaceSelector`, set within a NetworkPolicy Peer of any rule, selects Pods from the Namespace in which the Antrea NetworkPolicy is created. This behavior is similar to the K8s NetworkPolicy. Antrea NetworkPolicy supports both stand-alone selectors and Group references. Antrea NetworkPolicy does not support `namespaces` field within a peer, as Antrea NetworkPolicy themselves are scoped to a single Namespace. Groups can be referenced in `appliedTo` and `to`/`from`. Refer to the section for detailed" }, { "data": "The following example Antrea NetworkPolicy realizes the same network policy as the . It refers to three separately defined Groups - \"test-grp-with-db-selector\" that selects all Pods labeled \"role: db\", \"test-grp-with-frontend-selector\" that selects all Pods labeled \"role: frontend\" and Pods labeled \"role: nondb\" in Namespaces labeled \"role: db\", \"test-grp-with-ip-block\" that selects `ipblock` \"10.0.10.0/24\". 
```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: annp-with-groups namespace: default spec: priority: 5 tier: securityops appliedTo: group: \"test-grp-with-db-selector\" ingress: action: Allow from: group: \"test-grp-with-frontend-selector\" ports: protocol: TCP port: 8080 endPort: 9000 name: AllowFromFrontend egress: action: Drop to: group: \"test-grp-with-ip-block\" ports: protocol: TCP port: 5978 name: DropToThirdParty ``` The following `kubectl` commands can be used to retrieve ANNP resources: ```bash kubectl get networkpolicies.crd.antrea.io kubectl get annp kubectl get annp.crd.antrea.io ``` All the above commands produce output similar to what is shown below: ```text NAME TIER PRIORITY AGE test-annp securityops 5 5s ``` Antrea-native policy CRDs are ordered based on priorities set at various levels. With the introduction of Tiers, Antrea-native policies are first enforced based on the Tier to which they are associated. i.e. all policies belonging to a higher precedenced Tier are enforced first, followed by policies belonging to the next Tier and so on, until the \"application\" Tier policies are enforced. K8s NetworkPolicies are enforced next, and \"baseline\" Tier policies will be enforced last. Within a Tier, Antrea-native policy CRDs are ordered by the `priority` at the policy level. Thus, the policy with the highest precedence (the smallest numeric priority value) is enforced first. This ordering is performed solely based on the `priority` assigned, as opposed to the \"Kind\" of the resource, i.e. the relative ordering between a and an within a Tier depends only on the `priority` set in each of the two resources. Within a policy, rules are enforced in the order in which they are set. For example, consider the following: ACNP1{tier: application, priority: 10, ingressRules: [ir1.1, ir1.2], egressRules: [er1.1, er1.2]} ANNP1{tier: application, priority: 15, ingressRules: [ir2.1, ir2.2], egressRules: [er2.1, er2.2]} ACNP3{tier: emergency, priority: 20, ingressRules: [ir3.1, ir3.2], egressRules: [er3.1, er3.2]} This translates to the following order: Ingress rules: ir3.1 > ir3.2 > ir1.1 -> ir1.2 -> ir2.1 -> ir2.2 Egress rules: er3.1 > er3.2 > er1.1 -> er1.2 -> er2.1 -> er2.2 Once a rule is matched, it is executed based on the action set. If none of the policy rules match, the packet is then enforced for rules created for K8s NP. If the packet still does not match any rule for K8s NP, it will then be evaluated against policies created in the \"baseline\" Tier. The with 'sort-by=effectivePriority' flag can be used to check the order of policy enforcement. 
An example output will look like the following: ```text antctl get netpol --sort-by=effectivePriority NAME APPLIED-TO RULES SOURCE TIER-PRIORITY PRIORITY 4c504456-9158-4838-bfab-f81665dfae12 85b88ddb-b474-5b44-93d3-c9192c09085e 1 AntreaClusterNetworkPolicy:acnp-1 250 1 41e510e0-e430-4606-b4d9-261424184fba e36f8beb-9b0b-5b49-b1b7-5c5307cddd83 1 AntreaClusterNetworkPolicy:acnp-2 250 2 819b8482-ede5-4423-910c-014b731fdba6 bb6711a1-87c7-5a15-9a4a-71bf49a78056 2 AntreaNetworkPolicy:annp-10 250 10 4d18e031-f05a-48f6-bd91-0197b556ccca e216c104-770c-5731-bfd3-ff4ccbc38c39 2 K8sNetworkPolicy:default/test-1 <NONE> <NONE> c547002a-d8c7-40f1-bdd1-8eb6d0217a67 e216c104-770c-5731-bfd3-ff4ccbc38c39 1 K8sNetworkPolicy:default/test-2 <NONE> <NONE> aac8b8bc-f3bf-4c41-b6e0-2af1863204eb bb6711a1-87c7-5a15-9a4a-71bf49a78056 3 AntreaClusterNetworkPolicy:baseline 253 10 ``` The contains more information on how policy rules are realized by OpenFlow, and how the priority of flows reflects the order in which they are enforced. Kubernetes NetworkPolicies and Antrea-native policies allow selecting workloads from Namespaces with the use of a label selector (i.e. `namespaceSelector`). However, it is often desirable to be able to select Namespaces directly by their `name` as opposed to using the `labels` associated with the Namespaces. Starting with K8s" }, { "data": "all Namespaces are labeled with the `kubernetes.io/metadata.name: <namespaceName>` provided that the `NamespaceDefaultLabelName` feature gate (enabled by default) is not disabled in K8s. K8s NetworkPolicy and Antrea-native policy users can take advantage of this reserved label to select Namespaces directly by their `name` in `namespaceSelectors` as follows: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: test-annp-by-name namespace: default spec: priority: 5 tier: application appliedTo: podSelector: {} egress: action: Allow to: podSelector: matchLabels: k8s-app: kube-dns namespaceSelector: matchLabels: kubernetes.io/metadata.name: kube-system ports: protocol: TCP port: 53 protocol: UDP port: 53 name: AllowToCoreDNS ``` Note: `NamespaceDefaultLabelName` feature gate is scheduled to be removed in K8s v1.24, thereby ensuring that labeling Namespaces by their name cannot be disabled. In order to select Namespaces by name, Antrea labels Namespaces with a reserved label `antrea.io/metadata.name`, whose value is set to the Namespace's name. Users can then use this label in the `namespaceSelector` field, in both K8s NetworkPolicies and Antrea-native policies to select Namespaces by name. By default, Namespaces are not labeled with the reserved name label. In order for the Antrea controller to label the Namespaces, the `labelsmutator.antrea.io` `MutatingWebhookConfiguration` must be enabled. This can be done by applying the following webhook configuration YAML: ```yaml apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration metadata: name: \"labelsmutator.antrea.io\" webhooks: name: \"namelabelmutator.antrea.io\" clientConfig: service: name: \"antrea\" namespace: \"kube-system\" path: \"/mutate/namespace\" rules: operations: [\"CREATE\", \"UPDATE\"] apiGroups: [\"\"] apiVersions: [\"v1\"] resources: [\"namespaces\"] scope: \"Cluster\" admissionReviewVersions: [\"v1\", \"v1beta1\"] sideEffects: None timeoutSeconds: 5 ``` Note: `antrea-controller` Pod must be restarted after applying this YAML. 
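If a restart is needed, one straightforward way to do it (assuming Antrea was deployed with the default manifest, where the controller runs as a Deployment named `antrea-controller` in the `kube-system` Namespace) is a rolling restart:

```bash
# Restart the antrea-controller so it serves the newly registered mutating webhook
kubectl -n kube-system rollout restart deployment/antrea-controller
```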
Once the webhook is configured, Antrea will start labeling all new and updated Namespaces with the `antrea.io/metadata.name: <namespaceName>` label. Users may now use this reserved label to select Namespaces by name as follows: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: NetworkPolicy metadata: name: test-annp-by-name namespace: default spec: priority: 5 tier: application appliedTo: podSelector: {} egress: action: Allow to: podSelector: matchLabels: k8s-app: kube-dns namespaceSelector: matchLabels: antrea.io/metadata.name: kube-system ports: protocol: TCP port: 53 protocol: UDP port: 53 name: AllowToCoreDNS ``` The above example allows all Pods from Namespace \"default\" to connect to all \"kube-dns\" Pods from Namespace \"kube-system\" on TCP port 53. The `namespaces` field allows users to perform advanced matching on Namespace objects that cannot be done via label selectors. Currently, the `namespaces` field has only one matching strategy, `Self`. If set to `Self`, for each Pod targeted by the appliedTo of the policy/rule, this field will cause the rule to select endpoints in the same Namespace as that Pod. It enables policy writers to create per-Namespace rules within a single policy. This field is optional and cannot be set along with a `namespaceSelector` within the same peer. Consider a minimalistic cluster, where there are only three Namespaces labeled ns=x, ns=y and ns=z. Inside each of these Namespaces, there are three Pods labeled app=a, app=b and app=c. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: allow-self-ns spec: priority: 1 tier: platform appliedTo: namespaceSelector: {} ingress: action: Allow from: namespaces: match: Self action: Deny egress: action: Allow to: namespaces: match: Self action: Deny ``` The policy above ensures that x/a, x/b and x/c can communicate with each other, but nothing else (unless there are higher precedence policies that say otherwise). Same for Namespaces y and z. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: deny-self-ns-a-to-b spec: priority: 1 tier: securityops appliedTo: namespaceSelector: {} podSelector: matchLabels: app: b ingress: action: Deny from: namespaces: match: Self podSelector: matchLabels: app: a ``` The `deny-self-ns-a-to-b` policy ensures that traffic from x/a to x/b, y/a to y/b and z/a to z/b are" }, { "data": "It can be used in conjunction with the `allow-self-ns` policy. If both policies are applied, the only other Pod that x/a can reach in the cluster will be Pod x/c. These two policies shown above are for demonstration purposes only. For more realistic usage of the `namespaces` field, refer to this YAML in the previous section. Starting from Antrea v2.0, Antrea ClusterNetworkPolicy supports creating policy rules between groups of Namespaces that share common label values. The most prominent use case of this feature is to provide isolation between Namespaces that have different values for some pre-defined labels, e.g. \"org\", by applying a single ACNP in the cluster. 
Consider a minimalistic cluster with the following Namespaces: ```text NAME LABELS kube-system kubernetes.io/metadata.name=kube-system accounting1 kubernetes.io/metadata.name=accounting1, org=accounting, region=us-west accounting2 kubernetes.io/metadata.name=accounting2, org=accounting, region=us-east sales1 kubernetes.io/metadata.name=sales1, org=sales, region=us-west sales2 kubernetes.io/metadata.name=sales2, org=sales, region=us-east ``` An administrator of such cluster typically would want to enforce some boundaries between the \"tenants\" in the cluster (the accounting team and the sales team in this case, who each own two Namespaces). This can be easily achieved by the following ACNP: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: isolation-based-on-org spec: priority: 1 tier: securityops appliedTo: namespaceSelector: matchExpressions: { key: org, operator: Exists } ingress: action: Allow from: namespaces: sameLabels: [org] action: Deny egress: action: Allow to: namespaces: sameLabels: [org] action: Deny ``` The above policy will also automatically adapt to the changes in the cluster, i.e., any new Namespace created in the cluster with a different \"org\" label value will be automatically isolated from both the accounting and the sales Namespaces. In addition, the Namespace grouping criteria can be easily extended to match more than one label keys, and Namespaces will be grouped together ONLY IF ALL the values of the label keys listed in the `sameLabels` field have the same value. For example, if we change the `sameLabels` list to `[org, region]` in the example above, then this ACNP will create four Namespace groups instead of two, which are all isolated from each other. The reason is that individual Namespaces for the accounting or sales organizations have different values for the \"region\" label, even though they share the same value for the \"org\" label. Another important note is that such policy is a no-op on Namespaces that do not have all the labels listed in the `sameLabels` field, even if such Namespaces are selected in `appliedTo`. In other words, we can rewrite the `appliedTo` in the policy above to `- namespaceSelector: {}` and it will work exactly the same. There will be no effective rules created for the `kube-system` Namespace since it does not have the \"org\" label. On the other hand, if the following policy (alone) is applied in this cluster: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: isolation-based-on-org-and-env spec: priority: 1 tier: securityops appliedTo: namespaceSelector: {} ingress: action: Allow from: namespaces: sameLabels: [org, env] action: Deny from: namespaceSelector: {} ``` it will have no effect whatsoever because no Namespace has both the \"org\" and \"env\" label keys. To take the example further, if we now add another Namespace `dev` with labels \"org=dev, env=test\" the end result is that only the `dev` Namespace will be selected by the `isolation-based-on-org-and-env` ACNP, which denies ingress from all other Namespaces in the cluster since they don't have the same values for labels \"org\" and \"env\" compared to `dev` (in fact, there is no other Namespace with the \"env\" label" }, { "data": "All the other Namespaces, on the other hand, will not have effective ingress rules created by this policy. 
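As a quick way to experiment with the label-based grouping described above, the `dev` Namespace from the example could be created and labeled with plain `kubectl` commands, and the grouping keys can then be inspected across all Namespaces (the label values here are illustrative):

```bash
# Create the example Namespace and attach the labels that the ACNP groups on
kubectl create namespace dev
kubectl label namespace dev org=dev env=test

# List all Namespaces together with the label keys referenced by the sameLabels rules
kubectl get namespaces -L org,env
```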
Antrea-native policy features a `fqdn` field in egress rules to select Fully Qualified Domain Names (FQDNs), specified either by exact FQDN name or wildcard expressions. The standard `Allow`, `Drop` and `Reject` actions apply to FQDN egress rules. An example policy using FQDN based filtering could look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-fqdn-all-foobar spec: priority: 1 appliedTo: podSelector: matchLabels: app: client egress: action: Allow to: fqdn: \"*foobar.com\" ports: protocol: TCP port: 8080 action: Drop # Drop all other egress traffic, in-cluster or out-of-cluster ``` The above example allows all traffic destined to any FQDN that matches the wildcard expression `*foobar.com` on port 8080, originating from any Pod with label `app` set to `client` across any Namespace. For these `client` Pods, all other egress traffic are dropped. Note that for FQDN wildcard expressions, the `*` character can match multiple subdomains (i.e. `*foobar.com` will match `foobar.com`, `www.foobar.com` and `test.uswest.foobar.com`). Antrea will only program datapath rules for actual egress traffic towards these FQDNs, based on DNS results. It will not interfere with DNS packets, unless there is a separate policy dropping/rejecting communication between the DNS components and the Pods selected. Antrea respects the TTL of DNS records, expiring stale IPs that are absent in more recent records according to their TTL. Therefore, Pods employing FQDN based policies ought to refrain from caching a DNS record for a duration exceeding its TTL. Otherwise, FQDN based policies may intermittently fail to function as intended. Typically, the Java virtual machine (JVM) caches DNS records for a fixed period of time, controlled by `networkaddress.cache.ttl`. In this case, its crucial to set the JVMs TTL to 0 so that FQDN based policies can work properly. Note that FQDN based policies do not work for [Service DNS names created by Kubernetes](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#services) (e.g. `kubernetes.default.svc` or `antrea.kube-system.svc`), except for headless Services. The reason is that Antrea will use the information included in A or AAAA DNS records to implement FQDN based policies. In the case of \"normal\" (not headless) Services, the DNS name resolves to the ClusterIP for the Service, but policy rules are enforced after AntreaProxy Service Load-Balancing and at that stage the destination IP address has already been rewritten to the address of an endpoint backing the Service. For headless Services, a ClusterIP is not allocated and, assuming the Service has a selector, the DNS server returns A / AAAA records that point directly to the endpoints. In that case, FQDN based policies can be used successfully. For example, the following policy, which specifies an exact match on a DNS name, will drop all egress traffic destined to headless Service `svcA` defined in the `default` Namespace: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-fqdn-headless-service spec: priority: 1 appliedTo: podSelector: matchLabels: app: client egress: action: Drop to: fqdn: \"svcA.default.svc.cluster.local\" ``` More generally speaking, it is not recommended to use the FQDN selector for DNS names created by Kubernetes, as label-based selectors are more appropriate for Kubernetes workloads. NodeSelector selects certain Nodes which match the label selector. 
When used in the `to` field of an egress rule, it adds the Node IPs to the rule's destination address group; when used in the `from` field of an ingress rule, it adds the Node IPs to the rule's source address" }, { "data": "Notice that when a rule with a nodeSelector applies to a Node, it only restricts the traffic to/from certain IPs of the Node. The IPs include: The Node IP (the IP address in the Node API object) The Antrea gateway IP (the IP address of the interface antrea-agent will create and use for Node-to-Pod communication) The transport IP (the IP address of the interface used for tunneling or routing the traffic across Nodes) if it's different from Node IP Traffic to/from other IPs of the Node will be ignored. Meanwhile, `NodeSelector` doesnt affect the traffic from Node to Pods running on that Node. Such traffic will always be allowed to make sure that [agents on a Node (e.g. system daemons, kubelet) can communicate with all Pods on that Node](https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model) to perform liveness and readiness probes. For more information, see . For example, the following rule applies to Pods with label `app=antrea-test-app` and will `Drop` egress traffic to Nodes on TCP port 6443 which have the labels `node-role.kubernetes.io/control-plane`. ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: egress-control-plane spec: priority: 1 appliedTo: podSelector: matchLabels: app: antrea-test-app egress: action: Drop to: nodeSelector: matchLabels: node-role.kubernetes.io/control-plane: \"\" ports: protocol: TCP port: 6443 ``` A combination of Service name and Service Namespace can be used in `toServices` in egress rules to refer to a K8s Service. `toServices` match traffic based on the clusterIP, port and protocol of Services. Thus, headless Service is not supported by this field. A sample policy can be found . Since `toServices` represents a combination of IP+port, it cannot be used with `to` or `ports` within the same egress rule. Also, since the matching process relies on the groupID assigned to Service by AntreaProxy, this field can only be used when AntreaProxy is enabled. This clusterIP-based match has one caveat: direct access to the Endpoints of this Service is not affected by `toServices` rules. To restrict access towards backend Endpoints of a Service, define a `ClusterGroup` with `ServiceReference` and use the name of ClusterGroup in the Antrea-native policy rule's `group` field instead. `ServiceReference` of a ClusterGroup is equivalent to a `podSelector` of a ClusterGroup that selects all backend Pods of a Service, based on the Service spec's matchLabels. Antrea will keep the Endpoint selection up-to-date in case the Service's matchLabels change, or Endpoints are added/deleted for that Service. For more information on `ServiceReference`, refer to the `serviceReference` paragraph of the . Antrea ClusterNetworkPolicy features a `serviceAccount` field to select all Pods that have been assigned the ServiceAccount referenced in this field. This field could be used in `appliedTo`, ingress `from` and egress `to` section. No matter which sections the `serviceAccount` field is used in, it cannot be used with any other fields. `serviceAccount` uses `namespace` and `name` to select the ServiceAccount with a specific name under a specific namespace. 
An example policy using `serviceAccount` could look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-service-account spec: priority: 5 tier: securityops appliedTo: serviceAccount: name: sa-1 namespace: ns-1 egress: action: Drop to: serviceAccount: name: sa-2 namespace: ns-2 name: ServiceAccountEgressRule ``` In this example, the policy will be applied to all Pods whose ServiceAccount is `sa-1` of `ns-1`. Let's call those Pods \"appliedToPods\". The egress `to` section will select all Pods whose ServiceAccount is in `ns-2` Namespace and name as `sa-2`. Let's call those Pods \"egressPods\". After this policy is applied, traffic from \"appliedToPods\" to \"egressPods\" will be dropped. Note: Antrea will use a reserved label key for internal processing `serviceAccount`. The reserved label looks like:" }, { "data": "Users should avoid using this label key in any entities no matter if a policy with `serviceAccount` is applied in the cluster. Antrea ClusterNetworkPolicy features a `service` field in `appliedTo` field to enforce the ACNP rules on the traffic from external clients to a NodePort Service. `service` uses `namespace` and `name` to select the Service with a specific name under a specific Namespace; only a NodePort Service can be referred by `service` field. There are a few restrictions on configuring a policy/rule that applies to NodePort Services: This feature can only work when Antrea proxyAll is enabled and kube-proxy is disabled. `service` field cannot be used with any other fields in `appliedTo`. a policy or a rule can't be applied to both a NodePort Service and other entities at the same time. If a `appliedTo` with `service` is used at policy level, then this policy can only contain ingress rules. If a `appliedTo` with `service` is used at rule level, then this rule can only be an ingress rule. If an ingress rule is applied to a NodePort Service, then this rule can only use `ipBlock` in its `from` field. An example policy using `service` in `appliedTo` could look like this: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: acnp-deny-external-client-nodeport-svc-access spec: priority: 5 tier: securityops appliedTo: service: name: svc-1 namespace: ns-1 ingress: action: Drop from: ipBlock: cidr: 1.1.1.0/24 ``` In this example, the policy will be applied to the NodePort Service `svc-1` in Namespace `ns-1`, and drop all packets from CIDR `1.1.1.0/24`. A ClusterGroup (CG) CRD is a specification of how workloads are grouped together. It allows admins to group Pods using traditional label selectors, which can then be referenced in ACNP in place of stand-alone `podSelector` and/or `namespaceSelector`. In addition to `podSelector` and `namespaceSelector`, ClusterGroup also supports the following ways to select endpoints: Pod grouping by `serviceReference`. ClusterGroup specified by `serviceReference` will contain the same Pod members that are currently selected by the Service's selector. `ipBlock` or `ipBlocks` to share IPBlocks between ACNPs. `childGroups` to select other ClusterGroups by name. ClusterGroups allow admins to separate the concern of grouping of workloads from the security aspect of Antrea-native policies. It adds another level of indirection allowing users to update group membership without having to update individual policy rules. 
Below are some example ClusterGroup specs: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterGroup metadata: name: test-cg-sel spec: podSelector: matchLabels: role: db namespaceSelector: matchLabels: env: prod apiVersion: crd.antrea.io/v1beta1 kind: ClusterGroup metadata: name: test-cg-ip-block spec: ipBlocks: cidr: 10.0.10.0/24 apiVersion: crd.antrea.io/v1beta1 kind: ClusterGroup metadata: name: test-cg-svc-ref spec: serviceReference: name: test-service namespace: default apiVersion: crd.antrea.io/v1beta1 kind: ClusterGroup metadata: name: test-cg-nested spec: childGroups: [test-cg-sel, test-cg-ip-blocks, test-cg-svc-ref] ``` There are a few restrictions on how ClusterGroups can be configured: A ClusterGroup is a cluster-scoped resource and therefore can only be set in an Antrea ClusterNetworkPolicy's `appliedTo` and `to`/`from` peers. For the `childGroup` field, currently only one level of nesting is supported: If a ClusterGroup has childGroups, it cannot be selected as a childGroup by other ClusterGroups. ClusterGroup must exist before another ClusterGroup can select it by name as its childGroup. A ClusterGroup cannot be deleted if it is referred to by other ClusterGroup as childGroup. This restriction may be lifted in future releases. At most one of `podSelector`, `serviceReference`, `ipBlock`, `ipBlocks` or `childGroups` can be set for a ClusterGroup, i.e. a single ClusterGroup can either group workloads, represent IP CIDRs or select other" }, { "data": "A parent ClusterGroup can select different types of ClusterGroups (Pod/Service/CIDRs), but as mentioned above, it cannot select a ClusterGroup that has childGroups itself. spec: The ClusterGroup `spec` has all the information needed to define a cluster-wide group. podSelector: Pods can be grouped cluster-wide using `podSelector`. If set with a `namespaceSelector`, all matching Pods from Namespaces selected by the `namespaceSelector` will be grouped. namespaceSelector: All Pods from Namespaces selected by the namespaceSelector will be grouped. If set with a `podSelector`, all matching Pods from Namespaces selected by the `namespaceSelector` will be grouped. ipBlock: This selects a particular IP CIDR range to allow as `ingress` \"sources\" or `egress` \"destinations\". A ClusterGroup with `ipBlock` referenced in an ACNP's `appliedTo` field will be ignored, and the policy will have no effect. For a same ClusterGroup, `ipBlock` and `ipBlocks` cannot be set concurrently. ipBlock will be deprecated for ipBlocks in future versions of ClusterGroup. ipBlocks: This selects a list of IP CIDR ranges to allow as `ingress` \"sources\" or `egress` \"destinations\". A ClusterGroup with `ipBlocks` referenced in an ACNP's `appliedTo` field will be ignored, and the policy will have no effect. For a same ClusterGroup, `ipBlock` and `ipBlocks` cannot be set concurrently. serviceReference: Pods that serve as the backend for the specified Service will be grouped. Services without selectors are currently not supported, and will be ignored if referred by `serviceReference` in a ClusterGroup. When ClusterGroups with `serviceReference` are used in ACNPs as `appliedTo` or `to`/`from` peers, no Service port information will be automatically assumed for traffic enforcement. `ServiceReference` is merely a mechanism to group Pods and ensure that a ClusterGroup stays in sync with the set of Pods selected by a given Service. childGroups: This selects existing ClusterGroups by name. 
The effective members of the \"parent\" ClusterGroup will be the union of all its childGroups' members. See the section above for restrictions. status: The ClusterGroup `status` field determines the overall realization status of the group. groupMembersComputed: The \"GroupMembersComputed\" condition is set to \"True\" when the controller has calculated all the corresponding workloads that match the selectors set in the group. The following `kubectl` commands can be used to retrieve CG resources: ```bash kubectl get clustergroups.crd.antrea.io kubectl get cg kubectl get cg.crd.antrea.io ``` A Group CRD represents a different way for specifying how workloads are grouped together, and is conceptually similar to the ClusterGroup CRD. Users will be able to refer to Groups in Antrea NetworkPolicy resources instead of specifying Pod and Namespace selectors every time. Below are some example Group specs: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: Group metadata: name: test-grp-sel namespace: default spec: podSelector: matchLabels: role: db apiVersion: crd.antrea.io/v1beta1 kind: Group metadata: name: test-grp-with-namespace spec: podSelector: matchLabels: role: db namespaceSelector: matchLabels: env: prod apiVersion: crd.antrea.io/v1beta1 kind: Group metadata: name: test-grp-ip-block spec: ipBlocks: cidr: 10.0.10.0/24 apiVersion: crd.antrea.io/v1beta1 kind: Group metadata: name: test-grp-svc-ref spec: serviceReference: name: test-service namespace: default apiVersion: crd.antrea.io/v1beta1 kind: Group metadata: name: test-grp-nested spec: childGroups: [test-grp-sel, test-grp-ip-blocks, test-grp-svc-ref] ``` Group has a similar spec with ClusterGroup. However, there are key differences and restrictions. A Group can be set in an Antrea NetworkPolicy's `appliedTo` and `to`/`from` peers. When set in the `appliedTo` field, it cannot include `namespaceSelector`, since Antrea NetworkPolicy is Namespace scoped. For example, the `test-grp-with-namespace` Group in the cannot be used by Antrea NetworkPolicy" }, { "data": "Antrea will not validate the referenced Group resources for the `appliedTo` convention; if the convention is violated in the Antrea NetworkPolicy's `appliedTo` section or for any of the rules' `appliedTo`, then Antrea will report a condition `Realizable=False` in the NetworkPolicy status, the condition includes `NetworkPolicyAppliedToUnsupportedGroup` reason and a detailed message. `childGroups` only accepts strings, and they will be considered as names of the Groups and will be looked up in the policy's own Namespace. For example, if child Group `child-0` exists in `ns-2`, it should not be added as a child Group for `ns-1/parentGroup-0`. The following `kubectl` commands can be used to retrieve Group resources: ```bash kubectl get groups.crd.antrea.io kubectl get grp kubectl get grp.crd.antrea.io ``` Antrea-native policy CRDs are meant for admins to manage the security of their cluster. Thus, access to manage these CRDs must be granted to subjects which have the authority to outline the security policies for the cluster and/or Namespaces. On cluster initialization, Antrea grants the permissions to edit these CRDs with `admin` and the `edit` ClusterRole. In addition to this, Antrea also grants the permission to view these CRDs with the `view` ClusterRole. Cluster admins can therefore grant these ClusterRoles to any subject who may be responsible to manage the Antrea policy CRDs. 
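As an illustration, a binding similar to the following sketch would grant the built-in `edit` ClusterRole (which, as described above, includes edit permissions on the Antrea policy CRDs) to a team of security administrators; the `security-admins` group name is purely illustrative:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: security-admins-edit-antrea-policies   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit            # built-in ClusterRole to which Antrea adds its policy CRD permissions
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: security-admins # illustrative subject; individual users or ServiceAccounts work as well
```

Keep in mind that binding `edit` cluster-wide grants broad permissions well beyond the policy CRDs, so a narrower custom ClusterRole limited to the `crd.antrea.io` API group may be preferable in practice.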
The admins may also decide to share the `view` ClusterRole to a wider range of subjects to allow them to read the policies that may affect their workloads. Similar RBAC is applied to the ClusterGroup resource. There is a soft limit of 20 on the maximum number of Tier resources that are supported. But for optimal performance, it is recommended that the number of Tiers in a cluster be less than or equal to 10. In order to reduce the churn in the agent, it is recommended to set the policy priority (acnp/annp.spec.priority) within the range 1.0 to 100.0. The v1beta1 policy CRDs support up to 10,000 unique priorities at policy level, and up to 50,000 unique priorities at rule level, across all Tiers except for the \"baseline\" Tier. For any two Antrea-native policy rules, their rule level priorities are only considered equal if their policy objects share the same Tier and have the same policy priority, plus the rules themselves are of the same rule priority (rule priority is the sequence number of the rule within the policy's ingress or egress section). For the \"baseline\" Tier, the max supported unique priorities (at rule level) is 150. If there are multiple Antrea-native policy rules created at the same rule-level priority (same policy Tier, policy priority and rule priority), and happen to select overlapping traffic patterns but have conflicting rule actions (e.g.`Allow`v.s.`Deny`), the behavior of such traffic will be nondeterministic. In general, we recommended against creating rules with conflicting actions in policy resources at the same priority. For example, consider two AntreaNetworkPolicies created in the same Namespace and Tier with the same policy priority. The first policy applies to all `app=web` Pods in the Namespace and has only one ingress rule to `Deny` all traffic from `role=dev` Pods. The other policy also applies to all `app=web` Pods in the Namespace and has only one ingress rule, which is to `Allow` all traffic from `app=client` Pods. Those two ingress rules might not always conflict, but in case a Pod with both the `app=client` and `role=dev` labels initiates traffic towards the `app=web` Pods in the Namespace, both rules will be matched at the same priority with conflicting actions. It will be the policy writer's responsibility to identify such ambiguities in rule definitions and avoid potential nondeterministic rule enforcement results. NetworkPolicies are connection/flow oriented and" }, { "data": "They apply to connections, instead of individual packets, which means established connections won't be blocked by new rules. For hairpin Service traffic, when a Pod initiates traffic towards the Service it provides, and the same Pod is selected as the Endpoint, NetworkPolicies will consistently permit this traffic during ingress enforcement if AntreaProxy is enabled, irrespective of the ingress rules defined by the user. In the presence of ingress rules preventing access to the Service from Pods providing the Service, accessing the Service from one of these Pods will succeed if traffic is hairpinned back to the source Pod, and will fail if a different Endpoint is selected by AntreaProxy. However, when AntreaProxy is disabled, NetworkPolicies may not function as expected for hairpin Service traffic. This is due to kube-proxy performing SNAT, which conceals the original source IP from Antrea. Consequently, NetworkPolicies are unable to differentiate between hairpin Service traffic and external traffic in this scenario. 
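Returning to the conflicting-rule scenario described above (two AntreaNetworkPolicies in the same Namespace and Tier, with the same policy priority, overlapping selectors and opposite actions), the pair could look roughly like the following sketch; the Namespace, policy names and labels are illustrative:

```yaml
apiVersion: crd.antrea.io/v1beta1
kind: NetworkPolicy
metadata:
  name: deny-dev-to-web          # illustrative name
  namespace: demo                # both policies live in the same Namespace
spec:
  priority: 10                   # same policy priority as the second policy
  tier: application              # same Tier as the second policy
  appliedTo:
  - podSelector:
      matchLabels:
        app: web
  ingress:
  - action: Drop                 # deny all traffic from role=dev Pods
    from:
    - podSelector:
        matchLabels:
          role: dev
    name: DenyFromDev
---
apiVersion: crd.antrea.io/v1beta1
kind: NetworkPolicy
metadata:
  name: allow-client-to-web      # illustrative name
  namespace: demo
spec:
  priority: 10                   # same Tier and priority, so the rules end up at the same rule-level priority
  tier: application
  appliedTo:
  - podSelector:
      matchLabels:
        app: web
  ingress:
  - action: Allow                # allow all traffic from app=client Pods
    from:
    - podSelector:
        matchLabels:
          app: client
    name: AllowFromClient
```

A Pod carrying both the `role=dev` and `app=client` labels matches both rules at the same effective priority, so its traffic towards the `app=web` Pods may be allowed or dropped nondeterministically, which is exactly the kind of ambiguity the guidance above recommends avoiding.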
Antrea policy logging is enabled by setting `enableLogging` to true for specific policy rules (or by using the `networkpolicy.antrea.io/enable-logging: \"true\"` annotation for K8s NetworkPolicies). Starting with Antrea v1.13, logging is \"best-effort\": if too much traffic needs to be logged, we will skip logging rather than start dropping packets or rather than risking to overrun the Antrea Agent, which could impact cluster health or other workloads. This behavior cannot be changed, and the logging feature is therefore not meant to be used for compliance purposes. By default, the Antrea datapath will send up to 500 packets per second (with a burst size of 1000 packets) to the Agent for logging. This rate applies to all the traffic that needs to be logged, and is enforced at the level of each Node. A rate of 500 packets per second roughly translates to 500 new TCP connections per second, or 500 UDP requests per second. While it is possible to adjust the rate and burst size by modifying the `packetInRate` parameter in the antrea-agent configuration, we do not recommend doing so. The default value was set to 500 after careful consideration. Prior to Antrea v1.13, policy logging was not best-effort. While we did have a rate limit for the number of packets that could be sent to the Agent for logging, the datapath behavior was to drop all packets that exceeded the rate limit, as opposed to skipping the logging and applying the specified policy rule action. This meant that the logging feature was more suited for audit / compliance applications, however, we ultimately decided that the behavior was too aggressive and that it was too easy to disrupt application workloads by enabling logging - the rate limit was also lower than the default one we use today (100 packets per second instead of 500). For example, the following policy which allows ingress DNS traffic for coreDNS Pods, and has logging enabled, would drastically restrict the number of possible DNS requests in the cluster, which in turn would cause a lot of errors in applications which rely on DNS: ```yaml apiVersion: crd.antrea.io/v1beta1 kind: ClusterNetworkPolicy metadata: name: allow-core-dns-access spec: priority: 5 tier: securityops appliedTo: podSelector: {} ingress: name: allow-dns enableLogging: true action: Allow ports: protocol: TCP port: 53 protocol: UDP port: 53 ``` For this reason, we do NOT recommend enabling logging for Antrea versions prior to v1.13, especially when the policy rule uses the `Allow` action. Note that v1.12 patch versions starting with v1.12.2 also do not suffer from this issue, as we backported the fix to the v1.12 release." } ]
{ "category": "Runtime", "file_name": "antrea-network-policy.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "We definitely welcome your patches and contributions to gRPC! Please read the gRPC organization's and before proceeding. If you are new to github, please start by reading In order to protect both you and ourselves, you will need to sign the . How to get your contributions merged smoothly and quickly. Create small PRs that are narrowly focused on addressing a single concern. We often times receive PRs that are trying to fix several things at a time, but only one fix is considered acceptable, nothing gets merged and both author's & review's time is wasted. Create more PRs to address different concerns and everyone will be happy. The grpc package should only depend on standard Go packages and a small number of exceptions. If your contribution introduces new dependencies which are NOT in the , you need a discussion with gRPC-Go authors and consultants. For speculative changes, consider opening an issue and discussing it first. If you are suggesting a behavioral or API change, consider starting with a [gRFC proposal](https://github.com/grpc/proposal). Provide a good PR description as a record of what change is being made and why it was made. Link to a github issue if it exists. Don't fix code style and formatting unless you are already changing that line to address an issue. PRs with irrelevant changes won't be merged. If you do want to fix formatting or style, do that in a separate PR. Unless your PR is trivial, you should expect there will be reviewer comments that you'll need to address before merging. We expect you to be reasonably responsive to those comments, otherwise the PR will be closed after 2-3 weeks of inactivity. Maintain clean commit history and use meaningful commit messages. PRs with messy commit history are difficult to review and won't be merged. Use `rebase -i upstream/master` to curate your commit history and/or to bring in latest changes from master (but avoid rebasing in the middle of a code review). Keep your PR up to date with upstream/master (if there are merge conflicts, we can't really merge your change). All tests need to be passing before your change can be merged. We recommend you run tests locally before creating your PR to catch breakages early on. `make all` to test everything, OR `make vet` to catch vet errors `make test` to run the tests `make testrace` to run tests in race mode Exceptions to the rules can be made if there's a compelling reason for doing so." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Volume clone The CSI Volume Cloning feature adds support for specifying existing PVCs in the `dataSource` field to indicate a user would like to clone a Volume. A Clone is defined as a duplicate of an existing Kubernetes Volume that can be consumed as any standard Volume would be. The only difference is that upon provisioning, rather than creating a \"new\" empty Volume, the back end device creates an exact duplicate of the specified Volume. Refer to for more info. In , `dataSource` should be the name of the `PVC` which is already created by RBD CSI driver. The `dataSource` kind should be the `PersistentVolumeClaim`. The `storageClassName` can be any RBD storageclass (not necessarily same as Parent PVC) Please note: `provisioner` must be the same for both the Parent PVC and the Clone PVC. The non-encrypted PVC cannot be cloned to an encrypted one and vice-versa. encrypted -> encrypted (possible) non-encrypted -> non-encrypted (possible) encrypted -> non-encrypted (not possible) non-encrypted -> encrypted (not possible) Create a new PVC Clone from the PVC ```console kubectl create -f deploy/examples/csi/rbd/pvc-clone.yaml ``` ```console kubectl get pvc ``` ```console NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE rbd-pvc Bound pvc-74734901-577a-11e9-b34f-525400581048 1Gi RWO rook-ceph-block 34m rbd-pvc-clone Bound pvc-70473135-577f-11e9-b34f-525400581048 1Gi RWO rook-ceph-block 8s ``` To clean your cluster of the resources created by this example, run the following: ```console kubectl delete -f deploy/examples/csi/rbd/pvc-clone.yaml ``` Requires Kubernetes v1.16+ which supports volume clone. Ceph-csi diver v3.1.0+ which supports volume clone. In , `dataSource` should be the name of the `PVC` which is already created by CephFS CSI driver. The `dataSource` kind should be the `PersistentVolumeClaim` and also storageclass should be same as the source `PVC`. Create a new PVC Clone from the PVC ```console kubectl create -f deploy/examples/csi/cephfs/pvc-clone.yaml ``` ```console kubectl get pvc ``` ```console NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cephfs-pvc Bound pvc-1ea51547-a88b-4ab0-8b4a-812caeaf025d 1Gi RWX rook-cephfs 39m cephfs-pvc-clone Bound pvc-b575bc35-d521-4c41-b4f9-1d733cd28fdf 1Gi RWX rook-cephfs 8s ``` To clean your cluster of the resources created by this example, run the following: ```console kubectl delete -f deploy/examples/csi/cephfs/pvc-clone.yaml ```" } ]
{ "category": "Runtime", "file_name": "ceph-csi-volume-clone.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Weave Net: v2.2.1. Kubernetes: v1.10.0. kube-proxy: iptables/ipvs mode with `--masquerade-all=false` and `--cluster-cidr` unspecified(default kubeadm options). Hosts: `h1` and `h2`. \\# | Src | Target | Dst | Src IP | `-j WEAVE-NPC` |-|-|--||-- 1 | Podh1 | ip(Podh2) | Podh2 | ip(Podh1) | OK 2 | h1 | ClusterIP | Podh1 | ip(weaveh1) | NOK 3 | h1 | ClusterIP | Podh2 | ip(weaveh1) | OK 4 | Podh1 | ClusterIP | Podh1 | ip(Pod_h1) | OK 5 | Podh1 | ClusterIP | Podh2 | ip(weave_h1) | OK 6 | h1 | ip(h1):NodePort | Podh1 | ip(weaveh1) | NOK 7 | h1 | ip(h1):NodePort | Podh2 | ip(weaveh1) | OK 8 | h1 | ip(h2):NodePort | Podh1 | ip(weaveh2) | OK 9 | h1 | ip(h2):NodePort | Podh2 | ip(weaveh2) | ??? Can't reproduce 10 | Podh1 | ip(h1):NodePort | Podh1 | ip(weave_h1) | NOK 11 | Podh1 | ip(h1):NodePort | Podh2 | ip(weave_h1) | OK 12 | Podh1 | ip(h2):NodePort | Podh1 | ip(weave_h2) | OK 13 | Podh1 | ip(h2):NodePort | Podh2 | ip(weave_h2) | OK Remarks: Src IP* is of a packet which is captured on the weave bridge. -j WEAVE-NPC* - whether a packet enters the `filter/WEAVE-NPC` iptables chain (OK = NetworkPolicy is enforced as required). Pod_h1* - a Pod running on the `h1` host. ip(weave_h1)* - IP addr of the weave bridge on the `h1` host." } ]
{ "category": "Runtime", "file_name": "k8s-src-ip.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "Feature 1: CNI-Genie \"Multiple CNI Plugins\" Interface Connector to 3rd party CNI-Plugins. The user can Feature 2: CNI-Genie \"Multiple IP Addresses\" Injects multiple IPs to a single container. The container is reachable using any of the Feature 3: CNI-Genie \"Network Attachment Definition\" feature incorporates Kubernetes Network Custom Resource Definition De-facto Standard in CNI-Genie Feature 4: CNI-Genie \"Smart CNI Plugin Selection\" Intelligence in selecting the CNI plugin. CNI-Genie the CNI plugin, accordingly Feature 5: CNI-Genie \"Default Plugin Selection\" Support to set default plugin of user choice to be used for all the pods being created Feature 6: CNI-Genie \"Network Isolation\" Dedicated 'physical' network for a tenant Isolated 'logical' networks for different tenants on a shared 'physical'network Feature 7: CNI-Genie \"Network Policy Engine\" allows for network level ACLs Feature 8: CNI-Genie \"Real-time Network Switching\" Price minimization: dynamically switching workload to a cheaper network as network prices change Maximizing network utilization: dynamically switching workload to the less congested network at a threshold" } ]
{ "category": "Runtime", "file_name": "CNIGenieFeatureSet.md", "project_name": "CNI-Genie", "subcategory": "Cloud Native Network" }
[ { "data": "CoreOS projects are and accept contributions via GitHub pull requests. This document outlines some of the conventions on development workflow, commit message formatting, contact points and other resources to make it easier to get your contribution accepted. By contributing to this project you agree to the Developer Certificate of Origin (DCO). This document was created by the Linux Kernel community and is a simple statement that you, as a contributor, have the legal right to make the contribution. See the file for details. Fork the repository on GitHub Read the for build and test instructions Play with the project, submit bugs, submit patches! This is a rough outline of what a contributor's workflow looks like: Create a topic branch from where you want to base your work (usually master). Make commits of logical units. Make sure your commit messages are in the proper format (see below). Push your changes to a topic branch in your fork of the repository. Make sure the tests pass, and add any new tests as appropriate. Submit a pull request to the original repository. Thanks for your contributions! We follow a rough convention for commit messages that is designed to answer two questions: what changed and why. The subject line should feature the what and the body of the commit should describe the why. ``` scripts: add the test-cluster command this uses tmux to setup a test cluster that you can easily kill and start for debugging. Fixes #38 ``` The format can be described more formally as follows: ``` <subsystem>: <what changed> <BLANK LINE> <why this change was made> <BLANK LINE> <footer> ``` The first line is the subject and should be no longer than 70 characters, the second line is always blank, and other lines should be wrapped at 80 characters. This allows the message to be easier to read on GitHub as well as in various git tools." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Flannel", "subcategory": "Cloud Native Network" }
[ { "data": "To enable the integration of custom authentication methods, MinIO can be configured with an Identity Management Plugin webhook. When configured, this plugin enables the `AssumeRoleWithCustomToken` STS API extension. A user or application can now present a token to the `AssumeRoleWithCustomToken` API, and MinIO verifies this token by sending it to the Identity Management Plugin webhook. This plugin responds with some information and MinIO is able to generate temporary STS credentials to interact with object storage. The authentication flow is similar to that of OpenID, however the token is \"opaque\" to MinIO - it is simply sent to the plugin for verification. CAVEAT: There is no console UI integration for this method of authentication and it is intended primarily for machine authentication. It can be configured via MinIO's standard configuration API (i.e. using `mc admin config set/get`), or equivalently with environment variables. For brevity we show only environment variables here: ```sh $ mc admin config set myminio identity_plugin --env KEY: identity_plugin enable Identity Plugin via external hook ARGS: MINIOIDENTITYPLUGIN_URL* (url) plugin hook endpoint (HTTP(S)) e.g. \"http://localhost:8181/path/to/endpoint\" MINIOIDENTITYPLUGINAUTHTOKEN (string) authorization token for plugin hook endpoint MINIOIDENTITYPLUGINROLEPOLICY* (string) policies to apply for plugin authorized users MINIOIDENTITYPLUGINROLEID (string) unique ID to generate the ARN MINIOIDENTITYPLUGIN_COMMENT (sentence) optionally add a comment to this setting ``` If provided, the auth token parameter is sent as an authorization header. `MINIOIDENTITYPLUGINROLEPOLICY` is a required parameter and can be list of comma separated policy names. On setting up the plugin, the MinIO server prints the Role ARN to its log. The Role ARN is generated by default based on the given plugin URL. To avoid this and use a configurable value set a unique role ID via `MINIOIDENTITYPLUGINROLEID`. To verify the custom token presented in the `AssumeRoleWithCustomToken` API, MinIO makes a POST request to the configured identity management plugin endpoint and expects a response with some details as shown below: Query parameters: | Parameter Name | Value Type | Purpose | |-||-| | token | string | Token from the AssumeRoleWithCustomToken call for external verification | If the token is valid and access is approved, the plugin must return a `200` (OK) HTTP status code. A `200 OK` Response should have `application/json` content-type and body with the following structure: ```json { \"user\": <string>, \"maxValiditySeconds\": <integer>, \"claims\": <key-value-pairs> } ``` | Parameter Name | Value Type | Purpose | |--|--|--| | user | string | Identifier for owner of requested credentials | | maxValiditySeconds | integer (>= 900 seconds and < 365 days) | Maximum allowed expiry duration for the credentials | | claims | key-value pairs | Claims to be associated with the requested credentials | The keys \"exp\", \"parent\" and \"sub\" in the `claims` object are reserved and if present are ignored by MinIO. If the token is not valid or access is not approved, the plugin must return a `403` (forbidden) HTTP status code. The body must have an `application/json` content-type with the following structure: ```json { \"reason\": <string> } ``` The reason message is returned to the client. A toy example for the Identity Management Plugin is given ." } ]
{ "category": "Runtime", "file_name": "identity-management-plugin.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "Cobra will follow a steady release cadence. Non breaking changes will be released as minor versions quarterly. Patch bug releases are at the discretion of the maintainers. Users can expect security patch fixes to be released within relatively short order of a CVE becoming known. For more information on security patch fixes see the CVE section below. Releases will follow . Users tracking the Master branch should expect unpredictable breaking changes as the project continues to move forward. For stability, it is highly recommended to use a release. We will maintain two major releases in a moving window. The N-1 release will only receive bug fixes and security updates and will be dropped once N+1 is released. Deprecation of Go versions or dependent packages will only occur in major releases. To reduce the change of this taking users by surprise, any large deprecation will be preceded by an announcement in the and an Issue on Github. Maintainers will make every effort to release security patches in the case of a medium to high severity CVE directly impacting the library. The speed in which these patches reach a release is up to the discretion of the maintainers. A low severity CVE may be a lower priority than a high severity one. Cobra maintainers will use GitHub issues and the as the primary means of communication with the community. This is to foster open communication with all users and contributors. Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next release of Cobra. There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new version. Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These version have consumers who expect the APIs, behaviors, etc, to remain stable during the lifetime of the patch stream for the minor release. Examples of breaking changes include: Removing or renaming exported constant, variable, type, or function. Updating the version of critical libraries such as `spf13/pflag`, `spf13/viper` etc... Some version updates may be acceptable for picking up bug fixes, but maintainers must exercise caution when reviewing. There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project's maintainers, and must be carefully considered before merging. Maintainers will ensure the Cobra test suite utilizes the current supported versions of Golang. Changes to this document and the contents therein are at the discretion of the maintainers. None of the contents of this document are legally binding in any way to the maintainers or the users." } ]
{ "category": "Runtime", "file_name": "CONDUCT.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "sidebar_position: 3 sidebar_label: \"Post-Check after Deployment\" This page takes 3-node kubernetes cluster as an example to perform post-check after installing HwameiStor. ```console $ kubectl get node NAME STATUS ROLES AGE VERSION 10-6-234-40 Ready control-plane,master 140d v1.21.11 10-6-234-41 Ready <none> 140d v1.21.11 10-6-234-42 Ready <none> 140d v1.21.11 ``` The following pods should be up and running: ```console $ kubectl -n hwameistor get pod NAME READY STATUS RESTARTS AGE drbd-adapter-k8s-master-rhel7-gtk7t 0/2 Completed 0 23m drbd-adapter-k8s-node1-rhel7-gxfw5 0/2 Completed 0 23m drbd-adapter-k8s-node2-rhel7-lv768 0/2 Completed 0 23m hwameistor-admission-controller-dc766f976-mtlvw 1/1 Running 0 23m hwameistor-apiserver-86d6c9b7c8-v67gg 1/1 Running 0 23m hwameistor-auditor-54f46fcbc6-jb4f4 1/1 Running 0 23m hwameistor-exporter-6498478c57-kr8r4 1/1 Running 0 23m hwameistor-failover-assistant-cdc6bd665-56wbw 1/1 Running 0 23m hwameistor-local-disk-csi-controller-6587984795-fztcd 2/2 Running 0 23m hwameistor-local-disk-manager-7gg9x 2/2 Running 0 23m hwameistor-local-disk-manager-kqkng 2/2 Running 0 23m hwameistor-local-disk-manager-s66kn 2/2 Running 0 23m hwameistor-local-storage-csi-controller-5cdff98f55-jj45w 6/6 Running 0 23m hwameistor-local-storage-mfqks 2/2 Running 0 23m hwameistor-local-storage-pnfpx 2/2 Running 0 23m hwameistor-local-storage-whg9t 2/2 Running 0 23m hwameistor-pvc-autoresizer-86dc79d57-s2l68 1/1 Running 0 23m hwameistor-scheduler-6db69957f-r58j6 1/1 Running 0 23m hwameistor-ui-744cd78d84-vktjq 1/1 Running 0 23m hwameistor-volume-evictor-5db99cf979-4674n 1/1 Running 0 23m ``` :::info The components of `local-disk-manager` and `local-storage` are `DaemonSets`, and should have one pod on each Kubernetes node. ::: HwameiStor CRDs create the following APIs. ```console $ kubectl api-resources --api-group hwameistor.io NAME SHORTNAMES APIVERSION NAMESPACED KIND clusters hmcluster hwameistor.io/v1alpha1 false Cluster events evt hwameistor.io/v1alpha1 false Event localdiskclaims ldc hwameistor.io/v1alpha1 false LocalDiskClaim localdisknodes ldn hwameistor.io/v1alpha1 false LocalDiskNode localdisks ld hwameistor.io/v1alpha1 false LocalDisk localdiskvolumes ldv hwameistor.io/v1alpha1 false LocalDiskVolume localstoragenodes lsn hwameistor.io/v1alpha1 false LocalStorageNode localvolumeconverts lvconvert hwameistor.io/v1alpha1 false LocalVolumeConvert localvolumeexpands lvexpand hwameistor.io/v1alpha1 false LocalVolumeExpand localvolumegroups lvg hwameistor.io/v1alpha1 false LocalVolumeGroup localvolumemigrates lvmigrate hwameistor.io/v1alpha1 false LocalVolumeMigrate localvolumereplicas lvr hwameistor.io/v1alpha1 false LocalVolumeReplica localvolumereplicasnapshotrestores lvrsrestore,lvrsnaprestore hwameistor.io/v1alpha1 false LocalVolumeReplicaSnapshotRestore localvolumereplicasnapshots lvrs hwameistor.io/v1alpha1 false LocalVolumeReplicaSnapshot localvolumes lv hwameistor.io/v1alpha1 false LocalVolume localvolumesnapshotrestores lvsrestore,lvsnaprestore hwameistor.io/v1alpha1 false LocalVolumeSnapshotRestore localvolumesnapshots lvs hwameistor.io/v1alpha1 false LocalVolumeSnapshot resizepolicies hwameistor.io/v1alpha1 false ResizePolicy ``` Please refer to for details. HwameiStor automatically scans each node and registers each disk as CRD `LocalDisk(ld)`. The unused disks are displayed with `PHASE: Available`. 
```console $ kubectl get localdisknodes NAME FREECAPACITY TOTALCAPACITY TOTALDISK STATUS AGE k8s-master Ready 28h k8s-node1 Ready 28h k8s-node2 Ready 28h $ kubectl get localdisks NAME NODEMATCH DEVICEPATH PHASE AGE localdisk-2307de2b1c5b5d051058bc1d54b41d5c k8s-node1 /dev/sdb Available 28h localdisk-311191645ea00c62277fe709badc244e k8s-node2 /dev/sdb Available 28h localdisk-37a20db051af3a53a1c4e27f7616369a k8s-master /dev/sdb Available 28h localdisk-b57b108ad2ccc47f4b4fab6f0b9eaeb5 k8s-node2 /dev/sda Bound 28h localdisk-b682686c65667763bda58e391fbb5d20 k8s-master /dev/sda Bound 28h localdisk-da121e8f0dabac9ee1bcb6ed69840d7b k8s-node1 /dev/sda Bound 28h ``` HwameiStor automatically generates the LocalStorageNode (i.e. LSN) resource for each node. Each LSN will record the resources and status of the node, including Storage Pool, Volumes, etc. ```console $ kubectl get lsn NAME IP STATUS AGE 10-6-234-40 10.6.234.40 Ready 3m52s 10-6-234-41 10.6.234.41 Ready 3m54s 10-6-234-42 10.6.234.42 Ready 3m55s $ kubectl get lsn 10-6-234-41 -o yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalStorageNode metadata: creationTimestamp: \"2023-04-11T06:46:52Z\" generation: 1 name: 10-6-234-41 resourceVersion: \"13575433\" uid: 4986f7b8-6fe1-43f1-bdca-e68b6fa53f92 spec: hostname: 10-6-234-41 storageIP: 10.6.234.41 topogoly: region: default zone: default status: pools: LocalStorage_PoolHDD: class: HDD disks: capacityBytes: 10733223936 devPath: /dev/sdb state: InUse type: HDD capacityBytes: 1069547520 devPath: /dev/sdc state: InUse type: HDD capacityBytes: 1069547520 devPath: /dev/sdd state: InUse type: HDD capacityBytes: 1069547520 devPath: /dev/sde state: InUse type: HDD capacityBytes: 1069547520 devPath: /dev/sdf state: InUse type: HDD capacityBytes: 1069547520 devPath: /dev/sdg state: InUse type: HDD freeCapacityBytes: 16080961536 freeVolumeCount: 1000 name: LocalStorage_PoolHDD totalCapacityBytes: 16080961536 totalVolumeCount: 1000 type: REGULAR usedCapacityBytes: 0 usedVolumeCount: 0 volumeCapacityBytesLimit: 16080961536 state: Ready ``` The Operator will automatically create the StorageClasses as following according to the HwameiStor system's configuration (e.g. HA enabled or not, disk type, and more.) ```console $ kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE hwameistor-storage-lvm-hdd lvm.hwameistor.io Delete WaitForFirstConsumer false 23h hwameistor-storage-lvm-hdd-convertible lvm.hwameistor.io Delete WaitForFirstConsumer false 23h hwameistor-storage-lvm-hdd-ha lvm.hwameistor.io Delete WaitForFirstConsumer false 23h ```" } ]
{ "category": "Runtime", "file_name": "post_check.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "are a Linux feature for organizing processes in hierarchical groups and applying resources limits to them. Each rkt pod is placed in a different cgroup to separate the processes of the pod from the processes of the host. Memory and CPU isolators are also implemented with cgroups. Every pod and application within that pod is run within its own cgroup. When a recent version of systemd is running on the host and `rkt` is not started as a systemd service (typically, from the command line), `rkt` will call `systemd-nspawn` with `--register=true`. This will cause `systemd-nspawn` to call the D-Bus method `CreateMachineWithNetwork` on `systemd-machined` and the cgroup `/machine.slice/machine-rkt...` will be created. This requires systemd v216+ as detected by the function in stage1's `init`. When systemd is not running on the host, or the systemd version is too old (< v216), `rkt` uses `systemd-nspawn` with the `--register=false` parameter. In this case, `systemd-nspawn` or other systemd components will not create new cgroups for rkt. Instead, `rkt` creates a new cgroup for each pod under the current cgroup, like `$CALLER_CGROUP/machine-some-id.slice`. `rkt` is able to detect if it is started as a systemd service (from a `.service` file or from `systemd-run`). In that case, `systemd-nspawn` is started with the `--keep-unit` parameter. This will cause `systemd-nspawn` to use the D-Bus method call `RegisterMachineWithNetwork` instead of `CreateMachineWithNetwork` and the pod will remain in the cgroup of the service. By default, the slice is `systemd.slice` but to select `machine.slice` with `systemd-run --slice=machine` or `Slice=machine.slice` in the `.service` file. It will result in `/machine.slice/servicename.service` when the user select that slice. `/machine.slice/machine-rkt...` when started on the command line with systemd v216+. `/$SLICE.slice/servicename.service` when started from a systemd service. `$CALLER_CGROUP/machine-some-id.slice` without systemd, or with systemd pre-v216 For example, a simple pod run interactively on a system with systemd would look like: ``` machine.slice machine-rkt\\x2df28d074b\\x2da8bb\\x2d4246\\x2d96a5\\x2db961e1fe7035.scope init.scope /usr/lib/systemd/systemd system.slice alpine-sh.service /bin/sh systemd-journald.service /usr/lib/systemd/systemd-journald ``` Right now, rkt uses the `cpu`, `cpuset`, and `memory` subsystems. When the stage1 starts, it mounts `/sys` . Then, for every subsystem, it: Mounts the subsystem (on `<rootfs>/sys/fs/cgroup/<subsystem>`) Bind-mounts the subcgroup on top of itself (e.g `<rootfs>/sys/fs/cgroup/memory/machine.slice/machine-rkt-UUID.scope/`) Remounts the subsystem readonly This is so that the pod itself cannot escape the cgroup. Currently the cgroup filesystems are not accessible to applications within the pod, but that may change. (N.B. `rkt` prior to v1.23 mounted each individual knob read-write. E.g. `.../memory/machine.slice/machine-rkt-UUID.scope/system.slice/etcd.service/{memory.limitinbytes, cgroup.procs}`) Unified hierarchy and cgroup2 is a new feature in Linux that will be available in Linux 4.4. This is tracked by . CGroup Namespaces is a new feature being developed in Linux. This is tracked by . Appc/spec defines the `resource/network-bandwidth` to limit the network bandwidth used by each app in the pod. This is not implemented yet. This could be implemented with cgroups." } ]
{ "category": "Runtime", "file_name": "cgroups.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Update vtep entries Create/Update vtep entry. ``` cilium-dbg bpf vtep update [flags] ``` ``` -h, --help help for update ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the VTEP mappings for IP/CIDR <-> VTEP MAC/IP" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_vtep_update.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "VM templating is a Kata Containers feature that enables new VM creation using a cloning technique. When enabled, new VMs are created by cloning from a pre-created template VM, and they will share the same initramfs, kernel and agent memory in readonly mode. It is very much like a process fork done by the kernel but here we fork VMs. Both and VM templating help speed up new container creation. When VMCache enabled, new VMs are created by the VMCache server. So it is not vulnerable to share memory CVE because each VM doesn't share the memory. VM templating saves a lot of memory if there are many Kata Containers running on the same host. VM templating helps speed up new container creation and saves a lot of memory if there are many Kata Containers running on the same host. If you are running a density workload, or care a lot about container startup speed, VM templating can be very useful. In one example, we created 100 Kata Containers each claiming 128MB guest memory and ended up saving 9GB of memory in total when VM templating is enabled, which is about 72% of the total guest memory. See [full results here](https://github.com/kata-containers/runtime/pull/303#issuecomment-395846767). In another example, we created ten Kata Containers with containerd shimv2 and calculated the average boot up speed for each of them. The result showed that VM templating speeds up Kata Containers creation by as much as 38.68%. See . One drawback of VM templating is that it cannot avoid cross-VM side-channel attack such as that originally targeted at the Linux KSM feature. It was concluded that \"Share-until-written approaches for memory conservation among mutually untrusting tenants are inherently detectable for information disclosure, and can be classified as potentially misunderstood behaviors rather than vulnerabilities.\" Warning: If you care about such attack vector, do not use VM templating or KSM. VM templating can be enabled by changing your Kata Containers config file (`/usr/share/defaults/kata-containers/configuration.toml`, overridden by `/etc/kata-containers/configuration.toml` if provided) such that: `qemu` version `v4.1.0` or above is specified in `hypervisor.qemu`->`path` section `enable_template = true` `initrd =` is set `image =` option is commented out or removed `shared_fs` should not be `virtio-fs` Then you can create a VM templating for later usage by calling ``` $ sudo kata-runtime factory init ``` and purge it by calling ``` $ sudo kata-runtime factory destroy ``` If you do not want to call `kata-runtime factory init` by hand, the very first Kata container you create will automatically create a VM templating." } ]
{ "category": "Runtime", "file_name": "what-is-vm-templating-and-how-do-I-use-it.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "An object store bucket is a container holding immutable objects. The Rook-Ceph creates a controller which automates the provisioning of new and existing buckets. A user requests bucket storage by creating an ObjectBucketClaim (OBC). Upon detecting a new OBC, the Rook-Ceph bucket provisioner does the following: creates a new bucket and grants user-level access (greenfield), or grants user-level access to an existing bucket (brownfield), and creates a Kubernetes Secret in the same namespace as the OBC creates a Kubernetes ConfigMap in the same namespace as the OBC. The secret contains bucket access keys. The configmap contains bucket endpoint information. Both are consumed by an application pod in order to access the provisioned bucket. When the ObjectBucketClaim is deleted all of the Kubernetes resources created by the Rook-Ceph provisioner are deleted, and provisioner specific artifacts, such as dynamic users and access policies are removed. And, depending on the reclaimPolicy in the storage class referenced in the OBC, the bucket will be retained or deleted. We welcome contributions! In the meantime, features that are not yet implemented may be configured by using the to run the `radosgw-admin` and other tools for advanced bucket policies. A Rook storage cluster must be configured and running in Kubernetes. In this example, it is assumed the cluster is in the `rook` namespace. The following resources, or equivalent, need to be created: - - - - When the storage admin is ready to create an object storage, he will specify his desired configuration settings in a yaml file such as the following `object-store.yaml`. This example is a simple object store with metadata that is replicated across different hosts, and the data is erasure coded across multiple devices in the cluster. ```yaml apiVersion: ceph.rook.io/v1alpha1 kind: ObjectStore metadata: name: my-store namespace: rook-ceph spec: metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 gateway: port: 80 securePort: 443 instances: 3 ``` Now create the object store. ```bash kubectl create -f object-store.yaml ``` At this point the Rook operator recognizes that a new object store resource needs to be configured. The operator will create all of the resources to start the object store. Metadata pools are created (`.rgw.root`, `my-store.rgw.control`, `my-store.rgw.meta`, `my-store.rgw.log`, `my-store.rgw.buckets.index`, `my-store.rgw.buckets.non-ec`) The data pool is created (`my-store.rgw.buckets.data`) A Ceph realm is created A Ceph zone group is created in the new realm A Ceph zone is created in the new zone group A cephx key is created for the rgw daemon A Kubernetes service is created to provide load balancing for the RGW pod(s) A Kubernetes deployment is created to start the RGW pod(s) with the settings for the new zone When the RGW pods start, the object store is ready to receive the http or https requests as" }, { "data": "The object store settings are exposed to Rook as a Custom Resource Definition (CRD). The CRD is the Kubernetes-native means by which the Rook operator can watch for new resources. The operator stays in a control loop to watch for a new object store, changes to an existing object store, or requests to delete an object store. The pools are the backing data store for the object store and are created with specific names to be private to an object store. Pools can be configured with all of the settings that can be specified in the . 
The underlying schema for pools defined by a pool CRD is the same as the schema under the `metadataPool` and `dataPool` elements of the object store CRD. All metadata pools are created with the same settings, while the data pool can be created with independent settings. The metadata pools must use replication, while the data pool can use replication or erasure coding. ```yaml metadataPool: failureDomain: host replicated: size: 3 dataPool: failureDomain: device erasureCoded: dataChunks: 6 codingChunks: 2 ``` The gateway settings correspond to the RGW service. `type`: Can be `s3`. In the future support for `swift` can be added. `sslCertificateRef`: If specified, this is the name of the Kubernetes secret that contains the SSL certificate to be used for secure connections to the object store. The secret must be in the same namespace as the Rook cluster. If it is an opaque Kubernetes Secret, Rook will look in the secret provided at the `cert` key name. The value of the `cert` key must be in the format expected by the [RGW service](https://docs.ceph.com/docs/master/install/ceph-deploy/install-ceph-gateway/#using-ssl-with-civetweb): \"The server key, server certificate, and any other CA or intermediate certificates be supplied in one file. Each of these items must be in pem form.\" If the certificate is not specified, SSL will not be configured. They are scenarios where the certificate DNS is set for a particular domain that does not include the local Kubernetes DNS, namely the object store DNS service endpoint. If adding the service DNS name to the certificate is not empty another key can be specified in the secret's data: `insecureSkipVerify: true` to skip the certificate verification. It is not recommended to enable this option since TLS is susceptible to machine-in-the-middle attacks unless custom verification is used. `port`: The service port where the RGW service will be listening (http) `securePort`: The service port where the RGW service will be listening (https) `instances`: The number of RGW pods that will be started for this object store (ignored if allNodes=true) `allNodes`: Whether all nodes in the cluster should run RGW as a daemonset `placement`: The rgw pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the" }, { "data": "The RGW service can be configured to listen on both http and https by specifying both `port` and `securePort`. ```yaml gateway: sslCertificateRef: my-ssl-cert-secret port: 80 securePort: 443 instances: 1 allNodes: false ``` By default, the object store will be created independently from any other object stores and replication to another object store will not be configured. This done by creating a new realm, zone group, and zone all with the name of the new object store. The zone group and zone are tagged as the `master`. If this is the first object store in the cluster, the realm, zone group, and zone will also be marked as the default. By implementing on the independent realms, zone groups, and zones, Rook supports multiple objects stores in a cluster. The set of users with access to the object store, the metadata, and the data are isolated from other object stores. If desired to configure the object store to replicate from another cluster or zone, the following settings would be specified on a new object store that is not the master. (This feature is not yet implemented.) 
`realm`: If specified, the new zone will be created in the existing realm with that name `group`: If specified, the new zone will be created in the existing zone group with that name `master`: If specified, settings indicate the RGW endpoint where this object store will need to connect to the master zone in order to initialize the replication. The Rook operator will execute `pull` commands for the realm and zone group as necessary. ```yaml zone: realm: myrealm group: mygroup master: url: https://my-master-zone-gateway:443/ accessKey: my-master-zone-access-key secret: my-master-zone-secret ``` Failing over the master could be handled by updating the affected object store CRDs, although more design is needed here. See the ceph docs on the concepts around zones and replicating between zones. For reference, a diagram of two zones working across different cluster can be found on page 5 of . For reference, here is a description of the underlying Ceph data model. ``` A cluster has one or more realms A realm spans one or more clusters A realm has one or more zone groups A realm has one master zone group A realm defined in another cluster is replicated with the pull command A zone group has one or more zones A zone group has one master zone A zone group spans one or more clusters A zone group defined in another cluster is replicated with the pull command A zone group defines a namespace for object IDs unique across its zones Zone group metadata is replicated to other zone groups in the realm A zone belongs to one cluster A zone has a set of pools that store the user and object metadata and object data Zone data and metadata is replicated to other zones in the zone group ```" } ]
{ "category": "Runtime", "file_name": "object-bucket.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Applications can be started in interactive mode and later attached via `rkt attach`. In order for an application to be attachable: it must be started in interactive mode it must be running as part of a running pod it must support the corresponding attach mode To start an application in interactive mode, either `tty` or `stream` must be passed as value for the `--stdin`, `--stdout` and `--stderr` options. An application can be run with a dedicated terminal and later attached to: ``` ``` ``` / # hostname rkt-911afe8e-992f-4089-8666-4a4c957a1964 / # tty /rkt/iottymux/alpine-sh/pts ^C ``` In a similar way, an application can be run without a tty but with separated attachable streams: ``` ``` ``` hostname rkt-846c35db-6728-471a-ad50-66d3a8d7ff9c tty not a tty ^C ``` If a pod contains multiple applications, the one to be used as attach target can be specified via `--app`. The following options are allowed as `--mode` values: `list`: list available endpoints, and return early without attaching `auto`: attach to all available endpoints `tty`: bi-directionally attach to the application terminal `tty-in` or `tty-out`: uni-directionally attach to the application terminal `stdin,stdout,stderr`: attach to specific application streams. Omitted streams will no be attached A more complex example, showing the usage of advanced options and piping: ``` ``` ``` stdin stdout stderr rkt-846c35db-6728-471a-ad50-66d3a8d7ff9c /bin/sh: fakecmd: not found ^C rkt-846c35db-6728-471a-ad50-66d3a8d7ff9c ^C /bin/sh: fakecmd: not found ^C ``` | Flag | Default | Options | Description | | | | | | | `--app` | `` | Name of an application | Name of the app to attach to within the specified pod | | `--mode` | `auto` | \"list\", \"auto\" or tty/stream mode | Attaching mode | See the table with ." } ]
{ "category": "Runtime", "file_name": "attach.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "Statedump is a file generated by glusterfs process with different data structure state which may contain the active inodes, fds, mempools, iobufs, memory allocation stats of different types of datastructures per xlator etc. We can find the directory where statedump files are created using `gluster --print-statedumpdir` command. Create that directory if not already present based on the type of installation. Lets call this directory `statedump-directory`. We can generate statedump using `kill -USR1 <pid-of-gluster-process>`. gluster-process is nothing but glusterd/glusterfs/glusterfsd process. There are also commands to generate statedumps for brick processes/nfs server/quotad For bricks: ``` gluster volume statedump <volname> ``` For nfs server: ``` gluster volume statedump <volname> nfs ``` For quotad: ``` gluster volume statedump <volname> quotad ``` For brick-processes files will be created in `statedump-directory` with name of the file as `hyphenated-brick-path.<pid>.dump.timestamp`. For all other processes it will be `glusterdump.<pid>.dump.timestamp`. For applications using libgfapi, `SIGUSR1` cannot be used, eg: smbd/libvirtd processes could have used the `SIGUSR1` signal already for other purposes. To generate statedump for the processes, using libgfapi, below command can be executed from one of the nodes in the gluster cluster to which the libgfapi application is connected to. ``` gluster volume statedump <volname> client <hostname>:<process id> ``` The statedumps can be found in the `statedump-directory`, the name of the statedumps being `glusterdump.<pid>.dump.timestamp`. For a process there can be multiple such files created depending on the number of times the volume is accessed by the process (related to the number of `glfs_init()` calls). We shall see snippets of each type of statedump. First and last lines of the file have starting and ending time of writing the statedump file. Times will be in UTC timezone. mallinfo return status is printed in the following format. Please read man mallinfo for more information about what each field means. ``` [mallinfo] mallinfo_arena=100020224 / Non-mmapped space allocated (bytes) / mallinfo_ordblks=69467 / Number of free chunks / mallinfo_smblks=449 / Number of free fastbin blocks / mallinfo_hblks=13 / Number of mmapped regions / mallinfo_hblkhd=20144128 / Space allocated in mmapped regions (bytes) / mallinfo_usmblks=0 / Maximum total allocated space (bytes) / mallinfo_fsmblks=39264 / Space in freed fastbin blocks (bytes) / mallinfo_uordblks=96710112 / Total allocated space (bytes) / mallinfo_fordblks=3310112 / Total free space (bytes) / mallinfo_keepcost=133712 / Top-most, releasable space (bytes) / ``` For every xlator data structure memory per translator loaded in the call-graph is displayed in the following format: For xlator with name: glusterfs ``` [global.glusterfs - Memory usage] #[global.xlator-name - Memory usage] num_types=119 #It shows the number of data types it is using ``` Now for each data-type it prints the memory usage. ``` [global.glusterfs - usage-type gfcommonmtgftimer_t memusage] size=112 #numallocs times the sizeof(data-type) i.e. numallocs * sizeof (data-type) num_allocs=2 #Number of allocations of the data-type which are active at the time of taking statedump. maxsize=168 #maxnumallocs times the sizeof(data-type) i.e. maxnum_allocs * sizeof (data-type) maxnumallocs=3 #Maximum number of active allocations at any point in the life of the process. 
total_allocs=7 #Number of times this data is allocated in the life of the process. ``` Mempools are optimization to reduce the number of allocations of a data type. If we create a mem-pool of lets say 1024 elements for a data-type, new elements will be allocated from heap using syscalls like calloc, only if all the 1024 elements in the pool are in active use. Memory pool allocated by each xlator is displayed in the following format: ``` [mempool] #Section name --=-- pool-name=fuse:fd_t #pool-name=<xlator-name>:<data-type> hot-count=1 #number of mempool elements that are in active use. i.e. for this pool it is the number of 'fd_t' s in active use." }, { "data": "#number of mempool elements that are not in use. If a new allocation is required it will be served from here until all the elements in the pool are in use i.e. cold-count becomes 0. padded_sizeof=108 #Each mempool element is padded with a doubly-linked-list + ptr of mempool + is-in-use info to operate the pool of elements, this size is the element-size after padding pool-misses=0 #Number of times the element had to be allocated from heap because all elements from the pool are in active use. alloc-count=314 #Number of times this type of data is allocated through out the life of this process. This may include pool-misses as well. max-alloc=3 #Maximum number of elements from the pool in active use at any point in the life of the process. This does not include pool-misses. cur-stdalloc=0 #Denotes the number of allocations made from heap once cold-count reaches 0, that are yet to be released via mem_put(). max-stdalloc=0 #Maximum number of allocations from heap that are in active use at any point in the life of the process. ``` ``` [iobuf.global] iobuf_pool=0x1f0d970 #The memory pool for iobufs iobufpool.defaultpage_size=131072 #The default size of iobuf (if no iobuf size is specified the default size is allocated) iobufpool.arenasize=12976128 # The initial size of the iobuf pool (doesn't include the stdalloc'd memory or the newly added arenas) iobufpool.arenacnt=8 #Total number of arenas in the pool iobufpool.requestmisses=0 #The number of iobufs that were stdalloc'd (as they exceeded the default max page size provided by iobuf_pool). ``` There are 3 lists of arenas: Arena list: arenas allocated during iobuf pool creation and the arenas that are in use(active_cnt != 0) will be part of this list. Purge list: arenas that can be purged(no active iobufs, active_cnt == 0). Filled list: arenas without free iobufs. ``` [purge.1] #purge.<S.No.> purge.1.mem_base=0x7fc47b35f000 #The address of the arena structure purge.1.active_cnt=0 #The number of iobufs active in that arena purge.1.passive_cnt=1024 #The number of unused iobufs in the arena purge.1.alloc_cnt=22853 #Total allocs in this pool(number of times the iobuf was allocated from this arena) purge.1.max_active=7 #Max active iobufs from this arena, at any point in the life of this process. purge.1.page_size=128 #Size of all the iobufs in this arena. [arena.5] #arena.<S.No.> arena.5.mem_base=0x7fc47af1f000 arena.5.active_cnt=0 arena.5.passive_cnt=64 arena.5.alloc_cnt=0 arena.5.max_active=0 arena.5.page_size=32768 ``` If the active_cnt of any arena is non zero, then the statedump will also have the iobuf list. 
``` [arena.6.activeiobuf.1] #arena.<S.No>.activeiobuf.<iobuf.S.No.> arena.6.active_iobuf.1.ref=1 #refcount of the iobuf arena.6.active_iobuf.1.ptr=0x7fdb921a9000 #address of the iobuf [arena.6.active_iobuf.2] arena.6.active_iobuf.2.ref=1 arena.6.active_iobuf.2.ptr=0x7fdb92189000 ``` At any given point in time if there are lots of filled arenas then that could be a sign of iobuf leaks. All the fops received by gluster are handled using call-stacks. Call stack contains the information about uid/gid/pid etc of the process that is executing the fop. Each call-stack contains different call-frames per xlator which handles that fop. ``` [global.callpool.stack.3] #global.callpool.stack.<Serial-Number> stack=0x7fc47a44bbe0 #Stack address uid=0 #Uid of the process which is executing the fop gid=0 #Gid of the process which is executing the fop pid=6223 #Pid of the process which is executing the fop unique=2778 #Xlators like afr do copy_frame and perform the operation in a different stack, this id is useful to find out the stacks that are inter-related because of copy-frame lk-owner=0000000000000000 #Some of the fuse fops have lk-owner. op=LOOKUP #Fop type=1 #Type of the op i.e. FOP/MGMT-OP cnt=9 #Number of frames in this stack. ``` Each frame will have information about which xlator the frame belongs to, what is the function it wound to/from and will be unwind to. It also mentions if the unwind happened or" }, { "data": "If we observe hangs in the system and want to find out which xlator is causing it. Take a statedump and see what is the final xlator which is yet to be unwound. ``` [global.callpool.stack.3.frame.2]#global.callpool.stack.<stack-serial-number>.frame.<frame-serial-number> frame=0x7fc47a611dbc #Frame address ref_count=0 #Incremented at the time of wind and decremented at the time of unwind. translator=r2-client-1 #Xlator this frame belongs to complete=0 #if this value is 1 that means this frame is already unwound. 0 if it is yet to unwind. parent=r2-replicate-0 #Parent xlator of this frame windfrom=afrlookup #Parent xlator function from which the wind happened wind_to=priv->children[i]->fops->lookup unwindto=afrlookup_cbk #Parent xlator function to which unwind happened ``` Fuse maintains history of operations that happened in fuse. ``` [xlator.mount.fuse.history] TIME=2014-07-09 16:44:57.523364 message=[0] fuse_release: RELEASE(): 4590:, fd: 0x1fef0d8, gfid: 3afb4968-5100-478d-91e9-76264e634c9f TIME=2014-07-09 16:44:57.523373 message=[0] sendfuseerr: Sending Success for operation 18 on inode 3afb4968-5100-478d-91e9-76264e634c9f TIME=2014-07-09 16:44:57.523394 message=[0] fusegetattrresume: 4591, STAT, path: (/iozone.tmp), gfid: (3afb4968-5100-478d-91e9-76264e634c9f) ``` ``` [cluster/replicate.r2-replicate-0] #Xlator type, name information child_count=2 #Number of children to the xlator child_up[0]=1 pending_key[0]=trusted.afr.r2-client-0 child_up[1]=1 pending_key[1]=trusted.afr.r2-client-1 dataselfheal=on metadataselfheal=1 entryselfheal=1 datachangelog=1 metadatachangelog=1 entry-change_log=1 read_child=1 favorite_child=-1 wait_count=1 ``` ``` [active graph - 1] conn.1.bound_xl./data/brick01a/homegfs.hashsize=14057 conn.1.bound_xl./data/brick01a/homegfs.name=/data/brick01a/homegfs/inode conn.1.boundxl./data/brick01a/homegfs.lrulimit=16384 #Least recently used size limit conn.1.boundxl./data/brick01a/homegfs.activesize=690 #Number of inodes undergoing some kind of fop to be precise on which there is at least one ref. 
conn.1.boundxl./data/brick01a/homegfs.lrusize=183 #Number of inodes present in lru list conn.1.boundxl./data/brick01a/homegfs.purgesize=0 #Number of inodes present in purge list ``` ``` [conn.1.bound_xl./data/brick01a/homegfs.active.324] #324th inode in active inode list gfid=e6d337cf-97eb-44b3-9492-379ba3f6ad42 #Gfid of the inode nlookup=13 #Number of times lookups happened from the client or from fuse kernel fd-count=4 #Number of fds opened on the inode ref=11 #Number of refs taken on the inode ia_type=1 #Type of the inode. This should be changed to some string :-( Ref by xl:.patchy-md-cache=11 #Further this there will be a list of xlators, and the ref count taken by each of them on this inode at the time of statedump [conn.1.bound_xl./data/brick01a/homegfs.lru.1] #1st inode in lru list. Note that ref count is zero for these inodes. gfid=5114574e-69bc-412b-9e52-f13ff087c6fc nlookup=5 fd-count=0 ref=0 ia_type=2 Ref by xl:.fuse=1 Ref by xl:.patchy-client-0=-1 ``` For each inode per xlator some context could be stored. This context can also be printed in the statedump. Here is the inode ctx of locks xlator ``` [xlator.features.locks.homegfs-locks.inode] path=/homegfs/users/dfrobins/gfstest/r4/SCRATCH/fort.5102 - path of the file mandatory=0 inodelk-count=5 #Number of inode locks lock-dump.domain.domain=homegfs-replicate-0:self-heal #Domain name where self-heals take locks to prevent more than one heal on the same file inodelk.inodelk=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=080b1ada117f0000, client=0xb7fc30, connection-id=compute-30-029.com-3505-2014/06/29-14:46:12:477358-homegfs-client-0-0-1, granted at Sun Jun 29 11:01:00 2014 #Active lock information inodelk.inodelk=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=c0cb091a277f0000, client=0xad4f10, connection-id=gfs01a.com-4080-2014/06/29-14:41:36:917768-homegfs-client-0-0-0, blocked at Sun Jun 29 11:04:44 2014 #Blocked lock information lock-dump.domain.domain=homegfs-replicate-0:metadata #Domain name where metadata operations take locks to maintain replication consistency lock-dump.domain.domain=homegfs-replicate-0 #Domain name where entry/data operations take locks to maintain replication consistency inodelk.inodelk=type=WRITE, whence=0, start=11141120, len=131072, pid = 18446744073709551615, owner=080b1ada117f0000, client=0xb7fc30, connection-id=compute-30-029.com-3505-2014/06/29-14:46:12:477358-homegfs-client-0-0-1, granted at Sun Jun 29 11:10:36 2014 #Active lock information ``` is one of the bugs which was debugged using statedump to see which data-structure is leaking. Here is the process used to find what the leak is using statedump. According to the bug the observation is that the process memory usage is increasing whenever one of the bricks is wiped in a replicate volume and a `full` self-heal is invoked to heal the contents. Statedump of the process is taken using `kill -USR1 <pid-of-gluster-self-heal-daemon>`. ``` grep -w num_allocs glusterdump.5225.dump.1405493251 num_allocs=77078 num_allocs=87070 num_allocs=117376 .... grep hot-count glusterdump.5225.dump.1405493251 hot-count=16384 hot-count=16384 hot-count=4095 .... Find the occurrences in the statedump file to figure out the tags. ``` grep of the statedump revealed too many allocations for the following data-types under replicate, gfcommonmt_asprintf gfcommonmt_char" }, { "data": "After checking afr-code for allocations with tag `gfcommonmtchar` found `data-self-heal` code path does not free one such allocated memory. 
`gfcommonmtmempool` suggests that there is a leak in pool memory. `replicate-0:dictt`, `glusterfs:datat` and `glusterfs:datapairt` pools are using lot of memory, i.e. coldcount is `0` and too many allocations. Checking source code of dict.c revealed that `key` in `dict` is allocated with `gfcommonmtchar` i.e. `2.` tag and value is created using gfasprintf which in-turn uses `gfcommonmtasprintf` i.e. `1.`. Browsing the code for leak in self-heal code paths lead to a line which over-writes a variable with new dictionary even when it was already holding a reference to another dictionary. After fixing these leaks, ran the same test to verify that none of the `numallocs` are increasing even after healing 10,000 files directory hierarchy in statedump of self-heal daemon. Please check this for more info about the fix. Statedump output of memory pools was used to test and verify the fixes to . On code analysis, dict_t objects were found to be leaking (in terms of not being unref'd enough number of times, during name self-heal. The test involved creating 100 files on plain replicate volume, removing them from one of the brick's backend, and then triggering lookup on them from the mount point. Statedump of the mount process was taken before executing the test case and after it, after compiling glusterfs with -DDEBUG flags (to have cold count set to 0 by default). Statedump output of the fuse mount process before the test case was executed: ``` pool-name=glusterfs:dict_t hot-count=0 cold-count=0 padded_sizeof=140 alloc-count=33 max-alloc=0 pool-misses=33 cur-stdalloc=14 max-stdalloc=18 ``` Statedump output of the fuse mount process after the test case was executed: ``` pool-name=glusterfs:dict_t hot-count=0 cold-count=0 padded_sizeof=140 alloc-count=2841 max-alloc=0 pool-misses=2841 cur-stdalloc=214 max-stdalloc=220 ``` Here, with cold count being 0 by default, `cur-stdalloc` indicated the number of `dictt` objects that were allocated in heap using `memget()`, and yet to be freed using `memput()` (refer to this for more details on how mempool works). After the test case (name selfheal of 100 files), there was a rise in the cur-stdalloc value (from 14 to 214) for `dictt`. After these leaks were fixed, glusterfs was again compiled with -DDEBUG flags, and the same steps were performed again and statedump was taken before and after executing the test case, of the mount. This was done to ascertain the validity of the fix. And the following are the results: Statedump output of the fuse mount process before executing the test case: ``` pool-name=glusterfs:dict_t hot-count=0 cold-count=0 padded_sizeof=140 alloc-count=33 max-alloc=0 pool-misses=33 cur-stdalloc=14 max-stdalloc=18 ``` Statedump output of the fuse mount process after executing the test case: ``` pool-name=glusterfs:dict_t hot-count=0 cold-count=0 padded_sizeof=140 alloc-count=2837 max-alloc=0 pool-misses=2837 cur-stdalloc=14 max-stdalloc=119 ``` The value of cur-stdalloc remained 14 before and after the test, indicating that the fix indeed does what it's supposed to do. is one of the bugs where statedump was helpful in finding where the frame was lost. Here is the process used to find where the hang is using statedump. When the hang was observed, statedumps are taken for all the processes. 
On mount's statedump the following stack is shown: ``` [global.callpool.stack.1.frame.1] ref_count=1 translator=fuse complete=0 [global.callpool.stack.1.frame.2] ref_count=0 translator=r2-client-1 complete=1 <<-- Client xlator completed the readdirp call and unwound to afr parent=r2-replicate-0 wind_from=afr_do_readdir wind_to=children[call_child]->fops->readdirp unwind_from=client3_3_readdirp_cbk unwind_to=afr_readdirp_cbk [global.callpool.stack.1.frame.3] ref_count=0 translator=r2-replicate-0 complete=0 <<- Afr xlator is not unwinding for some reason. parent=r2-dht wind_from=dht_do_readdir wind_to=xvol->fops->readdirp unwind_to=dht_readdirp_cbk [global.callpool.stack.1.frame.4] ref_count=1 translator=r2-dht complete=0 parent=r2-io-cache wind_from=ioc_readdirp wind_to=FIRST_CHILD(this)->fops->readdirp unwind_to=ioc_readdirp_cbk [global.callpool.stack.1.frame.5] ref_count=1 translator=r2-io-cache complete=0 parent=r2-quick-read wind_from=qr_readdirp wind_to=FIRST_CHILD (this)->fops->readdirp unwind_to=qr_readdirp_cbk ``` `unwind_to` shows that call was unwound to `afr_readdirp_cbk` from client xlator. Inspecting that function revealed that afr is not unwinding the stack when" }
{ "category": "Runtime", "file_name": "statedump.md", "project_name": "Gluster", "subcategory": "Cloud Native Storage" }
[ { "data": "OPA is a lightweight general-purpose policy engine that can be co-located with the MinIO server. In this document we talk about how to use the OPA HTTP API to authorize requests. It can be used with any type of credentials (STS based like OpenID or LDAP, regular IAM users or service accounts). OPA is enabled through MinIO's Access Management Plugin feature. ```sh podman run -it \\ --name opa \\ --publish 8181:8181 \\ docker.io/openpolicyagent/opa:0.40.0-rootless \\ run --server \\ --log-format=json-pretty \\ --log-level=debug \\ --set=decision_logs.console=true ``` In another terminal, create a policy that allows the root user all access and denies `PutObject` for all other users: ```sh cat > example.rego <<EOF package httpapi.authz import input default allow = false allow { input.owner == true } allow { input.action != \"s3:PutObject\" input.owner == false } EOF ``` Then load the policy via OPA's REST API. ``` curl -X PUT --data-binary @example.rego \\ localhost:8181/v1/policies/putobject ``` Set `MINIO_POLICY_PLUGIN_URL` to the endpoint that MinIO should send authorization requests to, then start the server. ```sh export MINIO_POLICY_PLUGIN_URL=http://localhost:8181/v1/data/httpapi/authz/allow export MINIO_CI_CD=1 export MINIO_ROOT_USER=minio export MINIO_ROOT_PASSWORD=minio123 minio server /mnt/data ``` Ensure that `mc` is installed and configured with the above server under the alias `myminio`. ```sh mc mb myminio/test mc admin user add myminio foo foobar123 mc cp /etc/issue myminio/test/ export MC_HOST_foo=http://foo:foobar123@localhost:9000 mc ls foo/test mc cat foo/test/issue mc cp /etc/issue myminio/test/issue2 ```" }
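To double-check why a particular request is allowed or denied, you can also query OPA's decision API directly with a sample input document. This step is an optional illustration and is not part of the MinIO setup itself; the input fields below simply mirror the ones referenced in example.rego, and the real input MinIO sends will generally contain more attributes.
```sh
# A non-owner attempting PutObject should be denied (result: false).
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"input": {"owner": false, "action": "s3:PutObject"}}' \
  localhost:8181/v1/data/httpapi/authz/allow
# The same principal performing GetObject should be allowed (result: true).
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"input": {"owner": false, "action": "s3:GetObject"}}' \
  localhost:8181/v1/data/httpapi/authz/allow
```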
{ "category": "Runtime", "file_name": "opa.md", "project_name": "MinIO", "subcategory": "Cloud Native Storage" }
[ { "data": "| json type \\ dest type | bool | int | uint | float | string | | --- | --- | --- | --- | --- | --- | | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 => 110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|original json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|" }
{ "category": "Runtime", "file_name": "fuzzy_mode_convert_table.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }