content (list, lengths 1–171) | tag (dict) |
---|---|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List all endpoints ``` cilium-dbg endpoint list [flags] ``` ``` -h, --help help for list --no-headers Do not print headers -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage endpoints"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_endpoint_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
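A quick usage sketch for the command documented above; the flags come from the reference itself, but the invocations are illustrative and not part of the autogenerated page.

```console
# List all endpoints in the default table layout
cilium-dbg endpoint list

# Same listing as JSON, and a variant without the header row
cilium-dbg endpoint list -o json
cilium-dbg endpoint list --no-headers
```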
[
{
"data": "title: CephObjectStoreUser CRD Rook allows creation and customization of object store users through the custom resource definitions (CRDs). The following settings are available for Ceph object store users. ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: my-user namespace: rook-ceph spec: store: my-store displayName: my-display-name quotas: maxBuckets: 100 maxSize: 10G maxObjects: 10000 capabilities: user: \"*\" bucket: \"*\" ``` `name`: The name of the object store user to create, which will be reflected in the secret and other resource names. `namespace`: The namespace of the Rook cluster where the object store user is created. `store`: The object store in which the user will be created. This matches the name of the objectstore CRD. `displayName`: The display name which will be passed to the `radosgw-admin user create` command. `clusterNamespace`: The namespace where the parent CephCluster and CephObjectStore are found. If not specified, the user must be in the same namespace as the cluster and object store. To enable this feature, the CephObjectStore allowUsersInNamespaces must include the namespace of this user. `quotas`: This represents quota limitation can be set on the user. Please refer for details. `maxBuckets`: The maximum bucket limit for the user. `maxSize`: Maximum size limit of all objects across all the user's buckets. `maxObjects`: Maximum number of objects across all the user's buckets. `capabilities`: Ceph allows users to be given additional permissions. Due to missing APIs in go-ceph for updating the user capabilities, this setting can currently only be used during the creation of the object store user. If a user's capabilities need modified, the user must be deleted and re-created. See the for more info. Rook supports adding `read`, `write`, `read, write`, or `*` permissions for the following resources: `user` `buckets` `usage` `metadata` `zone` `roles` `info` `amz-cache` `bilog` `mdlog` `datalog` `user-policy` `odic-provider` `ratelimit`"
}
] |
{
"category": "Runtime",
"file_name": "ceph-object-store-user-crd.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
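A minimal sketch of using the CRD above, assuming the example manifest is saved as `object-user.yaml` and that the generated credentials follow the usual `rook-ceph-object-user-<store>-<user>` secret naming convention; verify the secret name and keys in your cluster.

```console
# Create the object store user from the example manifest
kubectl create -f object-user.yaml

# Read the generated S3 credentials (secret name and data keys are assumptions based on Rook conventions)
kubectl -n rook-ceph get secret rook-ceph-object-user-my-store-my-user \
  -o jsonpath='{.data.AccessKey}' | base64 --decode
```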
[
{
"data": "Faster RBD/CephFS RWO recovery in case of node loss. For RBD RWO recovery: When a node is lost where a pod is running with the RBD RWO volume is mounted, the volume cannot automatically be mounted on another node. If two clients are write to the same volume it could cause corruption. The node must be guaranteed to be down before the volume can be mounted on another node. For CephFS recovery: With the current design the node recovery will be faster for CephFS. For RBD RWO recovery: We have a manual solution to the problem which involves forceful deletion of a pod so that forced detachment and attachment work is possible. The problem with the current solution is that even after the forced pod deletion it takes around 11 minutes for the volume to mount on the new node. Also there are still chances of data corruption if the old pod on the lost node comes back online, causing multiple writers and lead to data corruption if the is not followed to manually block nodes. For CephFS recovery: Currently, CephFS recovery is slower in case of node loss. Note: This solution requires minimum kubernetes version 1.26.0 The kubernetes feature is available starting in Kubernetes 1.26 to help improve the volume recovery during node loss. When a node is lost, the admin is required to add the taint `out-of-service` manually to the node. After the node is tainted, Kubernetes will: Remove the volume attachment from the lost node Delete the old pod on the lost node Create a new pod on the new node Allow the volume to be attached to the new node Once this taint is applied manually, Rook will create a . The will then blocklist the node to prevent any ceph rbd/CephFS client on the lost node from writing any more data. After the new pod is running on the new node and the old node which was lost comes back, Rook will delete the . example of taint to be applied to lost node: ```console kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute kubectl taint nodes <node-name>"
},
{
"data": "``` Note: This will be enabled by default in Rook if the NetworkFence CR is found, in the case for some reason user wants to disable this feature in Rook can edit the `rook-ceph-operator-config` configmap and update the `ROOKWATCHFORNODEFAILURE: \"false\"`. There are multiple networking options available for example, Host Networking, Pod networking, Multus etc. This make it difficult to know which NodeIP address to blocklist. For this we'll follow the following approach which will work for all networking options, except when connected to an external Ceph cluster. Get the `volumesInUse` from the node which has the taint `out-of-service`. List all the pv and compare the pv `spec.volumeHandle` with the node `volumesInUse` field `volumeHandle` Example: Below is sample Node volumeInUse field ``` volumesInUse: kubernetes.io/csi/rook-ceph.rbd.csi.ceph.com^0001-0009-Rook-ceph-0000000000000002-24862838-240d-4215-9183-abfc0e9e4002 ``` and the following is pv `volumeInHandle` ``` volumeHandle: 0001-0009-rook-ceph-0000000000000002-24862838-240d-4215-9183-abfc0e9e4002 ``` For Ceph volumes on that node: If RBD PVC makes use of the rbd status API example: ```console $ rbd status <poolname>/<image_name> Watchers: watcher=172.21.12.201:0/4225036114 client.17881 cookie=18446462598732840961 ``` If CephFS PVC uses below CLI to clients connect to subvolume example: ```console $ ceph tell mds.* client ls ... ... ... \"addr\": { \"type\": \"v1\", \"addr\": \"192.168.39.214:0\", \"nonce\": 1301050887 } ... ... ... ``` Get IPs from step 3 (in above example `172.21.12.201`) blocklist the IP where the volumes are mounted. Suggested change Example of a NetworkFence CR that the Rook operator would create when a `node.kubernetes.io/out-of-service` taint is added on the node: ```yaml apiVersion: csiaddons.openshift.io/v1alpha1 kind: NetworkFence metadata: name: <name> # We will keep the name the same as the node name namespace: <ceph-cluster-namespace> spec: driver: <driver-name> # extract the driver name from the PV object fenceState: <fence-state> # For us it will be `Fenced` cidrs: 172.21.12.201 secret: name: <csi-rbd-provisioner-secret-name/csi-cephfs-provisioner-secret-name> # from pv object namespace: <ceph-cluster-namespace> parameters: clusterID: <clusterID> # from pv.spec.csi.volumeAttributes ``` Once the node is back online, the admin removes the taint. Remove the taint ```console kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoExecute- kubectl taint nodes <node-name> node.kubernetes.io/out-of-service=nodeshutdown:NoSchedule- ``` Rook will detect the taint is removed from the node, and immediately unfence the node by deleting the corresponding networkFence CR. Rook will not automate tainting the node when they go offline. This is a decision the admin needs to make. But Rook will consider creating a sample script to watch for unavailable nodes and automatically taint the node based on how long node is offline. The admin can choose to enable the automated taints by running this example script. How to handle MultiCluster scenarios where two different node on different clusters have the same overlapping IP's."
}
] |
{
"category": "Runtime",
"file_name": "node-loss-rbd-cephfs.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
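A minimal sketch (not part of the design doc) of the `volumesInUse` / `volumeHandle` comparison described above, assuming `kubectl` access to the cluster:

```console
# Volumes still reported in use by the tainted node
kubectl get node <node-name> -o jsonpath='{.status.volumesInUse}'

# PV names with their CSI volume handles, for comparison against the output above
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeHandle}{"\n"}{end}'
```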
[
{
"data": "Cobra can generate PowerShell completion scripts. Users need PowerShell version 5.0 or above, which comes with Windows 10 and can be downloaded separately for Windows 7 or 8.1. They can then write the completions to a file and source this file from their PowerShell profile, which is referenced by the `$Profile` environment variable. See `Get-Help about_Profiles` for more info about PowerShell profiles. Completion for subcommands using their `.Short` description Completion for non-hidden flags using their `.Name` and `.Shorthand` Command aliases Required, filename or custom flags (they will work like normal flags) Custom completion scripts"
}
] |
{
"category": "Runtime",
"file_name": "powershell_completions.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
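A hedged sketch of writing the completion script to a file and sourcing it from the profile, as described above; `yourprogram` is a placeholder, and the `completion powershell` subcommand assumes a Cobra version that generates it automatically.

```console
PS> yourprogram completion powershell > yourprogram.ps1
PS> Add-Content $PROFILE ". $PWD\yourprogram.ps1"
```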
[
{
"data": "(cluster-config-networks)= All members of a cluster must have identical networks defined. The only configuration keys that may differ between networks on different members are , , and . See {ref}`clustering-member-config` for more information. Creating additional networks is a two-step process: Define and configure the new network across all cluster members. For example, for a cluster that has three members: incus network create --target server1 my-network incus network create --target server2 my-network incus network create --target server3 my-network ```{note} You can pass only the member-specific configuration keys `bridge.external_interfaces`, `parent`, `bgp.ipv4.nexthop` and `bgp.ipv6.nexthop`. Passing other configuration keys results in an error. ``` These commands define the network, but they don't create it. If you run , you can see that the network is marked as \"pending\". Run the following command to instantiate the network on all cluster members: incus network create my-network ```{note} You can add configuration keys that are not member-specific to this command. ``` If you missed a cluster member when defining the network, or if a cluster member is down, you get an error. Also see {ref}`network-create-cluster`. (cluster-https-address)= You can configure different networks for the REST API endpoint of your clients and for internal traffic between the members of your cluster. This separation can be useful, for example, to use a virtual address for your REST API, with DNS round robin. To do so, you must specify different addresses for {config:option}`server-cluster:cluster.httpsaddress` (the address for internal cluster traffic) and {config:option}`server-core:core.httpsaddress` (the address for the REST API): Create your cluster as usual, and make sure to use the address that you want to use for internal cluster traffic as the cluster address. This address is set as the `cluster.https_address` configuration. After joining your members, set the `core.https_address` configuration to the address for the REST API. For example: incus config set core.https_address 0.0.0.0:8443 ```{note} `core.https_address` is specific to the cluster member, so you can use different addresses on different members. You can also use a wildcard address to make the member listen on multiple interfaces. ```"
}
] |
{
"category": "Runtime",
"file_name": "cluster_config_networks.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
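An illustrative sketch of the intermediate state after the per-member definitions described above; the listing output is approximate, not copied from Incus.

```console
# After defining the network on each member but before the final create,
# the network shows up as pending (output trimmed and approximate):
incus network list
# | my-network | bridge | ... | PENDING |
```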
[
{
"data": "To create a managed network, use the command and its subcommands. Append `--help` to any command to see more information about its usage and available flags. (network-types)= The following network types are available: ```{list-table} :header-rows: 1 - Network type Documentation Configuration options - `bridge` {ref}`network-bridge` {ref}`network-bridge-options` - `ovn` {ref}`network-ovn` {ref}`network-ovn-options` - `macvlan` {ref}`network-macvlan` {ref}`network-macvlan-options` - `sriov` {ref}`network-sriov` {ref}`network-sriov-options` - `physical` {ref}`network-physical` {ref}`network-physical-options` ``` Use the following command to create a network: ```bash incus network create <name> --type=<networktype> [configurationoptions...] ``` See {ref}`network-types` for a list of available network types and links to their configuration options. If you do not specify a `--type` argument, the default type of `bridge` is used. (network-create-cluster)= If you are running an Incus cluster and want to create a network, you must create the network for each cluster member separately. The reason for this is that the network configuration, for example, the name of the parent network interface, might be different between cluster members. Therefore, you must first create a pending network on each member with the `--target=<cluster_member>` flag and the appropriate configuration for the member. Make sure to use the same network name for all members. Then create the network without specifying the `--target` flag to actually set it up. For example, the following series of commands sets up a physical network with the name `UPLINK` on three cluster members: ```{terminal} :input: incus network create UPLINK --type=physical parent=br0 --target=vm01 Network UPLINK pending on member vm01 :input: incus network create UPLINK --type=physical parent=br0 --target=vm02 Network UPLINK pending on member vm02 :input: incus network create UPLINK --type=physical parent=br0 --target=vm03 Network UPLINK pending on member vm03 :input: incus network create UPLINK --type=physical Network UPLINK created ``` Also see {ref}`cluster-config-networks`. (network-attach)= After creating a managed network, you can attach it to an instance as a {ref}`NIC device <devices-nic>`. To do so, use the following command: incus network attach <networkname> <instancename> [<devicename>] [<interfacename>] The device name and the interface name are optional, but we recommend specifying at least the device name. If not specified, Incus uses the network name as the device name, which might be confusing and cause problems. For example, Incus images perform IP auto-configuration on the `eth0` interface, which does not work if the interface is called differently. For example, to attach the network `my-network` to the instance `my-instance` as `eth0` device, enter the following command: incus network attach my-network my-instance eth0 The command is a shortcut for adding a NIC device to an instance. Alternatively, you can add a NIC device based on the network configuration in the usual way: incus config device add <instancename> <devicename> nic network=<network_name> When using this way, you can add further configuration to the command to override the default settings for the network if needed. See {ref}`NIC device <devices-nic>` for all available device options."
}
] |
{
"category": "Runtime",
"file_name": "network_create.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
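A hedged example of the device-based alternative mentioned above with an extra option; `ipv4.address` is a bridged-NIC device option and assumes `my-network` is a bridge network.

```console
# Attach my-network to my-instance as eth0 while overriding the assigned address
incus config device add my-instance eth0 nic network=my-network ipv4.address=10.0.0.20
```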
[
{
"data": "release v1.12.15 fix lsp not updating addresses (#4011) bump gosec to 2.19.0 fix: close file (#4007) fix node gc (#3992) bump go to 1.22.3 (#3989) build(deps): bump google.golang.org/protobuf from 1.34.0 to 1.34.1 (#3981) build(deps): bump golang.org/x/sys from 0.19.0 to 0.20.0 (#3980) prepare for next release dependabot[bot] guangwu zhangzujian * release v1.12.14 ignore CVEs in CNI plugins ipam: fix IPRangeList clone (#3979) remove unused e2e test cases (#3968) chart: fix kubeVersion to allow for patterns that match sub versions (#3975) prepare for next release Joachim Hill-Grannec zhangzujian * release v1.12.13 bump k8s to v1.27.13 (#3963) fix subnet check ip in using to avoid ipam init (#3964) add patch permission for cni clusterrole fix index out of range (#3958) fix nil pointer dereference (#3951) fix: lower camel case (#3942) prepare for next release Zhao Congqi bobz965 * release v1.12.12 drop both IPv4 and IPv6 traffic in networkpolicy drop acl (#3940) update net package version (#3936) add metric for subnet info (#3932) cni-server: set sysctl variables only when the env variables are passed in (#3929) ovn: check whether db file is fixed (#3928) fix backport (#3924) append filepath when compare cni config (#3923) re calculate subnet using ips while inconsistency detected (#3920) update ovn monitor (#3903) add monitor for sysctl para (#3913) refactor kubevirt vm e2e (#3914) support specifying routes when providing IPAM for other CNI plugins (#3904) Fix init sg (#3890) distinguish portSecurity with security group (#3862) fix br-external not init because of no permission after ovn-nat-gw configmap created (#3902) build(deps): bump google.golang.org/grpc from 1.63.0 to 1.63.2 (#3900) build(deps): bump github.com/osrg/gobgp/v3 from 3.24.0 to 3.25.0 (#3897) build(deps): bump golang.org/x/sys from 0.18.0 to 0.19.0 (#3896) build(deps): bump golang.org/x/mod from 0.16.0 to 0.17.0 (#3895) build(deps): bump google.golang.org/grpc from 1.62.1 to 1.63.0 (#3898) build(deps): bump github.com/Microsoft/hcsshim from 0.12.1 to 0.12.2 (#3891) build(deps): bump github.com/cenkalti/backoff/v4 from 4.2.1 to 4.3.0 (#3875) chart: fix ovs-ovn update strategy (#3887) Fix index out of range in controller security_group (#3845) fix: ipam invalid memory address or nil pointer dereference (#3889) add tracepath (#3884) prepare for next release Longchuanzheng Zhao Congqi bobz965 dependabot[bot] hzma zhangzujian * release v1.12.11 change northd probe interval to 5s (#3882) fix go fmt ovn: reduce down time during upgrading from version 21.06 (#3881) compile binaries with debug symbols for debug images (#3871) ovn: update patch for skipping ct (#3879) ci: fix memory leak reporting caused by ovn-controller crashes (#3873) Update dependencies in go.mod file prepare for next release changluyi zhangzujian * release v1.12.10 fix: when give ipv4 cidr ipv6 gateway, the gateway will extend infinitely (#3860) prepare for next release changluyi release v1.12.9 chart: fix missing ENABLE_IC (#3851) fix: cleanup.sh (#3569) exclude vip as encap ip (#3855) fix: duplicate ip deletion (#3589) iptables: reject access to invalid service port only for TCP (#3843) Fix 1.12 ipam deletion (#3554) delete lsp and ipam with ip (#3540) update protobuf module (#3841) kubectl-ko: fix subnet diagnose failure (#3808) pinger: do not setup metrics if disabled (#3806) Fix the failure to enable multi-network card traffic mirroring for newly created pods (#3805) Makefile: add target kind-install-metallb (#3795) build(deps): bump 
golang.org/x/sys from 0.17.0 to 0.18.0 (#3792) build(deps): bump golang.org/x/mod from 0.15.0 to 0.16.0 (#3793) Refactor build-go targets to use the -trimpath flag fix incorrect variable assignment (#3787) prepare for next release bobz965 changluyi dependabot[bot] hzma xieyanker zhangzujian * release v1.12.8 update release.sh prepare for next release release v1.12.7 fix sts pod's logical switch port do not update externa_id vendor and ls (#3778) update dependence update release.sh update release.sh refactor ovn clusterrole (#3755) prepare for next release changluyi hzma release"
},
{
"data": "if startOVNIC firstly, and setazName secondly, the ovn-ic-db may sync the old azname (#3759) (#3763) modify chart.yaml version add action for build base prepare for next release changluyi release v1.12.5 ci: bump github actions ci: bump azure/setup-helm to v4.0.0 (#3743) base: install libmnl0 instead of libmnl-dev (#3745) ci: collect ko logs for all kind clusters (#3744) ci: fix ovn ic log file name (#3742) ci: bump kind and node image update chart relase action workflow (#3728,#3734,#3691) (#3738) remove invalid ovs build option (#3733) dpdk: remove unnecessary ovn patch (#3736) Fix: Resolve issue with skipped execution of sg annotations (#3700) fix: all gw nodes (#3723) ovn: remove unnecessary patch (#3720) ci: fix artifact upload bump k8s to v1.27.10 (#3693) fix backport (#3697) remove unused (#3696) kube-ovn-controller: remove unused codes (#3692) remove fip controller (#3684) build(deps): bump github.com/osrg/gobgp/v3 from 3.22.0 to 3.23.0 (#3688) Compatible with controller deployment methods before kube-ovn 1.11.16 (#3677) set after genevsys6081 started (#3680) ovn: add nb option version_compatibility (#3671) Makefile: fix install/upgrade chart (#3678) ovn: do not send direct traffic between lports to conntrack (#3663) ovn-ic-ecmp refactor 1.12 (#3637) build(deps): bump google.golang.org/grpc from 1.60.1 to 1.61.0 (#3669) fix 409 (#3662) fix nil pointer (#3661) chart: fix parsing image tag when the image url contains a port (#3644) ovs: reduce cpu utilization (#3650) kube-ovn-monitor and kube-ovn-pinger export pprof path (#3657) build(deps): bump github.com/onsi/gomega from 1.30.0 to 1.31.0 (#3641) build(deps): bump actions/cache from 3 to 4 (#3643) build(deps): bump github.com/onsi/ginkgo/v2 from 2.14.0 to 2.15.0 (#3642) SYSCTLIPV4IPNOPMTU_DISC set default to 0 build(deps): bump github.com/evanphx/json-patch/v5 from 5.7.0 to 5.8.0 (#3628) chart: fix ovs-ovn upgrade (#3613) build(deps): bump github.com/emicklei/go-restful/v3 (#3619) build(deps): bump github.com/emicklei/go-restful/v3 (#3606) update policy route when subnet cidr is changed (#3587) update ipset to v7.17 (#3601) ovs: increase cpu limit to 2 cores (#3530) build(deps): bump github.com/osrg/gobgp/v3 from 3.21.0 to 3.22.0 (#3603) do not count ips in excludeIPs as available and using IPs (#3582) fix security issue (#3588) ovn0 ipv6 addr gen mode set 0 (#3579) fix: add err log (#3572) build(deps): bump google.golang.org/protobuf from 1.31.0 to 1.32.0 (#3571) Makefile: fix kwok installation (#3561) fix u2o infinity recycle do not calculate subnet.spec.excludeIPs as availableIPs (#3550) add np prefix to networkpolicy name when networkpolicy's name starts with number (#3551) fix: apply changes to the latest version (#3514) fix ovn ic not clean lsp and lrp when az name contains \"-\" (#3541) build(deps): bump golang.org/x/crypto from 0.16.0 to 0.17.0 (#3544) Revert \"ovn-central: check raft inconsistency from nb/sb logs (#3532)\" ovn-central: check raft inconsistency from nb/sb logs (#3532) fix chassis gc (#3525) prepare for next release Changlu Yi Qinghao Huang Zhao Congqi bobz965 changluyi dependabot[bot] hzma zhangzujian * set release v1.12.4 cni-server: set sysctl variable net.ipv4.ipnopmtu_disc to 1 by default (#3504) fix: duplicate gw nodes (#3500) add drop invalid rst 1.12 (#3490) delete String() function (#3488) fix: lost gc lsp in previous pr (#3493) fix: ipam clean all pod nic ip address and mac even if just delete a nic (#3453) fix: check chassis before creation (#3482) build(deps): bump 
github.com/osrg/gobgp/v3 from 3.20.0 to 3.21.0 (#3481) fix ovn eip not calculated (#3477) fix: calculate subnet before handle finalizer (#3469) schedule kube-ovn-controller on the kube-ovn-master node (#3479) delete vm's lsp and release ipam.ip (#3476) build(deps): bump github.com/onsi/ginkgo/v2 from 2.13.1 to 2.13.2 (#3475) build(deps): bump golang.org/x/time from 0.4.0 to 0.5.0 (#3463) build(deps): bump golang.org/x/sys (#3464) kube-ovn-cni: fix pinger result when timeout is reached (#3457) ovs-healthcheck: ignore error when log file does not exist (#3456) ipam: fix duplicate allocation after cidr"
},
{
"data": "(#3455) fix e2e install failed readd assigned ip addresses to ipam when subnet has been changed (#3448) base: fix missing CFLAGS -fPIC for arm64 (#3428) fix: multus network status not find dpdk interface name (#3432) bump k8s to v1.27.8 (#3425) ci: fix missing environment variables (#3430) base: fix dpdk build failure (#3426) base: fix ovn build failure (#3340) fix: lsp dhcp options set failed when subnet dhcp option is enabled (#3422) trivy: ignore CVE-2023-5528 update policy route nexthops para ci: fix dpdk jobs (#3405) ci: free disk space for all x86 jobs (#3406) ci: free disk space (#3404) fix dpdk workflow (#3384) base: fix ovn-northd/ovn-controller not creating pidfile in arm64 (#3413) support ovn ic ecmp (#3348) (#3410) fix kube-ovn-monitor probe (#3409) fix: wrong usage about DeepEqual (#3396) subnet support config mtu (#3367) fix dualStack network checkgw raise panic (#3392) feat: dpdk-22.11.1 support by kube-ovn (#3388) fix: gc delete multus ip cr and lsp setting when enable keep vm ip (#3378) add kube-ovn-controller nodeAffinity prefer not on ic gateway fix: externalID map should not include external_ids (#3385) build(deps): bump github.com/moby/sys/mountinfo from 0.6.2 to 0.7.0 build(deps): bump golang.org/x/mod from 0.13.0 to 0.14.0 (#3380) build(deps): bump golang.org/x/time from 0.3.0 to 0.4.0 (#3383) build(deps): bump golang.org/x/time from 0.3.0 to 0.4.0 (#3383) prepare for next release Changlu Yi bobz965 changluyi dependabot[bot] hzma pengbinbin1 wujixin xujunjie-cover zhangzujian * * set release for 1.12.3 kube-ovn-dpdk building need its dpdk base img (#3371) delete check for existing ip cr (#3361) fix IP residue after changing subnet of vm in some scenarios (#3370) sync acp chart (#3364) kube-ovn-controller: fix memory growth caused by unused workqueue build(deps): bump github.com/osrg/gobgp/v3 from 3.19.0 to 3.20.0 (#3362) fix access svc ip failed, when acl is on (#3350) Add Layer 2 forwarding for subnet ports again (#3300) add compact for release-1.12 (#3342) prepare for next release Tobias bobz965 changluyi dependabot[bot] hzma * set release 1.12.2 Nat reuse router port external ip (#3313) dump cpu/mem profile into file on signal SIGUSR1/SIGUSR2 (#3262) kube-ovn-controller: fix ovn ic log directory not mounted to hostpath (#3322) fix golang lint error (#3323) update go version fix build error add type assertion for ip crd (#3311) add load balancer health check (#3216) build(deps): bump google.golang.org/grpc from 1.58.3 to 1.59.0 (#3310) build(deps): bump github.com/Microsoft/hcsshim from 0.11.1 to 0.11.2 (#3309) support vpc configuration of multiple external network segments through label and crd (#3264) sync subnet to vpc while switching between custom VPC and default VPC (#3218) security: ignore kubectl cve (#3305) Don't enqueue VPC update when DeletionTimestamp is zero (#3302) Revert \"update base image to ubuntu:23.10 (#3289)\" add base rules for allowing vrrp packets (#3293) build(deps): bump google.golang.org/grpc from 1.58.2 to 1.58.3 (#3295) build(deps): bump golang.org/x/net from 0.16.0 to 0.17.0 (#3296) webhook: fix ip validation when pod is annotated with an ippool name (#3284) webhook: use dedicated port for health probe (#3285) add concurrency limiter to ovs-vsctl (#3288) update base image to ubuntu:23.10 (#3289) support custom vpc dns its deployment replicas (#3286) ovs: load kernel module ip_tables only when it exists (#3281) update directory name in charts readme (#3276) fix ovn build failure (#3275) build(deps): bump golang.org/x/sys from 
0.12.0 to 0.13.0 (#3271) build(deps): bump golang.org/x/sys from 0.12.0 to 0.13.0 (#3271) build(deps): bump github.com/prometheus/client_golang (#3266) build(deps): bump github.com/prometheus/client_golang (#3266) prepare for the next release pinger: increase packet send interval (#3259) add init container in vpc-nat-gateway statefulset for init (#3254) lrp should use chassis name instead of uuid (#3258) Tobias bobz965 dependabot[bot] hzma wenwenxiong zcq98 * * set release for v1.12.1 fix: for existing nic, no need to set the port type to internal (#3243) adjust vip prints as ip (#3248) add dpdk"
},
{
"data": "(#3151) build(deps): bump google.golang.org/grpc from 1.58.1 to 1.58.2 (#3251) build(deps): bump github.com/Microsoft/hcsshim from 0.11.0 to 0.11.1 (#3245) update kubectl to v1.28.2 fix goproxy Denial of Service vulnerability (#3240) build(deps): bump github.com/cyphar/filepath-securejoin (#3239) build(deps): bump github.com/onsi/ginkgo/v2 from 2.11.0 to 2.12.1 (#3237) build(deps): bump github.com/docker/docker (#3234) build(deps): bump github.com/evanphx/json-patch/v5 from 5.6.0 to 5.7.0 (#3235) build(deps): bump github.com/osrg/gobgp/v3 from 3.17.0 to 3.18.0 (#3238) build(deps): bump google.golang.org/grpc from 1.57.0 to 1.58.1 build(deps): bump github.com/Microsoft/hcsshim from 0.10.0 to 0.11.0 (#3228) build(deps): bump golang.org/x/sys from 0.11.0 to 0.12.0 (#3232) chart: remove subnet finalizers before subnets are deleted (#3213) kubectl-ko: add new command ovn-trace for tracing ovn lflows only (#3202) fix conflict after cherry-pick add golang lint (#3154) add special handling for the route policy of the default VPC (#3194) fix add static route to wrong table of ovn (#3195) netpol: fix duplicate default drop acl (#3197) add log to help find conflict ip owner (#3191) suuport user custom log location (#3186) enable set --ovn-northd-n-threads (#3150) Fix max unavailable (#3149) add probe (#3133) underlay: fix ip/route tranfer when the nic is managed by NetworkManager (#3184) ci: wait for terminating ovs-ovn pod to disappear (#3160) fix ovn build (#3166) chart: fix ovs-ovn upgrade (#3164) subnet: fix deleting lr policy on node deletion (#3176) ci/test: bump various versions (#3162) kubectl-ko: get ovn db leaders only on necessary (#3158) underlay: fix NetworkManager operation (#3147) Revert \"enable set --ovn-northd-n-threads\" enable set --ovn-northd-n-threads sbctl chassis operation replace with libovsdb (#3119) base: remove ovn patch for skipping ct (#3141) Enable set probe (#3145) support recreate a backup pod with full annotation (#3144) fix ovn nat not clean (#3139) ovn: do not send direct traffic between lports to conntrack (#3131) delete append externalIds process in initIPAM (#3134) add e2e test for ovn db recover (#3118) bump version number docs: updated CHANGELOG.md (#3122) bobz965 changluyi dependabot[bot] github-actions[bot] hzma * * update changelog build(deps): bump sigs.k8s.io/controller-runtime from 0.15.0 to 0.15.1 (#3120) ovn: fix corrupted database file on start (#3112) some fixes in e2e (#3116) controller: fix vpc update (#3117) increase event burst size (#3115) build(deps): bump golang.org/x/sys from 0.10.0 to 0.11.0 (#3114) ovn eip (#3107) fix u2o policy route allocate too many openflows cause oom (#3099) Fix relevant annotations are not deleted in hotnoplug nic process (#3108) ovn: delete the db file if the node with new empty db file cannot join cluster for more than 120s (#3101) get all chassis once (#3103) distinguish nat ip for central subnet with ecmp and active-standby (#3100) build(deps): bump github.com/osrg/gobgp/v3 from 3.16.0 to 3.17.0 (#3105) add log near err (#3098) iptables: reject access to invalid service port when kube-proxy works in IPVS mode (#3059) Ovn nat 1 (#3095) skip ok pod (#3090) ipam: return error for invalid ip range (#3088) some fixes in e2e (#3094) bugfix if only one port bind to the sg, then unbind the port to the sg ,it will not enforce in portgroup (#3092) fix .status.default when initializing the default vpc (#3086) fix repeate set chassis (#3083) build(deps): bump google.golang.org/grpc from 1.56.2 to 1.57.0 (#3085) fix 
go fmt fix kube-ovn-speaker log (#3081) remove FOSSA status card cni-server: fix ovn mappings for vpc nat gateway (#3075) fix kube-ovn-speaker (#3076) build(deps): bump github.com/Microsoft/hcsshim from 0.9.10 to 0.10.0 (#3079) ovn client: fix sb chassis existence check (#3072) e2e: fix switch lb rule test (#3071) bump github.com/docker/docker to v24.0.5 (#3073) iptables: add --random-fully to SNAT rules (#3066) update lint tmeout build(deps): bump github.com/onsi/gomega from 1.27.9 to 1.27.10 (#3069) bump k8s to v1.27.4 (#3063) e2e: do not import pkg/daemon (#3055) build(deps): bump github.com/onsi/gomega from 1.27.8 to"
},
{
"data": "(#3065) build(deps): bump github.com/Microsoft/hcsshim from 0.9.9 to 0.9.10 (#3061) ci: fix multus installation (#3062) add srl connectivity test (#3056) ipam: fix ippool with single dual-stack address (#3054) controller: skip VIP gc if LB not found (#3048) keep vm vip when enableKeepVmIP is true (#3053) cni: reduce memory usage (#3047) set genevsys6081 tx checksum off (#3045) fix vpc lb init (#3046) custom vpc pod support tcp http probe with tproxy method (#3024) change log (#3042) Makefile: add deepflow and kwok installation (#3036) windows: fix ovn patches (#3035) ci: pin go version to 1.20.5 (#3034) static ip in exclude-ips can be allocated normally when subnet's availableIPs is 0 (#3031) pinger: use fully qualified domain name (#3032) feat: suport kubevirt nic hotplug (#3013) fix lrp eip not clean (#3026) build(deps): bump helm/kind-action from 1.7.0 to 1.8.0 (#3029) update maintainer uninstall.sh: fix ipset name (#3028) build(deps): bump github.com/docker/docker (#3027) replace ovn legacy client with libovsdb (#3018) install.sh: fix duplicate resources apply (#3023) build(deps): bump github.com/docker/docker (#3019) build(deps): bump google.golang.org/grpc from 1.56.1 to 1.56.2 (#3020) ovn: fix cluster connections when SSL is enabled (#3001) cleanup.sh: wait for provier-networks to be deleted before deleting kube-ovn-cni (#3006) kube-ovn-controller: fix workqueue metrics (#3011) ci: fix go cache key (#3015) fix vlan subnet use logical gw can not access outside cluster node (#3007) build(deps): bump github.com/prometheus-community/pro-bing (#3016) fix vpc already delete while delete policy route (#3005) make compatible with simplicified enable-eip-snat-cm (#3009) build(deps): bump golang.org/x/sys from 0.9.0 to 0.10.0 (#3012) subnet: fix nat outgoing policy rule (#3003) build(deps): bump github.com/osrg/gobgp/v3 from 3.15.0 to 3.16.0 (#3010) fix subnet finalizer (#3004) chart: fix readOnly in volumes (#3002) libovsdb: various bug fixes (#2998) choose subnet by pod's annotation in networkpolicy (#2987) IPPool: fix missing support for CIDR (#2982) kubectl ko performance enhance (#2975) fix deleting old sb chassis for a re-added node (#2989) add e2e for new ippool feature (#2981) underlay: fix NetworkManager syncer for virtual interfaces (#2988) underlay: does not set a device managed to no if it has VLAN managed by NM (#2986) build(deps): bump google.golang.org/protobuf from 1.30.0 to 1.31.0 (#2985) support helm install hybrid_dpdk ovs-ovn (#2980) add unittest for IPAM (#2977) IPAM: fix subnet mutex not released when static IP is out of range (#2979) fix initialization check of vpc nat gateway configuration (#2978) refactor: make qos test cases parallel (#2957) IPAM: add support for ippool (#2958) build(deps): bump google.golang.org/grpc from 1.56.0 to 1.56.1 (#2974) ovn ic support dual (#2970) base: fix ovn patches (#2971) build(deps): bump github.com/onsi/ginkgo/v2 from 2.10.0 to 2.11.0 (#2968) add detail comment (#2969) add host multicast perf (#2965) cni-server: reconcile ovn0 routes periodically (#2963) uninstall.sh: flush and delete iptables chain OVN-MASQUERADE (#2961) fix e2e failed (#2960) u2o specify u2oip from v1.9 (#2934) underlay: sync NetworkManager IP config to OVS bridge (#2949) chore: USERS.md (#2955) bump k8s version to v1.27.3 (#2953) ci: fix build-base strategy (#2950) e2e: add qos policy test cases (#2924) typo (#2952) build(deps): bump google.golang.org/grpc from 1.55.0 to 1.56.0 (#2951) build(deps): bump github.com/prometheus/client_golang (#2948) Revert 
\"nm not managed only in the change provide nic name case (#2754)\" (#2944) add permision for test-server.sh (#2942) Kubectl ko diagnose perf (#2915) build(deps): bump golang.org/x/sys from 0.8.0 to 0.9.0 (#2940) controller: fix DHCP MTU when the default network mode is underlay (#2941) e2e: fix u2o case (#2931) add err log to help find conflict ip owner (#2939) support set the mtu of dhcpv4_options (#2930) modify lb-svc dnat port error (#2927) fix race condition in gateway check logs (#2928) add subnet.spec.u2oInterconnectionIP (#2921) disable ai review e2e: fix waiting deployment to be"
},
{
"data": "(#2909) make conformance with underlay pn vlan subnet has no gw (#2908) fix: natgw init check command not work (#2923) fix issue 2916 (#2917) add sync map to fix cocurrent write (#2918) cni-server: clear iptables mark before doing masquerade (#2919) build(deps): bump github.com/onsi/ginkgo/v2 from 2.9.7 to 2.10.0 (#2913) build(deps): bump github.com/onsi/gomega from 1.27.7 to 1.27.8 (#2914) For eip created without spec.V4ip this field (#2912) match outgoing interface when perform snat (#2911) libovsdb: ignore not found error when listing objects with a filter (#2900) build(deps): bump github.com/sirupsen/logrus from 1.9.2 to 1.9.3 (#2903) build(deps): bump github.com/osrg/gobgp/v3 from 3.14.0 to 3.15.0 (#2904) fix base build fix build base ci fix build base ci refactor IPAM (#2896) add e2e u2o vpc version check (#2901) kube-ovn-controller: fix subnet update (#2882) Supporting user-defined kubelet directory (#2893) ci: use latest golangci-lint underlay: do not delete patch ports created by ovn-controller (#2851) update pr-review auto build base for release branches Add natoutgoing policy rules (#2883) pin golangci-lint version skip case 'connect to NodePort service with external traffic policy set to Local from other nodes' (#2895) refactor subnet gateway (#2872) update webhook check (#2878) skip pr-review as run out openai quota skip kubectl cve build(deps): bump github.com/onsi/ginkgo/v2 from 2.9.5 to 2.9.7 (#2890) e2e: multiple external network (#2884) build(deps): bump github.com/stretchr/testify from 1.8.3 to 1.8.4 (#2885) fix vip str format (#2879) ci: fix valgrind result analysis (#2853) ovs: fix memory leak in qos (#2871) feat: vpc nat gw e2e (#2866) build(deps): bump github.com/docker/docker (#2875) fix gc nil pointer (#2858) bump k8s to v1.27.2 (#2861) add e2e test for slr (#2841) Move docs to new website (#2862) build(deps): bump gopkg.in/k8snetworkplumbingwg/multus-cni.v4 (#2860) update dependabot refactor clusterrole for kube-ovn (#2833) some fixes in CI/e2e (#2856) manage ovn bfd with libovsdb (#2812) update the volumeMounts premission (#2852) fix vip lsp not clean (#2848) U2o support custom vpc (#2831) kubectl-ko: fix trace when u2oInterconnection is enabled (#2836) ci: detect ovs/ovn memory leak (#2839) iptables: always do SNAT for access from other nodes to nodeport with external traffic policy set to Local (#2844) fix underlay access to node through ovn0 (#2842) build(deps): bump github.com/docker/docker (#2843) adapt vpc dns in master (#2822) bump go dependencies (#2820) fix MTU when subnet is using logical gateway (#2834) refactor image builds (#2818) build(deps): bump github.com/stretchr/testify from 1.8.2 to 1.8.3 (#2832) build(deps): bump github.com/onsi/gomega from 1.27.6 to 1.27.7 (#2830) vip support create arp proxy logical switch port (#2817) build(deps): bump github.com/sirupsen/logrus from 1.9.0 to 1.9.2 (#2828) build(deps): bump github.com/docker/docker (#2827) add route for service ip range when init vpc-nat-gw (#2821) do not allocate MAC address when kube-ovn is called as an IPAM plugin (#2816) Iptables nat support share eip (#2805) fix typos (#2815) fix some typos (#2814) add iperf to test group multicast (#2796) add available check for northd enpoint (#2799) manage ovn lr static route with libovsdb (#2804) add support of user-defined endpoints to SwitchLBRule (#2777) e2e: fix test container not removed (#2800) manage ovn lr policy with libovsdb (#2788) build(deps): bump github.com/docker/distribution (#2797) fix handedeletePod repeat 4 times 
(#2789) fix cleanup order (#2792) fix missing main route table for the default vpc (#2785) add ovn DVR fip e2e (#2780) build(deps): bump github.com/containernetworking/plugins (#2784) add key lock for more resources (#2781) bump cni plugins to v1.3.0 (#2786) replace util.DefaultVpc with c.config.ClusterRouter (#2782) fix static route recreation after kube-ovn-controller restarts (#2778) clean up code about static routes (#2779) Reorder cleanup step by put subnet and vpc to the last to avoid conflict (#2776) optimize kube-ovn-controller logic (#2771) use rate limiting queue with delaying for pod deletion"
},
{
"data": "(#2774) fix underlay subnet kubectl ko trace error (#2773) feat: natgw qos (#2753) build(deps): bump github.com/docker/docker (#2770) fix ip statistics in subnet status (#2769) informer: wait for cache sync before adding event handlers (#2768) build(deps): bump github.com/scylladb/go-set (#2766) support disable arp check ip conflict in vlan provider network (#2760) replace string map with string set (#2765) cni-server: wait ovs-vswitchd to be running (#2759) kubectl-ko: support trace for pod with host network (#2761) libovsdb: fix potential duplicate addresses (#2763) ci: run kube-ovn e2e for underlay (#2762) kubectl-ko: fix pod tracing in underlay (#2757) When Subnet spec.vpc is updated, the status in VPC should also be updated. (#2756) ovn-nbctl: remove unused functions (#2755) add route table option in static route for subnet (#2748) replace acl/address_set function call with ovnClient (#2648) nm not managed only in the change provide nic name case (#2754) support node local dns cache (#2733) build(deps): bump google.golang.org/grpc from 1.54.0 to 1.55.0 (#2752) build(deps): bump golang.org/x/sys from 0.7.0 to 0.8.0 (#2751) update eip qos procees, replace qosLabelEIP with natLabelEip (#2736) refresh nat gw image before using it (#2743) build(deps): bump github.com/prometheus/client_golang (#2745) Using full repo name to avoid short-name error in podman (#2746) build(deps): bump github.com/osrg/gobgp/v3 from 3.13.0 to 3.14.0 (#2738) add policy route when use old active gateway node for centralized subnet (#2722) feat: support for multiple external network (#2725) build(deps): bump github.com/docker/docker (#2732) build(deps): bump github.com/Microsoft/hcsshim from 0.9.8 to 0.9.9 (#2731) base: remove patch for fixing ofpbuf memory leak (#2715) fix recover db failed using method in (#2711) refactor: improve performance by using cache (#2713) For dualstack and ipv6 the default ipv6 range should be same with the ipv4 cidr. (#2708) feat: support dynamically changing qos for EIP (#2671) base: refactor dockerfile (#2696) kubectl-ko: add support for tracing nodes (#2697) cni-server: do not perform ipv4 conflict detection during VM live migration (#2693) fix: iptables nat gw e2e not clean sts eth0 net1 ip (#2698) Add random fully when nat (#2681) replace StrategicMergePatchType with MergePatchType (#2694) ci: fix scheduled vpc nat gateway e2e (#2692) ovn-controller: do not send GARP on localnet for Kube-OVN ports (#2690) netpol: fix enqueueing network policy after LSP creation (#2687) add tcp mem collector (#2683) fix manifest yamls (#2689) attach node name label in ip cr (#2680) adapt ippool annotation (#2678) netpol: fix packet drop casued by incorrect address set deletion (#2677) fix kubectl ko using ovn-central pod that not in a good status (#2676) add nat gw e2e (#2639) add workflows for release chart (#2672) build(deps): bump github.com/Microsoft/go-winio from 0.6.0 to 0.6.1 (#2663) remove auto update k8s and cadvisor build(deps): bump k8s.io/sample-controller from 0.26.3 to 0.26.4 (#2675) ignore k8s major and minor dependencies as they always break build. rename charts (#2667) ipam update condition refactor (#2651) fix LSP existence check (#2657) fix network policy issues (#2652) Resolve SetLoadBalancerAffinityTimeout not being effective (#2647) broadcast free arp when pod is setup (#2638) delete sync user (#2629) fix: eip qos (#2632) fix: make webhook port configurable. 
(#2631) support ovn ipsec (#2616) feat: add support for EIP QoS (#2550) libovsdb: fix race condition in OVN LB operations (#2625) fix IPAM allocation caused by incorrect pod annotations patch (#2624) ci: deploy multus in thick mode (#2628) libovsdb: use monitor_cond as the monitor method (#2627) ci: fix multus installation (#2622) ovs: fix dpif-netlink ofpbuf memory leak (#2620) Optimized tolerations code in vpc-nat-gw (#2613) replace port_group function call with ovnClient (#2608) reduce test binary size and add missing webhook build (#2610) fix: ovneip print column and finalizer (#2593) add affinity to vpc-nat-gw (#2609) ci: fix multus installation (#2604) update"
},
{
"data": "(#2600) bump go modules (#2603) build(deps): bump peter-evans/create-pull-request from 4 to 5 (#2606) build(deps): bump github.com/docker/docker (#2605) build(deps): bump golang.org/x/sys from 0.6.0 to 0.7.0 (#2607) cut invalid OVNNBDAEMON to make log more readable (#2601) unittest: fix length assertion (#2597) use copilot to generate pr content replace lb function call with ovnClient (#2598) build(deps): bump github.com/osrg/gobgp/v3 from 3.12.0 to 3.13.0 (#2596) Merge handleAddPod with handleUpdatePod. (#2563) fix log (#2586) fix: ovn snat and fip delete (#2584) underlay: get address/route before setting nm managed to no (#2592) update chart description (#2582) iptables: use the same mode with kube-proxy (#2535) ci: bump kind image to v1.26.3 (#2581) fix: invalid memory address (#2585) kubectl ko change solution to collect logs to path kubectl-ko-log (#2575) if one item is removed, do not requeue (#2578) build(deps): bump github.com/onsi/gomega from 1.27.5 to 1.27.6 (#2579) fix vpc dns when ovn-default is dualstack (#2576) move the vpc-nat generic configurations into one single ConfigMap (#2574) feat: add ovn dnat (#2565) Fix kubectl ko log loss when restart deployment or ds (#2531) add wait until (#2569) do no review dependency update build(deps): bump github.com/opencontainers/runc from 1.1.4 to 1.1.5 (#2572) move ipam.subnet.mutex to caller (#2571) build(deps): bump sigs.k8s.io/controller-runtime from 0.14.5 to 0.14.6 (#2568) fix: memory leak in IPAM caused by leftover map keys (#2566) build(deps): bump github.com/docker/docker (#2567) fix ovn-bridge-mappings deletion (#2564) fix lrp deletion after upgrade (#2548) fix gw label for vpc update field (#2562) update CRD in helm chart (#2560) fix CRD indent in install.sh (#2559) fix update snat rules not effect correctly (#2554) fix go mod list (#2556) do not set device unmanaged if NetworkManager is not running (#2549) update review bot build(deps): bump github.com/onsi/gomega from 1.27.4 to 1.27.5 (#2551) underlay: fix network manager operation (#2546) controller: fix apiserver connection timeout on startup (#2545) fix update fip rules not effect correctly (#2540) fix lsp deletion failure when external-ids:ls is empty (#2544) sync parameters to charts from install script (#2526) underlay: delete altname after renaming the link (#2539) failed to delete ovn-fip or ovn-snat (#2534) fix encap_ip will be lost when we restart the ovs-dpdk node (#2543) fix service fail (#2537) Add speaker param check (#2538) feat: support nic-hotplug to a running pod. 
(#2521) build(deps): bump google.golang.org/grpc from 1.53.0 to 1.54.0 (#2541) fix update dnat rules not effect correctly (#2518) underlay: fix link name exchange (#2516) add vip to webhook e2e (#2525) fix submariner e2e (#2519) fix lsp gc after upgrade (#2513) fix: ovn-fip creation failure due to an excessively long label (#2529) add sleep (#2523) when restart deployment kube-ovn-controller the kubectl ko log loss (#2508) optimize e2e framework (#2492) fix ovs patches (#2506) fix subnet iprange not correct (#2505) bump k8s to v1.26.3 (#2514) add kubevirt multus nic lsp before gc process (#2504) update slack link docs: updated CHANGELOG.md (#2515) optimize ovs upgrade script (#2512) ci: change to pullrequesttarget ci: add openai to review the code (#2511) add support of user-defined image name for vpc-dns (#2502) build(deps): bump google.golang.org/protobuf from 1.29.1 to 1.30.0 (#2500) build(deps): bump github.com/Microsoft/hcsshim from 0.9.7 to 0.9.8 (#2499) replace lr/ls/lrp/lsp function call with ovnClient (#2477) ci: fix go cache (#2498) add skip (#2491) ensure address label is correct before deleting it (#2487) fix scheduled submariner e2e (#2469) build(deps): bump actions/setup-go from 3 to 4 (#2490) build(deps): bump github.com/onsi/gomega from 1.27.3 to 1.27.4 (#2489) add some sleep wait iptables clean (#2488) Add kubectl ko log (#2451) fix: gw configmap may not exist (#2484) fix ovs qos e2e for versions prior to v1.12 (#2483) add node to addNodeQueue if required annations are"
},
{
"data": "(#2481) Add jitter support to netem qos parameters (#2476) build(deps): bump google.golang.org/protobuf from 1.29.0 to 1.29.1 (#2480) fix ovs-ovn startup/restart (#2467) fix changging the stopped vm's subnets, the vm cann't start normally (#2463) build(deps): bump github.com/onsi/gomega from 1.27.2 to 1.27.3 (#2475) when we delete the podit's no need to update the sgs assign to pod (#2465) fix libovsdb issues (#2462) fix ips CR not found due to etcd error (#2472) wait for subnet lb (#2471) chore: update base periodically to resolve security issues. (#2470) do not delete external switch if it is created by provider network vlan subnet (#2449) add upgrade compatibility (#2468) ci: fix ovn-ic installation (#2456) FixedPrevents grep from prematurely exiting the shell script if it cannot find a pattern (#2466) add install for webhook (#2460) e2e add some debug info and sleep (#2439) do not set subnet's vlan empty on failure (#2445) wait subnet lb clear in set subnet EnableLb to false e2e (#2450) build(deps): bump github.com/emicklei/go-restful/v3 (#2458) ci(Mergify): configuration update (#2457) kube-ovn-speaker support IPv6/Dual (#2455) replace nb_global function call with ovnClient (#2454) build(deps): bump google.golang.org/protobuf from 1.28.1 to 1.29.0 (#2452) fix parsing logical router static routes (#2443) base: fix ovn patches (#2444) prepare for libovsdb replacement (#1978) support auto change external bridge (#2437) fix ovn-speaker router bug (#2433) ovs: change update strategy to RollingUpdate (#2422) add kubevirt install (#2430) e2e: wait for subnet to meet specified condition (#2431) delete all invalid ovn lb strategy and prevent invalid multiple endpoint reconsile (#2419) add sumbarier case (#2416) iptables-rules upgrade compatible (#2429) add log (#2423) check subnet gateway after wait (#2428) fix chart install/upgrade e2e (#2426) ci: fix cilium chaining e2e (#2391) build(deps): bump golang.org/x/sys from 0.5.0 to 0.6.0 (#2427) resolve e2e error in v1.12.0 (#2425) update test server and test results (#2421) Modify the pod scheduling of vpcdns (#2420) e2e: double parallel test nodes in ci (#2411) fix scheduled e2e (#2417) build(deps): bump sigs.k8s.io/controller-runtime from 0.14.4 to 0.14.5 (#2415) build(deps): bump github.com/osrg/gobgp/v3 from 3.11.0 to 3.12.0 (#2414) build(deps): bump k8s.io/klog/v2 from 2.90.0 to 2.90.1 (#2413) bump go modules (#2408) e2e: fix random conflict in parallel processes (#2410) fixbasesg_rule (#2401) build(deps): bump k8s.io/sample-controller from 0.26.1 to 0.26.2 (#2403) build(deps): bump github.com/onsi/gomega from 1.27.1 to 1.27.2 (#2396) Support bfd management (#2382) remove unused param (#2393) update ipv6 security-group remote group name (#2389) Fix routeregexp ipv6 (#2395) ci: fix ref name check (#2390) add support of user-defined kubelet directory (#2388) support 1.11 (#2387) ci: skip netpol e2e automatically for push events (#2379) ci: make path filter more accurate (#2381) build(deps): bump github.com/stretchr/testify from 1.8.1 to 1.8.2 (#2386) Fix comment format (#2383) fix: ovs-ovn should reboot now (#2297) fix service dual stack add/del cluster ips not change ovn nb (#2367) ci: fix path filter for windows build (#2378) e2e: run specs in parallel (#2375) add base sg rules for ports (#2365) accelerate cleanup (#2376) update ovnnb model (#2371) docs: updated CHANGELOG.md (#2373) fix changelog workflow (#2372) build(deps): bump github.com/Microsoft/hcsshim from 0.9.6 to 0.9.7 (#2370) Add gateway monitor metrics and event (#2345) 
ci: fix default branch test (#2369) fix github actions workflows (#2363) Fixed iptables creation failure due to an excessively long label (#2366) use existing node switch cidr instead of the configured one (#2359) Do not wait pod deletion one by one to accelerate install (#2360) Change log level (#2362) change log level (#2356) build(deps): bump github.com/onsi/gomega from 1.27.0 to 1.27.1 (#2357) simplify github actions workflows (#2338) update go version to v1.20 (#2312) build(deps): bump golang.org/x/net from 0.6.0 to 0.7.0 (#2353) build(deps): bump github.com/onsi/gomega from 1.26.0 to"
},
{
"data": "(#2349) chore: no need to wait 30 seconds before kube-ovn-cni get ready. (#2339) do not remove link local route on ovn0 (#2341) fix encap ip when the tunnel interface has multiple addresses (#2340) fix legacy network policy err (#2313) enqueue endpoint when handling service add event (#2337) Add neighbor-address format check for kube-ovn-speaker (#2335) add ovnext0 inside ns on gw node for ecmp static route with bfd (#2237) OVN LB: add support for SCTP protocol (#2331) fix getting service backends in dual-stack clusters (#2323) e2e: skip case of switching session affinity (#2328) fix k8s networking dns e2e (#2325) Add the bgp router-id format check (#2316) perform the gateway check but ignore the result when the annotation of subnet is disableGatewayCheck=true to make sure of the first network packet (#2290) perf: use empty struct to reduce memory usage (#2327) split netpol cases (#2322) feat: support default service session stickiness timeout (#2311) feat: configure routes via pod annotation (#2307) build(deps): bump github.com/docker/docker (#2320) e2e: do not test versions prior to 1.11 for ovn-ic update (#2319) ovndb: use Local_Config to configure listen addresses (#2299) chore: improve the list style in Markdown (#2315) fix egress node and gateway acl should apply after lb. (#2310) fix kube-ovn-controller crash on startup (#2305) build(deps): bump google.golang.org/grpc from 1.52.3 to 1.53.0 (#2308) build(deps): bump golang.org/x/sys from 0.4.0 to 0.5.0 (#2309) ignore e2e for subnet enableEcmp before v1.12.0 (#2306) fix u2o code err (#2300) set join subnet.spec.enableLb to nil (#2304) fix image tag in helm chart (#2302) update trivy deprecated arg and the ignored CVE. (#2296) move enableEcmp to subnet (#2284) build(deps): bump sigs.k8s.io/controller-runtime from 0.14.3 to 0.14.4 (#2301) fix gosec ci installation (#2295) delete htb qos priority (#2288) build(deps): bump sigs.k8s.io/controller-runtime from 0.14.2 to 0.14.3 (#2292) ovn northd: fix connection inactivity probe (#2286) fix ct new config error (#2289) fix wrong network interface name in gateway check (#2282) build(deps): bump github.com/docker/docker (#2287) Improve webhook (#2278) add named port support (#2273) fix access from node to overlay pods when network policy ingress exists (#2279) move enableLb to subnet (#2276) build(deps): bump github.com/osrg/gobgp/v3 from 3.10.0 to 3.11.0 (#2280) add V4/V6UsingIPRange and V4/V6AvailableIPRange in subnet status (#2268) skip u2o test case before 1.9 (#2274) fix network break on kube-ovn-cni startup (#2272) bump go modules (#2267) fix setting mtu for ovs internal port (#2247) bump ovs/ovn versions (#2254) use node ip instead of ovn0 ip when accessing overlay pod/svc from host network (#2243) build(deps): bump google.golang.org/grpc from 1.52.1 to 1.52.3 (#2265) build(deps): bump google.golang.org/grpc from 1.52.0 to 1.52.1 (#2264) build(deps): bump k8s.io/klog/v2 from 2.80.1 to 2.90.0 (#2262) build(deps): bump github.com/onsi/gomega from 1.25.0 to 1.26.0 (#2263) build(deps): bump k8s.io/sample-controller from 0.26.0 to 0.26.1 (#2260) build(deps): bump github.com/docker/docker (#2259) egress networkpolicy acl add option apply-after-lb (#2251) ovn db: add support for listening on pod ip (#2235) update cni plugin to 1.2.0 (#2255) build(deps): bump github.com/onsi/gomega from 1.24.2 to 1.25.0 (#2257) clean up legacy u2o implement (#2248) eip status (#2256) build(deps): bump github.com/containernetworking/plugins (#2253) fix vip create (#2245) improve webhook functions for vpc 
and subnet (#2241) fix syntax errors (#2240) add release-1.11 to scheduled e2e (#2238) fix webhook (#2236) fix: ovnic del old AZ after establish the new as name (#2229) prepare for next release build(deps): bump google.golang.org/grpc from 1.51.0 to 1.52.0 (#2234) Alex Jones Daviddcc KillMaster9 Longchuanzheng Miika Petjniemi Nico Wang Rick bobz965 changluyi dependabot[bot] fsl github-actions[bot] gugu hzma jeffy jizhixiang lanyujie liuzhen21 lut777 mingo qiutingjun shane wangyd1988 wujixin xujunjie-cover * * set release 1.11.16 fix cves ci: fix memory leak reporting caused by ovn-controller"
},
{
"data": "(#3873) Fix the failure to enable multi-network card traffic mirroring for newly created pods (#3805) fix incorrect variable assignment (#3787) if startOVNIC firstly, and setazName secondly, the ovn-ic-db may sync the old azname (#3762) ci: cleanup disk space Log near err (#3739) ip trigger subnet delete (#3703) fix some ip can not allocate after released (#3699) Compatible with controller deployment methods before kube-ovn 1.11.16 ovn: do not send direct traffic between lports to conntrack (#3663) sync master change to 1.11 (#3674) delete cm ovn-ic-config cause crash 1.11 (#3665) prepare for next release Changlu Yi bobz965 changluyi xieyanker zhangzujian * set for release 1.11.15 refactor start-ic-db.sh (#3645) kube-ovn-monitor and kube-ovn-pinger export pprof path (#3656) SYSCTLIPV4IPNOPMTU_DISC set default to 0 Ovn ic ecmp enhance 1.11 (#3609) fix: subnet can not delete even if no ip in using (#3621) fix: static spectify one ip (#3614) do not calculate subnet.spec.excludeIPs as availableIPs (#3612) update policy route when subnet cidr changed (#3611) prepare for next release Changlu Yi bobz965 changluyi hzma set release 1.11.14 ovs: increase cpu limit to 2 cores (#3530) fix u2o infinity recycle (#3567) 1.11 ip delete lsp (#3562) fix ipam deletion (#3549) Revert \"ovn-central: check raft inconsistency from nb/sb logs (#3532)\" fix IP residue after changing subnet of vm in some scenarios (#3370) ovn-central: check raft inconsistency from nb/sb logs (#3532) prepare for next release bobz965 changluyi zhangzujian * set release v1.11.13 cni-server: set sysctl variable net.ipv4.ipnopmtu_disc to 1 by default (#3504) typo (#3495) add iptables drop invalid rst (#3491) delete vm's lsp and release ipam.ip (#3476) fix: ipam clean all pod nic ip address and mac even if just delete a nic (#3451) ovs-healthcheck: ignore error when log file does not exist (#3456) prepare for the next release bobz965 changluyi zhangzujian * set allow-related for gatewayACL and NodePgACL (#3433) bump k8s to v1.26.11 (#3427) base: fix ovn build failure (#3340) optimize ecmp policy route (#3421) trivy: ignore CVE-2023-5528 mtu merge failed (#3417) fix helm (#3412) Supporting user-defined kubelet directory (#2893) add mtu config to release-1.11 fix Vulnerability (#3391) add kube-ovn-controller nodeAffinity prefer not on ic gateway Revert \"upgrade ovs-ovn pod by generation version instead of chart version (#1960)\" (#3387) delete check for existing ip cr (#3361) kube-ovn-controller: fix memory growth caused by unused workqueue add compact switch release-1.11 (#3338) add switch for compact (#3336) base: fix ovn build failure (#3327) kube-ovn-controller: fix ovn ic log directory not mounted to hostpath (#3322) fix golang lint error (#3323) add type assertion for ip crd (#3311) update go net version to v0.17.0 (#3312) security: ignore kubectl cve (#3305) Revert \"update base image to ubuntu:23.10 (#3289)\" update base image to ubuntu:23.10 (#3289) ovs: load kernel module ip_tables only when it exists (#3281) pinger: increase packet send interval (#3259) Changlu Yi bobz965 changluyi hzma zhangzujian * * set for release 1.11.11 fix: for existing nic, no need to set the port type to internal (#3243) undo delete perl cmd to update release-1.11 image update kubectl and delete perl (#3223) fix vpc-peer dualstack bug (#3204) fix ipam random get (#3200) fix G101 add err log to help find conflict ip owner (#2939) underlay: fix ip/route tranfer when the nic is managed by NetworkManager (#3184) fix ovn build (#3166) chart: fix 
ovs-ovn upgrade (#3164) subnet: fix deleting lr policy on node deletion (#3178) delete append externalIds process in initIPAM (#3134) move unnecessary init process after startWorkers (#3124) underlay: fix NetworkManager operation (#3147) base: remove ovn patch for skipping ct (#3140) delete append externalIds process in initIPAM (#3134) prepare for the next release add e2e test for ovn db recover (#3118) update"
},
{
"data": "bobz965 hzma * * ovn: fix corrupted database file on start (#3112) update version to v1.11.10 fix u2o policy route generate too many flow tables cause oom distinguish nat ip for central subnet with ecmp and active-standby (#3100) bugfix if only one port bind to the sg, then unbind the port to the sg ,it will not enforce in portgroup (#3092) Revert \"fix sg\" fix sg fix .status.default when initializing the default vpc (#3086) cni-server: fix ovn mappings for vpc nat gateway (#3075) ovn client: fix sb chassis existence check (#3072) ci: do not pin go version (#3073) ci: fix multus installation (#3062) ipam: fix ippool with single dual-stack address (#3054) fix vpc lb init (#3046) Revert \"prepare for next release\" set genevsys6081 tx checksum off (#3045) prepare for next release fix ifname start with pod (#3038) static ip in exclude-ips can be allocated normally when subnet's availableIPs is 0 #3031 ci: pin go version to 1.20.5 (#3034) pinger: use fully qualified domain name (#3032) uninstall.sh: fix ipset name (#3028) kube-ovn-controller: fix workqueue metrics (#3011) fix subnet finalizer (#3004) choose subnet by pod's annotation in networkpolicy (#2987) kubectl ko performance enhance (#2975) (#2994) fix deleting old sb chassis for a re-added node (#2989) underlay: fix NetworkManager syncer for virtual interfaces (#2988) underlay: does not set a device managed to no if it has VLAN managed by NM (#2986) bump k8s version to v1.26.6 (#2973) base: fix ovn patches (#2972) add detail comment Kubectl ko diagnose perf release 1.11 (#2967) cni-server: reconcile ovn0 routes periodically (#2963) uninstall.sh: flush and delete iptables chain OVN-MASQUERADE (#2961) underlay: sync NetworkManager IP config to OVS bridge (#2949) typo (#2952) Revert \"base: fix ovn build failure (#2926)\" Revert \"nm not managed only in the change provide nic name case (#2754)\" (#2944) kubectl ko perf on release-1.11 (#2945) controller: fix DHCP MTU when the default network mode is underlay (#2941) support set the mtu of dhcpv4_options (#2930) u2o support specify u2o ip on release-1.11 (#2937) modify lb-svc dnat port error (#2927) bobz965 changluyi hzma yichanglu * * prepare for next release base: fix ovn build failure (#2926) bump version number to v1.11.8 fix encap_ip will be lost when we restart the ovs-dpdk node (#2543) cni-server: clear iptables mark before doing masquerade (#2919) For eip created without spec.V4ip this field (#2912) match outgoing interface when perform snat (#2911) * prepare for release 1.11.7 underlay: do not delete patch ports created by ovn-controller (#2851) fix gc report error #2886 add support of user-defined kubelet directory (#2388) ci: fix valgrind result analysis (#2853) ovs: fix memory leak in qos (#2871) prepare for next release zhangzujian * * prepare for next release u2o support custom vpc release 1.11 (#2849) kubectl-ko: fix trace when u2oInterconnection is enabled (#2836) ci: detect ovs/ovn memory leak (#2839) fix underlay access to node through ovn0 (#2846) iptables: always do SNAT for access from other nodes to nodeport with external traffic policy set to Local (#2844) delete user tss (#2838) ci: fix no-avx512 image build ci: fix kube-ovn-base build refactor image builds (#2818) fix MTU when subnet is using logical gateway (#2834) update vpc dns env value add route for service ip range when init vpc-nat-gw (#2821) fix cleanup order (#2792) add available check for northd enpoint update release note changluyi hzma zhangzujian * prepare for release 1.11.5 reorder the 
deletion to avoid dependency conflict fix ip statistics in subnet status (#2769) support disable arp check ip conflict in vlan provider network (#2760) cni-server: wait ovs-vswitchd to be"
},
{
"data": "(#2759) ci: run kube-ovn e2e for underlay (#2762) iptables: use the same mode with kube-proxy (#2758) nm not managed only in the change provide nic name case (#2754) update policy route when change from ecmp to active-standby (#2717) fix recover db failed using offical doc (#2718) fixbasesg_rule (#2401) add base sg rules for ports (#2365) bump base images base: remove patch for fixing ofpbuf memory leak (#2715) prepare for release 1.11.4 cni-server: do not perform ipv4 conflict detection during VM live migration (#2693) fix can not clean the last abandoned snat table (#2701) replace StrategicMergePatchType with MergePatchType (#2694) fix build error by partially revert 951f89c5 ovn-controller: do not send GARP on localnet for Kube-OVN ports (#2690) adapt ippool annotation (#2678) netpol: fix packet drop casued by incorrect address set deletion (#2677) fix pg set port fail when lsp is already deleted (#2658) add subnetstatus lock for handleAddOrUpdateSubnet (#2669) broadcase free arp when pod setup delete sync user (#2629) Add ipsec package to image release 1.11 (#2618) ci: deploy multus in thick mode (#2628) libovsdb: use monitor_cond as the monitor method (#2627) ci: fix multus installation (#2622) ovs: fix dpif-netlink ofpbuf memory leak (#2620) update Dockerfile.debug ci: fix multus installation (#2604) cut invalid OVNNBDAEMON to make log more readable (#2601) unittest: fix length assertion (#2597) bump base image security: remove CVE-2022-29526 from .trivyignore base: fix CVE-2022-3294 (#2594) underlay: get address/route before setting nm managed to no (#2592) base: fix ovs patches (#2590) ci: bump kind image to v1.26.3 (#2581) move ipam.subnet.mutex to caller (#2571) fix: memory leak in IPAM caused by leftover map keys (#2566) fix ovn-bridge-mappings deletion (#2564) fix go mod list (#2556) do not set device unmanaged if NetworkManager is not running (#2549) fix update dnat rules not effect correctly (#2518) underlay: fix network manager operation (#2546) controller: fix apiserver connection timeout on startup (#2545) underlay: delete altname after renaming the link (#2539) underlay: fix link name exchange (#2516) fix changging the stopped vm's subnets, the vm cann't start normally (#2463) add kubevirt multus nic lsp before gc process (#2504) update for release v1.11.3 bobz965 changluyi hzma yichanglu zhangzujian * * prepare for release v1.11.3 ensure address label is correct before deleting it (#2487) add node to addNodeQueue if required annations are missing (#2481) fix ips CR not found due to etcd error (#2472) ci: fix ovn-ic installation (#2456) do not set subnet's vlan empty on failure (#2445) change cni version from v1.1.1 to v1.2.0 fix ovn-speaker router bug (#2433) fix chart install/upgrade e2e (#2426) ci: fix cilium chaining e2e (#2391) Modify the pod scheduling of vpcdns (#2420) fix: python package issues update ipv6 security-group remote group name (#2389) Fix routeregexp ipv6 (#2395) ci: fix ref name check (#2390) bump base images ci: skip netpol e2e automatically for push events (#2379) ci: make path filter more accurate (#2381) fix service dual stack add/del cluster ips not change ovn nb ci: fix path filter for windows build (#2378) e2e: run specs in parallel (#2375) Daviddcc KillMaster9 changluyi hzma jeffy yichanglu zhangzujian * fix CVE-2022-41723 bump base images fix: ovs-ovn should reboot now (#2298) ci: fix default branch test (#2369) fix github actions workflows (#2363) simplify github actions workflows (#2338) Fixed iptables creation failure due to an 
excessively long label (#2366) Improve webhook (#2278) eip status (#2256) fix vip create (#2245) improve webhook functions for vpc and subnet (#2241) fix webhook (#2236) use existing node switch cidr instead of the configured one (#2359) Release 1.11 merge netpol (#2361) Release 1.11 merge netpol (#2355) prepare for"
},
{
"data": "do not remove link local route on ovn0 (#2341) fix encap ip when the tunnel interface has multiple addresses (#2340) enqueue endpoint when handling service add event (#2337) Add neighbor-address format check for kube-ovn-speaker (#2335) OVN LB: add support for SCTP protocol (#2331) fix getting service backends in dual-stack clusters (#2323) fix github actions workflow perform the gateway check but ignore the result when the annotation of subnet is disableGatewayCheck=true to make sure of the first network packet (#2290) Add the bgp router-id format check (#2316) KillMaster9 changluyi jeffy lut777 qiutingjun zhangzujian * prepare for release v1.11.1 fix: ovnic del old AZ after establish the new as name (#2229) fix u2o code err fix kube-ovn-controller crash on startup (#2305) fix Makefile delete htb qos priority (#2288) fix gosec ci installation (#2295) ovn northd: fix connection inactivity probe (#2286) fix ct new config error fix network break on kube-ovn-cni startup (#2272) fix setting mtu for ovs internal port (#2247) fix gosec installation fix ovn patches ovn db: add support for listening on pod ip (#2235) Revert \"prepare for next release\" prepare for next release changluyi lut777 zhangzujian * Update CHANGELOG.md for v1.11.0 feat: add helm upgrade e2e (#2222) fix: now route with connected/static will all be sync (#2231) add enable-metrics arg to disable metrics (#2232) add u2o test case (#2203) add more args to break test server add release-1.8/1.9/1.10 to scheduled e2e (#2224) cni-server: fix waiting for routed annotation (#2225) build(deps): bump golang.org/x/sys from 0.3.0 to 0.4.0 (#2223) feature: detect ipv4 address conflict in underlay (#2208) fix git ref name in e2e (#2218) fix e2e for v1.8 (#2216) some fixes for e2e testing (#2207) build(deps): bump github.com/osrg/gobgp/v3 from 3.9.0 to 3.10.0 (#2209) distinguish ippool process for dualstack and normal ippool situation (#2204) u2o feature (#2189) ovn nb and sb can't bind lan ip in ssl (#2200) build(deps): bump sigs.k8s.io/controller-runtime from 0.14.0 to 0.14.1 (#2199) local ip bind to service (#2195) refactor e2e testing (#2078) fix: ovs gc just for pod if (#2187) update docs link in install.sh (#2196) fix lr policy for default subnet with logical gateway enabled (#2177) sync delete pod process from release-1.9 (#2190) fix: update helm 1.11.0 (#2182) reserve pod eip static route when update vpc (#2185) ignore conflict check for pod ip crd (#2188) remove unused subnet status fields (#2178) fix:react leader elect (#2167) fix base/windows build (#2172) add metric interfacerxmulticast_packets (#2156) build(deps): bump github.com/onsi/gomega from 1.24.1 to 1.24.2 (#2168) update wechat link build(deps): bump github.com/Microsoft/hcsshim from 0.9.5 to 0.9.6 (#2161) ci: refactor previous push multi arch (#2164) security: we should check all the vulnerabilities that can be fixed (#2163) An error occurred when netpol was added in double-stack mode (#2160) add process for delete networkpolicy start with number (#2157) security remove private key (#2159) add scheduled e2e testing (#2144) northd: fix race condition in health check (#2154) add check for subnet cidr (#2153) delete nc cmd in image (#2148) bump k8s to v1.26 (#2152) add benchmark test for ipam (#2123) update: add YuDong Wang into MAINTAINERS (#2147) build(deps): bump k8s.io/sample-controller from 0.25.4 to 0.25.5 (#2146) delete nc in base image (#2141) update go modules (#2142) delete ip crd base on podName (#2143) fix vpc spec external not true after init external 
gw (#2140) refactor ipam unit test (#2126) build(deps): bump github.com/k8snetworkplumbingwg/network-attachment-definition-client (#2139) some optimization for provider network status update (#2135) simplify iptables eip nat (#2137) kind: support to specify api server address/port (#2134) kubectl-ko: fix registry/version (#2133) check if subnet cidr is correct (#2136) fix: sometimes alloc ipv6 address failed sometimes ipam.GetStaticAddress return"
},
{
"data": "(#2132) fix: delete static route should consider dualstack (#2130) build(deps): bump github.com/osrg/gobgp/v3 from 3.8.0 to 3.9.0 (#2121) build(deps): bump github.com/Wifx/gonetworkmanager from 0.4.0 to 0.5.0 (#2122) build(deps): bump golang.org/x/time from 0.2.0 to 0.3.0 (#2120) fix: vlan gw clean in 2 scene (#2117) optimize provider network (#2099) build(deps): bump golang.org/x/sys from 0.2.0 to 0.3.0 (#2119) fix removing default static route in default vpc (#2116) fix: eip deletion (#2118) fix: ecmp route keep delete and recreate (#2083) fix policy route for subnets with logical gateway (#2108) build(deps): bump github.com/emicklei/go-restful/v3 from 3.9.0 to 3.10.1 (#2113) refactor function name isIPAssignedToPod to isIPAssignedToOtherPod (#2096) build(deps): bump github.com/onsi/gomega from 1.24.0 to 1.24.1 (#2111) fix: logical gw underlay gw subnet not clean (#2114) build(deps): bump github.com/osrg/gobgp/v3 from 3.6.0 to 3.8.0 (#2110) build(deps): bump sigs.k8s.io/controller-runtime from 0.12.3 to 0.13.1 (#2109) fix go mod (#2107) build(deps): bump github.com/onsi/ginkgo/v2 from 2.3.1 to 2.5.1 (#2103) build(deps): bump k8s.io/sample-controller from 0.24.4 to 0.25.4 (#2101) build(deps): bump github.com/Microsoft/go-winio from 0.5.2 to 0.6.0 (#2104) build(deps): bump google.golang.org/grpc from 1.49.0 to 1.51.0 (#2102) build(deps): bump github.com/Microsoft/hcsshim from 0.9.4 to 0.9.5 (#2100) Create dependabot.yml replace klog.Fatalf with klog.ErrorS and klog.FlushAndExit (#2093) fix: slow vip finalizer operation (#2092) ko-trace: support ARP request/reply (#2046) fix: cni response missing sandbox field (#2089) check if externalIds map is nil when add node as gw for centralized subnet (#2088) fix: del createIPS (#2087) fix: add opts for ips del (#2079) fix ovs bridge not deleted cause by port link not found (#2084) fix libovsdb issues (#2070) ipset: fix unknown ipset data attribute from kernel (#2086) fix: vpc lrp reset after restart kube-ovn-controller (#2074) fix: add del bash for redundant ips (#2063) refactor: add unknown config to logic switch port (#2067) ovs-dpdk supports adding bond for multi-NICs (#2064) fix OVN LS/LB gc (#2069) fix: vip ipam not recover all (#2071) bug-fix: make kind-reload invalid (#2068) remove no need params svcasname (#2057) Fix:hybrid-dpdk with vxlan tunnel modeThe OVS node does not create a VXLAN tunnel to the OVS-DPDK node (#2065) update ipv6 address for vpc peer (#2060) perf: reduce controller init time (#2054) reflactor note (#2053) fix: replace replace with add to override existing route (#2052) refactor Makefile (#1901) pass klog verbosity to libovsdb (#2048) ovs: fix reaching resubmit limit in underlay (#2038) sync crd yamls (#2040) add helm and e2e test (#2020) fix: add unitest (#2030) fix: pod not add finalizer after add iptables fip (#2041) feat: ovn eip snat fip (#2029) fix: vpc and vpc nat gw not clean (#2032) update CHANGELOG.md fix pinger namespace error (#2034) iptables: avoid duplicate logging (#2028) fix: gateway route should stay still when node is pingable (#2011) update np name with character prefix (#2024) bump kind and node image versions (#2023) fix ovn nb/sb health check (#2019) fix ovs fdb for the local bridge port (#2014) fix go version perf: add debug info for perf trace (#2017) fix: not append finalizer (#2012) do not need to delete pg when update networkpolicy (#1959) test: add test-server to collect packet lost during upgrade (#2010) support create iptables fip and eip automatically if pod enable fip (#1993) ci: 
upgrade deprecated actions (#2004) fix: make ip deletion the same as creation (#2002) fix: Add support for Mellanox NIC (#1999) fix: nat gw not enqueue its resources (#1996) fix: delete fiprule failed at first time (#1998) fix typo (#1994) feat: now interface for containerd could be inspected (#1987) fix: snat conntrack race (#1985) add check of write to ovn sb db for ovn-controller (#1989) fix grep matching device in routes (#1986) delete pod after TerminationGracePeriodSeconds (#1984) ovs: fix waiting flows in underlay"
},
{
"data": "(#1983) feature: support default vpc use nat gw pod as cust vpc (#1979) ovn db: recover automatically on startup if db corruption is detected (#1980) fix: modify src route priority (#1973) upgrade ovs-ovn pod by generation version instead of chart version (#1960) avoid concurrent subnet status update (#1976) fix metrics name (#1977) add vm pod to ipam by ip when initIPAM (#1974) validate nbctl socket path in start-controller.sh (#1971) skip CVE-2022-3358 (#1972) fix version mismatch between the Ginkgo CLI and the imported package (#1967) ovs: fix mac learning in environments with hairpin enabled (#1943) fix: add default deny acl (#1935) Fix registry for ovn-central container in install.sh (#1951) ovs: add fdb update logging (#1941) add chart version check when upgrade ovs-ovn pod (#1942) fix underlay e2e testing (#1929) set leader flag when get leader (#1939) set ovsdb-server vlog level to avoid warnings caused by ovs-vsctl (#1937) fix: UpdateNatRule will error when logicalIP, externalIP is different protocol; replace : to \\\\: when IPv6 in ovs cli. fix: noAllowLiveMigration port can't sync vips (#7) fix: add pod not update vip virtual port fix: delete chassis (#1927) fix: pod mistaken ls label (#1925) ignore pod without lsp when add pod to port-group (#1924) add network partition check in ovn probes (#1923) fix: fip unbind can't take effect immediately when conntrack record exists (#1922) No need to change deivceID to sriov_netdevice. (#1904) update ns annotation when subnet cidr changed (#1921) fix EIP/SNAT on dynamic Pod annotation (#1918) fix: eip and nat crd can delete even if nat gw pod deleted and ipatab (#1917) fix missing crd (#1909) Nat gw support toleration (#1907) Update USERS.md (#1908) fix typo (#1897) fix: Make the /sys directory in ovs-ovn-dpdk pod writable (#1899) fix: failed to add eip (#1898) fix: gatewaynode might be null (#1896) ci: increase golangci-lint timeout (#1894) update Go to version 1.19 (#1892) fix: api rollback (#1895) ci: use concurrency to ensure that only a single workflow (#1850) kubectl-ko: turn off pipefail for ovn leader check (#1891) kubectl-ko: fix trace for KubeVirt VM (#1802) fix duplicate logs for leader election (#1886) fix setting ether dst addr for dnat (#1881) change the prtocol string to const (#1887) refactor iptables rules (#1868) cni should handler unmont volume, when delete pod. (#1873) delete and recreate netem qos when update process (#1872) feat: check configuration (#1832) security: conform to gosec G114 (#1860) update CHANGELOG.md e2e: add timeout for waiting resources to be ready (#1871) upgrade to Ginkgo v2 (#1861) feat: reduce downtime by increasing arp cache timeout feat: reduce wait time by counting the flow num. 
fix: missing stop_ovn_daemon args fix: nat gw pod should set default gw to net1 so that to access public (#1864) delete log severity for drop acl when update networkpolicy (#1863) ovs: fix log file descriptor leak in monitor process (#1855) fix: dnat port not use whole words to check (#1854) fix ovs-ovn logging (#1848) fix ovn dhcp not work with ovs-dpdk (#1853) docs: Update USERS.md (#1851) fix: multus macvlan ipvlan use kube-ovn ipam but ip not inited in init-ipam (#1843) fix underlay e2e (#1828) fix arping error log (#1841) ko: fix kube-proxy check (#1842) base: use patch from OVN upstream (#1844) ci: switch environment to ubuntu-20.04 (#1838) ci: split image builds to speed up jobs (#1807) ci: update Go cache to speed up jobs (#1829) windows: fix ovs/ovn versions and patches (#1830) ovs-ovn-dpdk ovs vhostuserclient port (#1831) support adding routes in underlay Pods for access to overlay Pods (#1762) update centralized subnet gateway ready patch operation (#1827) remove pod security"
},
{
"data": "(#1822) fix duplicate log for tunnel interface decision (#1823) update ovs/ovn version to fix hardware offload (#1821) fix: use full longest word to match full ip about dnat (#1825) update centralize subnet gatewayNode until gw is ready (#1814) initialize IPAM from IP CR with empty PodType for sts Pods (#1812) feat: add editable ovn-ic (#1795) kubectl-ko: fix missing env-check (#1804) kubectl-ko: fix destination mac (#1801) fix cilium e2e (#1759) abort kube-ovn-controller on leader change (#1797) avoid invalid ovn-nbctl daemon socket path (#1799) update CHANGELOG.md Perf/memleak (#1791) delete htb qos when releated annotation is deleted (#1788) Fix nag gw gc (#1783) fix iptables for services with external traffic policy set to Local (#1773) perf: reduce metrics labels (#1784) northd: remove lookuparpip actions (#1780) fix: 5ms is too short for eip and nats creation (#1781) Lb-svc supports custom VPCs (#1779) fix ovnic e2e (#1763) fix iptables for service traffic when external traffic policy set to local (#1728) set sysctl variables on cni server startup (#1758) fix: add omitempty to subnet spec (#1765) perf: replace jemalloc to reduce memory usage (#1764) avoid patch interface deletion & recreation during restart (#1741) feature: support exchange link names of OVS bridge and provider nic in underlay networks (#1736) dpdk-v2 --with-hybrid-dpdk Dockerfile.base-dpdk ovs-dpdk (#1754) fix: Adjust order for Log and output err when get NatRule faild. (#1751) only support IPv4 snat in vpc-nat-gw when internal subnet is dual (#1747) update README.md docs: update USERS.md (#1743) style: import group ordering. (#1742) enqueue subnets after vpc update (#1722) do not add subnet not processed by kube-ovn to vpc (#1735) dpdk-v2 --with-hybrid-dpdk qemu sock (#1739) fix: const extGw may expired, after subnet updated, so use ipam subne (#1730) fix service not working when a node's IPv6 address is before the IPv4 address (#1724) update pr template fix: If pod has snat or eip, also need delete staticRoute when delete pod. (#1731) optimize lrp create for subnet in vpc (#1712) fix: cancel delete staticroute when it's used by NatRule (#1733) fix: wrong info when update subnet from dual to ipv4 or ipv6. (#1726) change: add newline at end of file (#1717) add kernel prerequisite for Rocky Linux 8.6 (#1713) Add CODE_STYLE.md (#1711) Change system-cluster-critical to system-node-critical to prevent pods of DaemonSet type from being (#1709) Develop custom vpc-dns (#1662) fix CVE-2022-30065 (#1710) fix: add and set ENABLEKEEPVM_IP=true to keep vm ip (#1702) update CHANGELOG.md fix overlay MTU in vxlan/stt tunnels (#1693) fix: response has no gw when create nic without default route (#1703) add note in install.sh for install --with-hybrid-dpdk(dpdk-v2). 
(#1699) ignore ovsdb-server/compact error: not storing a duplicate snapshot (#1691) Get latest vpc data from apiserver instead of cache (#1684) support kubernetes v1.24 (#1553) update priority range in htb qos (#1688) fix: clean vip eip snat dant fip in cleanup.sh (#1690) update: add Bingbing Zhang into MAINTAINERS (#1687) fix: move away words that is considered offensive after k8s v1.20.0 (#1682) update CHANGELOG.md add upgrade-ovs script (#1681) fix: change ovn-ic static route to policy (#1670) Delete pod if subnet of the pod's owner(sts/vm) updated (#1678) Develop switch lb rule (#1656) do not snat packets only for subnets with distributed gateway when external traffic policy is set to local (#1616) refactor: extract external routes from eip func, make it the same as (#1671) add loadbalancer service (#1611) bgp: consolidate service check and use service const (#1674) security: disable pprof by default (#1672) fix bgp: sync service cache (#1673) fix iptables for direct routing (#1578) feature: support pod use static"
},
{
"data": "(#1650) fix: kubectl-ko does't work when ovn-nb, ovn-sb and ovn-northd master slave Switchover (#1669) mount modules for auto load ip6tables moudles (#1665) update docs links fix: subnet failed when create without protocol (#1653) ignore pod not scheduled when reconcile subnet (#1666) fix libovsdb (#1664) fix ovs-ovn not running on newly added nodes (#1661) fix get security group name by external_ids (#1663) fix:can not delete pod with sriov vf (#1654) add policy route when add subnet (#1655) update CHANGELOG.md fix: no need routed when use v1.multus-cni.io/default-network (#1652) docs: add GOVERNANCE.md and SECURITY.md fix: should go on check ip after occupied ip (#1649) set ether dst addr for dnat on logical switch (#1512) docs: update README.md CI: delete resources in order to avoid a long time waiting for subnet deletions. (#1643) ci: fix golangci-lint (#1639) Update install.sh (#1645) fix: make sure pod annotation switch is the first choice to allocate ip, and fix vpc nat sts not delete (#1640) docs: update docs link set networkpolicy log default to false (#1633) update policy route when join subnet cidr changed (#1638) fix: diskfull may lead to wrong raft status for ovs db (#1635) ci: update trivy options (#1637) fix no interface report to multus cni, missing in k8s.v1.cni.cncf.io/network[s]-status (#1636) change vp gw pod workload from deployment to statefulset (#1630) increase initial delay of ovs-ovn liveness probe (#1634) fix: cleanup should ignore patch failed (#1626) delete \"allow\" policy route on subnet deletion (#1628) wait ovn-central pods running before delete ovs-ovn pods (#1627) vip, eip support ipv6 vip count (#1624) ci: auto changelog now (#1625) get dbstatus for all ovn-central pod (#1619) refactor: use ConfigMap resourceVersion to check if ovn-vpc-nat-gw-config changed (#1617) fix controller exit before process pod update event (#1621) docs: update ROADMAP.md fix acl log name too long (exceed 63) (#1612) docs: Add High-level design of ovn-spekaer (#1609) docs: Fix allowed subnets (#1610) add cni log Prevent \"for loop time\" approximately health check time (#1606) docs:Add Usage of ovn-speaker for passivemode and ebgp-multihop (#1605) update static ip docs (#1607) Modify the next hop calculation method for kube-ovn-speaker (#1604) fix static ip error in dual stack (#1598) ci: build amd64 images without avx512 (#1584) Add ebgp-multihop function for kube-ovn-speaker (#1601) monitor dns in cilium e2e (#1597) Add passivemode for kube-ovn-speaker (#1600) Bump github.com/emicklei/go-restful/v3 from 3.7.4 to 3.8.0 (#1599) docs: fix the kind name (#1593) Support CNI VESION command (#1596) update ovs health check, delete connection to ovn sb db (#1588) fix ovn-ic doc err (#1590) fix: all cluster pod will be in podadd queue (#1587) feat: add args for gc/inspect interval (#1572) fix: Do not Recreate LogicalRouterPort when Vpc recreated (#1570) optimized initialization and gc for the chassis (#1511) fix pod could not be ready (#1562) Fix incorrect usage info of 'argExternalGatewayNet' (#1567) fix: delete pod panic when delete vm or statefulset. 
(#1565) fix: clean CRDs introduced by new vpc-nat-gateway (#1563) do not gc vm pod lsp when vm still exists (#1558) (#1561) do not delete static routes on controller startup (#1560) update alpine to v3.16 (#1559) fix VPC document (#1554) replace ovn-nbctl daemon with libovsdb in frequent operations (#1544) fix exec cmd in vpc nat gateway (#1556) CNI: do not return route if nic is not eth0 (#1555) do not nat packets for incoming traffic when (#1552) add kubeovn 1.9.2 charts (#1539) fix: opt kubectl-ko install solution (#1550) always set mac address to sriov vf (#1551) use leases for leader election (#1529) fix: fix db-check bug (#1541) bump version to v1.11.0 (#1545) exit kube-ovn-controller on stopped leading (#1536) fix: update check script for restart ovs-ovn after rebuild"
},
{
"data": "(#1534) tmp cancel cilium external svc test (#1531) remove name for default drop acl in networkpolicy (#1522) Alex Jones Chris Kaihang Zhang KillMaster9 Mengxin Liu Money Liu Noah ShaPoHun Usman Malik Wang Bo Xiaobo Liu bobz965 carezkh changluyi dependabot[bot] fanriming gugu halfcrazy huangsq hzma jeffy long.wang lut777 pengbinbin1 runzhliu shane wangyd1988 xujunjie-cover zhouhui-Corigine * * prepare for release v1.10.10 ensure address label is correct before deleting it (#2487) add node to addNodeQueue if required annations are missing (#2481) remove unused subnet status fields (#2482) fix ips CR not found due to etcd error (#2472) ci: fix ovn-ic installation (#2456) do not set subnet's vlan empty on failure (#2445) fix: missing import netlink change cni version from v1.1.1 to v1.2.0 (#2435) fix ovn-speaker router bug (#2433) fix ovn-ic e2e ci: fix cilium chaining e2e (#2391) fix: python package issues update ipv6 security-group remote group name (#2389) Fix routeregexp ipv6 (#2395) ci: fix ref name check (#2390) bump base images ci: skip netpol e2e automatically for push events (#2379) ci: make path filter more accurate (#2381) ci: fix path filter for windows build (#2378) e2e: run specs in parallel (#2375) fix CVE-2022-41723 ci: fix default branch test (#2369) fix github actions workflows (#2363) simplify github actions workflows (#2338) use existing node switch cidr instead of the configured one (#2359) do not remove link local route on ovn0 (#2341) fix encap ip when the tunnel interface has multiple addresses (#2340) enqueue endpoint when handling service add event (#2337) fix getting service backends in dual-stack clusters (#2323) fix github actions workflow prepare for release v1.10.9 fix u2o code err fix kube-ovn-controller crash on startup (#2305) fix gosec ci installation (#2295) ovn northd: fix connection inactivity probe (#2286) fix ct new config error fix network break on kube-ovn-cni startup (#2272) fix setting mtu for ovs internal port (#2247) fix gosec installation bump base image version fix ovn patches ovn db: add support for listening on pod ip (#2235) add enable-metrics arg to disable metrics (#2232) fix not building no-avx512 image (#2228) u2o feature merge to 1.10 (#2227) fix windows build add release-1.8/1.9/1.10 to scheduled e2e (#2224) cni-server: fix waiting for routed annotation (#2225) release-1.10: refactor e2e (#2213) feature: detect ipv4 address conflict in underlay (#2208) set release v1.10.8 Daviddcc KillMaster9 changluyi hzma zhangzujian * prepare for release 1.10.8 ci: add publish action ovn nb and sb can't bind lan ip in ssl merge to 1.10 (#2202) bind local ip release 1.10 (#2198) fix: ovs gc just for pod if (#2187) update docs link in install.sh (#2196) fix lr policy for default subnet with logical gateway enabled (#2177) sync delete pod process from release-1.9 reserve pod eip static route when update vpc (#2185) ignore conflict check for pod ip crd (#2188) fix base/windows build (#2172) add metric interfacerxmulticast_packets (#2156) An error occurred when netpol was added in double-stack mode (#2160) add process for delete networkpolicy start with number (#2157) northd: fix race condition in health check (#2154) add check for subnet cidr (#2153) delete nc cmd in image (#2148) delete ip crd base on podName (#2143) some optimization for provider network status update (#2135) kind: support to specify api server address/port (#2134) kubectl-ko: fix registry/version (#2133) fix: sometimes alloc ipv6 address failed sometimes ipam.GetStaticAddress 
return NoAvailableAddress fix: delete static route should consider dualstack (#2130) optimize provider network (#2099) fix removing default static route in default vpc (#2116) fix: cni response missing sandbox field (#2089) fix: eip deletion (#2118) fix policy route for subnets with logical gateway (#2108) replace klog.Fatalf with klog.ErrorS and klog.FlushAndExit (#2093) fix: del"
},
{
"data": "(#2087) check if externalIds map is nil when add node as gw for centralized subnet (#2088) fix ovs bridge not deleted cause by port link not found (#2084) fix libovsdb issues (#2070) ipset: fix unknown ipset data attribute from kernel (#2086) reflactor: add unkown config to lsp fix OVN LS/LB gc (#2069) Fix:hybrid-dpdk with vxlan tunnel modeThe OVS node does not create a VXLAN tunnel to the OVS-DPDK node (#2065) update ipv6 address for vpc peer (#2060) perf: reduce controller init time (#2054) fix: replace replace with add to override existing route (#2052) pass klog verbosity to libovsdb (#2048) use the latest base image ovs: fix reaching resubmit limit in underlay (#2038) fix: vpc and vpc nat gw not clean (#2032) fix: install the latest version (#2036) Mengxin Liu bobz965 changluyi fanriming hzma lut777 wangyd1988 zhangzujian * * set release for 1.10.7 fix: Add support for Mellanox NIC (#1999) fix pinger namespace error (#2034) increase action timeout prepare release for 1.10.7 fix: gateway route should stay still when node is pingable (#2011) iptables: avoid duplicate logging (#2028) update np name with character prefix (#2024) bump kind and node image versions (#2023) fix ovn nb/sb health check (#2019) fix ovs fdb for the local bridge port (#2014) do not need to delete pg when update networkpolicy (#1959) ci: upgrade deprecated actions (#2004) fix: make ip deletion the same as creation (#2002) fix: delete fiprule failed at first time (#1998) add check of write to ovn sb db for ovn-controller (#1989) fix grep matching device in routes (#1986) delete pod after TerminationGracePeriodSeconds (#1984) ovs: fix waiting flows in underlay networking (#1983) feature: support default vpc use nat gw pod as cust vpc (#1979) ovn db: recover automatically on startup if db corruption is detected (#1980) fix: modify src route priority (#1973) fix CVE-2022-32149 avoid concurrent subnet status update (#1976) upgrade ovs-ovn pod by generation version instead of chart version (#1960) fix metrics name (#1977) add vm pod to ipam by ip when initIPAM (#1974) validate nbctl socket path in start-controller.sh skip CVE-2022-3358 (#1972) use latest base image fix: add default deny acl (#1935) ovs: fix mac learning in environments with hairpin enabled (#1943) Fix registry for ovn-central container in install.sh (#1951) ovs: add fdb update logging (#1941) add chart version check when upgrade ovs-ovn pod fix underlay e2e testing (#1929) set leader flag when get leader set ovsdb-server vlog level to avoid warnings caused by ovs-vsctl (#1937) fix: pod mistaken ls label (#1925) ignore pod without lsp when add pod to port-group add network partition check in ovn probes update ns annotation when subnet cidr changed (#1921) fix CVE-2022-27664 fix EIP/SNAT on dynamic Pod annotation (#1918) fix: eip and nat crd can delete even if nat gw pod deleted and ipatab (#1917) fix: failed to add eip (#1898) ci: increase golangci-lint timeout (#1894) fix: gatewaynode might be null (#1896) fix: api rollback fix: diskfull may lead to wrong raft status for ovs db (#1635) kubectl-ko: turn off pipefail for ovn leader check (#1891) update dpdk base image kubectl-ko: fix trace for KubeVirt VM (#1802) fix duplicate logs for leader election (#1886) fix setting ether dst addr for dnat (#1881) refactor iptables rules (#1868) cni should handler unmont volume, when delete pod. 
(#1873) delete and recreate netem qos when update process (#1872) feat: check configuration (#1832) fix: nat gw pod should set default gw to net1 so that to access public (#1864) Kaihang Zhang Mengxin Liu Noah bobz965 hzma jeffy long.wang lut777 runzhliu shane zhangzujian * set release"
},
{
"data": "feat: reduce downtime by increasing arp cache timeout feat: reduce wait time by counting the flow num. fix: missing stopovndaemon args delete log severity for drop acl when update networkpolicy refactor: extract external routes from eip func, make it the same as (#1671) prepare release for 1.10.6 ovs: fix log file descriptor leak in monitor process (#1855) fix ovs-ovn logging (#1848) fix: dnat port not use whole words to check (#1854) fix ovn dhcp not work with ovs-dpdk (#1853) update base image fix: add and set ENABLEKEEPVM_IP=true to keep vm ip (#1702) fix: multus macvlan ipvlan use kube-ovn ipambut ip not inited in init-ipam (#1843) fix underlay e2e (#1828) fix arping error log (#1841) ko: fix kube-proxy check (#1842) base: use patch from OVN upstream (#1844) ci: switch environment to ubuntu-20.04 (#1838) ovs-ovn-dpdk ovs vhostuserclient port (#1831) windows: fix ovs/ovn versions and patches (#1830) update centralized subnet gateway ready patch operation (#1827) fix duplicate log for tunnel interface decision (#1823) update ovs/ovn version to fix hardware offload (#1821) fix: use full longest word to match full ip about dnat (#1825) update centralize subnet gatewayNode until gw is ready (#1814) initialize IPAM from IP CR with empty PodType for sts Pods (#1812) kubectl-ko: fix missing env-check (#1804) kubectl-ko: fix destination mac (#1801) abort kube-ovn-controller on leader change (#1797) avoid invalid ovn-nbctl daemon socket path (#1799) update vpc-nat-gateway base fix: warning for empty chassis fixed (#1787) bobz965 hzma long.wang lut777 zhangzujian * set release v1.10.5 prepare for release v1.10.5 delete htb qos when releated annotation is deleted (#1788) perf: fix memory leak perf: disable mlockall to reduce memory usage fix iptables for services with external traffic policy set to Local (#1773) perf: reduce metrics labels (#1784) northd: remove lookuparpip actions (#1780) fix install error fix:can not delete pod with sriov vf (#1654) dpdk-v2 --with-hybrid-dpdk Dockerfile.base-dpdk ovs-dpdk (#1754) dpdk-v2 --with-hybrid-dpdk qemu sock (#1739) feature: support exchange link names of OVS bridge and provider nic in underlay networks (#1736) support kubernetes v1.24 (#1761) use leases for leader election (#1529) fix iptables for service traffic when external traffic policy set to local (#1728) set sysctl variables on cni server startup (#1758) fix: add omitempty to subnet spec perf: replace jemalloc to reduce memory usage avoid patch interface deletion & recreation during restart (#1741) only support IPv4 snat in vpc-nat-gw when internal subnet is dual (#1747) enqueue subnets after vpc update (#1722) do not add subnet not processed by kube-ovn to vpc (#1735) dpdk-v2 --with-hybrid-dpdk qemu sock (#1739) fix: If pod has snat or eip, also need delete staticRoute when delete pod. (#1731) optimize lrp create for subnet in vpc (#1712) fix: cancel delete staticroute when it's used by NatRule (#1733) fix: wrong info when update subnet from dual to ipv4 or ipv6. 
(#1726) fix overlay MTU in vxlan/stt tunnels (#1693) Mengxin Liu hzma long.wang xujunjie-cover zhouhui-Corigine * set release 1.10.4 prepare for release 1.10.4 fix: response has no gw when create nic without default route (#1703) ignore ovsdb-server/compact error: not storing a duplicate snapshot Get latest vpc data from apiserver instead of cache (#1684) update priority range in htb qos (#1688) fix: clean vip eip snat dant fip in cleanup.sh (#1690) add upgrade-ovs script (#1681) Mengxin Liu Wang Bo bobz965 hzma xujunjie-cover zhangzujian set release 1.10.3 prepare for release 1.10.3 fix: change ovn-ic static route to policy (#1670) fix: Do not Recreate LogicalRouterPort when Vpc recreated (#1570) Delete pod if subnet of the pod's owner(sts/vm)"
},
{
"data": "(#1678) do not snat packets only for subnets with distributed gateway when external traffic policy is set to local (#1616) security: disable pprof by default (#1672) bgp: consolidate service check and use service const (#1674) fix bgp: sync service cache (#1673) fix iptables for direct routing (#1578) fix libovsdb (#1664) mount modules for auto load ip6tables moudles (#1665) ignore pod not scheduled when reconcile subnet (#1666) fix ovs-ovn not running on newly added nodes (#1661) fix get security group name by external_ids (#1663) add policy route when add subnet (#1655) Mengxin Liu Money Liu Wang Bo gugu hzma lut777 wangyd1988 * set for release 1.10.2 fix: no need routed when use v1.multus-cni.io/default-network (#1652) prepare for release 1.10.2 fix: subnet failed when create without protocol set ether dst addr for dnat on logical switch (#1512) CI: delete resources in order to avoid a long time waiting for subnet deletions. (#1643) ci: fix golangci-lint (#1639) fix: cleanup should ignore patch failed (#1626) fix no interface report to multus cni, missing in k8s.v1.cni.cncf.io/network[s]-status (#1636) Update install.sh (#1645) set networkpolicy log default to false (#1633) update policy route when join subnet cidr changed (#1638) ci: update trivy options (#1637) increase initial delay of ovs-ovn liveness probe (#1634) wait ovn-central pods running before delete ovs-ovn pods (#1627) get dbstatus for all ovn-central pod (#1619) delete \"allow\" policy route on subnet deletion (#1628) Mengxin Liu ShaPoHun halfcrazy hzma xujunjie-cover zhangzujian * monitor dns in cilium e2e (#1597) prepare for release 1.10.1 ci: build amd64 images without avx512 (#1584) update ovs health check, delete connection to ovn sb db (#1588) fix: all cluster pod will be in podadd queue (#1587) fix pod could not be ready (#1562) fix: delete pod panic when delete vm or statefulset. 
(#1565) fix: clean CRDs introduced by new vpc-nat-gateway (#1563) do not gc vm pod lsp when vm still exists (#1558) do not delete static routes on controller startup (#1560) replace ovn-nbctl daemon with libovsdb in frequent operations (#1544) fix exec cmd in vpc nat gateway (#1556) CNI: do not return route if nic is not eth0 (#1555) do not nat packets for incoming traffic when service externalTrafficPolicy is Local exit kube-ovn-controller on stopped leading (#1536) tmp cancel cilium external svc test (#1531) hzma lut777 xujunjie-cover zhangzujian * release 1.10.0 use inc-engine/recompute instead of deprecated recompute (#1528) update kind to v0.13.0 (#1530) move dumb-init from base images to kube-ovn image (#1527) fix installing dumb-init in arm64 image (#1525) optimize ovs request in cni (#1518) optimize node port-group check (#1514) logic optimization (#1521) fix defunct ovn-nbctl daemon (#1523) fix arm image (#1524) fix: keep vm's and statefulset's ips when user specified subnet (#1520) feature: add doc for tunning packages (#1513) add document for windows support (#1515) reduce ovs-ovn restart downtime (#1516) finish basic windows support (#1463) refactor logical router routes (#1500) add netem qos when create pod (#1510) handle the case of error node cidr (#1509) fix: ovs trace flow always ends with controller action (#1508) add qos e2e test (#1505) optimize IPAM initialization (#1498) test: fix flaky test (#1506) docs: update README.md synchronize yamls with installation script (#1504) feature: svc of multiple clusters (#1491) use OVS branch-2.17 (#1495) Update USERS.md (#1496) update document for mellanox hardware offload (#1494) Feature iptables eip nats splits (#1437) Update USERS.md (#1493) update github actions (#1489) update USER.md (#1492) fix: add empty chassis check in ovn db (#1484) feat: lsp forwarding external Layer-2 packets (#1487) base: add back kubectl (#1485) delete ipam record when gc lsp (#1483) fix: wrong vpc-nat-gateway arm image (#1482) fix pod annotation may override by patch (#1480) add acl doc (#1476) fix: workqueue_depth should show count not"
},
{
"data": "(#1478) add delete ovs pods after restore nb db (#1474) delete monitor noexecute toleration (#1473) add env-check (#1464) Support kubevirt vm live migrate for pod static ip (#1468) fix routes for packets from Pods to other nodes add manual compile method for ubuntu20.04 (#1461) append metrics (#1465) Annotation network_type always is geneve masquerade packets from Pods to service IP update OVS and OVN for windows windows support for cni server add kube-ovn-controller switch for EIP and SNAT docs: add USERS.md (#1454) update topology pic feature: add sb/nb db check bash script (#1441) add routed check in circulation (#1446) modify init ipam by ip crd only for sts pod (#1448) base: refactor ovn/ovs build (#1444) log: show the reason if get gw node failed (#1443) add doc for #1358 (#1440) prepare windows support for cni server modify webhook img to independent image (#1442) update alpine to fix CVE-2022-1271 fix adding key to delete Pod queue fix IPAM initialization temporary cancel the external2cluater e2e test for cilium (#1428) ignore all link local unicast addresses/routes fix error handling for netlink.AddrDel replace pod name when create ip crd (#1425) add webhook vaildate the vpc resource whether can be deleted. (#1423) We are looking forward to your PR! (#1422) support alloc static ip from any subnet after ns supports multi subnets (#1417) fix provider-networks status build ovs/ovn for windows in ci cilium e2e: deploy k8s without kube-proxy windows support for CNI add simple e2e for multus integration update e2e testing recover ips CR on IPAM initialization docs: update ROADMAP.md and MAINTAINERS create ip crd in kube-ovn-controller (#1413) add condition for triggering the deletion of redundant chassises in sbdb (#1411) fix: do not recreate port for terminating pods (#1409) update cni version to 1.0 update underlay environment requirements avoid frequent ipset update add reset for kube-ovn-monitor metrics (#1403) fix: The underlay physical gateway config by external-gw-addr when use snat&eip (#1400) add custom acls for subnet (#1395) check the cidr format whether is correct (#1396) optimize docs due to frequently asked question. (#1393) adding IP Protocol enumeration to CRD can reduce the kube-ovn Controller judgment logic (#1391) change the wechat qcode append vm deletion check (#1390) We should handle the case where the subnet protocol is handled (#1373) VIP is decoupled from port security (#1389) chore: reduce image size (#1388) docs: update the maintainer and roadmap (#1387) ci: update kind and k8s fix external egress gateway add missing link scope routes in vpc-nat-gateway update nodeips for restore cmd in ko plugin increase memory limit of ovn-central fix range loop fix probe error update script to add restore plugin cmd support dpdk (#1317) Use camel case instead of snake case add detail error when failed to create resource add restore process for ovn nb db add reset porocess for ovs interface metrics fix SNAT/PR on Pod startup optimize kube-ovn-monitor yaml Update subnet.go feat: add webhook to check subnet deletion. 
modify ipam v6 release ip problem skip ping gateway for pods during live migration don't check conflict for migration pod with only static mac add service cidr when init kubeadm docs: add provide and ns spec for multus crd update flag parse in webhook fix usage of ovn commands add check for pod update process log: rotate all logs in kube-ovn-cni and add compress keep ip for kubevirt pod docs: add integration with Corigine OVS offload fix OVS bridge with bond port in mode 6 fix: continue of deletion for del pod failed when can't found vpc or"
},
{
"data": "(#1335) feat: support DHCP Fix usage of ovn commands resync provider network status periodically Revert \"resync provider network status periodically\" use const instead the string when update gateway info, we should append old to new deploy resync provider network status periodically fix underlay subnet in custom VPC fix ips update kube-ovn CNI (#1318) delete the logic of repeated enqueueing add log to file, update upgrade script Temporarily comment out the compile and upload of the centos8 compile container. Revert \"Temporarily comment out the compile and upload of the centos8 compile\" Temporarily comment out the compile and upload of the centos8 compile container. feat: add webhook for subnet update validation optimized decision logic Use camel case instead of snake case append add cidr and excludeIps annotation for namespace feat: vpc peering connection Remove excess code chore: show install options when installing (#1293) feat: update provider network via node annotation add container compile and insmod add policy route for centralized subnet fix: replace ecmp dphash with hash by srcip (#1289) Use go to rerimplement ovn-is-leader.sh (#1243) fix: only log matched svc with np (#1287) feat: Replace command health check with k8s tcpSocket check (#1251) add 'virtual' port for vip (#1278) skip the missing of kube-dns (#1286) fix: check if taint exists before un-taint add policy route for distributed subnet in default vpc ci: add retry to fix flaky test set up tunnel correctly in hybrid mode check static route conflict fix: https://github.com/kubeovn/kube-ovn/issues/1271#issue-1108813998 transfer IP/route earlier in OVS startup delete unused constant add metric for ovn nb/sb db status add gateway check after update subnet we should first see if a condition is not going to be met add judge before use slices index prevent multiple namespace reconcile prevent multiple namespace reconcile fix: validate statefulset pod by name fix golang and base image versions add back centralized subnet active-standby mode support to add multiple subnets for a namespace prepare for next release Support only configure static mac_address Cookie Wang Fudankenshin Mengxin Liu Samuel Liu amoy-xuhao bob199x bobz965 caohuilong chestack fanriming gongysh2004 hackeren halfcrazy hzma jyjiangkai long.wang lut777 pengbinbin1 wang_yudong wangyd1988 xujunjie xujunjie-cover yi.luo zhangzujian * * set release 1.9.31 Subnet add mtu config 1.9 (#3397) add kube-ovn-controller nodeAffinity prefer not on ic gateway (#3390) Revert \"upgrade ovs-ovn pod by generation version instead of chart version (#1960)\" (#3386) delete check for existing ip cr (#3361) update go net version (#3351) Iptables wrapper 1.9 (#3341) add type assertion for ip crd (#3311) prepare for the next release changluyi hzma * ovs: load kernel module ip_tables only when it exists (#3281) pinger: increase packet send interval (#3259) prepare for the next release * add multicast snoop for release-1.9 (#3192) underlay: fix ip/route tranfer when the nic is managed by NetworkManager (#3184) chart: fix ovs-ovn upgrade (#3164) subnet: fix deleting lr policy on node deletion (#3178) delete append externalIds process in initIPAM (#3134) move unnecessary init process after startWorkers (#3124) delete append externalIds process in initIPAM (#3134) prepare for the next release changluyi hzma * update version to v1.9.28 fix u2o policy route generate too many flow tables cause oom distinguish nat ip for central subnet with ecmp and active-standby (#3100) fix 
.status.default when initializing the default vpc (#3086) ci: do not pin go version (#3073) ci: fix multus installation (#3062) Revert \"prepare for release 1.9.28\" set genev_sys_6081 tx checksum off (#3045) prepare for release 1.9.28 static ip in exclude-ips can be allocated normally when subnet's availableIPs is 0 #3031 ci: pin go version to 1.20.5 (#3034) pinger: use fully qualified domain name (#3032) uninstall.sh: fix ipset"
},
{
"data": "(#3028) fix subnet finalizer (#3004) kubectl ko performance enhance (#2975) (#2992) underlay: fix NetworkManager syncer for virtual interfaces (#2988) underlay: does not set a device managed to no if it has VLAN managed by NM (#2986) changluyi hzma * release 1.9.27 add detail comment prepare for next release Kubectl ko diagnose perf release 1.9 (#2964) underlay: sync NetworkManager IP config to OVS bridge (#2949) typo (#2952) Revert \"nm not managed only in the change provide nic name case (#2754)\" (#2944) kubectl ko perf on release-1.9 (#2947) u2o support specify u2o ip on release-1.9 (#2935) support tos inherit from inner packet underlay: do not delete patch ports created by ovn-controller (#2851) kubectl-ko: fix trace when u2oInterconnection is enabled (#2836) fix underlay access to node through ovn0 (#2847) fix MTU when subnet is using logical gateway (#2834) changluyi yichanglu * prepare for v1.9.26 fix ip statistics in subnet status (#2769) add EXCHANGELINKNAME to installation script cni-server: wait ovs-vswitchd to be running (#2759) ci: run kube-ovn e2e for underlay (#2762) nm not managed only in the change provide nic name case (#2754) update policy route when change from ecmp to active-standby (#2716) fix ovn lb gc (#2728) fix recover db failed using offical doc (#2721) bump base image base: remove patch for fixing ofpbuf memory leak (#2715) cni-server: do not perform ipv4 conflict detection during VM live migration (#2693) ovn-controller: do not send GARP on localnet for Kube-OVN ports (#2690) netpol: fix packet drop casued by incorrect address set deletion (#2677) fix pg set port fail when lsp is already deleted add subnetstatus lock for handleAddOrUpdateSubnet (#2668) prepare for next release broadcast free arp when pod setup (#2643) delete sync user (#2629) prepare for next release ci: deploy multus in thick mode (#2628) libovsdb: use monitor_cond as the monitor method (#2627) ovs: fix dpif-netlink ofpbuf memory leak (#2620) add debug image ci: fix multus installation (#2604) cut invalid OVNNBDAEMON to make log more readable (#2601) unittest: fix length assertion (#2597) bump base image ci: bump actions/upload-artifact to v3 security: clear .trivyignore underlay: get address/route before setting nm managed to no (#2592) ci: bump kind image to v1.26.3 (#2581) bobz965 changluyi hzma yichanglu zhangzujian * move ipam.subnet.mutex to caller (#2571) fix: memory leak in IPAM caused by leftover map keys (#2566) fix ovn-bridge-mappings deletion (#2564) fix go mod list (#2556) do not set device unmanaged if NetworkManager is not running (#2549) underlay: fix network manager operation (#2546) controller: fix apiserver connection timeout on startup (#2545) underlay: delete altname after renaming the link (#2539) underlay: fix link name exchange (#2516) change version to v1.9.23 fix changging the stopped vm's subnets, the vm cann't start normally (#2463) add kubevirt multus nic lsp before gc process (#2504) update for release v1.9.22 hzma zhangzujian * * ensure address label is correct before deleting it (#2487) add node to addNodeQueue if required annations are missing (#2481) remove unused subnet status fields (#2482) prepare for release v1.9.22 fix ips CR not found due to etcd error (#2472) ci: fix ovn-ic installation (#2456) do not set subnet's vlan empty on failure (#2445) set release v1.9.21 prepare for release v1.9.21 fix: missing import netlink release-1.9 cni version update from v0.9.1 => v1.2.0 (#2434) fix ovn-speaker router bug (#2433) fix chart install/upgrade e2e 
(#2426) ci: fix cilium chaining e2e (#2391) Fix routeregexp ipv6 (#2395) ci: fix ref name check (#2390) bump base image ovs: fix re-creation of tunnel backing interfaces on restart. ci: skip netpol e2e automatically for push events (#2379) e2e: run specs in"
},
{
"data": "(#2375) Daviddcc KillMaster9 changluyi zhangzujian * fix CVE-2022-41723 ci: fix default branch test (#2369) fix github actions workflows (#2363) simplify github actions workflows (#2338) use existing node switch cidr instead of the configured one (#2359) prepare for 1.9.20 do not remove link local route on ovn0 (#2341) fix encap ip when the tunnel interface has multiple addresses (#2340) enqueue endpoint when handling service add event (#2337) fix getting service backends in dual-stack clusters (#2323) fix github actions workflow fix u2o code err fix kube-ovn-controller crash on startup (#2305) fix gosec ci installation (#2295) ovn northd: fix connection inactivity probe (#2286) fix ct new config error fix network break on kube-ovn-cni startup (#2272) fix gosec installation bump base image version ovn db: add support for listening on pod ip (#2235) add enable-metrics arg to disable metrics (#2232) changluyi hzma zhangzujian * update install.sh prepare release v1.9.19 u2o feature merge to 1.9 (#2226) add release-1.8/1.9/1.10 to scheduled e2e (#2224) cni-server: fix waiting for routed annotation (#2225) feature: detect ipv4 address conflict in underlay (#2208) fix git ref name in e2e release-1.9: refactor e2e (#2210) changluyi zhangzujian * ci: add publish action add netem qos when create pod (#1510) ovn nb and sb can't bind lan ip in ssl merge to 1.9 (#2201) ci: load image to kind for helm install prepare for release v1.9.18 local ip bind to service merge to release 1.9 (#2197) fix: change condition to conditions fix: ovs gc just for pod if (#2187) update docs link in install.sh (#2196) Release 1.9 (#2181) ignore conflict check for pod ip crd (#2188) Mengxin Liu changluyi hzma lut777 tonyleu * An error occurred when netpol was added in double-stack mode (#2160) add process for delete networkpolicy start with number (#2157) prepare for release 1.9.17 security: remove private key file security: fix security issues update version to v1.9.16 in install.sh add check for subnet cidr (#2153) delete nc cmd in image (#2148) some optimization for provider network status update (#2135) kind: support to specify api server address/port (#2134) fix: sometimes alloc ipv6 address failed sometimes ipam.GetStaticAddress return NoAvailableAddress optimize provider network (#2099) Revert \"optimize provider network (#2099)\" optimize provider network (#2099) Mengxin Liu changluyi fanriming hzma wangyd1988 zhangzujian * prepare for release 1.9.16 fix policy route for subnets with logical gateway (#2108) fix lint replace klog.Fatalf with klog.ErrorS and klog.FlushAndExit (#2093) zhangzujian * prepare for release v1.9.15 fix: del createIPS (#2087) check if externalIds map is nil when add node as gw for centralized subnet (#2088) fix ovs bridge not deleted cause by port link not found (#2084) fix gosec error bump go version to 1.18 fix libovsdb issues (#2070) refactor: add unknown config to lsp (#2076) fix: replace replace with add to override existing route (#2061) fix OVN LS/LB gc (#2069) update ipv6 address for vpc peer (#2060) perf: reduce controller init time (#2054) pass klog verbosity to libovsdb (#2048) use the latest base image ovs: fix reaching resubmit limit in underlay (#2038) fix: vpc and vpc nat gw not clean (#2032) Mengxin Liu bobz965 changluyi hzma lut777 zhangzujian * set release for 1.9.14 fix pinger namespace error (#2034) prepare release for 1.9.14 fix: gateway route should stay still when node is pingable (#2011) update np name with character prefix (#2024) bump kind and node image 
versions (#2023) fix ovn nb/sb health check (#2019) fix ovs fdb for the local bridge port (#2014) do not need to delete pg when update networkpolicy (#1959) add helm and e2e test (#1992) add check of write to ovn sb db for"
},
{
"data": "(#1989) Noah hzma lut777 zhangzujian * update ovs version to branch-2.16 (#1988) fix grep matching device in routes (#1986) delete pod after TerminationGracePeriodSeconds (#1984) ovs: fix waiting flows in underlay networking (#1983) use latest base image ovn db: recover automatically on startup if db corruption is detected (#1980) prepare for release 1.9.13 fix CVE-2022-32149 avoid concurrent subnet status update (#1976) upgrade ovs-ovn pod by generation version instead of chart version (#1960) fix metrics name (#1977) add vm pod to ipam by ip when initIPAM (#1974) validate nbctl socket path in start-controller.sh skip CVE-2022-3358 (#1972) use latest base image fix: add default deny acl (#1935) ovs: fix mac learning in environments with hairpin enabled (#1943) Fix registry for ovn-central container in install.sh (#1951) ovs: add fdb update logging (#1941) Mengxin Liu hzma lut777 runzhliu zhangzujian * add chart version check when upgrade ovs-ovn pod fix underlay e2e testing (#1929) prepare for release v1.9.12 set leader flag when get leader set ovsdb-server vlog level to avoid warnings caused by ovs-vsctl (#1937) use leases for leader election (#1529) * prepare release 1.9.11 fix: pod mistaken ls label (#1925) ignore pod without lsp when add pod to port-group add network partition check in ovn probes feat: Replace command health check with k8s tcpSocket check (#1251) fix CVE-2022-27664 update ns annotation when subnet cidr changed (#1921) hzma lut777 zhangzujian * set release 1.9.10 prepare for release 1.9.10 fix: gatewaynode might be null (#1896) fix: api rollback fix: diskfull may lead to wrong raft status for ovs db (#1635) kubectl-ko: turn off pipefail for ovn leader check (#1891) fix logrotate issues fix security issues security: conform to gosec G114 (#1860) fix duplicate logs for leader election (#1886) delete and recreate netem qos when update process (#1872) Mengxin Liu hzma lut777 zhangzujian * set release 1.9.9 feat: reduce downtime by increasing arp cache timeout feat: reduce wait time by counting the flow num. 
fix: missing stop_ovn_daemon args delete log severity for drop acl when update networkpolicy base: use patch from OVN upstream (#1844) prepare release for 1.9.9 ovs: fix log file descriptor leak in monitor process (#1855) fix ovs-ovn logging (#1848) fix: add and set ENABLE_KEEP_VM_IP=true to keep vm ip (#1702) fix: multus macvlan ipvlan use kube-ovn ipam, but ip not inited in init-ipam (#1843) fix underlay e2e (#1828) fix arping error log (#1841) ko: fix kube-proxy check (#1842) ci: switch environment to ubuntu-20.04 (#1838) update centralized subnet gateway ready patch operation (#1827) fix duplicate log for tunnel interface decision (#1823) update centralize subnet gatewayNode until gw is ready (#1814) initialize IPAM from IP CR with empty PodType for sts Pods (#1812) kubectl-ko: fix missing env-check (#1804) kubectl-ko: fix destination mac (#1801) abort kube-ovn-controller on leader change (#1797) avoid invalid ovn-nbctl daemon socket path (#1799) update vpc-nat-gateway base fix: warning for empty chassis fixed (#1786) Mengxin Liu bobz965 hzma lut777 zhangzujian *"
},
{
"data": "(#1731) fix iptables for service traffic when external traffic policy set to local(#1725) optimize lrp create for subnet in vpc (#1712) fix: cancel delete staticroute when it's used by NatRule (#1733) fix: wrong info when update subnet from dual to ipv4 or ipv6. (#1726) fix: new ovn-ic static route method adapted due to old ovn version (#1718) Mengxin Liu hzma lut777 xujunjie-cover zhangzujian * set release 1.9.7 prepare for release 1.9.7 Get latest vpc data from apiserver instead of cache (#1684) update priority range in htb qos (#1688) add upgrade-ovs script (#1681) Mengxin Liu Wang Bo hzma set release 1.9.6 prepare for release 1.9.6 shim: fix diffs of commits fix: change ovn-ic static route to policy (#1670) fix: Do not Recreate LogicalRouterPort when Vpc recreated (#1570) feat: vpc peering connection Delete pod if subnet of the pod's owner(sts/vm) updated (#1678) security: disable pprof by default (#1672) bgp: consolidate service check and use service const (#1674) fix bgp: sync service cache (#1673) fix libovsdb (#1664) mount modules for auto load ip6tables moudles (#1665) ignore pod not scheduled when reconcile subnet (#1666) fix get security group name by external_ids (#1663) add policy route when add subnet Mengxin Liu Money Liu Wang Bo gugu hzma lut777 wangyd1988 * * set for release 1.9.5 fix: no need routed when use v1.multus-cni.io/default-network (#1652) prepare for release 1.9.5 CI: delete resources in order to avoid a long time waiting for subnet deletions. (#1643) set networkpolicy log default to false (#1633) update policy route when join subnet cidr changed (#1638) ci: update trivy options (#1637) increase initial delay of ovs-ovn liveness probe (#1634) wait ovn-central pods running before delete ovs-ovn pods (#1627) get dbstatus for all ovn-central pod (#1619) fix issues about OVN policy routing use policy route instead of static route (#1618) hzma xujunjie-cover zhangzujian * ci: disable cilium e2e for release prepare for release 1.9.4 update ovs health check, delete connection to ovn sb db (#1588) fix: all cluster pod will be in podadd queue (#1587) fix pod could not be ready (#1562) fix: delete pod panic when delete vm or statefulset. 
(#1565) fix: keep vm's and statefulset's ips when user specified subnet (#1520) do not gc vm pod lsp when vm still exists (#1558) fix exec cmd in vpc nat gateway (#1556) CNI: do not return route if nic is not eth0 (#1555) exit kube-ovn-controller on stopped leading (#1536) remove name for default drop acl in networkpolicy (#1522) tmp cancel cilium external svc test (#1531) move dumb-init from base images to kube-ovn image hzma lut777 xujunjie-cover * release 1.9.3 fix defunct ovn-nbctl daemon optimize ovs request in cni (#1518) optimize node port-group check (#1514) reduce ovs-ovn restart downtime (#1516) prepare for release 1.9.3 fix: ovs trace flow always ends with controller action (#1508) optimize IPAM initialization ci: skip some checks delete ipam record and static route when gc lsp (#1490) Mengxin Liu hzma zhangzujian release for v1.9.2 fix: wrong vpc-nat-gateway arm image (#1482) add delete ovs pods after restore nb db (#1474) delete monitor noexecute toleration (#1473) add env-check (#1464) append metrics (#1465) masquerade packets from Pods to service IP add kube-ovn-controller switch for EIP and SNAT ignore cni cve add routed check in circulation (#1446) modify init ipam by ip crd only for sts pod (#1448) log: show the reason if get gw node failed (#1443) modify webhook img to independent image (#1442) support keep-vm-ip and live-migrate at the same"
},
{
"data": "(#1439) update alpine to fix CVE-2022-1271 fix adding key to delete Pod queue fix IPAM initialization ignore all link local unicast addresses/routes fix error handling for netlink.AddrDel replace pod name when create ip crd support alloc static ip from any subnet after ns supports multi subnets (#1417) fix provider-networks status recover ips CR on IPAM initialization create ip crd in kube-ovn-controller (#1413) add condition for triggering the deletion of redundant chassises in sbdb (#1411) fix: do not recreate port for terminating pods (#1409) avoid frequent ipset update fix: The underlay physical gateway config by external-gw-addr when use snat&eip (#1400) add reset for kube-ovn-monitor metrics (#1403) check the cidr format whether is correct (#1396) update dockerfile to use v1.9.1 base img append vm deletion check delete repeat para update nodeips for restore cmd in ko plugin fix external egress gateway add missing link scope routes in vpc-nat-gateway increase memory limit of ovn-central fix range loop update script to add restore plugin cmd Mengxin Liu hzma lut777 wangyd1988 xujunjie-cover zhangzujian release update 1.9.1 changelog (#1361) add restore process for ovn nb db optimize kube-ovn-monitor yaml add reset porocess for ovs interface metrics fix SNAT/PR on Pod startup modify ipam v6 release ip problem skip ping gateway for pods during live migration update flag parse in webhook feat: add webhook for subnet update validation keep ip for kubevirt pod add check for pod update process fix ips update append htbqos para in crd yaml fix: replace ecmp dphash with hash by srcip (#1289) fix OVS bridge with bond port in mode 6 fix: continue of deletion for del pod failed when can't found vpc or subnet (#1335) Fix usage of ovn commands resync provider network status periodically Revert \"resync provider network status periodically\" fix statefulset Pod deletion resync provider network status periodically fix underlay subnet in custom VPC append add cidr and excludeIps annotation for namespace support to add multiple subnets for a namespace feat: update provider network via node annotation fix: only log matched svc with np (#1287) transfer IP/route earlier in OVS startup add metric for ovn nb/sb db status check static route conflict set up tunnel correctly in hybrid mode fix clusterrole in ovn-ha.yaml add gateway check after update subnet fix: validate statefulset pod by name add back centralized subnet active-standby mode Mengxin Liu chestack hzma lut777 xujunjie xujunjie-cover zhangzujian prepare for release 1.9.0 fix: liveMigration with IPv6 update networkpolicy port process Add args to configure port ln-ovn-external update check for delete statefulset pod ignore hostnetwork pod when initipam kubectl-ko: support trace Pods being created add dnsutils for base image Add new arg to configure ns of ExternalGatewayConfig update scripts for 1.8.2 Optimized decision logic add svc cidr in ovs LB for optimization add doc for gateway pod in default vpc optimize log for node port-group fix iptables rules and service e2e add kubectl-ko to docker image fix: invalid syntax error fix pod tolerations modify pod's process of update for use multus cni as default cni fix installation script add log for ecmp route fix: ipv6 traffic still go into ct append check for centralized subnet nat process move chassis judge to the end of node processing change nbctl args 'wait=sb' to 'no-wait' use different ip crd with provider suffix for pod multus nic fix service cidr in dual stack cluster add healthcheck 
cmd to probe live and ready delete frequently log support running ovn-ic e2e on macOS pinger: fix getting empty PodIPs fix cni deepcopy add cilium e2e filter used qos when delete qos add protocol check when subnet is dual-stack lint: make go-lint happy some fixes compatible with OVN"
},
{
"data": "use multus-cni as default cni to assign ip some fixes perf: jemalloc and ISA optimization fix: check np switch fix: port security fix nat rule When netpol is added to a workload, the workload's POD can be accessed using service when update subnet's execpt ip,we should filter repeat ip update wechat image fix: do not reuse released ip after subnet updated update: update 1.7-1.8 script perf: do not send traffic to ct if not designate to svc fix: add back the leader check fix port_security sync live migration vm port docs: add f5 ces integration docs update Go modules update delete operation for statefulset pod chore: update klog to v2 which embed log rotation fix: add kube-ovn-cni prob timeout append add db compact for nb and sb db deleting all chassises which are not nodes add db compact for nb and sb db add vendor param for fix list LR fix LB: skip service without cluster IP add webhook with cert-manager issued certificate security: update base ubuntu image add pod in default vpc to node port-group fix pinger's compatibility for k8s v1.16 check IPv4 gateway by resolving gateway MAC in underlay subnets add nodeSelector for vpc-nat-gateway pod do not send multicast packets to conntrack Revert \"support to set NBGlobal option mcastprivileged\" add ip address for lsp fix: no need to set address for ls to lr port add sg acl check when init cleanup command flags replace port-group named address-set with port-group since there's no ip set for lsp when create lsp support to set NBGlobal option mcastprivileged add networkpolicy support for attachment cni add process for pod attachment nic with subnet in default vpc fix security group fix the duplicate call about strings.Split deepcopy fix steps fix: do not nat route traffic fix: Skip MAC address Settings when PCI addresse is unavailable add ovn-ic e2e other CNI can be used as the default network fix: move macvlan binary to host Revert \"ci: init kind cluster before build finish\" fix ko trace add ovn-ic HA deploy fix node address set name update cni init image chore: update kind k8s to 1.22 and remove pre 1.16 support do not set bridge-nf-call-iptables use logical router policy for accessing node ci: init kind cluster before build finish reduce qos query with ovs-vsctl cmd fix read-only pointer in vlan and provider-network fix: trace in custom vpc fix read-only pointer in vlan and provider-network update docs fix LB in dual stack cluster fix: check allocated annotation in update handler support using logical gateway in underlay subnet docs: optimize cilium integration docs fix: ensure all kube-ovn components deleted before annotate pods fix bug: logical switch ts not ready Fix unpopulated CPU charts Revert \"get default subnet\" add htbqoses rbac feat: pod can use multiple nic with the same subnet add error detail add check switch for default subnet's gateway get default subnet remove node chassis annotation on cleanup update: add 1.7 to 1.8 update scripts base: add macvlan to help vpc setup fix: delete vpc-nat-gw deployment ko: check ovsdb storage status fix cleanup.sh and uninstall.sh use constant instead a string fix: check and load ip_tables module fix: multus-cni subnet allocation docs: add svg chore: update install integrate Cilium into Kube-OVN fix kubectl-ko diagnose change inspection logic from manually adding lsp to just readding pod queue fix pinger in dual stack cluster add e2e testing for dual stack underlay fix pinger and monitor in underlay networking fix kubectl plugin ko adjust the location of the log ci: push 
vpc-nat-gateway replace api for get lsp id by name docsrevise"
},
{
"data": "grafana: optimize grafana dashboard In netpol egress rules, except rule should be set to != and should not be == ci: add vpc-nat-gateway build Update OVN to version 21.06 modify kube-ovn as multus-cni problem support to set htb qos priority perf: add fastpath module for 4.x kernel add inspection perf: add stt section and update benchmark feat: optimize log fix: init node with wrong ipamkey and lead conflict fix installation scripts fix getting LSP UUID by name fix StatefulSet down scale fix vpc policy route docs: update roadmap refactor: mute ovn0 ping log and add ping details fix: wrong link for iptables fix IPAM for StatefulSet append externalIds for pod and node when upgrade feature: LoadBalancer for custom VPC feat: support vip fix VPC document fix init ipam fix: gc lb Update prometheus.md feat: support VLAN subnet in VPC ci: push dev image to separate repo fix: kubeclient timeout fix: serialize pod add/delete order perf: increase ovn-nb timeout fix gc lsp statistic for multiple subnet fix: re-check ns annotation to avoid annotations lost perf: do not diagnose external access feature: vpc support policy route reactor: remove ovn ipam options perf: switch's router port's addresses to \"router\" lint: make staticcheck happy fix e2e testing prepare for next release fix variable referrence fix typos refactor: reuse waitNetworkReady to check ovn0 and slightly improve the installation speed fix nat-outgoing/policy-routing on pod startup feat: suport vm live migration Mengxin Liu MengxinLiu azee chestack feixiang43 huangjunwei hzma lhalbert liqd luoyunhe lut777 pengbinbin1 vseeker wang_yudong wangchl01 zhangzujian * Dockerfile: fix base image version pinger: increase packet send interval (#3259) prepare for 1.8.18 release ci: pin go version to 1.20.5 (#3034) static ip in exclude-ips can be allocated normally when subnet's availableIPs is 0 #3031 prepare for 1.8.17 release add subnet match check when change subnet gatewayType from centralized to distributed (#2891) add static route for active-standby centralized subnet when use old active gateway node (#2699) prepare for next release hzma zhangzujian * ci: add publish action netpol: fix packet drop casued by incorrect address set deletion (#2677) do not set subnet's vlan empty on failure (#2445) ci: fix cilium chaining e2e (#2391) ci: fix ref name check (#2390) ci: skip netpol e2e automatically for push events (#2379) e2e: run specs in parallel (#2375) fix CVE-2022-28948 fix CVE-2022-41723 ci: fix default branch test (#2369) fix github actions workflows (#2363) simplify github actions workflows (#2338) do not remove link local route on ovn0 (#2341) fix encap ip when the tunnel interface has multiple addresses (#2340) enqueue endpoint when handling service add event (#2337) fix getting service backends in dual-stack clusters (#2323) fix github actions workflow An error occurred when netpol was added in double-stack mode (#2160) bump base image fix gosec ci installation (#2295) fix CVE-2022-41721 fix network break on kube-ovn-cni startup (#2272) fix gosec installation add release-1.8/1.9/1.10 to scheduled e2e (#2224) release-1.8: refactor e2e (#2214) prepare for release 1.8.15 fix: ovs gc just for pod if (#2187) fix: change condition to conditions do not add subnet not processed by kube-ovn to vpc (#1735) kind: support to specify api server address/port (#2134) fix: sometimes alloc ipv6 address failed sometimes ipam.GetStaticAddress return NoAvailableAddress fix lint replace klog.Fatalf with klog.ErrorS and klog.FlushAndExit (#2093) 
fix: del createIPS (#2087) fix ovs bridge not deleted caused by port link not found (#2084) fix: replace replace with add to override existing route (#2061) fix OVN LS/LB gc (#2069) perf: reduce controller init time (#2054) ovs: fix reaching resubmit limit in underlay (#2038) fix pinger namespace"
},
{
"data": "(#2034) update np name with character prefix bump kind and node image versions (#2023) fix ovn nb/sb health check (#2019) fix ovs fdb for the local bridge port (#2014) Mengxin Liu Noah changluyi lut777 tonyleu wangyd1988 zhangzujian * fix: get ecmp nodecheck back (#2016) fix: gateway route should stay still when node is pingable (#2015) do not need to delete pg when update networkpolicy (#1959) do not set bridge-nf-call-iptables add check of write to ovn sb db for ovn-controller (#1989) fix grep matching device in routes (#1986) delete pod after TerminationGracePeriodSeconds (#1984) ovs: fix waiting flows in underlay networking (#1983) use latest base image ovn db: recover automatically on startup if db corruption is detected (#1980) prepare for release 1.8.14 fix CVE-2022-32149 avoid concurrent subnet status update (#1976) modify build error fix metrics name (#1977) add vm pod to ipam by ip when initIPAM (#1974) validate nbctl socket path in start-controller.sh skip CVE-2022-3358 (#1972) use latest base image fix: add default deny acl (#1935) ovs: fix mac learning in environments with hairpin enabled (#1943) Fix registry for ovn-central container in install.sh (#1951) ovs: add fdb update logging (#1941) prepare for release v1.8.13 set ovsdb-server vlog level to avoid warnings caused by ovs-vsctl (#1937) update Go to v1.17 add network partition check in ovn probes feat: Replace command health check with k8s tcpSocket check (#1251) fix CVE-2022-27664 update ns annotation when subnet cidr changed (#1921) Mengxin Liu hzma lut777 runzhliu zhangzujian * * set release 1.8.12 prepare release 1.8.12 fix: gatewaynode might be null (#1896) fix: api rollback fix logrotate issues fix security issues security: conform to gosec G114 (#1860) fix: diskfull may lead to wrong raft status for ovs db (#1635) kubectl-ko: turn off pipefail for ovn leader check (#1891) fix ip6tables link fix duplicate logs for leader election (#1886) Mengxin Liu lut777 zhangzujian * set release 1.8.11 feat: reduce downtime by increasing arp cache timeout feat: reduce wait time by counting the flow num. fix: missing stopovndaemon args delete log severity for drop acl when update networkpolicy (#1862) prepare release for 1.8.11 ovs: fix log file descriptor leak in monitor process (#1855) fix ovs-ovn logging (#1848) fix: multus macvlan ipvlan use kube-ovn ipambut ip not inited in init-ipam (#1843) ko: fix kube-proxy check (#1842) avoid patch interface deletion & recreation during restart ci: switch environment to ubuntu-20.04 (#1838) fix base failure update base image fix base build failure update centralized subnet gateway ready patch operation fix duplicate log for tunnel interface decision (#1823) update version to v1.8.10 (#1819) do not check static route conflict (#1817) update centralize subnet gatewayNode until gw is ready (#1814) initialize IPAM from IP CR with empty PodType for sts Pods (#1812) abort kube-ovn-controller on leader change (#1797) avoid invalid ovn-nbctl daemon socket path (#1799) do not wait dynamic address for pod (#1800) update vpc-nat-gateway base append delete static route for sts pod (#1798) perf: fix memory leak perf: disable mlockall to reduce memory usage set sysctl variables on cni server startup (#1758) fix: add omitempty to subnet spec (#1765) fix CVE-2022-21698 add logrotate for kube-ovn log (#1740) fix: cancel delete staticroute when it's used by NatRule (#1733) fix: wrong info when update subnet from dual to ipv4 or ipv6. 
(#1726) Get latest vpc data from apiserver instead of cache (#1684) Mengxin Liu Wang Bo bobz965 hzma xujunjie-cover zhangzujian * set release 1.8.9 prepare for release 1.8.9 [PATCH] Delete pod if subnet of the pod's owner(sts/vm) updated (#1678) security: disable pprof by default (#1672) update ovs health check, delete connection to ovn sb"
},
{
"data": "(#1588) Mengxin Liu Wang Bo hzma set release 1.8.8 prepare for release 1.8.8 CI: delete resources in order to avoid a long time waiting for subnet deletions. (#1643) add ovn-ic HA deploy set networkpolicy log default to false hzma lut777 * prepare for release 1.8.7 cni handler: do not wait routed annotation for net1 (#1586) fix adding static route after LSP deletion (#1571) fix duplicate netns parameter (#1580) do not gc vm pod lsp when vm still exists (#1558) fix exec cmd in vpc nat gateway (#1556) CNI: do not return route if nic is not eth0 (#1555) exit kube-ovn-controller on stopped leading (#1536) remove name for default drop acl in networkpolicy (#1522) move dumb-init from base images to kube-ovn image fix defunct ovn-nbctl daemon hzma zhangzujian * release 1.8.6 reduce ovs-ovn restart downtime (#1516) prepare for release 1.8.6 fix: ovs trace flow always ends with controller action (#1508) optimize IPAM initialization Mengxin Liu zhangzujian ci: skip some checks delete ipam record and static route when gc lsp (#1490) CVE-2022-27191 (#1479) add delete ovs pods after restore nb db (#1474) delete monitor noexecute toleration (#1473) add env-check (#1464) append metrics (#1465) add kube-ovn-controller switch for EIP and SNAT add routed check in circulation (#1446) modify init ipam by ip crd only for sts pod (#1448) ignore cni cve log: show the reason if get gw node failed (#1443) update alpine to fix CVE-2022-1271 fix adding key to delete Pod queue fix IPAM initialization ignore all link local unicast addresses/routes fix error handling for netlink.AddrDel replace pod name when create ip crd support alloc static ip from any subnet after ns supports multi subnets fix provider-networks status recover ips CR on IPAM initialization Mengxin Liu hzma zhangzujian release update 1.8.4 changelog (#1414) create ip crd in kube-ovn-controller (#1412) fix: add condition for triggering the deletion of redundant chassises in sbdb (#1411) fix: do not recreate port for terminating pods (#1409) avoid frequent ipset update fix: The underlay physical gateway config by external-gw-addr when use snat&eip (#1400) add reset for kube-ovn-monitor metrics (#1403) check the cidr format whether is correct (#1396) update dockerfile to use v1.8.3 base img append vm deletion check update nodeips for restore cmd in ko plugin fix external egress gateway update ip assigned check add missing link scope routes in vpc-nat-gateway increase memory limit of ovn-central fix range loop hzma lut777 wangyd1988 xujunjie-cover zhangzujian release update 1.8.3 changelog (#1360) add restore process for ovn nb db optimize kube-ovn-monitor yaml add reset porocess for ovs interface metrics deepcopy fix steps fix SNAT/PR on Pod startup add check for pod update process fix ips update fix cni deepcopy fix: replace ecmp dphash with hash by srcip (#1289) keep ip for kubevirt pod fix OVS bridge with bond port in mode 6 fix: continue of deletion for del pod failed when can't found vpc or subnet (#1335) Fix usage of ovn commands ignore cilint resync provider network status periodically Revert \"resync provider network status periodically\" fix statefulset Pod deletion resync provider network status periodically feat: optimize log optimize log for node port-group append add cidr and excludeIps annotation for namespace support to add multiple subnets for a namespace feat: update provider network via node annotation fix: only log matched svc with np (#1287) transfer IP/route earlier in OVS startup add metric for ovn nb/sb db status check 
static route conflict set up tunnel correctly in hybrid mode fix clusterrole in"
},
{
"data": "add gateway check after update subnet add back centralized subnet active-standby mode update networkpolicy port process update check for delete statefulset pod chestack hzma lut777 xujunjie-cover zhangzujian release: update 1.8.2 changelog add log for ecmp route fix pod tolerations fix installation script append check for centralized subnet nat process change nbctl args 'wait=sb' to 'no-wait' move chassis judge to the end of node processing use different ip crd with provider suffix for pod multus nic use multus-cni as default cni to assign ip fix: do not reuse released ip after subnet updated delete frequently log pinger: fix getting empty PodIPs add protocol check when subnet is dual-stack filter used qos when delete qos fix: check np switch When netpol is added to a workload, the workload's POD can be accessed using service when update subnet's execpt ip,we should filter repeat ip fix: add back the leader check security: upadate base image update delete operation for statefulset pod chore: update klog to v2 which embed log rotation fix: add kube-ovn-cni prob timeout append add db compact for nb and sb db add vendor param for fix list LR deleting all chassises which are not nodes add db compact for nb and sb db fix pinger's compatibility for k8s v1.16 fix LB: skip service without cluster IP security: update base ubuntu image add pod in default vpc to node port-group add sg acl check when init fix: no need to set address for ls to lr port fix ko trace fix read-only pointer in vlan and provider-network fix read-only pointer in vlan and provider-network fix: trace in custom vpc fix: multus-cni subnet allocation fix LB in dual stack cluster prepare for release 1.8.2 fix: check allocated annotation in update handler fix bug: logical switch ts not ready fix: ensure all kube-ovn components deleted before annotate pods Revert \"add check switch for default subnet's gateway\" add check switch for default subnet's gateway remove node chassis annotation on cleanup fix: delete vpc-nat-gw deployment fix: serialize pod add/delete order change inspection logic from manually adding lsp to just readding pod queue add inspection fix: check and load ip_tables module fix cleanup.sh and uninstall.sh fix kubectl-ko diagnose fix pinger in dual stack cluster add e2e testing for dual stack underlay fix pinger and monitor in underlay networking fix kubectl plugin ko replace api for get lsp id by name In netpol egress rules, except rule should be set to \"!=\" and should not be \"==\" modify kube-ovn as multus-cni problem Mengxin Liu hzma lut777 wang_yudong zhangzujian * release: prepare for 1.8.1 fix: init node with wrong ipamkey and lead conflict fix installation scripts fix getting LSP UUID by name fix StatefulSet down scale refactor: mute ovn0 ping log and add ping details fix: wrong link for iptables fix IPAM for StatefulSet append externalIds for pod and node when upgrade perf: increase ovn-nb timeout fix: re-check ns annotation to avoid annotations lost perf: do not diagnose external access reactor: remove ovn ipam options perf: switch's router port's addresses to \"router\" fix gc lsp statistic for multiple subnet fix e2e testing fix variable referrence fix nat-outgoing/policy-routing on pod startup Mengxin Liu hzma lut777 zhangzujian fix adding OVN routes in dual stack Kubernetes release: prepare for"
},
{
"data": "add update process and adding label to ls/lsp/lr fix: VLAN CIDR conflict check security: update base image update provider network CRD fix external-vpc perf: use link alias to filter packet security: fix CVE-2021-3538 add print columns for subnet/vpc/vpc-nat-gw crd improve support for dual-stack initialize ipsets on cni server startup delete residual ovs internal ports simplify vlan implement fix: ovn-northd svc flip flop add container run command for runtime containerd fix subnet conflict check for node address feat: read interface in installation from environment update encap ip by node annotation periodic fix ipset on pod creation/deletion add ready status for provider network avoid Pod IP to be the same with node internal IP remove subnet's `spec.underlayGateway` field add support for custom routes Add missing metadata directive in VpcNatGateway example use util.hostNameEnv instead KUBENODENAME chore: change wechat image fix typo perf: add fastpath and tuning guide update node labels and provider network's status.readyNodes when provider network is not initialized successfully in a node fix issues in underlay networking add external vpc switch update versions in docs and yamls update Go to version 1.16 fix IPv6-related issues ci: use stable version fix: bad udp checksum when access nodeport fix port-security, address parameters should be merged into one docs: optimize description ensure provider nic is up fix uninstall.sh some optimizations fix gofmt lint fix multi-nic.md fix dual stack cluster created by kind remove external egress gateway from additionalPrinterColumns fix default bind socket of cni server if the string of ip is empty,program will die if the string of ip is empty,program will die fix underlay networking on node reboot add judge before use the index about cidrBlocks and ips add validation check function docs: add wechat qcode feat: security group delete subnet AvailableIPs and UsingIPs para fix: panic when node has nil annotations append pod/exec resource for vpc nat gw update comment for SetInterfaceBandwidth update qos process fix LoadBalancer in custom VPCs Support Pod annotations control port mirroring fix docs externalOvnRouters is ok with 0 delete attachment ips fix externalids:podnetns add switch for network policy support fix subnet e2e ignore empty strings when counting lbs fix iptables fix issue #944 fix openstackonkubernetes doc bugs add switch for gateway connectivity check fix cleanup.sh security: fix CVE-2021-33910 delete ecmp route when node is deleted fix: if nftables not exists do no exit update wechat contract method delete overlapped var subnet add designative nat ip process for centralized subnet fix ipsets update underlay e2e testing match chassis until timeout fix CRD provider-networks.kubeovn.io fix: set vf mac update qos ingresspolicingburst add field defaultNetworkType in configmap ovn-config keep subnet's vlan empty if not specified delete ecmp route when node is not ready add del learned routes when remove ovnic [kubectl-ko] support trace in underlay networking fix subnet available IPs fix bug for deleting ovn-ic lrp failed add node internal ip into ovn-ic advertise blacklist underlay/vlan network refactoring chore: update ovn to 21.03 security: fix CVE-2021-3121 list ls with label to avoid listing ts failure Update log error delete the process of ip crd delete in cni delete request update networkpolicy process modify func name Additonal to Additional fix uninstall.sh execution in OVS pods perf: enable tx offload again as upstream 
already fix it label lr, ls and lsp, and add label filter when gc security: add go build security options feat: ko support cluster operations status/kick/backup docs: update docs about vlan/internal-port/kubeconfig add judge before use slices's index update kind to version"
},
{
"data": "adapt to vfio-pci driver fix IP/route transfer on node reboot add master check when a node adding to a cluster and config sb/nb address update installation scripts enable hw-offload do not delete statefulset pod when update pod fix: node route should filter out 'vpc' feat: lb switch docs: show openstack docs and docker image status fix: clean up gateway chassis list for external gw add doc for openstack/kubernetes hybrid deploy configure OVS internal port after dummy interface some fixes in vlan initialization clean up vpc service feat: vpc load balancer fix: lsp may lost when server pressure is high fix: check crds when controller start start evpc ph1 start evpc ph1 ci: retry arm build when failed update ecmp notes add interface name in cni response add nicType for offload 1.Support to specify node nic name 2.Delete extra blank lines ignore update pod nic annotation when is not nil set default UnderlayGateway to true in vlan mode unify logical entity list funcs (#863) ci: remove dpdk ci correct vlan e2e testing fix: remove rollout check adapt internal tcpdump update docker buildx install method fix: remove wait ovn sb fix: ci issues fix: cleanup kube-ovn-monitor resource fix multi-nic.md fix: acl overlay issues ci: split ovn/ovs into base image add judge before use slices's index update version to v1.7 in docs update master version to v1.8.0 Mengxin Liu Ruijian Zhang Tobias feixiang43 hzma lhalbert lut777 pengbinbin pengbinbin1 wang_yudong xieyanker xuhao zhang.zujian zhangzujian * release: prepare for 1.7.3 fix: disable periodically gc fix installation scripts fix StatefulSet down scale fix: init node with wrong ipamkey and lead conflict refactor: mute ovn0 ping log and add ping details fix: wrong alias for iptables fix: northd probe issues fix IPAM for StatefulSet append externalIds for pod and node when upgrade security: update base image fix gc lsp statistic for multiple subnet fix: kubeclient timeout fix: serialize pod add/delete order refactor: reuse waitNetworkReady to check ovn0 and slightly improve the installation speed perf: increase ovn-nb timeout fix: re-check ns annotation to avoid annotations lost perf: do not diagnose external access reactor: remove ovn ipam options perf: switch's router port's addresses to \"router\" fix e2e testing fix variable referrence fix nat-outgoing/policy-routing on pod startup Mengxin Liu hzma lut777 zhangzujian fix: VLAN CIDR conflict check perf: use link alias to filter packet security: fix CVE-2021-3538 prepare for release v1.7.2 initialize ipsets on cni server startup delete residual ovs internal ports fix: ovn-northd svc flip flop fix subnet conflict check for node address update comment for SetInterfaceBandwidth update encap ip by node annotation periodic delete subnet AvailableIPs and UsingIPs para fix ipset on pod creation/deletion add ready status for provider network avoid Pod IP to be the same with node internal IP update node labels and provider network's status.readyNodes when provider network is not initialized successfully in a node fix issues in underlay networking fix IPv6-related issues ci: use stable version fix: bad udp checksum when access nodeport ensure provider nic is up fix uninstall.sh fix gofmt lint if the string of ip is empty,program will die fix dual stack cluster created by kind fix default bind socket of cni server update kind to v0.11.1 fix underlay networking on node reboot append pod/exec resource for vpc nat gw fix: panic when node has nil annotations update qos process delete attachment ips fix 
external_ids:pod_netns fix subnet e2e ignore empty strings when counting lbs fix iptables fix image version fix cleanup.sh security: fix CVE-2021-33910 delete ecmp route when node is deleted fix: if nftables not exists do not exit delete overlapped var subnet match chassis until timeout update qos ingress_policing_burst fix ipsets update underlay e2e testing fix CRD provider-networks.kubeovn.io Mengxin Liu Ruijian Zhang feixiang43 hzma lut777 zhangzujian * ready for release"
},
{
"data": "add field defaultNetworkType in configmap ovn-config keep subnet's vlan empty if not specified update ecmp notes delete ecmp route when node is not ready delete the process of ip crd delete in cni delete request fix subnet available IPs [kubectl-ko] support trace in underlay networking underlay/vlan network refactoring adapt internal tcpdump fix bug for deleting ovn-ic lrp failed add node internal ip into ovn-ic advertise blacklist security: fix CVE-2021-3121 feat: ko support cluster operations status/kick/backup fix uninstall.sh execution in OVS pods perf: enable tx offload again as upstream already fix it security: add go build security options fix IP/route transfer on node reboot add master check when a node adding to a cluster and config sb/nb address do not delete statefulset pod when update pod fix: node route should filter out 'vpc' some fixes in vlan initialization fix: clean up gateway chassis list for external gw ci: remove dpdk ci correct vlan e2e testing configure OVS internal port after dummy interface fix: lsp may lost when server pressure is high 1.Support to specify node nic name 2.Delete extra blank lines ignore update pod nic annotation when is not nil set default UnderlayGateway to true in vlan mode fix: remove rollout check fix: remove wait ovn sb fix: cleanup kube-ovn-monitor resource fix: acl overlay issues update version to v1.7 in docs Mengxin Liu Ruijian Zhang hzma lut777 xuhao zhangzujian * prepare for release v1.7.0 diagnose: check sa related resource fix: do not nat route traffic fix: release ip addresses even if pods not found fix typo docs: add description of custom kubeconfig fix: add address_set to avoid error message optimize Makefile update vlan document add label to avoid deleting other delete unused log add ovs internal-port for pod network interface support underlay mode with single nic support underlay mode with single nic fix: add node to pod allow acl traffic rate for multus nic add ovs internal-port for pod network interface Add maintainers add e2e tests for external egress gateway fix e2e testing on macOS ci: fix lint and scan error fix: check if provider network exists update subnet document rename ExternalGateway to ExternalEgressGateway fix installation doc fix: forward policy to accept ci: fix lint error traffic rate for multus nic refactor: optimize service.go and subnet.go Check and Fetch all ValidatePodNetwork errors add judge about nic address implement new feature: external gateway start_ic should run regardless of ts port add judge before use index specify ovs ops on diff nodes fix mss rule Get node info from listerv1.NodeLister(index) Clean up the wrong log refactor: optimize subnet.go Optimise the redundancy code Handler the parse config error before used ci: remove 3-master e2e Remove the unnecessary rm command Use localtime when the kube-ovn installed Fix the different time from container and host add issue template add bgp doc support afisafis feat: support graceful restart fix: del might panic if duplicate delete fix: lr-route for eip using nic-ip, and not external gateway addr. 
feat: support announce service ip Fix some minor nits for docs add bgp options in bgp.md add Opstk&K8s ic doc add holdtime function fix: do not re-generate ts port fix: ignore root path doc ci fix: do not gc learned routes feat: add vxlan in README.md fix: getleaderip always return first node ip fix: remove tty error notification fix ovn nb reconnect add docs for 'multus ovn network' add vpc nat gateway docs fix: static route for default multus network feat: support vxlan tunnel append delete ovn-monitor in ovn.yaml split ovn-central and ovn-monitor Fix mount the systemid path handle update deployment vpc-nat-gw refactor: remove function genNatGwDeployment's return error Update crd vpc-nat-gateways.kubeovn.io for pre-1.16 fix incorrect method for gateway node judgement Fix the 'multus how to use' link fix multi nic fix duplicate imports fix: compatible with JSON format fix: leader may change during startup, use cluster connection to set"
},
{
"data": "fix SNAT on pod startup fix development guide fix gofmt fix: configure nic failed when ifname empty fix: port does not support vlan tag update fix build dev image support hybrid mode for geneve and vlan remove extra space fix: compatible with no norhtd svc fix chassis check for node optimization for ovn/ovs status metric fix: release norhtd lock when power off add single node e2e fix get pod attachment net support ovn defautl attach net add network-attachment-definitions clusterRole feat: multus ovn nic update node ip when upgrade to dualstack add details for prerequisite Add Ecmp Static Route for centralized gateway fix: disable offload if geneve port exists disable offload for genevsys6081 refactor: optimize ovn command when error exists add net-attach-def ClusterRole add lsp with external_id feat: multus ovn nic fix: check ovn0 status livenessprobe fail if ovn nb/ovn sb not running fix: disable checksum offload for ovn0 to prevent kernel issue ignore ip6tabels check for v4 hostIP improve the code style of [import group ordering] fix wrong sequence update arm64 build fix: restart ovn-controller to force update flows fix: disable checksum validation Use public network effective image update usingips check when update finalizer for subnet fix dependency Update vendor. trim space the port_binding's output refactor: remove unnecessary config logic update maintainers docs: deprecated webhook fix: add missing ovn-ic binary chore: change action name chore update artworks fix: delete chassis_private when delete node Add 'kubectl ko trace' command's default namespace Add 'kubectl ko trace' command's default namespace perf: reclaim heap memory after compaction. remove the old script docs: add CNCF description fix: gc not exist node error perf: use new option to decrease ovn-sb size fix: return err docs: add faq section add vpc nat gateway Dockerfile feat: vpc nat gateway add node address allocate check when init update upgrade for ovn-default and join subnet fix: lint error fix: add missing ovn-ic-db schema update subnet ip num calculate fix: masq traffic to ovn0 from other nodes refactor: reduce duplicated GetNodeInternalIP function chore: update go version chore: move build dependency from alauda to kubeovn feat: support set default gateway in install script docs: fix typos Update install-pre-1.16.sh Update install.sh go import repo change to kubeovn feat: vpc nat gateway Resolving typo. 
filter repeat exclude ips modify ip count for dual docs: add ARCHITECTURE.MD refactor: reduce duplicated function fix: add dpdk pod name Update cleanup.sh Update cleanup.sh test: add service e2e modify test problem fix: kube-proxy check ovn-central: set default db addr same with leader node to fix nb and sb error 'bind: Address already in use' fix: reset ovn0 addr tests: add e2e for ofctl/dpctl/appctl ci: replace image docs: clarify dpdk usage scenario ci: update kind version and set timeout Update install-pre-1.16.sh Update install.sh refactor: remove duplicated call Update kubectl-ko Fix missing square brackets in curl ipv6 Modify the health check for kube-proxy port, compatible with ipv6 Update controller.go Fix: remove IsNotFound when get configmap external gateway Fix: check kube-proxy's 10256 port healthz fix: ip6tables check error Add MAINTAINERS file add vpcs && vpcs/status clusterRole Update install-pre-1.16.sh delete connect to ovsdb for ovn-monitor cni-bin-dir,cni-conf-dir configurable Fix https://github.com/alauda/kube-ovn/issues/655 Update install.sh Error: unknown command \"ko\" for \"kubectl\" Fix: wrong split in FindLoadbalancer function vlan nic support regex fix underlay gateway flood logs fix: check required module before start docs: add underlay docs chore: update ovn to 20.12 and ovs to"
},
{
"data": "prepare for next release fix: make sure northd leader change fix: make sure ovn-central is updated one by one fix: restart when init ping failed fix: increase raft timer to avoid leader flap pass golangci-lint add golangci-lint to github actions fix pod terminating not recycle ip when controller not ready fix: add new iptable cleanup commands modify static gw changed problem Fix wait pod network ready take long time fix: when address is empty, skip route/nat deletion fix: update ipam cidr when subnet changed modify test problem for dual-stack upgrade Amye Scavarda Perrin JinLin Fu Mengxin Liu Wan Junjie Yan Wei Yan Zhu caoyingjun chestack cmj danieldin95 halfcrazy hzma luoyunhe1 lut777 pengbinbin1 sayicui wangyudong withlin xieyanker zhangzujian * prepare release for v1.6.3 fix: do not nat route traffic fix: release ip addresses even if pods not found security: fix crypto CVE fix: add address_set to avoid error message fix: add node to pod allow acl Handler the parse config error before used fix: del might panic if duplicate delete fix: do not re-generate ts port fix: getleaderip always return fist node ip fix: do not gc learned routes fix: remove tty error notification fix ovn nb reconnect perf: reclaim heap memory after compaction. fix: leader may change during startup, use cluster connection to set options. fix SNAT on pod startup Mengxin Liu Yan Zhu caoyingjun chestack zhangzujian * release 1.6.2 fix: configure nic failed when ifname empty remove extra space fix chassis check for node fix: compatible with no norhtd svc fix: release norhtd lock when power off fix: disable offload if geneve port exists disable offload for genevsys6081 rebuild to fix openssl cve fix: check ovn0 status ignore ip6tabels check for v4 hostIP livenessprobe fail if ovn nb/ovn sb not running fix: disable checksum offload for ovn0 to prevent kernel issue add node address allocate check when init update arm64 build fix: restart ovn-controller to force update flows fix: disable checksum validation update usingips check when update finalizer for subnet Mengxin Liu danieldin95 halfcrazy hzma lut777 fix: add missing ovn-ic binary release for 1.6.1 fix: delete chassis_private when delete node chore: update ovn to 20.12 ovs to 2.15 refactor: reduce duplicated function fix: masq traffic to ovn0 from other nodes ovn-central: set default db addr same with leader node to fix nb and sb error 'bind: Address already in use' fix: reset ovn0 addr Fix: wrong split in FindLoadbalancer function fix underlay gateway flood logs fix: check required module before start fix: make sure northd leader change fix: restart when init ping failed fix pod terminating not recycle ip when controller not ready fix: add new iptable cleanup commands Fix wait pod network ready take long time fix: when address is empty, skip route/nat deletion fix: update ipam cidr when subnet changed prepare for 1.6.1 move build dependency from alauda to kubeovn update upgrade for ovn-default and join subnet update subnet ip num calculate fix: ip6tables check error delete unused import packet filter repeat exclude ips modify ip count for dual modify test problem add vpcs && vpcs/status clusterRole delete connect to ovsdb for ovn-monitor modify static gw changed problem modify test problem for dual-stack upgrade Mengxin Liu Wan Junjie Yan Zhu cmj hzma wangyudong xieyanker release: 1.6.0 docs: add docs for vpc fix typo ci: update go version to 1.15 Fix: replace the command to run the script via 'sh' with 'bash' Fix the default mtu parameter's describe modify 
network policy process upgrade for subnet from single protocol to dual-stack add network policy adapt for dual-stack feat: update ovn to 20.09 docs: prepare docs for"
},
{
"data": "release perf: add pprof to pinger doc for dual-stack Update the container nic name use the CNI_IFNAME parameter which passed by kubelet ci: enable docker experimental feature ci: build multi arch image <fix>(np) fix mulit np rule and gateway bug fix start-db.sh echo message fix: iface check error fix: add missing ping due to deb build fix: find iface by full match first then regex match fix: livenessProb/readinessProb might conflict when run logrotate at same time modify subnet and ip crds modify service vip parse error update vendor update client-go fix: np with multiple rules modify loop error for get metrics diagnose: add more diagnose info ci: trigger action when yamls change fix: ha e2e failed fix: allow traffic to gateway fix: cni-server default encap ip use right interface ip feat: change default build image to ubuntu add build for dualstack feat: distributed eip Add CNI modify for dualstack Debian: Add debian docker image support Add adaption for dualstack, part of daemon process. chore: reduce binary size modify build problem Append ip monitor to document license: fix felix dir feat: support advertise subnet route Add IP Num Alert Add adaption for dualstack, part of controller process. convert ip to string add pod static ip validate chore: add COC and roadmap fix: move felix to self repo to remove bird license Add license scan report and status fix: default network release for 1.5.2 fix: ovn-ic support ssl fix: nat rules can be modified fix: remove svc cidr routes ci: specify ubuntu version to make github action happy fix: specify exec container to mute warning message feat: remove cluster ip dependency for ovn/ovs components fix: add resources limits to avoid eviction fix: vpc static route manage fix: validate vpc subnet Fix external-address config description Fix the problem of confusion between old and new versions of crd fix: ovn-central check if it exits in NODE_IPS fix: check ipv6 requirement before start feat: add ovs/ovn log rotation add node ping total count metric diagnose: add ovs-vsctl show to diagnose results fix: nat rules fix: masq other nodes to local pod to avoid nodrport triangle traffic Update install.sh to allow dpdk limits configuration (#546) format test: e2e uses IPVS cluster by default chore: update go version to 1.15 fix: tolerate all taints feature: add vpc static route fix: cleanup script error docs: modify eip config description security: remove sqlite to mute cve warning test: add e2e for kubectl-ko feat: pinger can return exit code when failed fix: nat traffic that from host to svc docs: new feat for disable-ic, regex iface and pod bind subnet sync the default subnet of ns by vpc's status fix: devault vpc lb/dns fix: shutdown vpc workqueue fix: subnet CIDRConflict fix: subnet bind to ns feature: add vpc crd Release and gc the resources in vpc fix: gc logic router gc and clean vpc Remove the VPC while removing the default subnet feature: support custom vpc chore: refactor log feat: iface support regexp feat: support disable interconnection for specific subnet modify review problems docs: v1.5.1 changelog perf: accelerate ic and ex gw update fix: missing version date fix: check multicast and loopback subnet monitor: refactor grafana dashboard docs: do not allow install to namespace other than kube-system update review problems for ovn_monitor monitor: add more dashboard chore: add vendor Updated Dockerfile.dpdk1911 to use Centos8 and"
},
{
"data": "fix: CodeQL scan warning fix: ipt wrong order and add cluster route opt: only allow specifies default subnet chore: reduce image size feature: Support for namespace binding multiple subnets docs: fix multi nic subnet options docs: add pinger/controller/cni metrics fix: add default ssl var for compatibility Add monitor doc fix: ipv6 network format when update subnet fix: ipv6 len mismatch chore: add version info metrics: add ovs client latency metrics Add OVN/OVS Monitor docs: performance test method fix: wrong port porto for udp docs: add descriptions of local files ci: add github code scan fix: do not adv join cidr when enable ovn-ic perf: remove default acl rules prepare for next release fix: use internal IP when node connect pod ci: change to docker buildx action fix: delete pod when marked with deletionTimestamp fix: remove not alive pod in pg Emma Kenny Mengxin Liu MengxinLiu Wan Junjie emmakenny feixiang fossabot hzma luoyunhe1 wiwen xieyanker * release for 1.5.2 fix: nat rules can be modified fix: add resources limits to avoid eviction ci: specify ubuntu version to make github action happy fix: remove svc cidr routes Fix the problem of confusion between old and new versions of crd Fix external-address config description fix: ovn-central check if it exits in NODE_IPS fix: check ipv6 requirement before start feat: add ovs/ovn log rotation diagnose: add ovs-vsctl show to diagnose results add node ping total count metric fix: tolerate all taints chore: update go version to 1.15 fix: masq other nodes to local pod to avoid nodrport triangle traffic Update install.sh to allow dpdk limits configuration (#546) prepare for 1.5.2 fix: cleanup script error security: remove sqlite to mute cve warning chore: refactor log fix: nat traffic that from host to svc feat: iface support regexp Mengxin Liu emmakenny hzma xieyanker release 1.5.1 opt: only allow specifies default subnet feature: Support for namespace binding multiple subnets perf: accelerate ic and ex gw update fix: check multicast and loopback subnet fix: CodeQL scan warning fix: ipt wrong order and add cluster route fix: add default ssl var for compatibility fix: broken rpm link fix: ipv6 network format when update subnet fix: ipv6 len mismatch fix: wrong port porto for udp fix: do not adv join cidr when enable ovn-ic perf: remove default acl rules fix: use internal IP when node connect pod ci: change to docker buildx action fix: delete pod when marked with deletionTimestamp fix: remove not alive pod in pg Mengxin Liu * release: prepare for release 1.5.0 perf: use podLister to optimize k8s calls chore: enable ssl to default ci tests security: change ovsdb file access to 600 docs: improve hw-offload feat: support db ssl communication diagnose: show nb/sb/node info fix: pinger diagnose should use cmd args fix: ipv6 get portmap failed again fix: ipv6 get portmap failed fix: delay mv cni conf to when cniserver is ready chore: update kind and kube-ovn-cni updateStrategy monitor: add cni grafana dashboard monitor: add more kube-ovn-cni metrics feat: update pinger dashboard fix: issues with vlan underlay gateway feat: set more metadata to interface external_ids feat: grace stop ovn-controller refactor: fix bridge-mappings and refactor vlan code fix: allow mirror config update fix: cleanup v6 iptables and ipset docs: add gateway docs and optimize others feat: integrate ovn sfc feat: support pod snat prepare for next release fix: ovn-ic-db restart failed fix: stop ovn-ic when disabled fix: use nodeName as chassis hostname Mengxin Liu 
prepare for 1.4 release fix: do not gc learned routes chore: add psp perf: apply udp improvement chore: sync pre-1.16 install.sh ci: use go 1.15 fix: add prob timeout to wait script finish resolve review problem chore: suppress verbose logs fix: do not gc ic logical_switch fix: only gc VIF type logical_switch_port docs: update docs chore: add back lflow reduction optimization chore: update ovs to"
},
{
"data": "fix: remove duplicated gcLogicalSwitch fix: modify src-ip route priority fix: missing session lb to logical switch feat: ovn-ic integration fix:resolve gosec check problem feat: do not perform masq on external traffic chore: fix patch failure fix: subnet acl might conflict if allowSubnets and subnet cidr cover each other feat: acl log drop packets chore: remove juju log dependency feat: gw switch from overlay to underlay chore: prepare for 1.4 release fix: prevent update failed logs fix: ko use external-ids to find related nic fix: forward accept rules Mengxin Liu hzma chore: add build date release: update 1.3.0 docs fix: call appendMssRule function to resolved mss according problem dpdk: add kmod, pdump and proc-info tools fix: ci image tags chore: optimize dpdk build docs: add hw-offload docs and resolve some issues fix: if sriov device, do not delete the host nic fix: use keymutex to serialize pod add/delete operation feat: assign a pod as the gw ci: add arm build to normal ci process ci: add unfixed cve ci: arm64 build accelerate chore: add logs to sriov interface ci: add ipv6 install e2e feat: recycle lsp at runtime fix: qos error fix: variable error ci: modify cache usage ci: save ci time chore: use j2 to render different kind.yaml fix: set qlen for ovn0 prepare for 1.3 release chore: update build.sh fix: log error chore: check ovn-sb connectivity from ovn-ovs probe fix: available ips calculation issues perf: add hw offload docs: add gateway qos doc ci: remove master taint chore: update cni dependencies feat: session service Revert \"perf: use policy-route to replace src-ip route\" Revert \"fix: ipv6 policy route\" Revert \"fix: reset address_set when delete subnet\" fix: reset address_set when delete subnet test: statefulset without ippool match apps/ statefulset fix: ipv6 policy route feat: support gw qos perf: use policy-route to replace src-ip route Solve the problem of non-standard statefulset creation mode fix: arm64 build missing env action: use commit as image tag Add libatomic to docker image chore: save disk space when building chore: change crd form v1beta1 to v1 kubectl-ko: add ovs-tracing info pinger: add metrics to resolve external address chore: update ovn to 20.06 update changelog fix some version in docs fix: rename variable fix: minor fix feat: use never used address first to reduce conflict ci: use tmpfs to accelerate e2e fix: create/delete order might lead ip conflict ci: do not push image when pr clean up all white noise security: update yum repo fix node's annottaions overwrited incorrectly Fix typo in multi-nic.md Userspace-CNI updates in dpdk.md Remove empty lines from DPDK Dockerfile security: update loopback to fix CVE Make OVS-DPDK start script more robust Reduce DPDK image size fix: add back privilege for ipv6 Config support for OVS-DPDK security: add trivy scan and fix image CVEs docs: modify arm build docs: update development refactor: use ovs.Exec replace raw command chore: add gosec to audit code security prepare for next release fix: arm build fix: change version in install.sh Gary Haocheng Liu Mengxin Liu MengxinLiu Patryk Strusiewicz-Surmacki Xiang Dai ckji laik linruichao release 1.2.1 fix: create/delete order might lead ip conflict fix node's annottaions overwrited incorrectly security: update loopback to fix CVE fix: add back privilege for ipv6 fix: arm build fix: change version in install.sh Mengxin Liu MengxinLiu ckji chore: prepare for release 1.2 chore: prepare for release"
},
{
"data": "DPDK doc update and small image reduction Add OVS-DPDK support, for issue 104 fix: pod get deleted between configure nb and patch pod fix: native vlan and delete subnet issues fix: trigger github action when dist dir change fix: update ovn patch chore: improve log fix: gc lsp for pod that not alive feat: support underlay without vlan encap chore: optimize kube-ovn-cni log fix: gc node lsp chore: remove vagrant fix: dst route policy might be empty feat: in vlan mode if physical gateway exists, no need to create a virtual one perf: add amd64 compile flags back fix: init ipam before gc, other wise routes will be deleted fix: patch ovn to lower src-ip route priority to work with ovn-ic fix: return early if allocation is not ready chore: remove networks crd perf: remove more stale lflow ci: run ut and e2e in github action fix: check svc and endpoint protocol perf: reduce lflow count fix: when podName or namespace contains dot, lsp cannot be deleted correctly fix: wrong subnet status feat: change pod route when update gateway type feat: refactor subnet and allow cidr change fix: use kubectl to avoid tls handshake error chore: reduce logs feat: only show error log of kube-ovn-controller fix: map concurrent panic fix: ipv6 related issues fix: validate if subnet cidr conflicts with svc ip fix: validate if node address conflict with subnet cidr feat: github action fix: wait node annotations ready before handle pods fix: check ovn-nbctl socket in new dir fix: error log found in scale test fix: concurrent panic feat: use bgp to announce pod ip release 1.1.1 fix: labels might be nil fix: ping output format monitor: make graph more sensitive to changes docs: update vlan docs docs: update docs feat: improve install/uninstall refactor: refactor cni-server refactor: controller refactor feat: modify install.sh for vlan type network feat(vlan): vlan network type feat(vlan): vlan network type fix: yaml indent and ovn central dir docs: chinese wechat info fix: fork go-ping and apply patches chore: update kind node to 1.18 and ginkgo docs: add arm build steps fix: mount etc/origin/ovn to ovs-ovn add support for multi-arch build docs: change the cidr to avoid misunderstanding feat: diagnose check if dns/kubernetes svc exist OVS local interface table macinuse row is lower case, but pod annotation store mac in Upper case. prepare for 1.2 fix: separate log for no address and wrong address docs: format docs Gary Mengxin Liu MengxinLiu Yan Zhu fangtian linruichao release 1.1.1 fix: labels might be nil monitor: make graph more sensitive to changes fix: ping output format fix: yaml indent and ovn central dir fix: fork go-ping and apply patches fix: mount etc/origin/ovn to ovs-ovn fix: use legacy iptables MengxinLiu release 1.1.0 feat: use buildx to reduce image size test: check host route when add/del a subnet [DO NOT REVIEW] vendor update: introduce klogr and do some tidy [webhook] init logger for controller-runtime test: add node test fix: acl and qos issues feat: expose iface in install.sh fix: remove auto checksums perf: offload udp checksum if possible release v1.0.1 perf: add x86 optimization CFLAGS chore: add scripts to build ovs fix: lost route when subnet add and is not ready fix: ip prefix might be empty chore: reduce image size chore: modify nodeSelector label to support k8s 1.17 fix: use ovn-appctl to do recompute docs: multi nic feat: ip cr support multi-nic fix: update in svc 1.1.1.1 may del svc"
},
{
"data": "feat: add cni side logical to support ipam for multi-nic feat: add basic allocation function for multus-cni fix: only delete pod that restart policy is Always perf: only enqueue updatePod when needed fix: add iptables to accept container traffic feat: check kube-proxy and coredns in diagnose feat: add label param in install script perf: recycle ip and lsp for pod that in failed or succeeded phase fix: add inactivity_probe back feat: check if crds exist in diagnose fix: gc static routes fix: still delete lsp if pod not in ipam fix: delete chassis from sb when delete node fix: missing label selector feat: add one script installer fix: cleanup in offline environment feat: diagnose check ds/deployment status refactor: the ipam now has lock itself no need for ippool queue fix: if pod is evicted, recycle address fix: use uuid to fetch vip refactor ipam release 1.0.0 refactor pod controller merge images into one fix:enablebash alias option in Dockerfile CMD scripts webhook: use global variables to avoid repeated map constructing remove useless fields in webhook.yaml remove leader-election for webhook manager feat: update to 20.03.0 ovn Bruce Ma MengxinLiu Your Name release v1.0.1 fix: lost route when subnet add and is not ready fix: ip prefix might be empty fix: update in svc 1.1.1.1 may del svc 1.1.1.10 fix: add inactivity_probe back fix: use uuid to fetch vip MengxinLiu release 1.0.0 prepare for 1.0 fix: add back missing lsp gc fix: delete lb if it has no backend metrics: expose cni operation metrics refactor: refactor server.go fix: disable ovn-nb inactivity_probe fix: wait for container network ready before cni return refactor: refactor controller.go ovn: pick upstream performance patch docs: add the development guide and fix the lint docs: add companies using kube-ovn section docs: add community information fix: alleviate ping lost refactor: refactor ovn-nbctl.go docs: modify the readme fix: pinger percentage error fix: add kube-ovn types to default scheme refactor: cniserver docs: update docs fix: add a periodically recompute to ovn-controller to avoid inconsistency fix: add timeout to pinger access ovs/ovn fix: when subnet cidr conflict requeue the subnet fix: add runGateway to wait.Until fix: restart nbctl-daemon if not response feat: display controller log in kubectl-ko diagnose refactor: separate normal check and ovn specific check fix: do not return not found err fix: move components to kube-system ns and add priorityClass feat: cniserver check allocated annotation before configure pod network fix: set ovn-openflow-probe-interval pinger: add port binds check between local ovs and ovn-sb fix: if cidr block not ends with zero, reformat it fix: resync iptables update version pinger: add timeout for dns resolve e2e: add basic framework and tests for e2e Bruce Ma Mengxin Liu MengxinLiu withlin release 0.10.2 fix: add a periodically recompute to ovn-controller to avoid inconsistency fix: when subnet cidr conflict requeue the subnet fix: add runGateway to wait.Until fix: restart nbctl-daemon if not response Mengxin Liu release: v0.10.1 fix: do not return not found err fix: set ovn-openflow-probe-interval pinger: add port binds check between local ovs and ovn-sb fix: if cidr block not ends with zero, reformat it fix: resync iptables Mengxin Liu docs: update changelog fix: address in ep might be empty fix: cniserver wait ovs ready fix: wrong deletion in gc lb and portgroup ovn: add memory patch to slow down memory increase fix: wait default and node logical switch ready fix: 
podSelector in networkpolicy should only consider pods in the same ns fix: do not add unallocated pod to port-group release 0.10.0 ovn: pick up commit from upstream feat: pinger support check an address out of cluster. chore: double quote shell variables fix: cluster mode db will generate lots listen error log fix: gc logicalswitchport form listing pods and nodes fix: some init and cleanup bugs fix: ovn-cluster mode feat: exclude_ips can be changed dynamically update ovn to"
},
{
"data": "feat: use label to select leader to avoid pod status misleading fix: ip conflict when use ippool docs: add v0.9.1 changelog fix: block subnet deletion when there any ip in use plugin: kubectl plugin now expose ovs-vsctl to each node fix: nbctl need timeout to avoid hang infinitely perf: as lr-route-add with --may-exist will replace exist route, no need for another delete perf: when controller restart skip pod already create lsp fix: when delete node recycle related ip/route resource fix typo in start-ovs.sh perf: skip evicted pod when enqueueAddPod and enqueueUpdatePod fix: use ep.subset.port.name to infer target port number fix: if no available address delete pod might failed related to #155 kind: support reload kube-ovn component in kind cluster perf: filter pod in informer list-watch and disable resync fix: index out of range err when create lsp prepare for next release kind: support to install kube-ovn in kind fix: mount /var/run/netns that kind will use it to store network ns files Mengxin Liu qsyqian release v0.9.1 fix: block subnet deletion when there any ip in use fix: nbctl need timeout to avoid hang infinitely fix: when delete node recycle related ip/route resource fix typo in start-ovs.sh fix: use ep.subset.port.name to infer target port number fix image tag fix: mount /var/run/netns that kind will use it to store network ns files fix: index out of range err when create lsp Mengxin Liu qsyqian release: v0.9.0 feat: when use nodelocaldns do not nat the address docs: add description about relation of cidr and static ip allocation Check the short name of kubernetes services which is independant of the cluster domain name. fix: some grafana modification fix: add missing cap chore: update ovn and other minor fix fix re-annotate namespaces when subnet deleted fix: add ingresspolicingburst to accurate limit ingress bandwidth fix: network unreachable when add egress qos for pod fix: err when add egress qos fix: remove privilege=true from long run container perf: optimize pod add fix: add keepalive to ovn-controller feat: add controller metrics If pod have not a status.PodIP skip add/del static route fix: ippool pod static route might lost during leader election fix: static route might lost during leader election feat: add grafana config and modify metrics. fix: only keep the last iface-id fix: add missing gc fix: gc resource when start controller fix: watch will break if timeout is set feat: pinger add apiserver check metrics fix: avoid conflict when init Mengxin Liu QIANSHUANGYANG [] Sbastien BERNARD Yan Zhu release v0.8.0 fix: loss might be negative number feat: pinger prometheus support feat: support pinger chore: update ovs/ovn feat: gateway ha chore: remove ovs-ipsec and update go to 1.13 feat: add kubectl plugin docs: add comparison fix: pod should be accessed from node when acl applied enable portmap by default to support hostport feat: add port security to pod port feat: add node switch allocated ip cr prepare for next release Mengxin Liu MengxinLiu Yan Zhu release: bump v0.7.0 fix: add default excludeIps and check kern version fix: deal with ipv6 connection str fix missing condition when subnet is private add subnet status fix: acl related issues Revert \"add subnet status field\" add missing subnets/status operation permission Update cleanup.sh feat: add exclude_ips annotation to namespace fix: use pg-del to remove pg and acl, check if ports is empty before set pg add subnet status feat: add subnet annotation to ns and automatically unbind ns from"
},
{
"data": "docs: add cn docs link feat: add default values to subnet write back subnet name to ip label chore: enable mirror in yaml and modify docs fix: duplicate import in network_policy.go fix: improve cni-conf name priority fix: wait subnet ready before start worker. fix: check ls exists before handle it docs: add more installation tools. docs: add support os and notes. Update subnet.md feat: add ip info to ip crd feat: update logo feat: add logo feat: reserve vport for statefulset pod docs: add crd installation fix: modify default header length fix: do not create exist logical switch chore: prepare for next release MengxinLiu Yan Zhu ftiannew halfcrazy shuangyang.qian docs: add crd/ipv6 docs and bump version 0.6.0 fix build error feat: support ipv6-only mode add webhook docs add admission webhook for static ip docs: add support platform version feat: use subnet crd to manage logical switch Use k8s hostname, fix #60 fix: remove dependency on cluster-admin chore: use go mod to replace dep docs: update mirror feature to readme feat: support traffic mirror prepare for next release MengxinLiu Yan Zhu chore: bump v0.5.0 fix: wrong mtu feat: support user define iface and mtu fix: remove mask field from ip annotation feat: auto assign gw for controller config and expose more cmd args feat: add pprof and use it as probe feat: set kernel args when start cniserver feat: support network policy prepare for next release MengxinLiu bump version to v0.4.1 fix: manual static ip allocation and automatic allocation should use different ip validation Fix json: cannot unmarshal string into Go value of type request.PodResponse https://github.com/alauda/kube-ovn/issues/33 fix: use ovsdb-client to get leader info fix: use default-gw as default-exclude-ips and expose args to docs to cleanup all created resources, not only kube-ovn namespace. prepare for next release MengxinLiu Yan Zhu fanbin feat: bump version to 0.4.0 feat: support expose pod ip to external network fix: check conflict subnet cidr fix: start informer when controller is leader feat: validate namespace/pod annotations fix: wait node-gw info ready fix: use ovn/ovs-ctl to health check feat: remove finalizer dependency improve svc performance fix: reuse node ip and mac annotation Add ha for ovn dbs and simplify makefile feat: merge ovn-nbctl request feat: separate ip pool pod and add parallelism to workers Mute logrus log for ipset Dont need to change the vendored code. Fix klog cant use V module The side affect of this commit is glog's V module not work. feat: use ovn macam to allocate mac for static ip pod feat: update ovn to 2.11.1 Add vagrantfile fix: use tag version yaml url chore: fix go-report golint issues ha for kube-ovn-controller cleanup unused code docs: add network topology chore: Minor updates to gateway.md chore: Gateway documentation touch-ups chore: QoS documentation touch-ups chore: Subnet Isolation documentation touch-ups chore: Static IP documentation touch-up chore: Subnet documentation touch-ups chore: Installation Guide touch-ups chore: README touch-up. Kai Chen MengxinLiu Yan Zhu docs: bump version fix: acl rule error fix: init node gw before run controller fix: external dns issues feat: use daemon ovn-nbctl to improve performance and cleanup unused dns code Implement centralized gateway. 
chore: migrate from bitbucket to github MengxinLiu Yan Zhu remove dns from ls and bump new version make filter table forward chain default accept ipset exclude cluster service ip range fix: lb bugs read cidr from ns annotation fix: remove dns table from nodeswitch and remove unused other_config:namespace fix pod has no ip Distributed gateway implement fix: clean lost interface. feat: support subnet isolation feat: support dynamic qos fix: ovn restart issues fix: ovn restart issues fix: validate namespace switch annotations fix lint &&"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "Kube-OVN",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Storj link: https://github.com/storj-thirdparty/velero-plugin objectStorage: true volumesnapshotter: false localStorage: false This repository contains a Velero object store plugin to support backup/restore from Velero to Storj decentralized file object storage."
}
] |
{
"category": "Runtime",
"file_name": "05-storj.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Autoscaling menu_order: 40 search_type: Documentation An autoscaling configuration begins with a fixed cluster: Configured as per the scenario. Hosted on reserved or protected instances to ensure long-term stability. Ideally sized at a minimum of three or five nodes (you can make your fixed cluster bigger to accommodate base load as required; a minimum is specified here in the interests of resilience only.) Building on this foundation, arbitrary numbers of dynamic peers can be added or removed concurrently as desired, without requiring any changes to the configuration of the fixed cluster. As with the fixed cluster, dynamically added nodes recover automatically from reboots and partitions. On the additional dynamic peer, at boot, via or equivalent: weave launch --no-restart --ipalloc-init=observer $PEERS Where, `$PEERS` means all peers in the fixed cluster, initial and subsequently added, which have not been explicitly removed. It should include fixed peers which are temporarily offline or stopped. You do not have to keep track of and specify the addresses of other dynamic peers in `$PEERS` - they will discover and connect to each other via the fixed cluster. Note: The use of `--ipalloc-init=observer` prevents dynamic peers from coming to a consensus on their own - this is important to stop a clique forming amongst a group of dynamically added peers if they become partitioned from the fixed cluster after having learned about each other via discovery. On the dynamic peer to be removed: weave reset If for any reason you cannot arrange for `weave reset` to be run on the peer before the underlying host is destroyed (for example when using spot instances that can be destroyed without notice), you will need an asynchronous process to [reclaim lost IP address space](/site/operational-guide/tasks.md#detect-reclaim-ipam)."
}
] |
{
"category": "Runtime",
"file_name": "autoscaling.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The opengovernance.dev project [provides the following checklist](https://github.com/opengovernance/opengovernance.dev/blob/master/README.md#checklist) for defining open governance. Submariner's answers are provided for each point. This is not meant to define Submariner's governance, but instead to document it clearly. Authoritative sources are cited inline. Who owns the copyright on contributed code? All Submariner code is to verify the copyright notices specify \"Contributors to the Submariner project\", per the . Who owns the domain(s) for the project? Submariner is a CNCF Sandbox project, so all Submariner domains are owned by the CNCF. Who owns the trademark for the project, is it neutrally owned and governed? Are there open trademark guidelines? Submariner is a CNCF Sandbox project, so all Submariner trademarks are owned by the CNCF. How can users license the projects branding? Are there open branding guidelines? Submariner is a CNCF Sandbox project, so please see the CNCF/artwork project's . If the project raises funds, who owns it? Submariner is a CNCF Sandbox project, so funds would be owned by the CNCF. Who makes decisions on how the project performs releases? Project Owners are responsible for defining milestones and releases, per the Submariner . How can the project contributors become committers? In short, by reviewing pull requests and receiving Committer/Owner approval. See the for details. How are project committers removed? Committers can be removed by stepping down or by two thirds vote of Project Owners, per the Submariner [Committer Responsibilities and Privileges](https://submariner.io/community/contributor-roles/#committer-responsibilities-and-privileges). Project Owner removals are currently frozen except for stepping down or for Code of Conduct violations. See the [Owner Removal and Future Elected Governance](https://submariner.io/community/contributor-roles/#owner-removal-and-future-elected-governance) documentation for details. If the project raises funds, who decides how this money is spent? Project Owners are responsible for deciding how funds are spent, per the Submariner . Who decides the project roadmap? The is public and open to everyone. Who can participate in security disclosure issues? Project Owners are responsible for receiving and ensuring an adequate response, per the Submariner . How transparent are the decision-making processes? Decisions are made in weekly public community meetings or on public GitHub Issues/PRs. All meetings are documented in Submariner's . Who enforces the code of conduct? The Submariner Project Owners enforce the , per the Submariner ."
}
] |
{
"category": "Runtime",
"file_name": "GOVERNANCE.md",
"project_name": "Submariner",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Integrating Docker via the API Proxy menu_order: 5 search_type: Documentation The Docker API proxy automatically attaches containers to the Weave network when they are started using the ordinary Docker or the . There are three ways to attach containers to a Weave network (which method to use is entirely up to you): The Weave Net Docker API Proxy. See . The Docker Network Plugin framework. The Docker Network Plugin is used when Docker containers are started with the --net flag, for example: `docker run --net <docker-run-options>` Where, `<docker-run-options>` are the you give to your container on start Note that if a Docker container is started with the --net flag, then the Weave Docker API Proxy is automatically disabled and is not used to attach containers. See and . Containers can also be attached to the Weave network with `weave attach` commands. This method also does not use the Weave Docker API Proxy. See . The proxy sits between the Docker client (command line or API) and the Docker daemon, and intercepts the communication between the two. It is started along with the router and weaveDNS when you run: host1$ weave launch N.B.: Prior to version 2.0, the `launch-proxy` command allowed to pass configuration options and to start the proxy independently. This command has been removed in 2.0 and `launch` now also accepts configuration options for the proxy. By default, the proxy decides where to listen based on how the launching client connects to Docker. If the launching client connected over a UNIX socket, the proxy listens on `/var/run/weave/weave.sock`. If the launching client connects over TCP, the proxy listens on port 12375, on all network interfaces. This can be adjusted using the `-H` argument, for example: host1$ weave launch -H tcp://127.0.0.1:9999 If no TLS or listening interfaces are set, TLS is auto-configured based on the Docker daemon's settings, and the listening interfaces are auto-configured based on your Docker client's settings. Multiple `-H` arguments can be specified. If you are working with a remote docker daemon, then any firewalls in between need to be configured to permit access to the proxy port. All docker commands can be run via the proxy, so it is safe to adjust your `DOCKER_HOST` to point at the proxy. Weave Net provides a convenient command for this: host1$ eval $(weave env) host1$ docker ps The prior settings can be restored with host1$ eval $(weave env --restore) Alternatively, the proxy host can be set on a per-command basis with host1$ docker $(weave config) ps The proxy can be stopped, along with the router and weaveDNS, with host1$ weave stop If you set your `DOCKER_HOST` to point at the proxy, you should revert to the original settings prior to running `stop`. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "weave-docker-api.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "(instances-routed-nic-vm)= When adding a {ref}`routed NIC device <nic-routed>` to an instance, you must configure the instance to use the link-local gateway IPs as default routes. For containers, this is configured for you automatically. For virtual machines, the gateways must be configured manually or via a mechanism like `cloud-init`. To configure the gateways with `cloud-init`, firstly initialize an instance: incus init images:ubuntu/22.04 jammy --vm Then add the routed NIC device: incus config device add jammy eth0 nic nictype=routed parent=my-parent-network ipv4.address=192.0.2.2 ipv6.address=2001:db8::2 In this command, `my-parent-network` is your parent network, and the IPv4 and IPv6 addresses are within the subnet of the parent. Next we will add some `netplan` configuration to the instance using the `cloud-init.network-config` configuration key: cat <<EOF | incus config set jammy cloud-init.network-config - network: version: 2 ethernets: enp5s0: routes: to: default via: 169.254.0.1 on-link: true to: default via: fe80::1 on-link: true addresses: 192.0.2.2/32 2001:db8::2/128 EOF This `netplan` configuration adds the {ref}`static link-local next-hop addresses <nic-routed>` (`169.254.0.1` and `fe80::1`) that are required. For each of these routes we set `on-link` to `true`, which specifies that the route is directly connected to the interface. We also add the addresses that we configured in our routed NIC device. For more information on `netplan`, see . ```{note} This `netplan` configuration does not include a name server. To enable DNS within the instance, you must set a valid DNS IP address. If there is a `incusbr0` network on the host, the name server can be set to that IP instead. ``` You can then start your instance with: incus start jammy ```{note} Before you start your instance, make sure that you have {ref}`configured the parent network <nic-routed>` to enable proxy ARP/NDP. ```"
}
] |
{
"category": "Runtime",
"file_name": "instances_routed_nic_vm.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "See for general information about the Kata Agent Policy generation tool. The most basic way to use `genpolicy` is to provide just a Kubernetes YAML file as command line parameter - e.g., ```bash $ genpolicy -y test.yaml ``` `genpolicy` encodes the auto-generated Policy text in base64 format and appends the encoded string as an annotation to user's `YAML` file. `genpolicy` is using standard Rust logging. To enable logging, use the RUST_LOG environment variable - e.g., ```bash $ RUST_LOG=info genpolicy -y test.yaml ``` or ```bash $ RUST_LOG=debug genpolicy -y test.yaml ``` `RUSTLOG=debug` logs are more detailed than the `RUSTLOG=info` logs. See for information regarding the contents of the auto-generated Policy. Part of the Policy contents is information used to verify the integrity of container images. In order to calculate the image integrity information, `genpolicy` needs to download the container images referenced by the `YAML` file. For example, when specifying the following YAML file as parameter: ```yaml apiVersion: v1 kind: Pod metadata: name: policy-test spec: runtimeClassName: kata containers: name: first-test-container image: quay.io/prometheus/busybox:latest command: sleep \"120\" ``` `genpolicy` downloads the `quay.io/prometheus/busybox:latest` container image. Depending on the size of the container images and the speed of the network connection to the container registry, downloading these images might take several minutes. For testing scenarios where `genpolicy` gets executed several times, it can be useful to cache the container images after downloading them, in order to avoid most of the time needed to download the same container images multiple times. If a container image layer was already cached locally, `genpolicy` uses the local copy of that container layer. The application caches the image information under the `./layers_cache` directory. Warning Using cached image layers can lead to undesirable results. For example, if one or more locally cached layers have been modified (e.g., by an attacker) then the auto-generated Policy will allow those modified container images to be executed on the Guest VM. To enable caching, use the `-u` command line parameter - e.g., ```bash $ RUST_LOG=info genpolicy -u -y test.yaml ``` You may specify `-d` to use existing `containerd` installation as image manager. This method supports a wider set of images (e.g., older images with `v1` manifest). Needs `sudo` permission to access socket - e.g., ```bash $ sudo genpolicy -d -y test.yaml ``` This will use `/var/contaienrd/containerd.sock` as default socket path. Or you may specify your own socket path - e.g., ```bash $ sudo genpolicy -d=/my/path/containerd.sock -y test.yaml ``` To print the auto-generated Policy text, in addition to adding its `base64` encoding into the `YAML` file, specify the `-r` parameter - e.g., ```bash $ genpolicy -r -y test.yaml ``` To print the `base64` encoded Policy, in addition to adding it into the `YAML` file, specify the `-b` parameter - e.g., ```bash $ genpolicy -b -y test.yaml ``` The default `genpolicy` settings file is `./genpolicy-settings.json`. Users can specify in the command line a different settings file by using the `-j` parameter - e.g., ```bash $ genpolicy -j my-settings.json -y"
},
{
"data": "``` By default, the `genpolicy` input files and must be present in the current directory - otherwise `genpolicy` returns an error. Users can specify different paths to these two files, using the `-p` and `-j` command line parameters - e.g., ```bash $ genpolicy -p /tmp/rules.rego -j /tmp/genpolicy-settings.json -y test.yaml ``` As described by the , K8s supports a very large number of fields in `YAML` files. `genpolicy` supports just a subset of these fields (hopefully the most commonly used fields!). The `genpolicy` authors reviewed the `YAML` fields that are supported as inputs to this tool, and evaluated the impact of each field for confidential containers. Some other input fields were not evaluated and/or don't make much sense when present in an input `YAML` file. By default, when `genpolicy` encounters an unsupported field in its input `YAML` file, the application returns an error. For example, when the input `YAML` contains: ```yaml apiVersion: v1 kind: Pod metadata: creationTimestamp: \"2023-09-18T23:08:02Z\" ``` `genpolicy` returns an error, because: Specifying a fixed creation timestamp as input for a pod doesn't seem very helpful. The `genpolicy` authors did not evaluate the potential effects of this field when creating a confidential containers pod. Users can choose to silently ignore unsupported fields by using the `-s` parameter: ```bash $ genpolicy -s -y test.yaml ``` Warning Ignoring unsupported input `YAML` fields can result in generating an unpredictably incorrect Policy. The `-s` parameter should be used just by expert `K8s` and confidential container users, and only after they carefully evaluate the effects of ignoring these fields. Tip The `-s` parameter can be helpful for example when investigating a problem related to an already created Kubernetes pod - e.g.,: Obtain the existing pod YAML from Kubernetes: ```bash kubectl get pod my-pod -o yaml > my-pod.yaml ``` Auto-generate a Policy corresponding to that `YAML` file: ```bash $ genpolicy -s -y my-pod.yaml ``` `genpolicy` doesn't attach a Policy to `YAML` files. However, a `ConfigMap` `YAML` file might be required for generating a reasonable Policy for other types of `YAML` files. For example, given just this `Pod` input file (`test.yaml`): ```yaml apiVersion: v1 kind: Pod metadata: name: policy-test spec: runtimeClassName: kata containers: name: first-test-container image: quay.io/prometheus/busybox:latest command: sleep \"120\" env: name: CONFIGMAPVALUE1 valueFrom: configMapKeyRef: key: simple_value1 name: config-map1 ``` `genpolicy` is not able to generate the Policy data used to verify the expected value of the CONFIGMAPVALUE1 environment variable. 
There are two ways to specify the required `ConfigMap` information: A user can create for example `test-config.yaml` with the following contents: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: config-map1 data: simple_value1: value1 ``` and specify that file in the `genpolicy` command line using the `-c` parameter: ```bash $ genpolicy -c test-config.yaml -y test.yaml ``` The same `ConfigMap` information above can be added to `test.yaml`: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: config-map1 data: simple_value1: value1 apiVersion: v1 kind: Pod metadata: name: policy-test spec: runtimeClassName: kata containers: name: first-test-container image: quay.io/prometheus/busybox:latest command: sleep \"120\" env: name: CONFIGMAPVALUE1 valueFrom: configMapKeyRef: key: simple_value1 name: config-map1 ``` and then the `-c` parameter is no longer needed: ```bash $ genpolicy -y test.yaml ```"
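Putting the pieces above together, a hedged end-to-end sketch (file names follow the earlier examples; adjust paths and flags to your setup):

```bash
# Generate the policy using the ConfigMap file, a custom settings file and the
# local layer cache; -r also prints the auto-generated Rego text for review.
RUST_LOG=info genpolicy -u -r \
  -c test-config.yaml \
  -j my-settings.json \
  -y test.yaml

# test.yaml now carries the base64-encoded Policy as an annotation and can be
# applied to the cluster as usual.
kubectl apply -f test.yaml
```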
}
] |
{
"category": "Runtime",
"file_name": "genpolicy-advanced-command-line-parameters.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The main Antrea Docker images (`antrea/antrea-agent-ubuntu` and `antrea/antrea-controller-ubuntu`) are multi-arch images. For example, the `antrea/antrea-agent-ubuntu` manifest is a list of three manifests: `antrea/antrea-agent-ubuntu-amd64`, `antrea/antrea-agent-ubuntu-arm64` and `antrea/antrea-agent-ubuntu-arm`. Of these three manifests, only the first one is built and uploaded to Dockerhub by Github workflows defined in the `antrea-io/antrea` repositories. The other two are built and uploaded by Github workflows defined in a private repository (`vmware-tanzu/antrea-build-infra`), to which only the project maintainers have access. These workflows are triggered every time the `main` branch of `antrea-io/antrea` is updated, as well as every time a new Antrea Github release is created. They build the `antrea/antrea-agent-ubuntu-arm64` and `antrea/antrea-agent-ubuntu-arm` Docker images on native arm64 workers, then create the `antrea/antrea-agent-ubuntu` multi-arch manifest and push it to Dockerhub. The same goes for the controller images. They are also in charge of testing the images in a cluster. The `vmware-tanzu/antrea-build-infra` repository uses self-hosted ARM64 workers provided by the at Oregon State University. These workers enable us to build, and more importantly test, the Antrea Docker images for the arm64 and arm/v7 architectures. Being able to build Docker images on native ARM platforms is convenient as it is much faster than emulation. But if we just wanted to build the images, emulation would probably be good enough. However, testing Kubernetes ARM support using emulation is no piece of cake. Which is why we prefer to use native ARM64 workers. Github strongly not to use self-hosted runners with public repositories, for security reasons. It would be too easy for a malicious person to run arbitrary code on the runners by opening a pull request. Were we to make this repository public, we would therefore at least need to disable pull requests, which is sub-optimal for a public repository. We believe Github will address the issue eventually and provide safeguards to enable using self-hosted runners with public repositories, at which point we will migrate workflows from this repository to the main Antrea repository. In the future, we may switch over to ARM hosted Github runners provided by the CNCF."
}
] |
{
"category": "Runtime",
"file_name": "antrea-docker-image.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "[TOC] This guide describes how to change the used by `runsc`. Configuring the platform provides significant performance benefits, but isn't the only step to optimizing gVisor performance. See the [Production guide] for more. If you intend to run the KVM platform, you will also need to have KVM installed on your system. If you are running a Debian based system like Debian or Ubuntu you can usually do this by ensuring the module is loaded, and your user has permissions to access the `/dev/kvm` device. Usually, this means that your user is in the `kvm` group. ```shell $ ls -l /dev/kvm crw-rw-+ 1 root kvm 10, 232 Jul 26 00:04 /dev/kvm $ groups | grep -qw kvm && echo ok ok ``` For best performance, use the KVM platform on bare-metal machines only. If you have to run gVisor within a virtual machine, the `systrap` platform will often yield better performance than KVM. If you still want to use KVM within a virtual machine, you will need to make sure that nested virtualization is configured. Here are links to documents on how to set up nested virtualization in several popular environments: Google Cloud: Microsoft Azure: VirtualBox: KVM: *Note: nested virtualization will have poor performance and is historically a cause of security issues (e.g. ). It is not recommended for production.* A third platform, `ptrace`, also has the versatility of running on any environment. However, it has higher performance overhead than `systrap` in almost all cases. `systrap` replaced `ptrace` as the default platform in mid-2023. While `ptrace` continues to exist in the codebase, it is no longer supported and is expected to eventually be removed entirely. If you depend on `ptrace`, and `systrap` doesn't fulfill your needs, please . The platform is selected by the `--platform` command line flag passed to `runsc`. By default, the `systrap` platform is selected. For example, to select the KVM platform, modify your Docker configuration (`/etc/docker/daemon.json`) to pass the `--platform` argument: ```json { \"runtimes\": { \"runsc\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--platform=kvm\" ] } } } ``` You must restart the Docker daemon after making changes to this file, typically this is done via `systemd`: ```shell $ sudo systemctl restart docker ``` Note that you may configure multiple runtimes using different platforms. For example, the following configuration has one configuration for systrap and one for the KVM platform: ```json { \"runtimes\": { \"runsc-kvm\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--platform=kvm\" ] }, \"runsc-systrap\": { \"path\": \"/usr/local/bin/runsc\", \"runtimeArgs\": [ \"--platform=systrap\" ] } } } ```"
}
] |
{
"category": "Runtime",
"file_name": "platforms.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: DigitalOcean link: https://github.com/digitalocean/velero-plugin objectStorage: true volumesnapshotter: true DigitalOcean Block Storage provider plugin for Velero. The plugin is designed to create filesystem snapshots of Block Storage backed PersistentVolumes that are used in a Kubernetes cluster running on DigitalOcean."
}
] |
{
"category": "Runtime",
"file_name": "05-digitalocean.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Longhorn can support local volume to provide better IO latencies and IOPS. https://github.com/longhorn/longhorn/issues/3957 Longhorn can support local volume (data locality=strict-local) for providing better IO latencies and IOPS. A local volume can only have one replica. A local volume supports the operations such as snapshot, backup and etc. A local volume's data locality cannot be converted to other modes when volume is not detached. A local volume does not support multiple replicas in the first version. The local replication could be an improvement in the future. Introduce a new type of volume type, a local volume with `strict-local` data locality. Different than a volume with `best-effort` data locality, the engine and replica of a local volume have to be located on the same node. Unix-domain socket are used instead of TCP between the replica process' data server and the engine. A local volume supports the existing functionalities such as snapshotting, backup, restore, etc. Longhorn is a highly available replica-based storage system. As the data path is designed for the replication, a volume with a single replica still suffers from high IO latency. In some cases, the distributed data workloads such as databases already have their own data replication, sharding, etc, so we should provide a volume type for these use cases while supporting existing volume functionalities like snapshotting, backup/restore, etc. The functionalities and behaviors of the volumes with `disabled` and `best-effort` data localities will not be changed. A volume with `strict-local` data locality Only has one replica The engine and replica have to be located on the same node Cannot convert to `disabled` or `best-effort` data locality when the volume is not detached Can convert to `disabled` or `best-effort` data locality when the volume is detached Existing functionalities such as snapshotting, backup, restore, etc. are supported Add `--volume-name` in engine-binary `replica` command The unix-domain-socket file will be `/var/lib/longhorn/unix-domain-socket/${volume name}.sock` Add `--data-server-protocol` in engine-binary `replica` command Available options are `tcp` (default) and `unix` Add `--data-server-protocol` in engine-binary `controller` command Available options are `tcp` (default) and `unix` Add a new data locality `strict-local` in `volume.Spec.DataLocality` When creating and attaching a volume with `strict-local` data locality, the replica is scheduled on the node where the engine is located. Afterward, the replica process is created with the options `--volume-name ${volume name}` and `--data-server-protocol unix`. The data server in the replica process is created and listens on a unix-domain-socket file (`/var/lib/longhorn/unix-domain-socket/${volume name}.sock`). Then, the engine process of the volume is created with the option `--data-server-protocol unix`. The client in the engine process connects to the data server in the replica process via the unix-domain-socket file. If a volume with `strict-local` data locality, the `numberOfReplicas` should be 1. If a local volume is attached, the conversion between `strict-local` and other data localities is not allowable. If a local volume is attached, the update of the replica count is not allowable. Successfully create a local volume with `numberOfReplicas=1` and `dataLocality=strict-local`. 
Check the validating webhook can reject the following cases when the volume is created or attached Create a local volume with `dataLocality=strict-local` but `numberOfReplicas>1` Update a attached local volume's `numberOfReplicas` to a value greater than one Update a attached local volume's `dataLocality` to `disabled` or `best-effort`"
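For reference, a minimal sketch of how such a volume could be requested through a StorageClass (not part of the proposal itself; the parameter names follow existing Longhorn StorageClass conventions and should be checked against the release that ships this feature):

```bash
# Create a StorageClass that provisions one-replica, strict-local volumes.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-strict-local
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "1"
  dataLocality: "strict-local"
EOF
```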
}
] |
{
"category": "Runtime",
"file_name": "20221123-local-volume.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "vmadm(8) -- Manage SmartOS virtual machines ============================================ /usr/vm/sbin/vmadm <command> [-d] [-v] [command-specific arguments] The vmadm tool allows you to interact with virtual machines on a SmartOS system. It allows you to create, inspect, modify and delete virtual machines on the local system. IMPORTANT: Support for LX VMs is currently limited and experimental. This means it is very likely to change in major ways without notice. Also: not all the LX functionality that is implemented is documented yet. The documentation will be updated as things stabilize. Most properties that apply to OS VMs also apply to LX VMs. The primary reference for a VM is its UUID. Most commands operate on VMs by UUID. In SmartOS, there are included bash tab-completion rules so that you can tab-complete UUIDs rather than having to type them out for every command. The following commands and options are supported: create [-f <filename>] Create a new VM on the system. Any images/datasets referenced must already exist on the target zpool. Input must be JSON You can either pass in a file with the -f parameter or redirect stdin from something with JSON. Create will refuse to create a VM if no file is specified and stdin is a tty. See the 'PROPERTIES' or 'EXAMPLES' sections below for details on what to put in the JSON payload. create-snapshot <uuid> <snapname> Support for snapshots is currently experimental. It only works for bhyve VMs and OS VMs which also have no additional datasets. The <snapname> parameter specifies the name of the snapshot to take of the specified VM. The snapname must be 64 characters or less and must only contain alphanumeric characters and characters in the set [-_.:%] to comply with ZFS restrictions. You can use delete-snapshot or rollback-snapshot in the future on a snapshot you've created with create-snapshot, so long as that snapshot still exists. See the 'SNAPSHOTS' section below for some more details on how to use these snapshots, and their restrictions. console <uuid> Connect to the text console for a running VM. For OS VMs, this will be the zone console. For KVM VMs, this will be the serial console and your VM will need to be setup with getty or similar running on the first serial device. Not yet supported on LX VMs. To end the serial console session hit CTRL-]. For OS VMs, you'll need to do this at the start of a line, so generally this means pressing: ENTER then CTRL-] then a dot character. For KVM VMs you should just need to press CTRL-] by itself. delete <uuid> Delete the VM with the specified UUID. The VM and any associated storage including zvols and the zone filesystem will be removed. If you have set the indestructiblezoneroot or indestructibledelegated flags on a VM it cannot be deleted until you have unset these flags with something like: vmadm update <uuid> indestructible_zoneroot=false vmadm update <uuid> indestructible_delegated=false to remove the snapshot and holds. Note: 'vmadm delete' command is not interactive, take care to delete the right VM. delete-snapshot <uuid> <snapname> Support for snapshots is currently experimental. It only works for bhyve VMs and OS VMs which also have no additional datasets. This command deletes the ZFS snapshot that exists with the name <snapname> from the VM with the specified uuid. You cannot undo this operation and it will no longer be possible to rollback to the specified snapshot. See the 'SNAPSHOTS' section below for some more details on how to use these snapshots, and their"
},
{
"data": "events [-fjr] [uuid] Output events seen for a given VM (all VMs on the system if the uuid argument is omitted). The command will run indefinitely outputting a single line per event to stdout as they are seen. -f, --full Output the full event (full zone names, timestamp, etc.) No data will be truncated. -j, --json Output in JSON. If `-j` is supplied `-f` is ignored. -r, --ready Output an extra event when the event stream is first opened and ready. get <uuid> Output the JSON object describing a VM. The JSON object will be dumped to stdout. The output object can then be further handled by the json(1) command if desired. info <uuid> [type,...] The info command operates on running KVM VMs only. It talks to the vmadmd(8) daemon and requests some information about the running VM. The information is output to stdout as a JSON object with member objects for each type specified. If no types are specified, all info is included. The type values can be separated either by commas or spaces. The info types available are: all: Explicitly include all of the other types. block: Information about the block devices attached to this VM. blockstats: Counters for blocks read/written, number of operations and highest write offset for each block device. chardev: Information about the special character devices attached to this VM. cpus: Information about the virtual CPUs attached to this VM. kvm: Information about the availability of the KVM driver in this VM. pci: Information about each device on the virtual PCI bus attached to this VM. spice: The IP, port and VNC display number for the TCP socket we're listening on for this VM. If spice is enabled. version: qemu version information. vnc: The IP, port and VNC display number for the TCP socket we're listening on for this VM. If VNC is enabled. list [-p] [-H] [-o field,...] [-s field,...] [field=value ...] The list command can list the VMs on a system in a variety of ways. The filters, order and sort options are all based on the properties of VMs. See the PROPERTIES section below for the list of keys allowed. All those listed there as 'listable' can be used as keys for filtering, sorting or ordering. The list command always operates on a set of VMs which is limited by a filter. By default the filter is empty so all VMs are listed. You add filters by specifying key=value pairs on the command line. You can also match filters by regular expression by using key=~value and making value be a regular expression. You can add as many filters as you want and only VMs that match all the filter parameters will be shown. The fields output are controlled with the -o option which specifies the order. The default order is 'uuid,type,ram,state,alias'. If you specify your own order with the -o option, this order is replaced so any fields from the default you want to keep in your output you'll have to add them to your list of fields. The order of the rows in the output is controlled through the -s option which determines the sort order. The default sort order is 'ram,uuid' which means VMs will be first sorted by RAM and then VMs which have the same RAM value will be sorted by uuid. You can also choose to have a field sorted in descending order by prefixing that field name with a '-' character. Thus an order like '-ram,uuid' would do the same as the default except be sorted with the highest RAM value"
},
{
"data": "The two other options which you can specify for the list command are '-p' which chooses parsable output. With this flag set, output is separated by ':' characters instead of being lined up in columns. This option also disables printing of the header. If you would like to disable the printing of the header in the normal output for some reason, you can do so with the '-H' option. You can see several examples using order, sort and selection in the EXAMPLES section below. lookup [-j|-1] [-o field,field,..] [field=value ...] The lookup command is designed to help you find VMs. It takes a set of filter options in the same format as the list command. This means you specify them with key=value pairs on the command line and can use the key=~value format to specify a regular expression value. The VMs which match all of your filter parameters will be output. The default output is a single column list of UUIDs for VMs that match the filter. This allows you to do things like: for vm in $(vmadm lookup type=KVM state=running); do echo -n \"${vm} \" vmadm info ${vm} vnc | json vnc.display done based on the output. If you want to use the output as JSON, you can add the -j parameter. With that flag set, the output will be a JSON array of VM objects containing the same JSON data as the 'get' command for each VM matched. When the -j flag is passed, you can also limit the fields in the objects of the output array. To do so, use the -o option. For example if you use: vmadm lookup -j -o uuid,brand,quota the objects in the output array will only have the uuid, brand and quota members. Where possible vmadm optimizes the lookups such that not including fields in the output means it won't have to do the potentially expensive operations to look them up. By default (without -o) all fields are included in the objects. If you pass the -1 parameter, lookup should only return 1 result. If multiple results are matched or 0 results are matched, an error will be returned and the exit status will be non-zero. See the PROPERTIES section below for the list of keys allowed. All those listed there as 'listable' can be used as keys for filtering. reboot <uuid> [-F] Reboot a VM. The default reboot will attempt a graceful stop of the VM and when the VM has stopped, it will be booted again. This ensures that processes within the VM are given an opportunity to shut down correctly in attempt to minimize data loss. For OS VMs, the shutdown command '/usr/sbin/shutdown -y -g 0 -i 6' (or '/sbin/shutdown -r now' if brand is 'lx') will be run within the zone, which will cause the VM to reboot after shutting down. For HVM VMs, vmadmd will act as a helper here for the reboot in the same manner as described below for the 'stop' command. If for some reason you are unable or do not want to do a graceful reboot you can add the '-F' parameter to do a forced reboot. This reboot will be much faster but will not necessarily give the VM any time to shut down its processes. rollback-snapshot <uuid> <snapname> Support for snapshots is currently experimental. It only works for bhyve VMs and OS VMs which also have no additional"
},
{
"data": "This command rolls the dataset backing the the VM with the specified uuid back to its state at the point when the snapshot with snapname was taken. You cannot undo this except by rolling back to an even older snapshot if one exists. IMPORTANT: when you rollback to a snapshot, all other snapshots newer than the one you're rolling back to will be deleted. It will no longer be possible to rollback to a snapshot newer than <snapname> for this VM. Also note: your VM will be stopped if it is running when you start a rollback-snapshot and will be booted after the snapshot has been restored. See the 'SNAPSHOTS' section below for some more details on how to use these snapshots, and their restrictions. start <uuid> [option=value ...] Start a VM which is in the 'off' state. For OS VMs, this doesn't take any arguments. For KVM VMs, it is possible to specify some additional boot parameters for the VM with this tool. These can be: order=cdn[,once=d] This option allows you to change the boot order for the VM for the current boot. The order options are 'c' for the hard disk, 'd' for the first CD-ROM drive and 'n' for network boot. So the order 'cdn' means boot the hard disk and if that fails try cdrom and if that fails try network boot. You can also add a ',once=X' option where 'X' is one of the same order options. This will set the boot order once and if the VM is rebooted (even from inside) the order will go back to the default. This is especially useful for installation media, since you can add ,once=d to boot off an ISO image once and then after the install is complete you will boot on the hard drive. The order= option can only be specified once per boot. cdrom=/path/to/image.iso,[ide|scsi|virtio] This option lets you add a virtual CD-ROM disk to a VM for this boot only. The path specified is evaluated within the zoneroot of the VM so /image.iso will actually be something like the path /zones/<uuid>/root/image.iso from the global zone. The second part of this parameter (after the comma) indicates which model the CD-ROM drive should be. You should choose ide in most cases. You can specify multiple cdrom options when booting a VM. They will be attached in the order they appear on the command line. disk=/path/to/disk,[ide|scsi|virtio] This option lets you add an additional disk to a VM for this boot only. The path specified is evaluated within the zoneroot of the VM so /raw.img will actually be something like the path /zones/<uuid>/root/raw.img from the global zone. The second part of this parameter (after the comma) indicates which model the virtual drive should be. You should choose virtio when you know that the VM supports it, and ide or scsi otherwise depending on the drivers supported in the guest. You can specify multiple disk options when booting a VM. They will be attached in the order they appear on the command line. stop <uuid> [-F] [-t timeout] Stop a VM. The default stop will attempt to be graceful. This ensures that processes within the VM are given an opportunity to shut down correctly in attempt to minimize data loss. For OS VMs, a shutdown command will be run in the zone, which will cause the VM to go to the 'off' state after shutting down all processes. If brand is 'lx', the shutdown command is '/sbin/shutdown -h now'. For other OS VMs, the shutdown command is '/usr/sbin/shutdown -y -g 0 -i"
},
{
"data": "If the VM does not shutdown before its timer expires (60 seconds), the VM is forcibly halted. OS VMs do not support the [-t timeout] option unless they also have the docker property set to true. For HVM VMs, the running qemu/bhyve process sends an ACPI signal to the guest kernel telling it to shut down. In case the guest kernel ignores this or for some reason does not receive this request we mark the VM with a transition property indicating that we tried to shut it down. This transition marker also includes a timeout (default 180 seconds). If we hit the timeout, the VM is forcibly halted. For docker VMs, vmadm will send a SIGTERM to init and then wait some number of seconds for the init process to exit. If it has not exited by the timeout expiry, a SIGKILL will be sent. The default timeout is 10 seconds. For both HVM and docker VMs the stop timeouts can be adjusted with the -t <timeout seconds> option. For non-Docker and non-HVM VMs use of the -t option will result in an error. If for some reason you are unable or do not want to do a graceful stop you can also add the '-F' parameter via to do a forced stop. This stop will be much faster (especially for HVM) but will not give the VM any time to shut down its processes. sysrq <uuid> <nmi|screenshot> This command is only available for KVM VMs. For those it exposes the ability to send the guest OS Kernel an non maskable interrupt (NMI) or take a screenshot of the virtual console. To send an NMI, you can run: vmadm sysrq <uuid> nmi To take a screenshot: vmadm sysrq <uuid> screenshot Screenshots will end up under the directory zonepath for the VM, at: <zonepath>/root/tmp/vm.ppm from the global zone. update <uuid> [-f <filename>] update <uuid> property=value [property=value ...] This command allows you to update properties of an existing VM. The properties which can be updated are listed below in the PROPERTIES section with the 'updatable: yes' property. To update properties, you can either pass a file containing a JSON object as the argument to the -f option on the command line, send a JSON object on stdin (though it will refuse to work if stdin is a tty), or pass property=value arguments on the command line. Many properties can be cleared by specifying their value as null in the JSON, e.g. { ... \"zfssnapshotlimit\": null } However this does not work via a direct `vmadm update UUID prop=null` command. If you pass in a JSON object, that object should be formatted in the same manner as a create payload. The only exception is with fields that are themselves objects: VM NICs, KVM VM disks, customer_metadata, internal_metadata, tags and routes. In the the case of the \"simple\" properties 'tags', 'customermetadata', 'internalmetadata' and 'routes' which are key-value pairs, there are 2 special payload members: settags || setcustomer_metadata || setinternalmetadata || set_routes removetags || removecustomer_metadata || removeinternalmetadata || remove_routes which can add/update or remove entries from key/value sets. To add an entry, include it in the set_X object with a simple string value. To remove an object from these dictionaries, include its name in a list as the value to remove_X. For example, to add a tag 'hello' with value 'world', your JSON would look like this: {\"set_tags\": {\"hello\": \"world\"}} then to change the value for this key you'd do: {\"set_tags\": {\"hello\": \"universe\"}} and finally to remove this key you'd do: {\"remove_tags\": [\"hello\"]} The same pattern is used for customermetadata, internalmetadata and"
},
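As a quick illustration of the update command and the set_*/remove_* pattern described in the entry above, the following shell sketch shows the three ways of passing an update: key=value on the command line, JSON on stdin, and a JSON file via -f. The UUID, alias, tag name and file path are hypothetical placeholders.

```
UUID=54f1cc77-68f1-42ab-acac-5c4f64f5d6e0   # placeholder; use a real VM UUID

# Simple scalar property directly on the command line.
vmadm update $UUID alias=web0

# Add or change a tag by sending a JSON payload on stdin.
echo '{"set_tags": {"role": "webhead"}}' | vmadm update $UUID

# Remove the same tag again.
echo '{"remove_tags": ["role"]}' | vmadm update $UUID

# Or keep the payload in a file and pass it with -f.
vmadm update $UUID -f /tmp/update.json
```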
{
"data": "In the case of nics, disks, and filesystems, there are 3 special objects: adddisks || addnics || add_filesystems removedisks || removenics || remove_filesystems updatedisks || updatenics || update_filesystems For NICs for example, you can include an array of NIC objects with the parameter add_nics in your input. Those NICs would get added to the VM. For update you also pass in a new NIC object but only need to specify the \"mac\" parameter (to identify which NIC to update) and the properties that you want to change. If you need to change the MAC address itself, you'll need to add a new NIC with the same properties and a different MAC, and remove the existing one. To remove a NIC, the remove_nics property should be an array of MAC addresses only (not NIC objects). For updating filesystems, you use the same format as described above for NICs except that the options are addfilesystems, removefilesystems and update_filesystems and instead of \"mac\" these will be keyed on \"target\". For updating disks, you use the same format as described above for NICs except that the options are adddisks, removedisks and update_disks and instead of \"mac\" these will be keyed on \"path\". When updating disks.*.size, the system protects against accidental shrinkage and associated data loss. If the size of a disk is reduced, the end of the disk is removed. If that space contains data, it is permanently lost. Snapshots do not provide protection. To allow a disk to shrink, set the dangerousallowshrink property to true. This property is used only for the update - it is not stored. For example, the following will resize a disk to 10 MiB, even if it had previously been larger. { \"update_disks\": [ { \"path\": \"/dev/zvol/rdsk/zones/.../disk1\", \"size\": 10, \"dangerousallowshrink\": true } ] } Those fields marked in the PROPERTIES section below as updatable and modified with '(live update)' mean that when you update the property the change takes effect immediately for the VM without the VM being restarted. Other properties will require a reboot in order to take effect. If the VM is running when an update is made, the 'mdata:fetch' service inside the zone will be restarted - the service will be enabled regardless of its state prior to the update. validate create [-f <filename>] validate update <brand> [-f <filename>] This command allows you to validate your JSON payloads before calling create or update. You must specify the action for which your payload is intended (create or update) as the validation rules are different. In addition, when validating an update payload, you must pass the brand parameter as validation rules vary based on brand. If no -f <filename> is specified the payload is expected to be passed on stdin. If -f <filename> is specfied, the payload to validate will be read from the file with that name. Output from this command in the case the payload is valid will be something like: \"VALID create payload for joyent brand VMs.\" and the exit code will be 0. When the payload is not valid the exit code will be 1 and you will get back a json object which will have at least one of the following members: 'bad_brand' The brand argument you passed to validate is invalid. 'bad_properties' This is an array of payload properties which are not valid for the specified action. 'bad_values' This is an array of payload properties which had unacceptable values. 'missing_properties' This is an array of the payload properties which are required for the given action but are missing from the specified"
},
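For example, the entry above says filesystem updates are keyed on \"target\" rather than \"mac\". A sketch of such a payload might look like the following; the target path and options are hypothetical, must refer to a filesystem already attached to the VM, and this assumes the options field can be changed this way.

```
vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<'EOF'
{
  "update_filesystems": [
    {
      "target": "/var/log",
      "options": [ "ro", "nodevices" ]
    }
  ]
}
EOF
```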
{
"data": "consult the PROPERTIES section below for help correcting errors in your payload. Snapshots are currently only implemented for bhyve VMs and OS VMs, and only for those that do not utilize delegated datasets or any other datasets other than the zoneroot dataset and its dependent datasets. When you create a snapshot with create-snapshot, it will create a ZFS snapshot of that dataset with the name dataset@vmsnap-<snapname> and the .snapshots member of VM objects returned by things like vmadm get will only include those snapshots that have been created using this pattern. That allows vmadm to distinguish between snapshots it has taken and snapshots that could have been taken using other tools. To delete a snapshot you can use the delete-snapshot command. That will destroy the snapshot in ZFS and it will automatically be removed from the machine's snapshot list. It will no longer be possible to rollback to it. To rollback a VM to its state at the time of a previous snapshot, you can use the rollback-snapshot command. This will stop the VM rollback the zoneroot dataset to the specified snapshot and start the VM again. IMPORTANT: rollback-snapshot will automatically delete all snapshots newer than the one you're rolling back to. This cannot be undone. Every VM has a number of properties. The properties for a VM can be listed with the 'vmadm get <uuid>' command. Some of these properties can be included in a create payload, some can be included in the output or be used to sort output for the 'vmadm list' command. Not all fields will be included for all VMs. Below the fields are marked as: type -- type of the properties value. vmtype -- This value can be one of the following groups: OS: all types of OS VMs (joyent, joyent-minimal, and lx) HVM: all types of HVM VMs (bhyve and kvm) ANY: all types of VMs or an explicit brand name such as 'lx'. listable -- if they can be included in the -o or -s lists for the 'vmadm list' command. create -- if the field can be included in a create payload. update -- if the field can be updated using the 'vmadm update' command. Some fields are also marked (live update) in which case, updates affect the behaviour of the running machine. Other updatable fields will either not affect VM operation or require a reboot of the VM to do so. default -- if the field has a default value, this will explain what that value is. alias: An alias for a VM which is for display/lookup purposes only. Not required to be unique. type: string vmtype: ANY listable: yes create: yes update: yes archiveondelete: When archiveondelete is set to 'true' and the VM is deleted and the zones/archive dataset exists and is mounted on /zones/archive, we will extract debug information from the zone before destroying it. Information saved includes cores, the JSON as output by 'vmadm get', the zone's XML file from /etc/zones, SMF logs, qemu logs (for KVM), the startvm script (for KVM), the properties from all the zone's datasets, metadata, tags and /var/adm/messages. In the future the list may change. The files specified will be written to the directory /zones/archive/<uuid>. type: boolean vmtype: ANY listable: no create: yes update: yes default: false autoboot: Controls whether or not a VM is booted when the system is"
},
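A minimal sketch of the validate workflow described above; the payload filenames and the brand are arbitrary examples.

```
# Check a create payload before provisioning with it.
vmadm validate create -f /tmp/web0.json

# Check an update payload; update validation also needs the brand.
vmadm validate update bhyve -f /tmp/web0-update.json

# Payloads can also be piped on stdin instead of using -f.
cat /tmp/web0.json | vmadm validate create
```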
{
"data": "This property can be set with the initial create but any time the VM is started this will also get set true and when the VM is stopped it will get set false. This is to ensure that the compute node will always reboot into the intended state. type: boolean vmtype: ANY listable: yes create: yes update: yes billing_id: An identifier intended to help identify which billing category this VM should fall into. type: string (UUID) vmtype: ANY listable: yes create: yes update: yes defaul: 00000000-0000-0000-0000-000000000000 bhyveextraopts: This allows you to specify additional bhyve command line arguments, this string (if set) will be appended to the end of the bhyve command line. It is intended for debugging and not for general use. type: string (space-separated options for bhyve) vmtype: bhyve listable: no create: yes update: yes boot: This option allows you to set the boot order for KVM VMs. The format is the same as described above for the order parameter to the 'start' command. type: string vmtype: kvm listable: no create: yes update: yes default: 'order=cd' boot_timestamp: This is a read-only property that will exist only for running VMs. When available, it will indicate the time the VM last booted. type: string (ISO 8601 timestamp) vmtype: ANY listable: yes create: no update: no bootrom: This indicates the bootrom to use for bhyve, valid values are 'bios', 'uefi', or a path to a bootrom binary. At provision time, only 'bios' or 'uefi' are allowed. After the instance has been provisioned successfully an operator may copy a custom bootrom into the zoneroot and set a path. The path specified is evaluated within the zoneroot of the VM so /uefi-debug.bin will actually be something like the path /zones/<uuid>/root/uefi-debug.img from the global zone. type: string vmtype: bhyve listable: no create: yes update: yes default: 'bios' brand: This will be one of 'joyent', 'joyent-minimal' or 'lx' for OS virtualization, or 'kvm' or 'bhyve' for full hardware virtualization. This is a required value for VM creation. type: string (joyent|joyent-minimal|lx|kvm|bhyve) vmtype: ANY listable: yes create: yes update: no cpu_cap: Sets a limit on the amount of CPU time that can be used by a VM. The unit used is the percentage of a single CPU that can be used by the VM. Eg. a value of 300 means up to 3 full CPUs. type: integer (percentage of single CPUs, set to 0 for no cap) vmtype: ANY listable: yes create: yes update: yes (live update) cpu_shares: Sets a limit on the number of fair share scheduler (FSS) CPU shares for a VM. This value is relative to all other VMs on the system, so a value only has meaning in relation to other VMs. If you have one VM with a a value 10 and another with a value of 50, the VM with 50 will get 5x as much time from the scheduler as the one with 10 when there is contention. type: integer (number of shares) vmtype: ANY listable: yes create: yes update: yes (live update) default: 100 cpu_type: For kvm VMs, this controls the type of the virtual CPU exposed to the guest. If the value is 'host' the guest will see the same CPU type and flags as are seen on the host. type: string (qemu64|host) listable: yes vmtype: kvm create: yes update: yes default: qemu64 create_timestamp: The time at which the VM was created in ISO 8601 format. type: string (format: '2011-12-31T06:38:42.457Z') vmtype: ANY listable: yes create: no (automatically added) update: no default: always set to current time at VM.create(). server_uuid: This is the UUID of the compute node on which the VM currently"
},
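Since cpu_cap and cpu_shares are both marked '(live update)' above, they can be changed on a running VM without a reboot. A small sketch with placeholder values:

```
UUID=54f1cc77-68f1-42ab-acac-5c4f64f5d6e0   # placeholder; use a real VM UUID

# Allow the VM to use up to 3 full CPUs.
vmadm update $UUID cpu_cap=300

# Double its FSS weight relative to the default of 100.
vmadm update $UUID cpu_shares=200
```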
{
"data": "It is most useful when pulled from sources external to the GZ (whether in the VM, or from another node). type: string (compute node's UUID) vmtype: ANY listable: no create: no update: no default: this is always pulled when the object is loaded. customer_metadata: This field allows metadata to be set and associated with this VM. The value should be an object with only top-level key=value pairs. NOTE1: for historical reasons, do not put keys in here that match the pattern *pw. Those keys should go in internalmetadata instead. NOTE2: keys that are prefixed with one of the prefixes listed in internalmetadatanamespaces will not be read from customer_metadata but rather from internal_metadata. These will also be read-only from within the zone. type: JSON Object (key: value) vmtype: ANY listable: no create: yes update: yes (but see special notes on update command) default: {} datasets: If a VM has extra datasets available to it (eg. if you specified the delegate_dataset option when creating) the list and get output will include the information about that dataset under this key. type: string (dataset name) vmtype: OS listable: no create: no (use delegate_dataset to include one) update: no delegate_dataset: This property indicates whether we should delegate a ZFS dataset to an OS VM. If true, the VM will get a dataset <zoneroot dataset>/data (by default: zones/<uuid>/data) added to it. This dataset will be also be mounted on /<zoneroot dataset>/data inside the zone (again by default: /zones/<uuid>/data) but you can change this by setting the mountpoint option on the dataset from within the zone with zfs(8). When using this option, sub-datasets can be created, snapshots can be taken and many other options can be performed on this dataset from within the VM. type: boolean vmtype: OS listable: no create: yes update: no default: false disks: When creating or getting a HVM VM's JSON, you will use this property. This is an array of 'disk' objects. The properties available are listed below under the disks.*.<property> options. If you want to update disks, see the special notes in the section above about the 'update' command. When adding or removing disks, the disks will be available to the VM in the order that the disks are included in the disks or add_disks array. To use these properties in a list output or lookup, use the format: disks.*.size # for lookup matching any disk disks.0.size # for list output or lookup of a specific disk disks.*.block_size: Specifies the block size for the disk. This property can only be set at disk creation time and cannot be changed without destroying the disk and creating a new one. Important: this property cannot be set on disks that have an image_uuid parameter as the image being cloned will already have the ZFS volblocksize property set. type: integer (block size in bytes, 512 to 131072, must be power of 2) vmtype: HVM listable: no create: yes update: no (except when adding new disks) default: 8192 disks.*.boot: Specifies whether this disk should be bootable (only one disk should). type: boolean vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no disks.*.compression: Specifies a compression algorithm used for this disk. This has the same details, warnings and caveats as the global zfsrootcompression option below but only affects a single disk on the VM. See zfsrootcompression section below for more details. 
type: string one of: \"on,off,gzip,gzip-N,lz4,lzjb,zle\" vmtype: HVM listable: no create: yes update: yes (see caveat in zfsrootcompression section below) default: off disks.*.guestblocksize: Specifies the device block size reported to the"
},
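Tying the disk properties above to the add_disks form of update, a hedged sketch of adding a second data zvol to an existing HVM VM might look like this; the size, block size and model are illustrative only.

```
vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<'EOF'
{
  "add_disks": [
    {
      "size": 10240,
      "block_size": 8192,
      "model": "virtio"
    }
  ]
}
EOF
```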
{
"data": "By default, the block size of the underlying device is reported to the guest (see 'disk.*.block_size' above). This setting will override the default value. It also allows reporting of both a physical and logical block size using a string of the form \"logicalsize/physicalsize\" (e.g. \"512/4096\" to look like a 512e drive. This is useful for guests such as Windows where older versions of the Windows virtio driver always reported the block size of a virtio device as 512 bytes (regardless of the block size presented to the guest) while newer versions of the driver report the actual size of the device being reported by the host. NOTE: the value is always a string, and all values must be a power of two. type: string of the form \"NNN\" or \"NNN/NNN\" vmtype: bhyve listable: yes create: yes update: yes (special, see description in 'update' section above) default: no disks.*.nocreate: This parameter indicates whether or not the disk should be created. It only makes sense for disks with media type 'disk'. For media type 'cdrom' the device is not created. It also can only be set when creating a disk. type: boolean vmtype: HVM listable: no create: yes update: no (except when adding new disks) default: false (new zvol is created when media type is 'disk') disks.*.image_name: Name of dataset from which to clone this VM's disk. You should specify either this and 'imagesize' and 'imageuuid', or 'size' for a disk. type: string vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no disks.*.image_size: The size of the image from which we will create this disk. When neither size nor imagesize is passed for a disk but an imageuuid is, and that image is available through imgadm, the image_size value from the manifest will be set as image_size. Important: image_size is required (unless you rely on imgadm) when you include image_uuid for a disk and not allowed when you don't. type: integer (size in MiB) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no (loaded from imgadm if possible) disks.*.image_uuid: UUID of dataset from which to clone this VM's disk. Note: this image's UUID must show up in the 'imgadm list' output in order to be valid. type: string (UUID) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no disks.*.notrim: Explicitly disables TRIM functionality for the disk in the guest. This functionality is also known as UNMAP or DISCARD. This corresponds to the bhyve `nodelete` block-device-option. type: boolean vmtype: bhyve listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no disks.*.pci_slot: Specifies the virtual PCI slot that this disk will occupy. Bhyve places each disk into a PCI slot that is identified by the PCI bus, device, and function (BDF). The slot may be specified as <bus>:<device>:<function> (\"0:4:0\"), <device>:<function> (\"4:0\") or <device> (\"4\"). If bus or function is not specified, 0 is used. Per the PCI specification legal values for bus, device and function are: bus: 0 - 255, inclusive device: 0 - 31, inclusive function: 0 - 7, inclusive All functions on devices 0, 6, 30, and 31 on bus 0 are reserved. For maximum compatibility with boot ROMs and guest operating systems, the disk with boot=true should exist on bus 0 device 3, 4, or 5. If any function other than zero (e.g. 
0:5:1) is used, function zero on the same device"
},
{
"data": "0:5:0) must also be used for the guest OS to recognize the disk in the non-zero slot. If pci_slot is not specified, disks will be assigned to available slots in the 0:4:0 - 0:4:7 range. Disks with media=cdrom will be assigned to 0:3:0 - 0:3:7. The format used by pci_slot is slightly different than that reported by the Linux `lspci` utility that may be used in guests. The format used by `lspci` is <bus>:<device>.<function> with each number is represented in hexadecimal. Also notice the mixture of `:` and `.` separators by `lspci`. type: string (<bus>:<device>:<function>, <device>:function, or <device>) vmtype: bhyve listable: yes create: yes update: yes (special, see description in 'update' section above) default: no disks.*.refreservation: Specifies a refreservation for this disk. This property controls the minimum amount of space reserved for a given disk. See also the zfs(1) man page's description of refreservation. type: integer number of MiB vmtype: HVM listable: no create: yes update: yes (special, see description in 'update' section above) default: size of the disk disks.*.size: Size of disk in MiB. You should only specify this parameter if you've not included the image_* parameters. It will show up in get requests for all disks whether you've specified or not as a means to determine the size of the zvol. Important: size is required when you don't include image_uuid for a disk and not allowed when you do. type: integer (size in MiB) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: no disks.*.media: Specify whether this disk is a 'disk' or 'cdrom'. type: string (one of ['disk','cdrom']) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: disk disks.*.model: Specify the driver for this disk. If your image supports it, you should use virtio. If not, use ide or scsi depending on the drivers in your guest. type: string (kvm: ['virtio','ide','scsi']) (bhyve: ['virtio','ahci','nvme']) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: the value of the disk_driver parameter for this VM disks.*.zpool: The zpool in which to create this zvol. type: string (zpool name) vmtype: HVM listable: yes (see above) create: yes update: yes (special, see description in 'update' section above) default: zones NOTE: SDC does not support any pool name other than the default 'zones'. disks.*.uuid: A UUID that may be used to uniquely identify this disk. It must be unique across all disks associated with this VM. type: uuid vmtype: bhyve listable: yes (see above) create: yes update: yes default: Assigned while adding the disk or at next `vmadm start`. disk_driver: This specifies the default values for disks.*.model for disks attached to this VM. type: string (one of ['virtio','ide','scsi']) vmtype: kvm listable: no create: yes update: yes donotinventory: This specifies that the VM should not be counted or automatically imported into external management tools. The primary use-case is for test zones that are created but you don't want their existence propagated up to a management system since they'll be short-lived. Note: this property will only show up in a 'vmadm get' when it's set true. When set false the property will not appear. 
type: boolean vmtype: ANY listable: no create: yes update: yes dns_domain: For OS VMs this specifies the domain value for /etc/hosts that gets set at create time. Updating this after create will have no"
},
{
"data": "type: string (domain name) vmtype: OS listable: yes create: yes update: no default: local filesystems: This property can be used to mount additional filesystems into an OS VM. It is primarily intended for SDC special VMs. The value is an array of objects. The properties available are listed below under the filesystems.*.<property> options. Those objects can have the following properties: source, target, raw (optional), type and options. filesystems.*.type: For OS VMs this specifies the type of the filesystem being mounted in. Example: lofs type: string (fs type) vmtype: OS listable: no create: yes update: no filesystems.*.source: For OS VMs this specifies the directory in the global zone of the filesystem being mounted in. Example: /pool/somedirectory type: string (path) vmtype: OS listable: no create: yes update: no filesystems.*.target: For OS VMs this specifies the directory inside the Zone where this filesystem should be mounted. Example: /somedirectory type: string (path) vmtype: OS listable: no create: yes update: no filesystems.*.raw: For OS VMs this specifies the additional raw device that should be associated with the source filesystem. Example: /dev/rdsk/somedisk type: string (device) vmtype: OS listable: no create: yes update: no filesystems.*.options: For OS VMs this specifies the array of mount options for this file system when it is mounted into the zone. Examples of options include: \"ro\" and \"nodevices\". type: array of strings (each string is an option) vmtype: OS listable: no create: yes update: no firewall_enabled: This enables the firewall for this VM, allowing firewall rules set by fwadm(8) to be applied. Note: this property will only show up in a 'vmadm get' when it's set true. When set false the property will not appear. type: boolean vmtype: OS listable: no create: yes update: yes flexibledisksize: This sets an upper bound for the amount of space that a bhyve instance may use for its disks and snapshots of those disks. If this value is not set, it will not be possible to create snapshots of the instance. This value must be at least as large as the sum of all of the disk.*.size values. type: integer (number of MiB) vmtype: bhyve listable: yes create: yes update: yes (live update) free_space: This specifies the amount of space in a bhyve instance that is neither allocated to disks nor in use by snapshots of those disks. If snapshots are present, writes to disks may reduce this value. type: integer (number of MiB) vmtype: bhyve listable: no create: no update: no fs_allowed: This option allows you to specify filesystem types this zone is allowed to mount. For example on a zone for building SmartOS you probably want to set this to: \"ufs,pcfs,tmpfs\". To unset this property, set the value to the empty string. type: string (comma separated list of filesystem types) vmtype: OS listable: no create: yes update: yes (requires zone reboot to take effect) hostname: Sets the instance's hostname. For OS VMs, this value will get set in several files at creating time, but changing it later will do nothing. For HVM instances, the hostname is set during boot via DHCP (kvm only) or other boot-time automation such as cloud-init. type: string (hostname) vmtype: ANY listable: yes create: yes update: yes (but does nothing for OS VMs) default: the value of zonename hvm: A boolean that depicts whether or not the VM is hardware virtualized. This property is computed based on the \"brand\" property and is not modifiable. 
type: boolean vmtype: ANY listable: yes create: no update: no image_uuid: This should be a UUID identifying the image for the VM if a VM was created from an"
},
{
"data": "NOTE: when this is passed for HVM VMs, it specifies the zone root dataset which is not visible from within the VM. The user-visible dataset will be the one specified through the disks.*.image_uuid. Normally you do not want to set this for HVM VMs. type: string (UUID) vmtype: ANY listable: yes create: yes update: no internal_metadata: This field allows metadata to be set and associated with this VM. The value should be an object with only top-level key=value pairs. The intention is that customer_metadata contain customer modifiable keys whereas internal_metadata is for operator generated keys. NOTE: for historical reasons, when a user in a zone does: mdata-get name_pw where the key ends with 'pw', the key is looked up in internalmetadata instead of customer_metadata. type: JSON Object (key: value) vmtype: ANY listable: no create: yes update: yes (but see special notes on update command) default: {} internalmetadatanamespaces: This allows a list of namespaces to be set as internal_metadata-only prefixes. If a namespace 'foo' is in this list, metadata keys with the prefix 'foo:' will come from internal_metadata rather than customer_metadata. They will also be read-only from within the zone. type: list of strings vmtype: ANY listable: no create: yes update: yes default: [] indestructible_delegated: When set this property adds an @indestructible snapshot to the delegated (<zfs_filesystem>/data) dataset and sets a zfs hold on that snapshot. This hold must be removed before the VM can be deleted enabling a two-step deletion. Eg. to delete a VM where this has been set, you would need to: vmadm update <uuid> indestructible_delegated=false vmadm delete <uuid> instead of being able to do the delete on its own. The property will only show up in VM objects when set true. NOTE: if the hold on the @indestructible dataset is removed manually from the GZ or from within the zone, this would also remove this flag and allow the VM to be deleted. type: boolean vmtype: ANY listable: yes create: yes update: yes default: false indestructible_zoneroot: When set this property adds an @indestructible snapshot to the zoneroot (zfs_filesystem) dataset and sets a zfs hold on that snapshot. This hold must be removed before the VM can be deleted or reprovisioned. Eg. to delete a VM where this has been set, you would need to: vmadm update <uuid> indestructible_zoneroot=false vmadm delete <uuid> instead of being able to do the delete on its own. The property will only show up in VM objects when set true. NOTE: if the hold on the @indestructible dataset is removed manually from the GZ, this would also remove this flag and allow the VM to be deleted. type: boolean vmtype: ANY listable: yes create: yes update: yes default: false kernel_version: This sets the version of Linux to emulate for LX VMs. type: string (kernel version, eg. 2.6.31) vmtype: lx listable: no create: no update: yes limit_priv: This sets a list of privileges that will be available to the Zone that contains this VM. See privileges(5) for details on possible privileges. type: string (comma separated list of zone privileges) vmtype: OS listable: no create: yes update: yes OS default: \"default\" maintain_resolvers: If set, the resolvers in /etc/resolv.conf inside the VM will be updated when the resolvers property is updated. type: boolean vmtype: OS listable: no create: yes update: yes default: false maxlockedmemory: The total amount of physical memory in the host than can be locked for this VM. This value cannot be higher than maxphysicalmemory. 
type: integer (number of MiB) vmtype: OS listable: yes create: yes update: yes (live update) default: value of maxphysicalmemory max_lwps: The maximum number of lightweight processes this VM is allowed to have running on the"
},
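To make the *pw lookup rule above concrete, here is a sketch of storing an operator-only secret from the global zone and reading it back from inside the zone. The key name, value and UUID are made up for illustration.

```
# In the global zone: *pw keys belong in internal_metadata.
echo '{"set_internal_metadata": {"admin_pw": "s3cret-example"}}' | \
    vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0

# Inside the zone: because the key ends in 'pw', mdata-get
# resolves it from internal_metadata rather than customer_metadata.
mdata-get admin_pw
```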
{
"data": "type: integer (number of LWPs) vmtype: OS listable: yes create: yes update: yes (live update) default: 2000 maxphysicalmemory: The maximum amount of memory on the host that the VM is allowed to use. For kvm VMs, this value cannot be lower than 'ram' and should be ram + 1024. type: integer (number of MiB) vmtype: OS listable: yes create: yes update: yes (live update) default: 256 for OS VMs, (ram size + 1024) for HVM VMs. max_swap: The maximum amount of virtual memory the VM is allowed to use. This cannot be lower than maxphysicalmemory, nor can it be lower than 256. type: integer (number of MiB) vmtype: OS listable: yes create: yes update: yes (live update) default: value of maxphysicalmemory or 256, whichever is higher. mdataexectimeout: For OS VMs this parameter adjusts the timeout on the start method of the svc:/smartdc/mdata:execute service running in the zone. This is the service which runs user-script scripts. This parameter only makes sense when creating a VM and is ignored in other cases. type: integer (0 for unlimited, >0 number of seconds) vmtype: OS listable: no create: yes update: no default: 300 nics: When creating or getting a HVM VM's JSON, you will use this property. This is an array of 'nic' objects. The properties available are listed below under the nics.*.<property> options. If you want to update nics, see the special notes in the section above about the 'update' command. When adding or removing NICs, the NIC names will be created in the order the interfaces are in the nics or add_nics array. To use these properties in a list output or lookup, use the format: nics.*.ip # for lookup matching any interface nics.0.ip # for list output or lookup of a specific interface nics.*.allowdhcpspoofing: With this property set to true, this VM will be able to operate as a DHCP server on this interface. Without this, some of the packets required of a DHCP server will not get through. This property also implies the behavior of allowipspoofing. type: boolean vmtype: ANY listable: yes (see above) create: yes update: yes default: false nics.*.allowipspoofing: With this property set to true, this VM will be able to send and receive packets over this nic that don't match the IP address specified by the ip property. type: boolean vmtype: ANY listable: yes (see above) create: yes update: yes default: false nics.*.allowmacspoofing: With this property set to true, this VM will be able to send packets from this nic with MAC addresses that don't match the mac property. type: boolean vmtype: ANY listable: yes (see above) create: yes update: yes default: false nics.*.allowrestrictedtraffic: With this property set to true, this VM will be able to send restricted network traffic (packets that are not IPv4, IPv6, or ARP) from this nic. type: boolean vmtype: ANY listable: yes (see above) create: yes update: yes default: false nics.*.allowunfilteredpromisc: With this property set to true, this VM will be able to have multiple MAC addresses (eg. running SmartOS with VNICs). Without this option these packets will not be picked up as only those unicast packets destined for the VNIC's MAC will get through. Warning: do not enable this option unless you fully understand the security implications. type: boolean vmtype: HVM listable: yes (see above) create: yes update: yes default: false nics.*.blockedoutgoingports: Array of ports on which this nic is prevented from sending traffic. type: array vmtype: ANY listable: yes (see above) create: yes update: yes"
},
{
"data": "This sets additional IP addresses from which this nic is allowed to send traffic, in addition to the IPs in the ips and vrrpprimaryip properties (if set). Values may be single IPv4 or IPv6 addresses or IPv4 and IPv6 CIDR ranges. The following are all valid examples of allowed_ips: '10.169.0.0/16', '10.99.99.7', 'fe82::/15', '2600:3c00::f03c:91ff:fe96:a267'. type: array (of IP addresses or CIDR ranges) vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.alloweddhcpcids: This specifies which DHCP Client Identifiers outbound DHCP packets are allowed to use. By default, when no Client Identifiers are listed, and nics.*.ips includes \"dhcp\" or \"addrconf\", all DHCP Client Identifiers are permitted. Client Identifiers are specified as a string of pairs of hexadecimal characters beginning with the prefix \"0x\". Up to 20 Client Identifiers can be listed. type: array (of even-lengthed hexadecimal strings beginning with \"0x\") vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.dhcp_server (DEPRECATED): This option behaves identically to allowdhcpspoofing. nics.*.gateway (DEPRECATED): The IPv4 router on this network (not required if using DHCP). This property should be considered deprecated in favor of using nics.*.gateways. type: string (IPv4 address) vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.gateways: An array of IPv4 addresses to use as the network gateway. If multiple addresses are specified, the OS-specific behaviour will apply (e.g., round robining on SmartOS). This property is not required if using DHCPv4. The interface for updating this field is liable to change in the future to make it easier to add or remove addresses. type: array (of IPv4 addresses) vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.interface: This is the interface name the the VM will see for this interface. It will always be in the format netX where X is an integer >= 0. type: string (netX) vmtype: OS listable: yes (see above) create: yes update: no nics.*.ip (DEPRECATED): IPv4 unicast address for this NIC, or 'dhcp' to obtain address via DHCPv4. This property should be considered deprectated in favor of using nics.*.ips. type: string (IPv4 address or 'dhcp') vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.ips: An array of IPv4 or IPv6 addresses to assign to this NIC. The addresses should specify their routing prefix in CIDR notation. The strings 'dhcp' (DHCPv4) and 'addrconf' (SLAAC or DHCPv6) can also be used to obtain the address dynamically. Up to 20 addresses can be listed. Since kvm instances receive their static IP addresses from QEMU via DHCPv4, they can only receive a single IPv4 address. Therefore, the only values that should be used are one of 'dhcp' or an IPv4 address. To assign further IP addresses to them, use nics.*.allowed_ips and configure them from inside the guest operating system. The interface for updating this field is liable to change in the future to make it easier to add or remove addresses. type: array (of IP addresses with routing prefixes, 'dhcp' or 'addrconf') vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.mac: MAC address of virtual NIC. type: string (MAC address) vmtype: ANY listable: yes (see above) create: yes update: no (see 'update' command description) default: we'll generate one nics.*.model: The driver for this NIC [virtio|e1000|rtl8139|...] 
type: string (one of ['virtio','e1000','rtl8139']) vmtype: kvm listable: yes (see above) create: yes update: yes default: the value of the nic_driver property on the VM nics.*.mtu: Sets the MTU for the network interface. The maximum MTU for a device is determined based on its nic tag. If this property is not set, then it defaults to the current MTU of the data link that the nic tag corresponds"
},
{
"data": "The supported range of MTUs is from 1500-9000 for VMs created on physical nics, and 576-9000 for VMs created on etherstubs or overlays. This property is not updated live with vmadm update. If a specific MTU has not been requested, then this property is not present through get. type: integer vmtype: ANY listable: no create: yes update: yes nics.*.netmask The netmask for this NIC's network (not required if using DHCP) type: string (IPv4 netmask, eg. 255.255.255.0) vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.network_uuid UUID for allowing nics to be tracked in an external system type: string (UUID) vmtype: ANY listable: yes (see above) create: yes update: yes nics.*.nic_tag This option for a NIC determines which host NIC the VMs nic will be attached to. The value can be either a nic tag as listed in the 'NIC Names' field in `sysinfo`, or an etherstub or device name. type: string (device name or nic tag name) vmtype: ANY listable: yes create: yes update yes (requires zone stop/boot) nics.*.primary This option selects which NIC's default gateway and nameserver values will be used for this VM. If a VM has any nics, there must always be exactly one primary. Setting a new primary will unset the old. Trying to set two nics to primary is an error. type: boolean (only true is valid) vmtype: ANY listable: yes (see above) create: yes update: yes (setting primary=true on one NIC removes the flag from the current primary, and sets on the new) nics.*.vlan_id: The vlan with which to tag this NIC's traffic (0 = none). type: integer (0-4095) vmtype: ANY listable: yes (see above) create: yes update: yes default: 0 nics.*.vrrpprimaryip: The source IP that will be used to transmit the VRRP keepalive packets for this nic. The IP must be the IP address of one of the other non- VRRP nics in this VM. type: string (IPv4 address) vmtype: OS listable: yes (see above) create: yes update: yes nics.*.vrrp_vrid: The VRRP Virtual Router ID for this nic. This sets the MAC address of this nic to one based on the VRID. type: integer (0-255) vmtype: OS listable: yes (see above) create: yes update: yes nic_driver: This specifies the default values for nics.*.model for NICs attached to this VM. type: string (one of ['virtio','e1000','rtl8139']) vmtype: kvm listable: no create: yes update: yes nowait: This parameter is accepted when provisioning OS VMs and considers the provision complete when the VM is first started rather than waiting for the VM to be rebooted. type: boolean vmtype: OS listable: no create: yes update: no default: false owner_uuid: This parameter can be used for defining the UUID of an 'owner' for this VM. It serves no functional purpose inside the system itself, but can be used to tie this system to others. type: string (UUID) vmtype: ANY listable: yes create: yes update: yes default: 00000000-0000-0000-0000-000000000000 package_name: This is a private field intended for use by Joyent's SDC product. Other users can ignore this field. type: string vmtype: ANY listable: yes create: yes update: yes package_version: This is a private field intended for use by Joyent's SDC product. Other users can ignore this field. type: string vmtype: ANY listable: yes create: yes update: yes pid: For VMs that are currently running, this field indicates the PID of the `init` process for the zone. type: integer (PID) vmtype: ANY listable: yes create: no update: no qemu_opts: This parameter allows one to specify additional arguments to be passed to the"
},
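As an example of updating an existing NIC in place (keyed on its MAC, per the update section earlier), the following sketch makes a second NIC the primary one, which also clears the flag from the current primary. The UUID and MAC address are placeholders.

```
vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<'EOF'
{
  "update_nics": [
    { "mac": "b2:1e:ba:a5:6e:71", "primary": true }
  ]
}
EOF
```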
{
"data": "This is primarily designed to be used for debugging and should not be used beyond that. important: this replaces all of the options listed, so you need to include those from the default list that you want to keep. NOTE: setting this also overrides any SPICE options you might have set. type: string (space-separated options for qemu) vmtype: kvm listable: no create: yes update: yes default: if vnc_password.length != 0: '-vnc unix:/tmp/vm.vnc,password -parallel none -usb -usbdevice tablet -k en-us' else '-vnc unix:/tmp/vm.vnc -parallel none -usb -usbdevice tablet -k en-us' qemuextraopts: This allows you to specify additional qemu command line arguments. When set this string will be appended to the end of the qemu command line. It is intended for debugging and not for general use. type: string (space-separated options for qemu) vmtype: kvm listable: no create: yes update: yes quota: This sets a quota on the zone filesystem. For OS VMs, this value is the space actually visible/usable in the guest. For kvm and bhyve VMs, this value is the quota (kvm) or refquota (bhyve) for the Zone containing the VM, which is not directly available to users. Set quota to 0 to disable (ie. for no quota). type: integer (number of GiB) vmtype: ANY listable: yes create: yes update: yes (live update) ram: For kvm and bhyve VMs this is the amount of virtual RAM that will be available to the guest kernel. For OS VMs this will be the same as the property maxphysicalmemory. type: integer (number of MiB) vmtype: HVM listable: yes create: yes update: yes (requires VM reboot to take effect) default: 256 resolvers: For OS VMs, this value sets the resolvers which get put into /etc/resolv.conf at VM creation. If maintain_resolvers is set to true, updating this property will also update the resolvers in /etc/resolv.conf. For HVM instances, the resolvers are set via DHCP (kvm only) or other other boot-time automation such as cloud-init. type: array vmtype: OS,kvm listable: no create: yes update: yes routes: This is a key-value object that maps destinations to gateways. These will be set as static routes in the VM. The destinations can be either IPs or subnets in CIDR form. The gateways can either be IP addresses, or can be of the form \"nics[0]\" or \"macs[aa:bb:cc:12:34:56]\". Using nics[] or macs[] specifies a link-local route. When using nics[] the IP of the numbered nic in that VM's nics array (the first nic is 0) is used. When using macs[] the IP of the nic with the matching mac address in that VM's nic array is used. As an example: { \"10.2.2.0/24\": \"10.2.1.1\", \"10.3.0.1\": \"nics[1]\", \"10.4.0.1\": \"macs[aa:bb:cc:12:34:56]\" } This sets three static routes: to the 10.2.2.0/24 subnet with a gateway of 10.2.1.1, a link-local route to the host 10.3.0.1 over the VM's second nic, and a link-local route to the host 10.4.0.1 over the VM's nic with the corresponding mac address. type: object vmtype: OS listable: no create: yes update: yes snapshots (EXPERIMENTAL): For bhyve VMs and OS VMs, this will display a list of snapshots from which you can restore the root dataset and its dependent datasets for your VM. Currently this is only supported when your VM does not have any delegated datasets. type: array vmtype: OS or bhyve listable: no create: no (but you can use create-snapshot) update: no (but you can use rollback-snapshot and delete-snapshot) spice_opts (EXPERIMENTAL): This property allows you to add additional -spice options when you are using"
},
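Because routes is one of the key-value properties handled through the set_*/remove_* mechanism of the update command, a static route like those in the example above could be added or removed roughly as follows. The UUID is a placeholder; the destinations and gateways reuse the man page's own example values.

```
# Add (or replace) two static routes.
vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<'EOF'
{
  "set_routes": {
    "10.2.2.0/24": "10.2.1.1",
    "10.3.0.1": "nics[1]"
  }
}
EOF

# Remove one of them again.
echo '{"remove_routes": ["10.3.0.1"]}' | \
    vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0
```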
{
"data": "NOTE: SPICE support requires your kvm zone to be using a zone dataset with the image_uuid option and that image must know what to do with these special options. type: string (-spice XXX options) vmtype: kvm listable: no create: yes update: yes default: <unset> spice_password (EXPERIMENTAL): This property allows you to set a password which will be required when connecting to the SPICE port when SPICE is enabled. NOTE: SPICE support requires your kvm zone to be using a zone root dataset with the image_uuid option and that dataset must know what to do with these special options. IMPORTANT: this password will be visible from the GZ of the CN and anyone with access to the serial port in the guest. Set to an empty string (default) to not require a password at this level. type: string (8 chars max) vmtype: kvm listable: no create: yes update: yes default: <unset> spice_port (EXPERIMENTAL): This specifies the TCP port to listen on for the SPICE server. By default SPICE is not enabled. NOTE: SPICE support requires your kvm zone to be using a zone root dataset with the image_uuid option and that dataset must know what to do with these special options. If set to zero, a port will be chosen at random. Set to -1 to disable TCP listening for SPICE. type: integer (0 for random, -1 for disabled) vmtype: kvm listable: no create: yes update: yes default: <unset> state: This property exposes the current state of a VM. See the 'VM STATES' section below for more details. type: string vmtype: ANY listable: yes create: no update: no tmpfs: This property specifies how much of the VM's memory will be available for the /tmp filesystem. This is only available for OS VMs, and doesn't make any sense for HVM VMs. If set to 0 this indicates that you would like to not have /tmp mounted as tmpfs at all. When changing to/from a \"0\" value, the VM must be rebooted in order for the change to take effect. vmtype: OS listable: yes create: yes update: yes default: maxphysicalmemory transition_expire: When a HVM VM is in transition from running to either 'off' (in the case of stop) or 'start' (in the case of reboot), the transition_expire field will be set. This value will indicate the time at which the current transaction will time out. When the transaction has timed out, vmadmd will force the VM into the correct state and remove the transition. type: integer (unix epoch timestamp) vmtype: kvm listable: no create: no (will show automatically) update: no transition_to: When a HVM VM is in transition from running to either 'off' (in the case of stop) or 'start' (in the case of reboot), the transition_to field will be set to indicate which state the VM is transitioning to. Additionally when a VM is provisioning you may see this with a value of 'running'. type: string value, one of: ['stopped', 'start', 'running'] vmtype: ANY listable: no create: no update: no type: This is a virtual field and cannot be updated. It will be 'OS' when the brand == 'joyent*', 'LX' when the brand == 'lx', 'KVM' when the brand == 'kvm', and 'BHYVE' when the brand == 'bhyve'. type: string value, one of: ['OS', 'LX', 'KVM', 'BHYVE'] vmtype: ANY listable: yes create: no, set by 'brand' property. update: no uuid: This is the unique identifer for the VM. If one is not passed in with the create request, a new UUID will be generated. It cannot be changed after a VM is"
},
{
"data": "type: string (UUID) vmtype: ANY listable: yes create: yes update: no default: a new one is generated vcpus: For HVM VMs this parameter defines the number of virtual CPUs the guest will see. Generally recommended to be a multiple of 2. type: integer (number of vCPUs) vmtype: HVM listable: yes create: yes update: yes (requires VM reboot to take effect) default: 1 vga: This property allows one to specify the VGA emulation to be used by kvm VMs. The default is 'std'. NOTE: with the qemu bundled in SmartOS qxl and xenfb do not work. type: string (one of: 'cirrus','std','vmware','qxl','xenfb') vmtype: kvm listable: no create: yes update: yes default: 'std' virtio_txburst: This controls how many packets can be sent on a single flush of the tx queue. This applies to all the vnics attached to this VM using the virtio model. type: integer vmtype: kvm listable: no create: yes update: yes default: 128 virtio_txtimer: This sets the timeout for the TX timer. It applies to all the vnics attached to this VM using the virtio model. type: integer (in nanoseconds) vmtype: kvm listable: no create: yes update: yes default: 200000 vnc_password: This property allows you to set a password which will be required when connecting to the VNC port. IMPORTANT: this password will be visible from the GZ of the CN. For KVM anyone with access to the serial port in the guest can also see the password. Set to an empty string (default) to not require a password at this level. Changing the password will require a reboot of the zone before the change becomes active. Reboots from inside the guest will not make the changed password active. type: string (8 chars max) vmtype: HVM listable: no create: yes update: yes default: <unset> vnc_port: This specifies the TCP port to listen on for the VNC server, the default is zero which means a port will be chosen at random. Set to -1 to disable TCP listening. type: integer (0 for random, -1 for disabled) vmtype: HVM listable: no create: yes update: yes default: 0 zfsdatacompression: Specifies a compression algorithm used for this VM's data dataset. This option affects only the delegated dataset and therefore only makes sense when the VM has been created with the delegate_dataset option. The caveats and warnings in the zfsrootcompression section below also apply to this option. type: string one of: \"on,off,gzip,gzip-N,lz4,lzjb,zle\" vmtype: OS (and only with a delegated dataset) listable: no create: yes update: yes (see warning in zfsrootcompression section) default: off zfsdatarecsize: This specifies the suggested block size for files in the delegated dataset's filesystem. It can only be set when your zone has a data dataset as added by the delegate_dataset option. The warnings and caveats for zfsrootrecsize also apply to this option. You should read and understand those before using this. type: integer (record size in bytes, 512 to 131072, must be power of 2) vmtype: OS (and only with a delegated dataset) listable: no create: yes update: yes (see caveat below under zfsrootrecsize) default: 131072 (128k) zfsfilesystemlimit: This specifies a limit on the number of filesystems a VM can have. It is most useful when combined with the delegate_dataset option as a mechanism to limit the number of filesystems that can be created from within the zone. The root user in the GZ is immune to this limit. type: integer (0+, set to '', null, or undefined to unset) vmtype: OS listable: no create: yes update: yes default: none (no limit) See zfs(8) `filesystem_limit` for more details. 
zfsiopriority: This sets an IO throttle priority value relative to other"
},
{
"data": "If one VM has a value X and another VM has a value 2X, the machine with the X value will have some of its IO throttled when both try to use all available IO. type: integer (relative value) vmtype: ANY listable: yes create: yes update: yes (live update) default: 100 zfsrootcompression: Specifies a compression algorithm used for this VM's root dataset. This option affects only the zoneroot dataset. Setting to 'on' is equivalent to setting to 'lzjb'. If you want more information about the specific compression types, see the man page for zfs(8). WARNING: If you change this value for an existing VM, only new data will be compressed. It will not rewrite existing data compress. NOTE: to change this property for HVM, see disks.*.zfs_compression above. type: string one of: \"on,off,gzip,gzip-N,lz4,lzjb,zle\" vmtype: OS listable: no create: yes update: yes (see warning above) default: off zfsrootrecsize: Specifies a suggested block size for files in the root file system. This property is designed solely for use with database workloads that access files in fixed-size records. ZFS automatically tunes block sizes according to internal algorithms optimized for typical access patterns. If you have a delegated dataset (with the delegate_dataset option) you should consider leaving this unset and setting zfsdatarecsize instead. WARNING: Use this property only if you know exactly what you're doing as it is very possible to have an adverse effect performance when setting this incorrectly. Also, when doing an update, keep in mind that changing the file system's recordsize affects only files created after the setting is changed; existing files are unaffected. NOTE: to change this property for HVM, see disks.*.block_size above. type: integer (record size in bytes, 512 to 131072, must be power of 2) vmtype: OS listable: no create: yes update: yes (see caveat above) default: 131072 (128k) zfssnapshotlimit: This specifies a limit on the number of snapshots a VM can have. It is most useful when combined with the delegate_dataset option as a mechanism to limit the number of snapshots that can be taken from within the zone. The root user in the GZ is immune to this limit. type: integer (0+, set to '', null, or undefined to unset) vmtype: OS listable: no create: yes update: yes default: none (no limit) See zfs(8) `snapshot_limit` for more details. zlogmaxsize: This property is used to set/show the maximum size for a docker zone's stdio.log file before zoneadmd(8) will rotate it. type: integer (size in bytes) vmtype: ANY listable: no create: yes update: yes default: none (no rotation) zlog_mode: This property will show up for docker zones and indicates which mode the zlog/zfd devices will be in for the VM. The values are simply positional letters used to indicate various capabilities. The following table shows the meaning of the mode values: zlog-mode gz log - tty - ngz log gt- y y n g-- y n n gtn y y y g-n y n y -t- n y n n n n where the \"gz log\" here means we'll write the log to the /zones/<uuid>/logs/stdio.log file, \"tty\" means we'll setup the zfd devices as a tty, and \"ngz log\" means we'll setup the zfd devices to loop logs back into the zone so that a dockerlogger can process them in the zone. type: string (3 character mode string) vmtype: ANY listable: no create: no (handled via docker:* metadata) update: no (handled via docker:* metadata) zone_state: This property will show up when fetching a VMs JSON. this shows the state of the zone in which this VM is contained. eg."
},
{
"data": "It can be different from the 'state' value in several cases. See the 'VM STATES' section below for more details. type: string vmtype: HVM listable: yes create: no update: no zonepath: This property will show up in JSON representing a VM. It describes the path in the filesystem where you will find the VMs zone dataset. For OS VMs all VM data will be under this path, for HVM VMs this is where you'll find things such as the logs and sockets for a VM. type: string (path) vmtype: ANY listable: no create: no (automatic) update: no zonename: This property indicates the zonename of a VM. The zonename is a private property and not intended to be used directly. For OS VMs you can set this property with the create payload, but such use is discouraged. type: string vmtype: ANY listable: yes create: yes (OS VMs only) update: no default: value of uuid zonedid: This property will show up in a JSON payload and can be included in list output. It is a value that is used internally to the system and primarily exists to aid debugging. This value will not change when the VM is started/stopped. type: integer vmtype: ANY listable: yes create: no update: no zoneid: This property will show up in a JSON payload and can be included in list output. It is however a value that is used internally to the system and primarily exists to aid debugging. This value will change whenever the VM is stopped or started. Do not rely on this value. type: integer vmtype: ANY listable: yes create: no update: no zpool: This defines which ZFS pool the VM's zone dataset will be created in For OS VMs, this dataset is where all the data in the zone will live. For HVM VMs, this is only used by the zone shell that the VM runs in. type: string (zpool name) vmtype: ANY listable: yes create: yes update: no default: zones NOTE: SDC does not support any pool name other than the default 'zones'. The 'zone_state' field represents the state of the zone which contains the VM. The zones(7) man page has some more information about these zone states. The 'state' field defaults to the value of zone\\_state, but in some cases the state indicates details of the VM that are not reflected directly by the zone. For example, zones have no concept of 'provisioning' so while a VM is provisioning it will go through several zone\\_states but remain in the provisioning 'state' until either it goes to 'failed', 'stopped' or 'running'. Generally for zone\\_state you should see transitions something like: configured ^ | uninstall | | install | v +> installed <-+ | | | ^ | | | halt | | ready | halt | | v | | | | ready -+ | | | | | boot | v | | running | | | | | shutdown/reboot | v | | shutting_down | | | | | | v | down The state field will have similar transition except: The zone\\_state 'installed' will be state 'stopped'. When first provisioning the VM, the 'provisioning' state will hide the zone_states 'configured' -> 'installed' -> 'ready' -> 'running', as well as any reboots that take place as part of the scripts inside the zone. From 'provisioning' a VM can go into state 'failed' from which it will not recover. It is possible for a VM to be in state 'receiving' while zone\\_state transitions through several"
},
{
"data": "HVM VMs can show state 'stopping' when zone\\_state is running but the guest OS has been notified that it should perform an orderly shutdown. The rest of this section describes the possible values for the 'state' and 'zone_state' fields for a VM object. Each state will be followed by a note about whether it's possible for state, zone\\_state or both, and a brief description what it means that a VM has that state. configured Possible For: state + zone\\_state This indicates that the configuration has been created for the zone that contains the VM, but it does not have data. When a VM is first created you will briefly see this for zone\\_state but see state 'provisioning'. While the VM is being destroyed it also transitions through configured in which case you may see it for both state and zone\\_state. down Possible For: state + zone\\_state The VM has been shut down but there is still something holding it from being completely released into the 'installed' state. Usually VMs only pass through this state briefly. If a VM stays in state 'down' for an extended period of time it typically requires operator intervention to remedy as some portion of the zone was unable to be torn down. failed Possible For: state When a provision fails (typically due to timeout) the VM will be marked as failed and the state will be 'failed' regardless of the zone\\_state. This is usually caused either by a bug in the image's scripts or by the system being overloaded. When a VM has failed to provision it should generally be investigated by an operator to confirm the cause is known and perform any remedy possible before destroying the failed VM and provisioning it again. It is also possible for VMs to go to 'failed' when scripts inside the image have failed during a reprovision. In this case the best course of action is usually to have an operator confirm the cause is known, and reprovision again after fixing the source of the failure. incomplete Possible For: state + zone\\_state If a VM is in this state, it indicates that the zone is in the process of being installed or uninstalled. Normally VMs transition through this state quickly but if a VM stays in this state for an extended period of time it should be investigated by an operator. installed Possible For: zone\\_state The VM has been created and the datasets have been installed. As this really indicates that the VM appears to be healthy but is just not running, we translate this zone\\_state to state 'stopped' to make it clear that it is ready to be started. provisioning Possible For: state When a VM is first being created and autoboot is true, the VM will have state provisioning even as the zone\\_state makes several transitions. Non-HVM VMs will stay in state 'provisioning' until the scripts inside the zone have completed to the point where they have removed the /var/svc/provisioning file that was inserted before the zone was first booted. HVM VMs will stay in state 'provisioning' until the 'query-status' result from qemu includes 'hwsetup' with a value of true. ready Possible For: state + zone\\_state This indicates that the VM has filesystems mounted and devices created but that it is not currently running processes. This state is normally only seen briefly while transitioning to running. receiving Possible For: state This is similar to 'provisioning' in that a VM will stay in state 'receiving' while the 'vmadm recv' command is running and the zone\\_state will change underneath"
},
{
"data": "A received VM will similarly stay in state 'receiving' until all the required datasets have been received. running Possible For: state + zone\\_state The VM has all required resources and is executing processes. shutting_down Possible For: state + zone\\_state The VM is being shut down. Usually VMs only pass through this state briefly. If a VM stays in state 'shutting_down' for an extended period of time it typically requires operator intervention to remedy as some portion of the zone was unable to be torn down. stopped Possible For: state When a VM has zone\\_state 'installed', it will always have state 'stopped'. This is just a straight rename. Please see the 'installed' state for details on what this actually means. stopping Possible For: state This is a state which only exists for HVM VMs. When we have sent a system_powerdown message to qemu via QMP we will mark the the VM as being in state 'stopping' until either the shutdown times out and we halt the zone, or the VM reaches zone\\_state 'installed'. Example 1: Listing KVM VMs with 128M of RAM, sorting by RAM descending and with customized field order. vmadm list -o uuid,type,ram,quota,cpushares,zfsio_priority \\ -s -ram,cpu_shares type=KVM ram=128 Example 2: Creating an OS VM. vmadm create <<EOF { \"brand\": \"joyent\", \"zfsiopriority\": 30, \"quota\": 20, \"image_uuid\": \"47e6af92-daf0-11e0-ac11-473ca1173ab0\", \"maxphysicalmemory\": 256, \"alias\": \"zone70\", \"nics\": [ { \"nic_tag\": \"external\", \"ips\": [\"10.2.121.70/16\"], \"gateways\": [\"10.2.121.1\"], \"primary\": true } ] } EOF Example 3: Creating a KVM VM. vmadm create <<EOF { \"brand\": \"kvm\", \"vcpus\": 1, \"ram\": 256, \"disks\": [ { \"boot\": true, \"model\": \"virtio\", \"image_uuid\": \"e173ecd7-4809-4429-af12-5d11bcc29fd8\", \"image_name\": \"ubuntu-10.04.2.7\", \"image_size\": 5120 } ], \"nics\": [ { \"nic_tag\": \"external\", \"model\": \"virtio\", \"ips\": [\"10.88.88.51/24\"], \"gateways\": [\"10.88.88.2\"], \"primary\": true } ] } EOF Example 4: Getting JSON for the VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0. vmadm get 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 5: Find the VM with the IP 10.2.121.70 (second one with JSON output) vmadm lookup nics.*.ip=10.2.121.70 vmadm lookup -j nics.*.ip=10.2.121.70 Example 6: Looking up all 128M VMs with an alias that starts with 'a' or 'b' and then again with JSON output. vmadm lookup ram=128 alias=~^[ab] vmadm lookup -j ram=128 alias=~^[ab] Example 7: Set the quota to 40G for VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 quota=40 Example 8: Set the cpu_shares to 100 for VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 echo '{\"cpu_shares\": 100}' | \\ vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 9: Add a NIC to the VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<EOF { \"add_nics\": [ { \"interface\": \"net1\", \"nic_tag\": \"external\", \"mac\": \"b2:1e:ba:a5:6e:71\", \"ips\": [\"10.2.121.71/16\"], \"gateways\": [\"10.2.121.1\"] } ] } EOF Example 10: Change the IP of the NIC with MAC b2:1e:ba:a5:6e:71 for the VM with the UUID 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0. vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<EOF { \"update_nics\": [ { \"mac\": \"b2:1e:ba:a5:6e:71\", \"ips\": [\"10.2.121.72/16\"] } ] } EOF Example 11: Remove the NIC with MAC b2:1e:ba:a5:6e:71 from VM with UUID 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0. 
echo '{\"remove_nics\": [\"b2:1e:ba:a5:6e:71\"]}' | \\ vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 12: Adding a lofs filesystem mount to the VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm update 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 <<EOF { \"add_filesystems\": [ { \"source\": \"/bulk/logs/54f1cc77-68f1-42ab-acac-5c4f64f5d6e0\", \"target\": \"/var/log\", \"type\": \"lofs\", \"options\": [ \"nodevice\" ] } ] } EOF Example 13: Stop VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm stop 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 14: Start VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm start 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 15: Reboot VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm reboot 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 Example 16: Delete VM 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 vmadm delete 54f1cc77-68f1-42ab-acac-5c4f64f5d6e0 The following exit values are returned: 0 Successful completion. 1 An error occurred. 2 Invalid usage. zones(7), vmadmd(8), zonecfg(8), zoneadm(8) Some of the vmadm commands depend on the vmadmd(8) service: svc/system/smartdc/vmadmd:default If the vmadmd service is stopped while the vmadm utility is running, the vmadm command behaviour will be undefined. Additionally if the service is not running, some commands will be unavailable."
}
] |
{
"category": "Runtime",
"file_name": "vmadm.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "We added support for pulling Docker images which use the schema1 manifest format in the containerd client. This allows pulling images from older registries as well as images pushed from older versions of Docker. With this change any image from the Docker Hub or registry supporting the v2 API can be pulled. As part of our commitment to support OCI images, the schema 1 images pulled by the client are converted to OCI images before being stored by containerd. The containerd client will only support pushing these images as OCI. One of the goals of containerd is to support multiple consumers. The ability to have swarm, docker, kube, and more all running on the same system and using the same containerd without having naming and actions conflict with one another. We have the namespace API merged in and most of the underlying components updated to support namespaces. You can view the namespace PR below for more information on the functionality of namespaces. We added updates to the client to support attach, checkpoint and restore, exec of additional processes, and fixes. We also ported over the `ctr` and `dist` tools to use the client this week. We only have a few features left to implement, such as , for our 1.0 release. The rest of the month we are working on usability, bug fixes, and stability. We will be helping out with integrations of containerd into other platforms to make sure that everyone has the functionality that they need. If you run into any problems please open an issue on github. Also Pull Requests for the problems you are having are welcome. If you have any question or help working on a change, stop by the #containerd slack channel as everyone is friendly and helpful."
}
] |
{
"category": "Runtime",
"file_name": "2017-06-09.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The is a collaborative project under Linux Foundation supported by storage users and vendors, including Dell EMC, Intel, Huawei, Fujitsu, Western Digital, Vodafone, NTT and Oregon State University. The project will also seek to collaborate with other upstream open source communities such as Cloud Native Computing Foundation, Docker, OpenStack, and Open Container Initiative. It is a software defined storage controller that provides unified block, file, object storage services and focuses on: Simple*: well-defined API that follows the specification. Lightweight*: no external dependencies, deployed once in binary file or container. Extensible*: pluggable framework available for different storage systems, identity services, capability filters, etc. The OpenSDS community welcomes anyone who is interested in software defined storage and shaping the future of cloud-era storage. If you are a company, you should consider joining the . If you are a developer want to be part of the code development that is happening now, please refer to the Contributing sections below. Mailing list: slack: # Ideas/Bugs:"
}
] |
{
"category": "Runtime",
"file_name": "opensds.md",
"project_name": "Soda Foundation",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Longhorn 1.6.1 introduces several improvements and bug fixes that are intended to improve system quality, resilience, and stability. The Longhorn team appreciates your contributions and expects to receive feedback regarding this release. Note: For more information about release-related terminology, see . Ensure that your cluster is running Kubernetes v1.21 or later before installing Longhorn v1.6.1. You can install Longhorn using a variety of tools, including Rancher, Kubectl, and Helm. For more information about installation methods and requirements, see in the Longhorn documentation. Ensure that your cluster is running Kubernetes v1.21 or later before upgrading from Longhorn v1.5.x or v1.6.x to v1.6.1. Longhorn only allows upgrades from supported versions. For more information about upgrade paths and procedures, see in the Longhorn documentation. For information about important changes, including feature incompatibility, deprecation, and removal, see in the Longhorn documentation. For information about issues identified after this release, see . - @derekbit @chriscchien - @c3y1huang @roger-ryao - @PhanLe1010 @chriscchien - @james-munson @roger-ryao - @votdev @roger-ryao - @ejweber @roger-ryao - @ChanYiLin @roger-ryao - @ChanYiLin @chriscchien - @shuo-wu @roger-ryao - @Vicente-Cheng @roger-ryao - @ejweber @chriscchien - @shuo-wu @chriscchien - @ejweber @roger-ryao - @james-munson @roger-ryao - @mantissahz @roger-ryao - @ChanYiLin @chriscchien - @ChanYiLin @khushboo-rancher @chriscchien - @c3y1huang @chriscchien - @roger-ryao - @derekbit @chriscchien - @ejweber @chriscchien - @ejweber @roger-ryao - @james-munson @chriscchien - @ejweber @roger-ryao - @ChanYiLin @chriscchien - @votdev @yangchiu - @yangchiu @ejweber - @yangchiu @ChanYiLin - @ejweber @chriscchien - @ejweber @chriscchien - @c3y1huang @chriscchien - @FrankYang0529 @roger-ryao - @ejweber @roger-ryao - @shuo-wu @roger-ryao - @yangchiu @mantissahz @PhanLe1010 @c3y1huang - @yangchiu @ChanYiLin @chriscchien - @james-munson @chriscchien - @ChanYiLin @chriscchien - @mantissahz @ChanYiLin @FrankYang0529 @PhanLe1010 @Vicente-Cheng @c3y1huang @chriscchien @derekbit @ejweber @innobead @james-munson @khushboo-rancher @mantissahz @roger-ryao @shuo-wu @votdev @yangchiu"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.6.1.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| Author | | | | - | | Date | 2021-12-30 | | Email | | Seccomp stands for secure computing mode and is used to limit calls made by a process from user space to the kernel. iSulad generates a docker seccomp spec by reading a standard configuration file, and then converts it into an oci seccomp spec for the container runtime lcr. After lcr gets the oci seccomp spec, it will save each architecture in seccomp and more than 300 system calls corresponding to the architecture. Since all the architectures are copied indiscriminately when the docker seccomp spec is converted to the oci seccomp spec, there will be a situation where the seccomp information of the arm architecture is also stored on the x86 architecture machine, which leads to consumption. This reconstruction intends to obtain the current machine architecture when the program is running, and obtain the architecture in a targeted manner during the docker/oci seccomp spec conversion process, so as to reduce the file writing time and improve the container startup speed. During spec conversion, the current machine architecture is read through uname, and the architecture is converted into the seccomp standard format. The corresponding relationship is as follows (currently only x86 and arm architectures are supported): 386 || amd64 SCMP_ARCH_X86_64 arm64 || arm SCMP_ARCH_AARCH64 Then traverse all the architectures in the docker seccomp spec, find the required architecture and add it and all its sub-architectures to the oci seccomp spec. In this way, when the lower-level container is running, only the system calls of the corresponding architecture of the current system will be placed on the disk. x86_64: Create 500 containers sequentially: the average time increases from 57.00s to 56.671s, an increase of 0.6% Create 200 containers in parallel: the average time increases from 14.407s to 14.084s, an increase of 2.24% arm64: Create 500 containers sequentially: the average time increases from 75.271s to 74.263s, an increase of 1.34% Create 150 containers in parallel: average time from 10.255s to 10.131s, an increase of 1.21%"
}
] |
{
"category": "Runtime",
"file_name": "seccomp_optimization.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Cilium configuration options ``` cilium-dbg config [<option>=(enable|disable) ...] [flags] ``` ``` -a, --all Display all cilium configurations -h, --help help for config --list-options List available options -n, --num-pages int Number of pages for perf ring buffer. New values have to be > 0 -o, --output string json| yaml| jsonpath='{}' -r, --read-only Display read only configurations ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Retrieve cilium configuration"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_config.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: JuiceFS S3 Gateway sidebar_position: 5 description: JuiceFS S3 Gateway allows the JuiceFS file system to be accessed externally using the S3 protocol. This enables applications to access files stored on JuiceFS through Amazon S3 SDKs. JuiceFS S3 Gateway is one of the various access methods supported by JuiceFS. It allows the JuiceFS file system to be accessed externally using the S3 protocol. This enables applications to access files stored on JuiceFS using Amazon S3 SDKs. In JuiceFS, . JuiceFS provides multiple access methods, including the FUSE POSIX, WebDAV, S3 Gateway, and CSI Driver. Among these options, S3 Gateway is particularly popular. Below is the S3 Gateway architecture: JuiceFS S3 Gateway implements its functionality through . Leveraging MinIO's , we integrate the JuiceFS file system as the backend storage for MinIO servers. This provides a user experience close to that of native MinIO usage while inheriting many advanced features of MinIO. In this architecture, JuiceFS acts as a local disk for the MinIO instance, and the principle is similar to the `minio server /data1` command.. Common application scenarios for JuiceFS S3 Gateway include: Exposing the S3 API for JuiceFS: Applications can access files stored on JuiceFS using S3 SDKs. Using S3 clients: Using tools like S3cmd, AWS CLI, and MinIO clients to easily access and manage files stored on JuiceFS. Managing files in JuiceFS: JuiceFS S3 Gateway provides a web-based file manager to manage files in JuiceFS directly from a browser. Cluster replication: In scenarios requiring cross-cluster data replication, JuiceFS S3 Gateway serves as a unified data export for clusters. This avoids cross-region metadata access and enhances data transfer performance. For details, see . Create a JuiceFS file system by following the steps in . Start JuiceFS S3 Gateway. Before enabling the gateway, set the `MINIOROOTUSER` and `MINIOROOTPASSWORD` environment variables. They serve as the access key and secret key for authentication when you access the S3 API. These credentials are called administrator credentials, with the highest privileges. For example: ```shell export MINIOROOTUSER=admin export MINIOROOTPASSWORD=12345678 ``` Note that `MINIOROOTUSER` must be at least 3 characters long, and `MINIOROOTPASSWORD` must be at least 8 characters long. Windows users must use the `set` command to set environment variables, for example, `set MINIOROOTUSER=admin`. Then, use the `juicefs gateway` command to start JuiceFS S3 Gateway. For example: ```shell juicefs gateway redis://localhost:6379/1 localhost:9000 ``` The `gateway` subcommand requires at least two parameters: the database URL for storing metadata and the address/port for JuiceFS S3 Gateway to listen on. To optimize JuiceFS S3 Gateway, you can add to `gateway` subcommands as needed. For example, you can set the default local cache to 20 GiB. ```shell juicefs gateway --cache-size 20480 redis://localhost:6379/1 localhost:9000 ``` This example assumes that the JuiceFS file system uses a local Redis database. When JuiceFS S3 Gateway is enabled, you can access the gateway's management interface at `http://localhost:9000` on the current host. To allow access to JuiceFS S3 Gateway from other hosts on the local network or the internet, adjust the listen address. For example: ```shell juicefs gateway redis://localhost:6379/1 0.0.0.0:9000 ``` This configuration makes JuiceFS S3 Gateway accept requests from all networks by default. 
Different S3 clients can access JuiceFS S3 Gateway using different addresses. For example: Third-party clients on the same host as JuiceFS S3 Gateway can use `http://127.0.0.1:9000` or `http://localhost:9000` for access. Third-party clients on the same local network as the JuiceFS S3 Gateway host can use"
},
{
"data": "for access (assuming the JuiceFS S3 Gateway host's internal IP address is `192.168.1.8`). Using `http://110.220.110.220:9000` to access JuiceFS S3 Gateway over the internet (assuming the JuiceFS S3 Gateway host's public IP address is `110.220.110.220`). Various S3 API-supported clients, desktop applications, and web applications can access JuiceFS S3 Gateway. Ensure you use the correct address and port for accessing JuiceFS S3 Gateway. :::tip Note The following examples assume accessing JuiceFS S3 Gateway running on the local host with third-party clients. Adjust JuiceFS S3 Gateway's address according to your specific scenario. ::: Download and install the AWS Command Line Interface (AWS CLI) from . Configure it: ```bash $ aws configure AWS Access Key ID [None]: admin AWS Secret Access Key [None]: 12345678 Default region name [None]: Default output format [None]: ``` The program guides you interactively to add new configurations. Use the same values for `Access Key ID` as `MINIOROOTUSER` and `Secret Access Key` as `MINIOROOTPASSWORD`. Leave the region name and output format blank. Now you can use the `aws s3` command to access JuiceFS storage, for example: ```bash $ aws --endpoint-url http://localhost:9000 s3 ls $ aws --endpoint-url http://localhost:9000 s3 ls s3://<bucket> ``` To avoid compatibility issues, we recommend using the `RELEASE.2021-04-22T17-40-00Z` version of the MinIO Client (`mc`). You can find historical versions with different architectures of `mc` at this . For example, for the amd64 architecture, you can download the `RELEASE.2021-04-22T17-40-00Z` version of `mc` from this . After installing `mc`, add a new alias: ```bash mc alias set juicefs http://localhost:9000 admin 12345678 ``` Then, you can freely copy, move, add, and delete files and folders between the local disk, JuiceFS storage, and other cloud storage services using the `mc` client. ```shell $ mc ls juicefs/jfs [2021-10-20 11:59:00 CST] 130KiB avatar-2191932_1920.png [2021-10-20 11:59:00 CST] 4.9KiB box-1297327.svg [2021-10-20 11:59:00 CST] 21KiB cloud-4273197.svg [2021-10-20 11:59:05 CST] 17KiB hero.svg [2021-10-20 11:59:06 CST] 1.7MiB hugo-rocha-qFpnvZ_j9HU-unsplash.jpg [2021-10-20 11:59:06 CST] 16KiB man-1352025.svg [2021-10-20 11:59:06 CST] 1.3MiB man-1459246.ai [2021-10-20 11:59:08 CST] 19KiB sign-up-accent-left.07ab168.svg [2021-10-20 11:59:10 CST] 11MiB work-4997565.svg ``` By default, JuiceFS S3 Gateway only allows one bucket. The bucket name is the file system name. If you need multiple buckets, you can add `--multi-buckets` at startup to enable multi-bucket support. This parameter exports each subdirectory under the top-level directory of the JuiceFS file system as a separate bucket. Creating a bucket means creating a subdirectory with the same name at the top level of the file system. ```shell juicefs gateway redis://localhost:6379/1 localhost:9000 --multi-buckets ``` By default, JuiceFS S3 Gateway does not save or return object ETag information. You can enable this with `--keep-etag`. Object tags are not supported by default, but you can use `--object-tag` to enable them. By default, JuiceFS S3 Gateway supports path-style requests in the format of `http://mydomain.com/bucket/object`. The `MINIO_DOMAIN` environment variable is used to enable virtual host-style requests. If the request's `Host` header information matches `(.+).mydomain.com`, the matched pattern `$1` is used as the bucket, and the path is used as the object. 
For example: ```shell export MINIO_DOMAIN=mydomain.com ``` The default refresh interval for Identity and Access Management (IAM) caching is 5 minutes. You can adjust this using `--refresh-iam-interval`. The value of this parameter is a time string with a unit, such as \"300ms\", \"-1.5h\", or \"2h45m.\" Valid time units are \"ns\", \"us\" (or \"s\"), \"ms\", \"s\", \"m\", and"
},
{
"data": "For example, to set a refresh interval of 1 minute: ```sh juicefs gateway xxxx xxxx --refresh-iam-interval 1m ``` The core feature of JuiceFS S3 Gateway is to provide the S3 API. Now, the support for the S3 protocol is comprehensive. Version 1.2 supports IAM and bucket event notifications. These advanced features require the `RELEASE.2021-04-22T17-40-00Z` version of the `mc` client. For the usage of these advanced features, see the or the `mc` command-line help information. If you are unsure about the available features or how to use a specific feature, you can append `-h` to a subcommand to view the help information. Before version 1.2, `juicefs gateway` only created a superuser when starting, and this superuser belonged only to that process. Even if multiple gateway processes shared the same file system, their users were isolated between processes. You could set different superusers for each gateway process, and they were independent and unaffected by each other. Starting from version 1.2, `juicefs gateway` still requires setting a superuser at startup, and this superuser remains isolated per process. However, it allows adding new users using `mc admin user add`. Newly added users are shared across the same file system. You can manage new users using `mc admin user`. This supports adding, disabling, enabling, and deleting users, as well as viewing all users and displaying user information and policies. ```Shell $ mc admin user -h NAME: mc admin user - manage users USAGE: mc admin user COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: add add a new user disable disable user enable enable user remove remove user list list all users info display info of a user policy export user policies in JSON format svcacct manage service accounts ``` An example of adding a user: ```Shell $ mc admin user add myminio user1 admin123 $ mc admin user list myminio enabled user1 $ mc admin user list myminio --json { \"status\": \"success\", \"accessKey\": \"user1\", \"userStatus\": \"enabled\" } ``` The `mc admin user svcacct` command supports service account management. This allows you to add service accounts for a user. Each service account is associated with a user identity and inherits policies attached to its parent user or the group to which the parent user belongs. Each access key supports optional inline policies that can further restrict access to operations and resources subsets available to the parent user. ``` $ mc admin user svcacct -h NAME: mc admin user svcacct - manage service accounts USAGE: mc admin user svcacct COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: add add a new service account ls List services accounts rm Remove a service account info Get a service account info set edit an existing service account enable Enable a service account disable Disable a services account ``` The S3 Gateway Security Token Service (STS) is a service that allows clients to request temporary credentials to access MinIO resources. The working principle of temporary credentials is almost the same as default administrator credentials but with some differences: Temporary credentials are short-lived. They can be configured to last from minutes to hours. After expiration, the gateway no longer recognizes them and does not allow any form of API request access. Temporary credentials do not need to be stored with the application. They are dynamically generated and provided to the application when requested. When temporary credentials expire, applications can request new credentials. 
The `AssumeRole` operation returns a set of temporary security credentials. You can use them to access gateway"
},
{
"data": "`AssumeRole` requires authorization credentials for an existing gateway user and returns temporary security credentials, including an access key, secret key, and security token. Applications can use these temporary security credentials to sign requests for gateway API operations. The policies applied to these temporary credentials inherit from gateway user credentials. By default, `AssumeRole` creates temporary security credentials with a validity period of one hour. However, you can specify the duration of the credentials using the optional parameter `DurationSeconds`, which can range from 900 (15 minutes) to 604,800 (7 days). `Version` Indicates the STS API version information. The only supported value is '2011-06-15', borrowed from the AWS STS API documentation for compatibility. | Parameter | Value | | - | | | Type | String | | Require | Yes | `AUTHPARAMS` Indicates the STS API authorization information. If you are familiar with AWS Signature V4 authorization headers, this STS API supports the signature V4 authorization as described . `DurationSeconds` Duration in seconds. This value can range from 900 seconds (15 minutes) to 7 days. If the value is higher than this setting, the operation fails. By default, this value is set to 3,600 seconds. | Parameter | Value | |-|| | Type | Integer | | Valid range | From 900 to 604,800 | | Required | No | Policy A JSON-format IAM policy that you want to use as an inline session policy. This parameter is optional. Passing a policy to this operation returns new temporary credentials. The permissions of the generated session are the intersection of preset policy names and the policy set here. You cannot use this policy to grant more permissions than allowed by the assumed preset policy names. | Parameter | Value | |-|-| | Type | String | | Valid range | From 1 to 2,048 | | Required | No | The XML response of this API is similar to . The XML error response of this API is similar to . ``` http://minio:9000/?Action=AssumeRole&DurationSeconds=3600&Version=2011-06-15&Policy={\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Stmt1\",\"Effect\":\"Allow\",\"Action\":\"s3:\",\"Resource\":\"arn:aws:s3:::\"}]}&AUTHPARAMS ``` ``` <?xml version=\"1.0\" encoding=\"UTF-8\"?> <AssumeRoleResponse xmlns=\"https://sts.amazonaws.com/doc/2011-06-15/\"> <AssumeRoleResult> <AssumedRoleUser> <Arn/> <AssumeRoleId/> </AssumedRoleUser> <Credentials> <AccessKeyId>Y4RJU1RNFGK48LGO9I2S</AccessKeyId> <SecretAccessKey>sYLRKS1Z7hSjluf6gEbb9066hnx315wHTiACPAjg</SecretAccessKey> <Expiration>2019-08-08T20:26:12Z</Expiration> <SessionToken>eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJZNFJKVTFSTkZHSzQ4TEdPOUkyUyIsImF1ZCI6IlBvRWdYUDZ1Vk80NUlzRU5SbmdEWGo1QXU1WWEiLCJhenAiOiJQb0VnWFA2dVZPNDVJc0VOUm5nRFhqNUF1NVlhIiwiZXhwIjoxNTQxODExMDcxLCJpYXQiOjE1NDE4MDc0NzEsImlzcyI6Imh0dHBzOi8vbG9jYWxob3N0Ojk0NDMvb2F1dGgyL3Rva2VuIiwianRpIjoiYTBiMjc2MjktZWUxYS00M2JmLTg3MzktZjMzNzRhNGNkYmMwIn0.ewHqKVFTaP-jkgZrcOEKroNUjk10GEp8bqQjxBbYVovV0nHO985VnRESFbcT6XMDDKHZiWqN2viETX_u3Q-w</SessionToken> </Credentials> </AssumeRoleResult> <ResponseMetadata> <RequestId>c6104cbe-af31-11e0-8154-cbc7ccf896c7</RequestId> </ResponseMetadata> </AssumeRoleResponse> ``` Start the gateway and create a user named `foobar`. Configure the AWS CLI: ``` [foobar] region = us-east-1 awsaccesskey_id = foobar awssecretaccess_key = foo12345 ``` Use the AWS CLI to request the `AssumeRole` API. :::note Note In the command below, `--role-arn` and `--role-session-name` have no significance for the gateway. 
You can set them to any value that meets the command line requirements. ::: ```sh $ aws --profile foobar --endpoint-url http://localhost:9000 sts assume-role --policy '{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"Stmt1\",\"Effect\":\"Allow\",\"Action\":\"s3:\",\"Resource\":\"arn:aws:s3:::\"}]}' --role-arn arn:xxx:xxx:xxx:xxxx --role-session-name anything { \"AssumedRoleUser\": { \"Arn\": \"\" }, \"Credentials\": { \"SecretAccessKey\": \"xbnWUoNKgFxi+uv3RI9UgqP3tULQMdI+Hj+4psd4\", \"SessionToken\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJhY2Nlc3NLZXkiOiJLOURUSU1VVlpYRVhKTDNBVFVPWSIsImV4cCI6MzYwMDAwMDAwMDAwMCwicG9saWN5IjoidGVzdCJ9.PetK5wWUcnCJkMYv6TEs7HqlA4xvViykQ8b2T6hapFGJTO34sfTwqBnHF6lAiWxRoZXco11B0R7y58WAsrQw\", \"Expiration\": \"2019-02-20T19:56:59-08:00\", \"AccessKeyId\": \"K9DTIMUVZXEXJL3ATUOY\" } } ``` See the . By default, newly created users have no permissions and need to be granted permissions using `mc admin policy` before they can be used. This command supports adding, deleting, updating, and listing policies, as well as adding, deleting, and updating permissions for users. ```Shell $ mc admin policy -h NAME: mc admin policy - manage policies defined in the MinIO server USAGE: mc admin policy COMMAND [COMMAND FLAGS | -h]"
},
{
"data": "COMMANDS: add add new policy remove remove policy list list all policies info show info on a policy set set IAM policy on a user or group unset unset an IAM policy for a user or group update Attach new IAM policy to a user or group ``` The gateway includes the following common policies: `readonly`: Read-only users. `readwrite`: Read-write users. `writeonly`: Write-only users. `consoleAdmin`: Read-write-admin users, where \"admin\" means the ability to use management APIs such as creating users. For example, to set a user as a read-only user: ```Shell $ mc admin policy set myminio readonly user=user1 $ mc admin user list myminio enabled user1 readonly ``` For custom policies, use `mc admin policy add`: ```Shell $ mc admin policy add -h NAME: mc admin policy add - add new policy USAGE: mc admin policy add TARGET POLICYNAME POLICYFILE POLICYNAME: Name of the canned policy on MinIO server. POLICYFILE: Name of the policy file associated with the policy name. EXAMPLES: Add a new canned policy 'writeonly'. $ mc admin policy add myminio writeonly /tmp/writeonly.json ``` The policy file to be added here must be in JSON format with syntax, and no more than 2,048 characters. This syntax allows for more fine-grained access control. If you are unfamiliar with this, you can first use the following command to see the simple policies and then modify them accordingly. ```Shell $ mc admin policy info myminio readonly { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetBucketLocation\", \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::*\" ] } ] } ``` JuiceFS S3 Gateway supports creating user groups, similar to Linux user groups, and uses `mc admin group` for management. You can set one or more users to a group and grant permissions uniformly to the group. This usage is similar to user management. ```Shell $ mc admin group -h NAME: mc admin group - manage groups USAGE: mc admin group COMMAND [COMMAND FLAGS | -h] [ARGUMENTS...] COMMANDS: add add users to a new or existing group remove remove group or members from a group info display group info list display list of groups enable enable a group disable disable a group ``` In addition to user-specific permissions, anonymous access management is also possible. This allows specific objects or buckets to be accessible to anyone. You can use the `mc policy` command to manage this functionality. ```Shell Name: mc policy - manage anonymous access to buckets and objects USAGE: mc policy [FLAGS] set PERMISSION TARGET mc policy [FLAGS] set-json FILE TARGET mc policy [FLAGS] get TARGET mc policy [FLAGS] get-json TARGET mc policy [FLAGS] list TARGET PERMISSION: Allowed policies are: [none, download, upload, public]. FILE: A valid S3 policy JSON filepath. EXAMPLES: Set bucket to \"download\" on Amazon S3 cloud storage. $ mc policy set download s3/burningman2011 Set bucket to \"public\" on Amazon S3 cloud storage. $ mc policy set public s3/shared Set bucket to \"upload\" on Amazon S3 cloud storage. $ mc policy set upload s3/incoming Set policy to \"public\" for bucket with prefix on Amazon S3 cloud storage. $ mc policy set public s3/public-commons/images Set a custom prefix based bucket policy on Amazon S3 cloud storage using a JSON file. $ mc policy set-json /path/to/policy.json s3/public-commons/images Get bucket permissions. $ mc policy get s3/shared Get bucket permissions in JSON format. $ mc policy get-json s3/shared List policies set to a specified bucket. 
$ mc policy list s3/shared List public object URLs recursively. $ mc policy --recursive links s3/shared/ ``` The gateway has built-in support for four types of anonymous permissions by default: `none`: Disallows anonymous access (typically used to clear existing permissions). `download`: Allows anyone to read. `upload`: Allows anyone to"
},
{
"data": "`public`: Allows anyone to read and write. The following example shows how to set an object to allow anonymous downloads: ``` mc policy set download useradmin/testbucket1/afile mc policy get-json useradmin/testbucket1/afile $ mc policy --recursive links useradmin/testbucket1/ http://127.0.0.1:9001/testbucket1/afile wget http://127.0.0.1:9001/testbucket1/afile mc policy set none useradmin/testbucket1/afile ``` All management API updates for JuiceFS S3 Gateway take effect immediately and are persisted to the JuiceFS file system. Clients that accept these API requests also immediately reflect these changes. However, in a multi-server gateway setup, the situation is slightly different. This is because when the gateway handles request authentication, it uses in-memory cached information as the validation baseline. Otherwise, reading configuration file content for every request would pose unacceptable performance issues. However, caching also introduces potential inconsistencies between cached data and the configuration file. Currently, JuiceFS S3 Gateway's cache refresh strategy involves forcibly updating the in-memory cache every 5 minutes (certain operations also trigger cache update operations). This ensures that configuration changes take effect within a maximum of 5 minutes in a multi-server setup. You can adjust this time by using the `--refresh-iam-interval` parameter. If immediate effect on a specific gateway is required, you can manually restart it. You can use bucket event notifications to monitor events happening on objects within a storage bucket and trigger certain actions in response. Currently supported object event types include: `s3:ObjectCreated:Put` `s3:ObjectCreated:CompleteMultipartUpload` `s3:ObjectAccessed:Head` `s3:ObjectCreated:Post` `s3:ObjectRemoved:Delete` `s3:ObjectCreated:Copy` `s3:ObjectAccessed:Get` Supported global events include: `s3:BucketCreated` `s3:BucketRemoved` You can use the `mc` client tool with the event subcommand to set up and monitor event notifications. Notifications sent by MinIO for publishing events are in JSON format. See the . To reduce dependencies, JuiceFS S3 Gateway has cut support for certain event destination types. Currently, storage bucket events can be published to the following destinations: Redis MySQL PostgreSQL Webhooks ```Shell $ mc admin config get myminio | grep notify notify_webhook publish bucket notifications to webhook endpoints notify_mysql publish bucket notifications to MySQL databases notify_postgres publish bucket notifications to Postgres databases notify_redis publish bucket notifications to Redis datastores ``` Redis event destination supports two formats: `namespace` and `access`. In the `namespace` format, the gateway synchronizes objects in the bucket to entries in a Redis hash. Each entry corresponds to an object in the storage bucket, with the key set to \"bucket name/object name\" and the value as JSON-formatted event data specific to that gateway object. Any updates or deletions of objects also update or delete corresponding entries in the hash. In the `access` format, the gateway uses to add events to a list. Each element in this list is a JSON-formatted list with two elements: A timestamp string A JSON object containing event data related to operations on the bucket In this format, elements in the list are not updated or deleted. To use notification destinations in `namespace` and `access` formats: Configure Redis with the gateway. 
Use the `mc admin config set` command to configure Redis as the event notification destination: ```Shell $ mc admin config set myminio notifyredis:1 address=\"127.0.0.1:6379/1\" format=\"namespace\" key=\"bucketevents\" password=\"yoursecret\" queuedir=\"\" queue_limit=\"0\" ``` You can use `mc admin config get myminio notify_redis` to view the configuration options. Different types of destinations have different configuration options. For Redis type, it has the following configuration options: ```Shell $ mc admin config get myminio notify_redis notifyredis enable=off format=namespace address= key= password= queuedir= queue_limit=0 ``` Here are the meanings of each configuration option: ```Shell notify_redis[:name] Supports setting multiple Redis instances with different"
},
{
"data": "address* (address) Address of the Redis server. For example: localhost:6379. key* (string) Redis key to store/update events. The key is created automatically. format (namespace|access) Whether it is namespace or access. Default is 'namespace'. password (string) Password for the Redis server. queue_dir (path) Directory to store unsent messages, for example, '/home/events'. queue_limit (number) Maximum limit of unsent messages. Default is '100000'. comment (sentence) Optional comment description. ``` The gateway supports persistent event storage. Persistent storage backs up events when the Redis broker is offline and replays events when the broker comes back online. You can set the directory for event storage using the `queuedir` field and the maximum limit for storage using `queuelimit`. For example, you can set `queuedir` to `/home/events`, and you can set `queuelimit` to 1,000. By default, `queue_limit` is 100,000. Before updating the configuration, you can use the `mc admin config get` command to get the current configuration. ```Shell $ mc admin config get myminio notify_redis notifyredis:1 address=\"127.0.0.1:6379/1\" format=\"namespace\" key=\"bucketevents\" password=\"yoursecret\" queuedir=\"\" queue_limit=\"0\" $ mc admin config set myminio notifyredis:1 queuelimit=\"1000\" Successfully applied new settings. Please restart your server 'mc admin service restart myminio'. ``` After using the `mc admin config set` command to update the configuration, restart JuiceFS S3 Gateway to apply the changes. JuiceFS S3 Gateway will output a line similar to `SQS ARNs: arn:minio:sqs::1:redis`. Based on your needs, you can add multiple Redis destinations by providing the identifier for each Redis instance (like the \"1\" in the example \"notify_redis:1\") along with the configuration parameters for each instance. Enable bucket notifications. Now you can enable event notifications on a bucket named \"images.\" When a JPEG file is created or overwritten, a new key is created or an existing key is updated in the previously configured Redis hash. If an existing object is deleted, the corresponding key is also removed from the hash. Therefore, the rows in the Redis hash map to `.jpg` objects in the \"images\" bucket. To configure bucket notifications, you need to use the Amazon Resource Name (ARN) information outputted by the gateway in the previous steps. See more information about . You can use the `mc` tool to add these configuration details. Assuming the gateway service alias is myminio, you can execute the following script: ```Shell mc mb myminio/images mc event add myminio/images arn:minio:sqs::1:redis --suffix .jpg mc event list myminio/images arn:minio:sqs::1:redis s3:ObjectCreated:,s3:ObjectRemoved:,s3:ObjectAccessed:* Filter: suffix=\".jpg\" ``` Verify Redis. Start the `redis-cli` Redis client program to check the content in Redis. Running the `monitor` Redis command will output every command executed on Redis. ```Shell redis-cli -a yoursecret 127.0.0.1:6379> monitor OK ``` Upload a file named `myphoto.jpg` to the `images` bucket. 
```Shell mc cp myphoto.jpg myminio/images ``` In the previous terminal, you can see the operations performed by the gateway on Redis: ```Shell 127.0.0.1:6379> monitor OK 1712562516.867831 [1 192.168.65.1:59280] \"hset\" \"bucketevents\" \"images/myphoto.jpg\" \"{\\\"Records\\\":[{\\\"eventVersion\\\":\\\"2.0\\\",\\\"eventSource\\\":\\\"minio:s3\\\",\\\"awsRegion\\\":\\\"\\\",\\\"eventTime\\\":\\\"2024-04-08T07:48:36.865Z\\\",\\\"eventName\\\":\\\"s3:ObjectCreated:Put\\\",\\\"userIdentity\\\":{\\\"principalId\\\":\\\"admin\\\"},\\\"requestParameters\\\":{\\\"principalId\\\":\\\"admin\\\",\\\"region\\\":\\\"\\\",\\\"sourceIPAddress\\\":\\\"127.0.0.1\\\"},\\\"responseElements\\\":{\\\"content-length\\\":\\\"0\\\",\\\"x-amz-request-id\\\":\\\"17C43E891887BA48\\\",\\\"x-minio-origin-endpoint\\\":\\\"http://127.0.0.1:9001\\\"},\\\"s3\\\":{\\\"s3SchemaVersion\\\":\\\"1.0\\\",\\\"configurationId\\\":\\\"Config\\\",\\\"bucket\\\":{\\\"name\\\":\\\"images\\\",\\\"ownerIdentity\\\":{\\\"principalId\\\":\\\"admin\\\"},\\\"arn\\\":\\\"arn:aws:s3:::images\\\"},\\\"object\\\":{\\\"key\\\":\\\"myphoto.jpg\\\",\\\"size\\\":4,\\\"eTag\\\":\\\"40b134ab8a3dee5dd9760a7805fd495c\\\",\\\"userMetadata\\\":{\\\"content-type\\\":\\\"image/jpeg\\\"},\\\"sequencer\\\":\\\"17C43E89196AE2A0\\\"}},\\\"source\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"port\\\":\\\"\\\",\\\"userAgent\\\":\\\"MinIO (darwin; arm64) minio-go/v7.0.11 mc/RELEASE.2021-04-22T17-40-00Z\\\"}}]}\" ``` Here, you can see that the gateway executed the `HSET` command on the `minio_events` key. In the `access` format, `minio_events` is a list, and the gateway calls `RPUSH` to add it to the list. In the `monitor` command, you can see: ```Shell 127.0.0.1:6379> monitor OK 1712562751.922469 [1 192.168.65.1:61102] \"rpush\" \"aceesseventskey\" \"[{\\\"Event\\\":[{\\\"eventVersion\\\":\\\"2.0\\\",\\\"eventSource\\\":\\\"minio:s3\\\",\\\"awsRegion\\\":\\\"\\\",\\\"eventTime\\\":\\\"2024-04-08T07:52:31.921Z\\\",\\\"eventName\\\":\\\"s3:ObjectCreated:Put\\\",\\\"userIdentity\\\":{\\\"principalId\\\":\\\"admin\\\"},\\\"requestParameters\\\":{\\\"principalId\\\":\\\"admin\\\",\\\"region\\\":\\\"\\\",\\\"sourceIPAddress\\\":\\\"127.0.0.1\\\"},\\\"responseElements\\\":{\\\"content-length\\\":\\\"0\\\",\\\"x-amz-request-id\\\":\\\"17C43EBFD35A53B8\\\",\\\"x-minio-origin-endpoint\\\":\\\"http://127.0.0.1:9001\\\"},\\\"s3\\\":{\\\"s3SchemaVersion\\\":\\\"1.0\\\",\\\"configurationId\\\":\\\"Config\\\",\\\"bucket\\\":{\\\"name\\\":\\\"images\\\",\\\"ownerIdentity\\\":{\\\"principalId\\\":\\\"admin\\\"},\\\"arn\\\":\\\"arn:aws:s3:::images\\\"},\\\"object\\\":{\\\"key\\\":\\\"myphoto.jpg\\\",\\\"size\\\":4,\\\"eTag\\\":\\\"40b134ab8a3dee5dd9760a7805fd495c\\\",\\\"userMetadata\\\":{\\\"content-type\\\":\\\"image/jpeg\\\"},\\\"sequencer\\\":\\\"17C43EBFD3DACA70\\\"}},\\\"source\\\":{\\\"host\\\":\\\"127.0.0.1\\\",\\\"port\\\":\\\"\\\",\\\"userAgent\\\":\\\"MinIO (darwin; arm64) minio-go/v7.0.11 mc/RELEASE.2021-04-22T17-40-00Z\\\"}}],\\\"EventTime\\\":\\\"2024-04-08T07:52:31.921Z\\\"}]\" ``` The MySQL notification destination supports two formats: `namespace` and `access`. If you use the `namespace` format, the gateway synchronizes objects in the bucket to rows in the database table. Each row has two columns:"
},
{
"data": "It is the bucket name plus the object name. `value`. It is the JSON-formatted event data about that gateway object. If objects are updated or deleted, the corresponding rows in the table are also updated or deleted. If you use the `access` format, the gateway adds events to the table. Rows have two columns: `event_time`. It is the time the event occurred on the gateway server. `event_data`. It is the JSON-formatted event data about that gateway object. In this format, rows are not deleted or modified. The following steps show how to use the notification destination in `namespace` format. The `access` format is similar and not further described here. Ensure the MySQL version meets the minimum requirements. JuiceFS S3 Gateway requires MySQL version 5.7.8 or above, because it uses the data type introduced in MySQL 5.7.8. Configure MySQL to the gateway. Use the `mc admin config set` command to configure MySQL as the event notification destination. ```Shell mc admin config set myminio notifymysql:myinstance table=\"minioimages\" dsn_string=\"root:123456@tcp(172.17.0.1:3306)/miniodb\" ``` You can use `mc admin config get myminio notify_mysql` to view the configuration options. Different destination types have different configuration options. For MySQL type, the following configuration options are available: ```shell $ mc admin config get myminio notify_mysql format=namespace dsnstring= table= queuedir= queuelimit=0 maxopen_connections=2 ``` Here are the meanings of each configuration item: ```Shell KEY: notifymysql[:name] Publish bucket notifications to the MySQL database. When multiple MySQL server endpoints are required, you can add a user-specified \"name\" to each configuration, for example, \"notifymysql:myinstance.\" ARGS: dsn_string* (string) MySQL data source name connection string, for example, \"<user>:<password>@tcp(<host>:<port>)/<database>\". table* (string) Name of the database table to store/update events. The table is automatically created. format (namespace|access) 'namespace' or 'access.' The default is 'namespace.' queue_dir (path) The directory for storing unsent messages, for example, '/home/events'. queue_limit (number) The maximum limit of unsent messages. The default is '100000'. comment (sentence) Optional comment description. ``` `dsn_string` is required and must be in the format `<user>:<password>@tcp(<host>:<port>)/<database>`. MinIO supports persistent event storage. Persistent storage backs up events when the MySQL connection is offline and replays events when the broker comes back online. You can set the storage directory for events using the `queuedir` field, and the maximum storage limit using `queuelimit`. For example, you can set `queuedir` to `/home/events`, and `queuelimit` to 1,000. By default, `queue_limit` is set to 100,000. Before updating the configuration, you can use the `mc admin config get` command to get the current configuration. 
```Shell $ mc admin config get myminio/ notify_mysql notifymysql:myinstance enable=off format=namespace host= port= username= password= database= dsnstring= table= queuedir= queuelimit=0 ``` Update the MySQL notification configuration using the `mc admin config set` command with the `dsn_string` parameter: ```Shell mc admin config set myminio notifymysql:myinstance table=\"minioimages\" dsn_string=\"root:xxxx@tcp(127.0.0.1:3306)/miniodb\" ``` You can add multiple MySQL server endpoints as needed, by providing the identifier of the MySQL instance (for example, \"myinstance\") and the configuration parameter information for each instance. After updating the configuration with the `mc admin config set` command, restart the gateway to apply the configuration changes. The gateway server will output a line during startup similar to `SQS ARNs: arn:minio:sqs::myinstance:mysql`. Enable bucket notifications. Now you can enable event notifications on a bucket named \"images.\" When a file is uploaded to the bucket, a new record is inserted into MySQL, or an existing record is updated. If an existing object is deleted, the corresponding record is also deleted from the MySQL"
},
{
"data": "Therefore, each row in the MySQL table corresponds to an object in the bucket. To configure bucket notifications, you need to use the ARN information outputted by MinIO in previous steps. See more information about . Assuming the gateway service alias is myminio, you can execute the following script: ```Shell mc mb myminio/images mc event add myminio/images arn:minio:sqs::myinstance:mysql --suffix .jpg mc event list myminio/images arn:minio:sqs::myinstance:mysql s3:ObjectCreated:,s3:ObjectRemoved:,s3:ObjectAccessed:* Filter: suffix=.jpg ``` Verify MySQL. Open a new terminal and upload a JPEG image to the `images` bucket: ```Shell mc cp myphoto.jpg myminio/images ``` Open a MySQL terminal and list all records in the `minio_images` table. You will find a newly inserted record. The method of publishing events using PostgreSQL is similar to publishing MinIO events using MySQL, with PostgreSQL version 9.5 or above required. The gateway uses PostgreSQL 9.5's (aka `UPSERT`) feature and 9.4's data type. use a push model to get data instead of continually pulling. Configure a webhook to the gateway. The gateway supports persistent event storage. Persistent storage backs up events when the webhook is offline and replays events when the broker comes back online. You can set the directory for event storage using the `queuedir` field, and the maximum storage limit using `queuelimit`. For example, you can set `queuedir` to `/home/events` and `queuelimit` to 1,000. By default, `queue_limit` is 100,000. ```Shell KEY: notify_webhook[:name] Publish bucket notifications to webhook endpoints. ARGS: endpoint* (url) Webhook server endpoint, for example, http://localhost:8080/minio/events. auth_token (string) Opaque token or JWT authorization token. queue_dir (path) The directory for storing unsent messages, for example, '/home/events'. queue_limit (number) The maximum limit of unsent messages. The default is '100000'. client_cert (string) The client certificate for mTLS authentication of the webhook. client_key (string) The client certificate key for mTLS authentication of the webhook. comment (sentence) Optional comment description. ``` Use the `mc admin config set` command to update the configuration. The endpoint here is the service that listens for webhook notifications. Save the configuration file and restart the MinIO service to apply the changes. Note that when restarting MinIO, this endpoint must be up and accessible. ```Shell mc admin config set myminio notifywebhook:1 queuelimit=\"0\" endpoint=\"http://localhost:3000\" queue_dir=\"\" ``` Enable bucket notifications. Now you can enable event notifications on a bucket named \"images.\" When a file is uploaded to the bucket, an event is triggered. Here, the ARN value is `arn:minio:sqs::1:webhook`. See more information about . ```Shell mc mb myminio/images mc mb myminio/images-thumbnail mc event add myminio/images arn:minio:sqs::1:webhook --event put --suffix .jpg ``` Use Thumbnailer to verify. is a project that generates thumbnails using MinIO's `listenBucketNotification` API. JuiceFS uses Thumbnailer to listen to gateway notifications. If a file is uploaded to the gateway service, Thumbnailer listens to that notification, generates a thumbnail, and uploads it to the gateway service. 
To install Thumbnailer: ```Shell git clone https://github.com/minio/thumbnailer/ npm install ``` Open the Thumbnailer's `config/webhook.json` configuration file, add the configuration for the MinIO server, and start Thumbnailer using: ```Shell NODE_ENV=webhook node thumbnail-webhook.js ``` Thumbnailer runs on `http://localhost:3000/`. Next, configure the MinIO server to send messages to this URL (mentioned in step 1) and set up bucket notifications using `mc` (mentioned in step 2). Then upload an image to the gateway server: ```Shell mc cp ~/images.jpg myminio/images .../images.jpg: 8.31 KB / 8.31 KB 100.00% 59.42 KB/s 0s ``` After a moment, use `mc ls` to check the content of the bucket. You will see a thumbnail. ```Shell mc ls myminio/images-thumbnail [2017-02-08 11:39:40 IST] 992B images-thumbnail.jpg ```"
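If you only want to inspect the raw event payloads rather than generate thumbnails, a minimal stand-in for Thumbnailer can be written with Python's standard library. This hypothetical listener simply prints every event JSON that the gateway POSTs to the `http://localhost:3000` endpoint configured earlier:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class EventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(body.decode("utf-8", errors="replace"))  # raw bucket event JSON
        self.send_response(200)
        self.end_headers()

# Port 3000 matches the endpoint set via
# `mc admin config set myminio notify_webhook:1 endpoint="http://localhost:3000"`.
HTTPServer(("0.0.0.0", 3000), EventHandler).serve_forever()
```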
}
] |
{
"category": "Runtime",
"file_name": "gateway.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Infra about: Create a test/dev infra task title: \"[INFRA] \" labels: [\"kind/task\", \"area/infra\"] assignees: '' <!--A clear and concise description of what test/dev infra you want to develop.--> <!-- Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists [ ] `item 1` --> <!--Add any other context or screenshots about the test infra request here.-->"
}
] |
{
"category": "Runtime",
"file_name": "infra.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(about-images)= Incus uses an image-based workflow. Each instance is based on an image, which contains a basic operating system (for example, a Linux distribution) and some Incus-related information. Images are available from remote image stores (see {ref}`image-servers` for an overview), but you can also create your own images, either based on an existing instances or a rootfs image. You can copy images from remote servers to your local image store, or copy local images to remote servers. You can also use a local image to create a remote instance. Each image is identified by a fingerprint (SHA256). To make it easier to manage images, Incus allows defining one or more aliases for each image. When you create an instance using a remote image, Incus downloads the image and caches it locally. It is stored in the local image store with the cached flag set. The image is kept locally as a private image until either: The image has not been used to create a new instance for the number of days set in {config:option}`server-images:images.remotecacheexpiry`. The image's expiry date (one of the image properties; see {ref}`images-manage-edit` for information on how to change it) is reached. Incus keeps track of the image usage by updating the `lastusedat` image property every time a new instance is spawned from the image. Incus can automatically keep images that come from a remote server up to date. ```{note} Only images that are requested through an alias can be updated. If you request an image through a fingerprint, you request an exact image version. ``` Whether auto-update is enabled for an image depends on how the image was downloaded: If the image was downloaded and cached when creating an instance, it is automatically updated if {config:option}`server-images:images.autoupdatecached` was set to `true` (the default) at download time. If the image was copied from a remote server using the command, it is automatically updated only if the `--auto-update` flag was specified. You can change this behavior for an image by . On startup and after every {config:option}`server-images:images.autoupdateinterval` (by default, every six hours), the Incus daemon checks for more recent versions of all the images in the store that are marked to be auto-updated and have a recorded source server. When a new version of an image is found, it is downloaded into the image store. Then any aliases pointing to the old image are moved to the new one, and the old image is removed from the store. To not delay instance creation, Incus does not check if a new version is available when creating an instance from a cached image. This means that the instance might use an older version of an image for the new instance until the image is updated at the next update interval. Image properties that begin with the prefix `requirements` (for example, `requirements.XYZ`) are used by Incus to determine the compatibility of the host system and the instance that is created based on the image. If these are incompatible, Incus does not start the instance. The following requirements are supported: % Include content from ```{include} config_options.txt :start-after: <!-- config group image-requirements start --> :end-before: <!-- config group image-requirements end --> ```"
}
] |
{
"category": "Runtime",
"file_name": "image-handling.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Upgrade Antrea base image to ubuntu:22.04. ( , [@antoninbas]) Ensure NO_FLOOD is always set for IPsec tunnel ports and TrafficControl ports. ( , [@xliuxu] [@tnqn]) Fix Service routes being deleted on Agent startup on Windows. (, [@hongliangl]) Fix route deletion for Service ClusterIP and LoadBalancerIP when AntreaProxy is enabled. (, [@tnqn]) Fix OpenFlow Group being reused with wrong type because groupDb cache was not cleaned up. (, [@ceclinux]) Add a periodic job to rejoin dead Nodes to fix Egress not working properly after long network downtime. (, [@tnqn]) Fix Agent crash in dual-stack clusters when any Node is not configured with an IP address for each address family. (, [@hongliangl]) Fix potential deadlocks and memory leaks of memberlist maintenance in large-scale clusters. (, [@wenyingd]) Fix connectivity issues caused by MAC address changes with systemd v242 and later. (, [@wenyingd]) Fix a ClusterInfo export bug when Multi-cluster Gateway changes. (, [@luolanzone]) Fix OpenFlow rules not being updated when Multi-cluster Gateway updates. (, [@luolanzone]) Set no-flood config with ports for TrafficControl after Agent restarting. (, [@hongliangl]) Add the following capabilities to the Multi-cluster feature: Add support for Pod-to-Pod connectivity across clusters. (, [@hjiajing]) Add active-passive mode high availability support for Gateway Nodes. (, [@luolanzone]) Allow Pod IPs as Endpoints of Multi-cluster Service; option `endpointIPType` is added to the Multi-cluster Controller ConfigMap to specify the Service Endpoints type. (, [@luolanzone]) Add `antctl mc get joinconfig` command to print ClusterSet join parameters. (, [@jianjuns]) Add `antctl mc get|delete membertoken` commands to get/delete member token. (, [@bangqipropel]) Add rule name to Audit Logging for Antrea-native policies. (, [@qiyueyao]) Add Service health check similar to kube-proxy in antrea-agent; it provides HTTP endpoints \"<nodeIP>:<healthCheckNodePort>/healthz\" for querying number of local Endpoints of a Service. (, [@shettyg]) Add S3Uploader as a new exporter of Flow Aggregator, which periodically exports expired flow records to AWS S3 storage"
},
{
"data": "(, [@heanlan]) Add scripts and binaries needed for running Antrea on non-Kubernetes Nodes (ExternalNode) in release assets. ( , [@antoninbas] [@Anandkumar26]) AntreaProxy now supports more than 800 Endpoints for a Service. (, [@hongliangl]) Add OVS connection check to Agent's liveness probes for self-healing on OVS disconnection. (, [@tnqn]) antrea-agent startup scripts now perform cleanup automatically on non-Kubernetes Nodes (ExternalNode) upon Node restart. (, [@Anandkumar26]) Make tunnel csum option configurable and default to false which avoids double encapsulation checksum issues on some platforms. (, [@tnqn]) Use standard value type for k8s.v1.cni.cncf.io/networks annotation for the SecondaryNetwork feature. (, [@antoninbas]) Update Go to v1.19. (, [@antoninbas]) Add API support for reporting Antrea NetworkPolicy realization failure. (, [@wenyingd]) Update ResourceExport's json tag to lowerCamelCase. (, [@luolanzone]) Add clusterUUID column to S3 uploader and ClickHouseExporter to support multiple clusters in the same data warehouse. (, [@heanlan]) Fix nil pointer error when collecting support bundle from Agent fails. (, [@tnqn]) Set no-flood config for TrafficControl ports after restarting Agent to prevent ARP packet loops. (, [@hongliangl]) Fix packet resubmission issue when AntreaProxy is enabled and AntreaPolicy is disable. (, [@GraysonWu]) Fix ownerReferences in APIExternalEntities generated from ExternalNodes. (, [@wenyingd]) Fix the issue that \"MulticastGroup\" API returned wrong Pods that have joined multicast groups. (, [@ceclinux]) Fix inappropriate route for IPv6 ClusterIPs in the host network when proxyAll is enabled. (, [@tnqn]) Fix log spam when there is any DNS based LoadBalancer Service. (, [@tnqn]) Remove multicast group from cache when group is uninstalled. (, [@wenyingd]) Remove redundant Openflow messages when syncing an updated group to OVS. (, [@hongliangl]) Fix nil pointer error when there is no ClusterSet found during MemberClusterAnnounce validation. (, [@luolanzone]) Fix data race when Multi-cluster controller reconciles ServiceExports concurrently. (, [@Dyanngg]) Fix memory leak in Multi-cluster resource import controllers. (, [@Dyanngg]) Fix Antrea-native policies for multicast traffic matching IGMP traffic unexpectedly. (, [@liu4480]) Fix IPsec not working in UBI-based image. (, [@xliuxu]) Fix `antctl mc get clusterset` command output when a ClusterSet's status is empty. (, [@luolanzone])"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.9.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "New and additional registry hosts config support has been implemented in containerd v1.5 for the `ctr` client (the containerd tool for admins/developers), containerd image service clients, and CRI clients such as `kubectl` and `crictl`. Configuring registries, for these clients, will be done by specifying (optionally) a `hosts.toml` file for each desired registry host in a configuration directory. Note: Updates under this directory do not require restarting the containerd daemon. All configured registry hosts are expected to comply with the . Registries which are non-compliant or implement non-standard behavior are not guaranteed to be supported and may break unexpectedly between releases. Currently supported OCI Distribution version: When pulling a container image via `ctr` using the `--hosts-dir` option tells `ctr` to find and use the host configuration files located in the specified path: ``` ctr images pull --hosts-dir \"/etc/containerd/certs.d\" myregistry.io:5000/image_name:tag ``` _The old CRI config pattern for specifying registry.mirrors and registry.configs has been DEPRECATED. You should now point your registry `configpath` to the path where your `hosts.toml` files are located. Modify your `config.toml` (default location: `/etc/containerd/config.toml`) as follows: Before containerd 2.0 ```toml version = 2 [plugins.\"io.containerd.grpc.v1.cri\".registry] config_path = \"/etc/containerd/certs.d\" ``` In containerd 2.0 ``` version = 3 [plugins.\"io.containerd.cri.v1.images\".registry] config_path = \"/etc/containerd/certs.d\" ``` If no hosts.toml configuration exists in the host directory, it will fallback to check certificate files based on (\".crt\" files for CA certificates and \".cert\"/\".key\" files for client certificates). A registry host is the location where container images and artifacts are sourced. These registry hosts may be local or remote and are typically accessed via http/https using the . A registry mirror is not a registry host but these mirrors can also be used to pull content. Registry hosts are typically referred to by their internet domain names, aka. registry host names. For example, docker.io, quay.io, gcr.io, and ghcr.io. A registry host namespace is, for the purpose of containerd registry configuration, a path to the `hosts.toml` file specified by the registry host name, or ip address, and an optional port identifier. When making a pull request for an image the format is typically as follows: ``` pull <image_name>[:tag|@DIGEST] ``` The registry host namespace portion is ``. Example tree for docker.io: ``` $ tree /etc/containerd/certs.d /etc/containerd/certs.d docker.io hosts.toml ``` Optionally the `_default` registry host namespace can be used as a fallback, if no other namespace matches. The `/v2` portion of the pull request format shown above refers to the version of the distribution api. If not included in the pull request, `/v2` is added by default for all clients compliant to the distribution specification linked above. If a host is configured that's different to the registry host namespace (e.g. a mirror), then containerd will append the registry host namespace to requests as a query parameter called `ns`. For example when pulling `imagename:tagname` from a private registry named `myregistry.io` over port 5000: ``` pull myregistry.io:5000/imagename:tagname ``` The pull will resolve to `https://myregistry.io:5000/v2/imagename/manifests/tagname`. 
The same pull with a host configuration for `mymirror.io` will resolve to `https://mymirror.io/v2/imagename/manifests/tagname?ns=myregistry.io:5000`. When performing image operations via `ctr` use the --help option to get a list of options you can set for specifying credentials: ``` ctr i pull --help"
},
{
"data": "OPTIONS: --skip-verify, -k skip SSL certificate validation --plain-http allow connections using plain HTTP --user value, -u value user[:password] Registry user and password --refresh value refresh token for authorization server --hosts-dir value Custom hosts configuration directory --tlscacert value path to TLS root CA --tlscert value path to TLS client certificate --tlskey value path to TLS client key --http-dump dump all HTTP request/responses when interacting with container registry --http-trace enable HTTP tracing for registry interactions --snapshotter value snapshotter name. Empty value stands for the default value. [$CONTAINERD_SNAPSHOTTER] --label value labels to attach to the image --platform value Pull content from a specific platform --all-platforms pull content and metadata from all platforms --all-metadata Pull metadata for all platforms --print-chainid Print the resulting image's chain ID --max-concurrent-downloads value Set the max concurrent downloads for each pull (default: 0) ``` Although we have deprecated the old CRI config pattern for specifying registry.mirrors and registry.configs you can still specify your credentials via . Additionally, the containerd CRI plugin implements/supports the authentication parameters passed in through CRI pull image service requests. For example, when containerd is the container runtime implementation for `Kubernetes`, the containerd CRI plugin receives authentication credentials from kubelet as retrieved from Here is a simple example for a default registry hosts configuration. Set `config_path = \"/etc/containerd/certs.d\"` in your config.toml for containerd. Make a directory tree at the config path that includes `docker.io` as a directory representing the host namespace to be configured. Then add a `hosts.toml` file in the `docker.io` to configure the host namespace. It should look like this: ``` $ tree /etc/containerd/certs.d /etc/containerd/certs.d docker.io hosts.toml $ cat /etc/containerd/certs.d/docker.io/hosts.toml server = \"https://docker.io\" [host.\"https://registry-1.docker.io\"] capabilities = [\"pull\", \"resolve\"] ``` ``` server = \"https://registry-1.docker.io\" # Exclude this to not use upstream [host.\"https://public-mirror.example.com\"] capabilities = [\"pull\"] # Requires less trust, won't resolve tag to digest from this host [host.\"https://docker-mirror.internal\"] capabilities = [\"pull\", \"resolve\"] ca = \"docker-mirror.crt\" # Or absolute path /etc/containerd/certs.d/docker.io/docker-mirror.crt ``` This is an example of using a mirror regardless of the intended registry. The upstream registry will automatically be used after all defined hosts have been tried. ``` $ tree /etc/containerd/certs.d /etc/containerd/certs.d _default hosts.toml $ cat /etc/containerd/certs.d/_default/hosts.toml [host.\"https://registry.example.com\"] capabilities = [\"pull\", \"resolve\"] ``` If you wish to ensure only the mirror is utilised and the upstream not consulted, set the mirror as the `server` instead of a host. You may still specify additional hosts if you'd like to use other mirrors first. 
``` $ cat /etc/containerd/certs.d/_default/hosts.toml server = \"https://registry.example.com\" ``` To bypass the TLS verification for a private registry at `192.168.31.250:5000` Create a path and `hosts.toml` text at the path \"/etc/containerd/certs.d/docker.io/hosts.toml\" with following or similar contents: ```toml server = \"https://registry-1.docker.io\" [host.\"http://192.168.31.250:5000\"] capabilities = [\"pull\", \"resolve\", \"push\"] skip_verify = true ``` For each registry host namespace directory in your registry `config_path` you may include a `hosts.toml` configuration file. The following root level toml fields apply to the registry host namespace: Note: All paths specified in the `hosts.toml` file may be absolute or relative to the `hosts.toml` file. `server` specifies the default server for this registry host namespace. When `host`(s) are specified, the hosts will be tried first in the order listed. If all `host`(s) are tried then `server` will be used as a fallback. If `server` is not specified then the image's registry host namespace will automatically be used. ``` server = \"https://docker.io\" ``` `capabilities` is an optional setting for specifying what operations a host is capable of performing. Include only the values that"
},
{
"data": "``` capabilities = [\"pull\", \"resolve\", \"push\"] ``` capabilities (or Host capabilities) represent the capabilities of the registry host. This also represents the set of operations for which the registry host may be trusted to perform. For example, pushing is a capability which should only be performed on an upstream source, not a mirror. Resolving (the process of converting a name into a digest) must be considered a trusted operation and only done by a host which is trusted (or more preferably by secure process which can prove the provenance of the mapping). A public mirror should never be trusted to do a resolve action. | Registry Type | Pull | Resolve | Push | ||||| | Public Registry | yes | yes | yes | | Private Registry | yes | yes | yes | | Public Mirror | yes | no | no | | Private Mirror | yes | yes | no | `ca` (Certificate Authority Certification) can be set to a path or an array of paths each pointing to a ca file for use in authenticating with the registry namespace. ``` ca = \"/etc/certs/mirror.pem\" ``` or ``` ca = [\"/etc/certs/test-1-ca.pem\", \"/etc/certs/special.pem\"] ``` `client` certificates are configured as follows a path: ``` client = \"/etc/certs/client.pem\" ``` an array of paths: ``` client = [\"/etc/certs/client-1.pem\", \"/etc/certs/client-2.pem\"] ``` an array of pairs of paths: ``` client = [[\"/etc/certs/client.cert\", \"/etc/certs/client.key\"],[\"/etc/certs/client.pem\", \"\"]] ``` `skip_verify` skips verifications of the registry's certificate chain and host name when set to `true`. This should only be used for testing or in combination with other method of verifying connections. (Defaults to `false`) ``` skip_verify = false ``` `[header]` contains some number of keys where each key is to one of a string or an array of strings as follows: ``` [header] x-custom-1 = \"custom header\" ``` or ``` [header] x-custom-1 = [\"custom header part a\",\"part b\"] ``` or ``` [header] x-custom-1 = \"custom header\", x-custom-1-2 = \"another custom header\" ``` `override_path` is used to indicate the host's API root endpoint is defined in the URL path rather than by the API specification. This may be used with non-compliant OCI registries which are missing the `/v2` prefix. (Defaults to `false`) ``` override_path = true ``` `[host].\"https://namespace\"` and `[host].\"http://namespace\"` entries in the `hosts.toml` configuration are registry namespaces used in lieu of the default registry host namespace. These hosts are sometimes called mirrors because they may contain a copy of the container images and artifacts you are attempting to retrieve from the default registry. Each `host`/`mirror` namespace is also configured in much the same way as the default registry namespace. Notably the `server` is not specified in the `host` description because it is specified in the namespace. 
Here are a few rough examples configuring host mirror namespaces for this registry host namespace: ``` [host.\"https://mirror.registry\"] capabilities = [\"pull\"] ca = \"/etc/certs/mirror.pem\" skip_verify = false [host.\"https://mirror.registry\".header] x-custom-2 = [\"value1\", \"value2\"] [host.\"https://mirror-bak.registry/us\"] capabilities = [\"pull\"] skip_verify = true [host.\"http://mirror.registry\"] capabilities = [\"pull\"] [host.\"https://test-1.registry\"] capabilities = [\"pull\", \"resolve\", \"push\"] ca = [\"/etc/certs/test-1-ca.pem\", \"/etc/certs/special.pem\"] client = [[\"/etc/certs/client.cert\", \"/etc/certs/client.key\"],[\"/etc/certs/client.pem\", \"\"]] [host.\"https://test-2.registry\"] client = \"/etc/certs/client.pem\" [host.\"https://test-3.registry\"] client = [\"/etc/certs/client-1.pem\", \"/etc/certs/client-2.pem\"] [host.\"https://non-compliant-mirror.registry/v2/upstream\"] capabilities = [\"pull\"] override_path = true ``` Note: Recursion is not supported in the specification of host mirror namespaces in the hosts.toml file. Thus the following is not allowed/supported: ``` [host.\"http://mirror.registry\"] capabilities = [\"pull\"] [host.\"http://double-mirror.registry\"] capabilities = [\"pull\"] ```"
}
] |
{
"category": "Runtime",
"file_name": "hosts.md",
"project_name": "containerd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "rkt supports measuring container state and configuration into the event log. Enable this functionality by building rkt with the . rkt accesses the TPM via the . This `tpmd` is expected to listen on port 12041. Events are logged to PCR 15, with event type `0x1000`. Each event contains the following data: The hash of the container root filesystem The hash of the contents of the container manifest data The hash of the arguments passed to `stage1` This provides a cryptographically verifiable audit log of the containers executed on a node, including the configuration of each."
}
] |
{
"category": "Runtime",
"file_name": "tpm.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This design includes the changes to the RestoreItemAction (RIA) api design as required by the feature. It also includes changes as required by the feature. The BIA v2 interface will have three new methods, and the RestoreItemActionExecuteOutput() struct in the return from Execute() will have three optional fields added. If there are any additional RIA API changes that are needed in the same Velero release cycle as this change, those can be added here as well. This API change is needed to facilitate long-running plugin actions that may not be complete when the Execute() method returns. It is an optional feature, so plugins which don't need this feature can simply return an empty operation ID and the new methods can be no-ops. This will allow long-running plugin actions to continue in the background while Velero moves on to the next plugin, the next item, etc. The other change allows Velero to wait until newly-restored AdditionalItems returned by a RIA plugin are ready before moving on to restoring the current item. Allow for RIA Execute() to optionally initiate a long-running operation and report on operation status. Allow for RIA to allow Velero to call back into the plugin to wait until AdditionalItems are ready before continuing with restore. Allowing velero control over when the long-running operation begins. As per the design, a new RIAv2 plugin `.proto` file will be created to define the GRPC interface. v2 go files will also be created in `plugin/clientmgmt/restoreitemaction` and `plugin/framework/restoreitemaction`, and a new PluginKind will be created. Changes to RestoreItemActionExecuteOutput will be made to the existing struct. Since the new fields are optional elements of the struct, the new enlarged struct will work with both v1 and v2 plugins. The velero Restore process will be modified to reference v2 plugins instead of v1 plugins. An adapter will be created so that any existing RIA v1 plugin can be executed as a v2 plugin when executing a restore. The v2 RestoreItemAction.proto will be like the current v1 version with the following changes: RestoreItemActionExecuteOutput gets three new fields (defined in the current (v1)"
},
{
"data": "file: ``` message RestoreItemActionExecuteResponse { bytes item = 1; repeated ResourceIdentifier additionalItems = 2; bool skipRestore = 3; string operationID = 4; bool waitForAdditionalItems = 5; google.protobuf.Duration additionalItemsReadyTimeout = 6; } ``` The RestoreItemAction service gets three new rpc methods: ``` service RestoreItemAction { rpc AppliesTo(RestoreItemActionAppliesToRequest) returns (RestoreItemActionAppliesToResponse); rpc Execute(RestoreItemActionExecuteRequest) returns (RestoreItemActionExecuteResponse); rpc Progress(RestoreItemActionProgressRequest) returns (RestoreItemActionProgressResponse); rpc Cancel(RestoreItemActionCancelRequest) returns (google.protobuf.Empty); rpc AreAdditionalItemsReady(RestoreItemActionItemsReadyRequest) returns (RestoreItemActionItemsReadyResponse); } ``` To support these new rpc methods, we define new request/response message types: ``` message RestoreItemActionProgressRequest { string plugin = 1; string operationID = 2; bytes restore = 3; } message RestoreItemActionProgressResponse { generated.OperationProgress progress = 1; } message RestoreItemActionCancelRequest { string plugin = 1; string operationID = 2; bytes restore = 3; } message RestoreItemActionItemsReadyRequest { string plugin = 1; bytes restore = 2; repeated ResourceIdentifier additionalItems = 3; } message RestoreItemActionItemsReadyResponse { bool ready = 1; } ``` One new shared message type will be needed, as defined in the v2 BackupItemAction design: ``` message OperationProgress { bool completed = 1; string err = 2; int64 completed = 3; int64 total = 4; string operationUnits = 5; string description = 6; google.protobuf.Timestamp started = 7; google.protobuf.Timestamp updated = 8; } ``` In addition to the three new rpc methods added to the RestoreItemAction interface, there is also a new `Name()` method. This one is only actually used internally by Velero to get the name that the plugin was registered with, but it still must be defined in a plugin which implements RestoreItemActionV2 in order to implement the interface. It doesn't really matter what it returns, though, as this particular method is not delegated to the plugin via RPC calls. The new (and modified) interface methods for `RestoreItemAction` are as follows: ``` type BackupItemAction interface { ... Name() string ... Progress(operationID string, restore *api.Restore) (velero.OperationProgress, error) Cancel(operationID string, backup *api.Restore) error AreAdditionalItemsReady(AdditionalItems []velero.ResourceIdentifier, restore *api.Restore) (bool, error) ... } type RestoreItemActionExecuteOutput struct { UpdatedItem runtime.Unstructured AdditionalItems []ResourceIdentifier SkipRestore bool OperationID string WaitForAdditionalItems bool } ``` A new PluginKind, `RestoreItemActionV2`, will be created, and the restore process will be modified to use this plugin kind. See for more details on implementation plans, including v1 adapters, etc. The included v1 adapter will allow any existing RestoreItemAction plugin to work as expected, with no-op AreAdditionalItemsReady(), Progress(), and Cancel() methods. This will be implemented during the Velero 1.11 development cycle."
}
] |
{
"category": "Runtime",
"file_name": "riav2-design.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "% runc-ps \"8\" runc-ps - display the processes inside a container runc ps [option ...] container-id [ps-option ...] The command ps is a wrapper around the stock ps(1) utility, which filters its output to only contain processes belonging to a specified container-id. Therefore, the PIDs shown are the host PIDs. Any ps(1) options can be used, but some might break the filtering. In particular, if PID column is not available, an error is returned, and if there are columns with values containing spaces before the PID column, the result is undefined. --format|-f table|json : Output format. Default is table. The json format shows a mere array of PIDs belonging to a container; if used, all ps options are gnored. runc-list(8), runc(8)."
}
] |
{
"category": "Runtime",
"file_name": "runc-ps.8.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Alerting with prometheus is two step process. First we setup alerts in Prometheus server and then we need to send alerts to the AlertManager. Prometheus AlertManager is the component that manages sending, inhibition and silencing of the alerts generated from Prometheus. The AlertManager can be configured to send alerts to variety of receivers. Refer for more details. Follow below steps to enable and use AlertManager. Install Prometheus AlertManager from https://prometheus.io/download/ and create configuration as below ```yaml route: group_by: ['alertname'] group_wait: 30s group_interval: 5m repeat_interval: 1h receiver: 'web.hook' receivers: name: 'web.hook' webhook_configs: url: 'http://127.0.0.1:8010/webhook' inhibit_rules: source_match: severity: 'critical' target_match: severity: 'warning' equal: ['alertname', 'dev', 'instance'] ``` This sample configuration uses a `webhook` at http://127.0.0.1:8010/webhook to post the alerts. Start the AlertManager and it listens on port `9093` by default. Make sure your webhook is up and listening for the alerts. Add below section to your `prometheus.yml` ```yaml alerting: alertmanagers: static_configs: targets: ['localhost:9093'] rule_files: rules.yml ``` Here `rules.yml` is the file which should contain the alerting rules defined. Below is a sample alerting rules configuration for MinIO. Refer https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ for more instructions on writing alerting rules for Prometheus. ```yaml groups: name: example rules: alert: MinIOClusterTolerance expr: minioclusterhealtherasureset_status < 1 for: 5m labels: severity: critical annotations: summary: \"Instance {{ $labels.server }} has lost quorum on pool {{ $labels.pool }} on set {{ $labels.set }}\" description: \"MinIO instance {{ $labels.server }} of job {{ $labels.job }} has lost quorum on pool {{ $labels.pool }} on set {{ $labels.set }} for more than 5 minutes.\" ``` To verify the above sample alert follow below steps Start a distributed MinIO instance (4 nodes setup) Start Prometheus server and AlertManager Bring down couple of MinIO instances to bring down the Erasure Set tolerance to -1 and verify the same with `mc admin prometheus metrics ALIAS | grep minioclusterhealtherasureset_status` Wait for 5 mins (as alert is configured to be firing after 5 mins), and verify that you see an entry in webhook for the alert as well as in Prometheus console as shown below ```json { \"receiver\": \"web\\\\.hook\", \"status\": \"firing\", \"alerts\": [ { \"status\": \"firing\", \"labels\": { \"alertname\": \"MinIOClusterTolerance\", \"instance\": \"localhost:9000\", \"job\": \"minio-job-node\", \"pool\": \"0\", \"server\": \"127.0.0.1:9000\", \"set\": \"0\", \"severity\": \"critical\" }, \"annotations\": { \"description\": \"MinIO instance 127.0.0.1:9000 of job minio-job has tolerance <=0 for more than 5 minutes.\", \"summary\": \"Instance 127.0.0.1:9000 unable to tolerate node failures\" }, \"startsAt\": \"2023-11-18T06:20:09.456Z\", \"endsAt\": \"0001-01-01T00:00:00Z\", \"generatorURL\": \"http://fedora-minio:9090/graph?g0.expr=minioclusterhealtherasureset_tolerance+%3C%3D+0&g0.tab=1\", \"fingerprint\": \"2255608b0da28ca3\" } ], \"groupLabels\": { \"alertname\": \"MinIOClusterTolerance\" }, \"commonLabels\": { \"alertname\": \"MinIOClusterTolerance\", \"instance\": \"localhost:9000\", \"job\": \"minio-job-node\", \"pool\": \"0\", \"server\": \"127.0.0.1:9000\", \"set\": \"0\", \"severity\": \"critical\" }, \"commonAnnotations\": { \"description\": 
\"MinIO instance 127.0.0.1:9000 of job minio-job has lost quorum on pool 0 on set 0 for more than 5 minutes.\", \"summary\": \"Instance 127.0.0.1:9000 has lot quorum on pool 0 on set 0\" }, \"externalURL\": \"http://fedora-minio:9093\", \"version\": \"4\", \"groupKey\": \"{}:{alertname=\\\"MinIOClusterTolerance\\\"}\", \"truncatedAlerts\": 0 } ```"
}
] |
{
"category": "Runtime",
"file_name": "alerts.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(disaster-recovery)= Incus provides a tool for disaster recovery in case the {ref}`Incus database <database>` is corrupted or otherwise lost. The tool scans the storage pools for instances and imports the instances that it finds back into the database. You need to re-create the required entities that are missing (usually profiles, projects, and networks). ```{important} This tool should be used for disaster recovery only. Do not rely on this tool as an alternative to proper backups; you will lose data like profiles, network definitions, or server configuration. The tool must be run interactively and cannot be used in automated scripts. ``` When you run the tool, it scans all storage pools that still exist in the database, looking for missing volumes that can be recovered. You can also specify the details of any unknown storage pools (those that exist on disk but do not exist in the database), and the tool attempts to scan those too. After mounting the specified storage pools (if not already mounted), the tool scans them for unknown volumes that look like they are associated with Incus. Incus maintains a `backup.yaml` file in each instance's storage volume, which contains all necessary information to recover a given instance (including instance configuration, attached devices, storage volume, and pool configuration). This data can be used to rebuild the instance, storage volume, and storage pool database records. Before recovering an instance, the tool performs some consistency checks to compare what is in the `backup.yaml` file with what is actually on disk (such as matching snapshots). If all checks out, the database records are re-created. If the storage pool database record also needs to be created, the tool uses the information from an instance's `backup.yaml` file as the basis of its configuration, rather than what the user provided during the discovery phase. However, if this information is not available, the tool falls back to restoring the pool's database record with what was provided by the user. The tool asks you to re-create missing entities like networks. However, the tool does not know how the instance was configured. That means that if some configuration was specified through the `default` profile, you must also re-add the required configuration to the"
},
{
"data": "For example, if the `incusbr0` bridge is used in an instance and you are prompted to re-create it, you must add it back to the `default` profile so that the recovered instance uses it. This is how a recovery process could look: ```{terminal} :input: incus admin recover This Incus server currently has the following storage pools: Would you like to recover another storage pool? (yes/no) [default=no]: yes Name of the storage pool: default Name of the storage backend (btrfs, ceph, cephfs, cephobject, dir, lvm, lvmcluster, zfs): zfs Source of the storage pool (block device, volume group, dataset, path, ... as applicable): /var/lib/incus/storage-pools/default/containers Additional storage pool configuration property (KEY=VALUE, empty when done): zfs.pool_name=default Additional storage pool configuration property (KEY=VALUE, empty when done): Would you like to recover another storage pool? (yes/no) [default=no]: The recovery process will be scanning the following storage pools: NEW: \"default\" (backend=\"zfs\", source=\"/var/lib/incus/storage-pools/default/containers\") Would you like to continue with scanning for lost volumes? (yes/no) [default=yes]: yes Scanning for unknown volumes... The following unknown volumes have been found: Container \"u1\" on pool \"default\" in project \"default\" (includes 0 snapshots) Container \"u2\" on pool \"default\" in project \"default\" (includes 0 snapshots) You are currently missing the following: Network \"incusbr0\" in project \"default\" Please create those missing entries and then hit ENTER: ^Z [1]+ Stopped incus admin recover :input: incus network create incusbr0 Network incusbr0 created :input: fg incus admin recover The following unknown volumes have been found: Container \"u1\" on pool \"default\" in project \"default\" (includes 0 snapshots) Container \"u2\" on pool \"default\" in project \"default\" (includes 0 snapshots) Would you like those to be recovered? (yes/no) [default=no]: yes Starting recovery... :input: incus list +++++--+--+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +++++--+--+ | u1 | STOPPED | | | CONTAINER | 0 | +++++--+--+ | u2 | STOPPED | | | CONTAINER | 0 | +++++--+--+ :input: incus profile device add default eth0 nic network=incusbr0 name=eth0 Device eth0 added to default :input: incus start u1 :input: incus list +++-++--+--+ | NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS | +++-++--+--+ | u1 | RUNNING | 192.0.2.49 (eth0) | 2001:db8:8b6:abfe:216:3eff:fe82:918e (eth0) | CONTAINER | 0 | +++-++--+--+ | u2 | STOPPED | | | CONTAINER | 0 | +++-++--+--+ ```"
}
] |
{
"category": "Runtime",
"file_name": "disaster_recovery.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Path | string | | NumQueues | int32 | | [default to 1] Iommu | Pointer to bool | | [optional] [default to false] PciSegment | Pointer to int32 | | [optional] Id | Pointer to string | | [optional] `func NewVdpaConfig(path string, numQueues int32, ) *VdpaConfig` NewVdpaConfig instantiates a new VdpaConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVdpaConfigWithDefaults() *VdpaConfig` NewVdpaConfigWithDefaults instantiates a new VdpaConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VdpaConfig) GetPath() string` GetPath returns the Path field if non-nil, zero value otherwise. `func (o VdpaConfig) GetPathOk() (string, bool)` GetPathOk returns a tuple with the Path field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VdpaConfig) SetPath(v string)` SetPath sets Path field to given value. `func (o *VdpaConfig) GetNumQueues() int32` GetNumQueues returns the NumQueues field if non-nil, zero value otherwise. `func (o VdpaConfig) GetNumQueuesOk() (int32, bool)` GetNumQueuesOk returns a tuple with the NumQueues field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VdpaConfig) SetNumQueues(v int32)` SetNumQueues sets NumQueues field to given value. `func (o *VdpaConfig) GetIommu() bool` GetIommu returns the Iommu field if non-nil, zero value otherwise. `func (o VdpaConfig) GetIommuOk() (bool, bool)` GetIommuOk returns a tuple with the Iommu field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VdpaConfig) SetIommu(v bool)` SetIommu sets Iommu field to given value. `func (o *VdpaConfig) HasIommu() bool` HasIommu returns a boolean if a field has been set. `func (o *VdpaConfig) GetPciSegment() int32` GetPciSegment returns the PciSegment field if non-nil, zero value otherwise. `func (o VdpaConfig) GetPciSegmentOk() (int32, bool)` GetPciSegmentOk returns a tuple with the PciSegment field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VdpaConfig) SetPciSegment(v int32)` SetPciSegment sets PciSegment field to given value. `func (o *VdpaConfig) HasPciSegment() bool` HasPciSegment returns a boolean if a field has been set. `func (o *VdpaConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o VdpaConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VdpaConfig) SetId(v string)` SetId sets Id field to given value. `func (o *VdpaConfig) HasId() bool` HasId returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "VdpaConfig.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Longhorn could reuse the existing data of failed replicas to speed up rebuild progress as well as save bandwidth. https://github.com/longhorn/longhorn/issues/1304 The (data of) failed replicas can be reused during the replica rebuild. The rebuild won't be blocked when the data of failed replicas are completely corrupted, or there is no existing replica. With the existing data, some of the data transferring can be skipped, and replica rebuild may speed up. Add a new setting `ReplicaReplenishmentWaitInterval` to delay the replica rebuild. If the failed replica currently is unavailable but it may be able to be reused later(we call it potential reusable failed replica), Longhorn may need to delay the new replica replenishment so that there is a chance to reuse this kind of replica. For eviction/data locality/new volume cases, a new replica should be recreated immediately hence this setting won't be applied. In order to reuse the existing data, Longhorn can directly reuse the failed replica objects for the rebuild. Add max retry count for the replica rebuild with failed replicas. Otherwise, the rebuild will get stuck of the reusing the failed replicas there if the data of failed replicas are completely corrupted. Add backoff interval for the retry of the failed replica reuse. Before the enhancement, there is no chance to reuse the failed replicas on the node, and the rebuild can take a long time with heavy bandwidth usage. After the enhancement, the replica rebuild won't start until the new worker nodes with old disks are up. Then the failed replicas will be reused during the rebuild, and the rebuild can be pretty fast. Users don't need to do anything except for setting `ReplicaReplenishmentWaitInterval` No API change is required. Add a setting `ReplicaReplenishmentWaitInterval`. This will block the rebuilding when there is a failed replica that is temporarily unavailable in the volume. Add a field `volume.Status.LastDegradedAt` so that we can determine if `ReplicaReplenishmentWaitInterval` is passed. Add field `Replica.Spec.RebuildRetryCount` to indicate how many times Longhorn tries to reuse this failed replica for the rebuild. In Volume Controller && Replica Scheduler: Check if there is a reusable failed replica and if the replica reuse is not in the backoff window. If YES, directly try to reuse the failed replica. Otherwise, replenish a new replica is required for one of the following cases: the volume is a new volume (volume.Status.Robustness is Empty) data locality is required (hardNodeAffinity is not Empty and volume.Status.Robustness is Healthy) replica eviction happens (volume.Status.Robustness is Healthy) there is no potential reusable replica there is a potential reusable replica but the replica replenishment wait interval is passed. Reuse the failed replica by cleaning up `ReplicaSpec.HealthyAt` and `ReplicaSpec.FailedAt`. And `Replica.Spec.RebuildRetryCount` will be increased by 1. Clean up the related record in `Replica.Spec.RebuildRetryCount` when the rebuilding replica becomes mode `RW`. Guarantee the reused failed replica will be stopped before re-launching it. Set `ReplicaReplenishmentWaitInterval`. Make sure it's longer than the node recovery interval. Create and attach a large volume. Set a short `staleReplicaTimeout` for the volume, e.g., 1 minute. Write a large amount of data then take a snapshot. Repeat step 3 several times. Reboot/Temporarily disconnect a node contains replica only. 
According to the `ReplicaReplenishmentWaitInterval` and the node recovery interval: Verify the failed replica is reused and there is no new replica for the rebuild after the node recovery. Verify the replica rebuild only takes a relatively short time. Create and attach a large volume. Write data then take"
},
{
"data": "Hack into one replica directory and make the directory and files read-only. Crash the related replica process and wait for the replica failure. Wait and check if Longhorn tries to reuse the corrupted replica but always fail. Since there is backoff mechanism, this will take a long time(8 ~ 10min). Check if Longhorn will create a new replica and succeeds to finish the rebuild when the max retry count is reached. Verify the data content. And check if the volume still works fine. Set `ReplicaReplenishmentWaitInterval` to 60s. Create and attach a large volume. Write data then take snapshots. Shut down a node containing replica only for 60s. Wait and check if Longhorn tries to reuse the failed replica for 2~3 times but always fail. Check if Longhorn will create a new replica once the replenishment wait interval is passed. Verify the data content. And check if the volume still works fine. Deploy Longhorn v1.0.2. Create and attach a volume. Write data to the volume. Disable scheduling for 1 node. Crash the replica on the node. Upgrade Longhorn to the latest. Verify the volume robustness `Degraded`. Enable scheduling for the node. Verify the failed replica of the existing degraded volume will be reused. Verify the data content, and the volume r/w still works fine. Deploy the latest Longhorn. Create and attach a volume. Write data to the volume. Update `Replica Replenishment Wait Interval` to 60s. Crash a replica: removing the volume head file and creating a directory with the volume head file name. Then the replica reuse will continuously fail. e.g., `rm volume-head-001.img && mkdir volume-head-001.img` Verify: There is a backoff interval for the failed replica reuse. A new replica will be created after (around) 60s despite the failed replica reuse is in backoff. the data content. the volume r/w still works fine. Set a long wait interval for setting `replica-replenishment-wait-interval`. Disable the setting soft node anti-affinity. Create and attach a volume. Then write data to the volume. Disable the scheduling for a node. Mess up the data of a random snapshot or the volume head for a replica. Then crash the replica on the node. --> Verify Longhorn won't create a new replica on the node for the volume. Update setting `replica-replenishment-wait-interval` to a small value. Verify Longhorn starts to create a new replica for the volume. Notice that the new replica scheduling will fail. Update setting `replica-replenishment-wait-interval` to a large value. Delete the newly created replica. --> Verify Longhorn won't create a new replica on the node for the volume. Enable the scheduling for the node. Verify the failed replica (in step 5) will be reused. Verify the volume r/w still works fine. Set a long wait interval for setting `replica-replenishment-wait-interval`. Disable the setting soft node anti-affinity. Add tags for all nodes and disks. Create and attach a volume with node and disk selectors. Then write data to the volume. Disable the scheduling for the 2 nodes (node1 and node2). Crash the replicas on the node1 and node2. --> Verify Longhorn won't create new replicas on the nodes. Remove tags for node1 and the related disks. Enable the scheduling for node1 and node2. Verify the only failed replica on node2 is reused. Add the tags back for node1 and the related disks. Verify the failed replica on node1 is reused. Verify the volume r/w still works fine. Need to update `volume.Status.LastDegradedAt` for existing degraded volumes during live upgrade."
}
] |
{
"category": "Runtime",
"file_name": "20200821-rebuild-replica-with-existing-data.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline"
},
{
"data": "Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [INSERT CONTACT METHOD]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. Community Impact: A serious violation of community standards, including sustained inappropriate behavior. Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. This Code of Conduct is adapted from the , version 2.0, available at . Community Impact Guidelines were inspired by . For answers to common questions about this code of conduct, see the FAQ at . Translations are available at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "Kube-OVN",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Migrated to godep, as depman is not longer supported Introduced golang vendoring feature Fixed issue related to reopen deleted file Fix inotify watcher leak; remove `Cleanup` (#51) Don't return partial lines (PR #40) Use stable version of fsnotify (#46) Fix tail for Windows (PR #36) Improved rate limiting using leaky bucket (PR #29) Fix odd line splitting (PR #30) LimitRate now discards read buffer (PR #28) allow reading of longer lines if MaxLineSize is unset (PR #24) updated deps.json to latest fsnotify (441bbc86b1) added `Config.Logger` to suppress library logging add Cleanup to remove leaky inotify watches (PR #20) redesigned Location field (PR #12) add tail.Tell (PR #14) Rate limiting (PR #10) Detect file deletions/renames in polling file watcher (PR #1) Detect file truncation Fix potential race condition when reopening the file (issue 5) Fix potential blocking of `tail.Stop` (issue 4) Fix uncleaned up ChangeEvents goroutines after calling tail.Stop Support Follow=false Initial open source release"
}
] |
{
"category": "Runtime",
"file_name": "CHANGES.md",
"project_name": "CNI-Genie",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The scope of this document is to describe how to setup the needed for to use to discover & scape kube-router . For help with installing Prometheus please see their Metrics options: ```sh --metrics-path string Path to serve Prometheus metrics on ( default: /metrics ) --metrics-port uint16 <0-65535> Prometheus metrics port to use ( default: 0, disabled ) ``` To enable kube-router metrics, start kube-router with `--metrics-port` and provide a port over 0 Metrics is generally exported at the same rate as the sync period for each service. The default values unless other specified is iptables-sync-period - `1 min`` ipvs-sync-period - `1 min`` routes-sync-period - `1 min`` By enabling in Prometheus configuration & adding required annotations Prometheus can automaticly discover & scrape kube-router metrics kube-router v0.2.4 received a metrics overhaul where some metrics were changed into histograms, additional metrics were also added. Please make sure you are using the latest dashboard version with versions => v0.2.4 kube-router 0.1.0-rc2 and upwards supports the runtime configuration for controlling where to expose the metrics. If you are using a older version, metrics path & port is locked to `/metrics` & `8080` If metrics is enabled only services that are running have their metrics exposed The following metrics is exposed by kube-router prefixed by `kuberouter` controllerbgppeers Number of BGP peers of the instance controllerbgpadvertisements_received Total number of BGP advertisements received since kube-router started controllerbgpadvertisements_sent Total number of BGP advertisements sent since kube-router started controllerbgpinternalpeerssync_time Time it took for the BGP internal peer sync loop to complete controllerroutessync_time Time it took for controller to sync routes controlleriptablessync_time Time it took for the iptables sync loop to complete controllerpolicychainssynctime Time it took for controller to sync policy chains controlleripvsservicessynctime Time it took for the ipvs sync loop to complete controlleripvsservices The number of ipvs services in the instance controlleripvsmetricsexporttime The time it took to run the metrics export for IPVS services servicetotalconnections Total connections made to the service since creation servicepacketsin Total n/o packets received by service servicepacketsout Total n/o packets sent by service servicebytesin Total bytes received by the service servicebytesout Total bytes sent by the service serviceppsin Incoming packets per second serviceppsout Outgoing packets per second service_cps Connections per second servicebpsin Incoming bytes per second servicebpsout Outgoing bytes per second To get a grouped list of CPS for each service a Prometheus query could look like this e.g: `sum(kuberouterservicecps) by (svcnamespace, service_name)` This repo contains a example utilizing all the above exposed metrics from kube-router."
}
] |
{
"category": "Runtime",
"file_name": "metrics.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This framework provides a basic infrastructure to abstract I/O operations from the actual system calls to support multiple lower level I/O implementations. This abstraction makes it possible to always use the same generic API for I/O related operations independently of how they are internally implemented. The changes required to use this framework can be significant given that it's based on a callback architecture while current implementation is basically sequential. For this reason it will be very useful that the framework can fully replace the current code to avoid maintaining two very different implementations, even if the legacy implementation is used. For example, in the current implementation there are two supported ways to do I/O: Synchronous I/O (legacy implementation) This is the most simple approach. Every time a I/O operation is done, it's executed in the foreground, blocking the executing thread until the operation finishes. io_uring I/O ^[1] This is a new and powerful kernel API that provides asynchronous I/O execution with little overhead. This approach is superior because it doesn't block the executing thread, allowing more work to be done while the I/O operation is being processed in the background. threaded I/O This mode is not yet implemented, but it should be a replacement for the legacy mode when io_uring is not present. It will have all the advantages related to the thread pool, but it will use another set of system calls for actual I/O operations instead of the io_uring system calls. Worker threads won't be blocked during I/O. iouring_ is only present on latest linux kernels and it's dynamically detected and used if available. Otherwise it silently fails back to the synchronous implementation in a transparent way for the rest of the code that uses this framework (once implemented, it will use the threaded I/O instead of the legacy mode when io_uring is not supported). The implementation is done with the io_uring API in mind. This means that io_uring fits very well in the I/O framework API, and the other modes are adjusted to follow the same semantics. In this section a general overview of the operation will be provided, focused on the io_uring-based implementation. For differences when io_uring is not present, check section The framework is initialized using `gfiorun()`. ```c typedef int32t (*gfioasynct)(gfioop_t *op); typedef struct { gfioasync_t setup; gfioasync_t cleanup; } gfiohandlers_t; int32t gfiorun(gfiohandlerst handlers, void *data); ``` The handlers structure contains two async functions, one that is called just after having initialized the I/O infrastructure, and another one that is called after stopping everything else. The 'data' argument is an extra argument that will be passed to each function. The returned value can be a negative error code if there has been any problem while initializing the system, or 0 if everything worked fine. In this case, the function only returns when the program is terminating. When it's determined that the process must be terminated, a call to `gfioshutdown()` must be done. ```c void gfioshutdown(void); ``` This function initiates a shutdown procedure, but returns immediately. Once the shutdown is completed, `gfiorun()` will"
},
{
"data": "It can be called from anywhere. When shutdown is initiated, all I/O should have been stopped. If there is active I/O during the shutdown, they can complete, fail or be cancelled, depending on what state the request was. To ensure consistent behavior, try to always stop I/O before terminating the I/O framework. Note: This function is not yet implemented because even with the io_uring engine we still rely on gfeventdispatch() function to run the main program loop. Once the events infrastructure is integrated into the I/O framework, this function will be available. After everything is ready, the normal operation of the I/O framework is very simple: A worker picks one completion event from the kernel. The callback associated to the completion event is executed. 2.1. The callback can prepare new I/O requests using one of the `gfio*` I/O functions available for I/O operations. 2.2. Requests can be sent one by one or submitted in a batch. In all cases they are added to the io_uring SQ ring. Once the callback finishes, any queued requests (from this worker or any other worker that has added requests to the queue) are automatically flushed. The I/O framework supports two ways of sending operations to the kernel. In direct mode, each request is sent independently of the others. In batch mode multiple requests are sent together all at once. All operations will also have a `data` argument to pass any additional per-request private data that the callback may need. This data will be available in `op->data` for most of the cases (there's an exception for asynchronous requests. See later). Many of the I/O operations will have a timeout argument, which represents the maximum time allowed for the I/O to complete. If the operation takes more than that time, the system call will be cancelled and the callback will be executed passing a `-ETIMEDOUT` error. I/O operations will also have a priority argument that makes it possible to give different priorities to each requests so that the kernel scheduler can efficiently manage them based on their priority. An identifier is returned for each request. This value can be used to try to cancel the associated request if it has not been started or completed yet. In direct mode the interface is really simple. Each function only requires the data needed to perform the operation and returns an identifier. No memory allocations are needed. In batch mode a `gfiobatch_t` object needs to be created, which will contain all requests to send. Then one or more `gfiorequest_t` objects need to be created and added to the batch object. Both types of objects can be allocated in the stack because they are not needed once the batch is submitted. The functions to prepare requests have the same name as those used in direct mode but with the `_prepare` suffix. The function signature is exactly the same, but adding a `gfiorequest_t` argument. Once a request is prepared, it can be added to the batch object using `gfiobatch_add()`. This function also receives a pointer to an"
},
{
"data": "If it's not NULL, the id of this request will be copied to that location once the batch is submitted. Optionally, it's possible to create a chain of dependencies between requests of a batch. In this case, a chained request will only be executed once the previous request has finished with a success. Once the batch is ready, it can be processed by calling `gfiobatch_submit()`. ```c uint64_t gfiocancel(gfiocallbackt cbk, uint64t ref, void *data); ``` Tries to cancel the request identified by `ref`. `cbk` will be called with error 0 if the request has been cancelled, `-ENOENT` if the request cannot be found (it has already terminated probably), or `-EALREADY` if the request is still there but cannot be cancelled. In case that the request can be successfully cancelled, the callback associated to that request will be called with error `-ECANCELED`. ```c uint64_t gfiocallback(gfiocallback_t cbk, void *data); ``` This request simply causes the `cbk` to be executed in the background. Error is always 0 and can be ignored. ```c uint64_t gfioasync(gfioasynct async, void *data, gfiocallbackt cbk, void *cbk_data); ``` This is very similar to a callback request, but it provides the `async` function that does something that can potentially fail (i.e. return an error), and a `cbk` that will be called once the previous function completes. The callback will receive the error code returned by the asynchronous function. ```c uint64_t gfiopreadv(gfiocallbackt cbk, void *data, int32t fd, const struct iovec *iov, uint32t count, uint64t offset, int32t flags, uint64t to, int32_t prio); ``` Note: Example I/O request. Not yet implemented. ```c uint64_t gfiowritev(gfiocallbackt cbk, void *data, int32t fd, const struct iovec *iov, uint32t count, uint64t offset, int32t flags, uint64t to, int32_t prio); ``` Note: Example I/O request. Not yet implemented. gf_io_worker_t: Context information of a worker. gf_io_request_t: Object to track requests. gf_io_callback_t: Callback function signature to process completion events. gf_io_mode_t: Enumeration of available I/O modes. gf_io_run: Main initialization function. ```c int32_t gfiorun(); ``` gf_io_shutdown: Trigger termination of the I/O framework. ```c void gfioshutdown(); ``` gf_io_mode: Check the current running mode. ```c gfiomode_t gfiomode(); ``` When iouring_ cannot be started for any reason, the framework falls back to a legacy operation mode. In this mode the API will be the same but it will work in a more simpler way. In this case, the thread pool won't be started. The most important difference is that most of the requests are processed as soon as they are initialized, for example in `gfioreadv()` a `readv()` system call will be executed synchronously. The result will be kept into the request object. When a request is added to a worker with `gfioworker_add()`, instead of deferring the execution of the callback till the worker processes it, the callback will be immediately executed. The other functions do nothing in this mode. Reorganize initialization and termination of the process Replace io-threads Move fuse I/O to this framework Move posix I/O to this framework Move sockets I/O to this framework Move timers to this framework Move synctasks to this framework Implement a third threaded mode not based on io_uring"
}
] |
{
"category": "Runtime",
"file_name": "io-framework.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "An entropy device is a that provides guests with \"high-quality randomness for guest use\". Guests issue requests in the form of a buffer that will be filled with random bytes from the device. The source of random bytes that the device will use to fill the buffers is an implementation decision. On the guest side, the kernel uses random bytes received through the device as an extra source of entropy. Moreover, the guest VirtIO driver exposes the `/dev/hwrng` character device. User-space applications can use this device to request random bytes from the device. Firecracker offers the option of attaching a single `virtio-rng` device. Users can configure it through the `/entropy` API endpoint. The request body includes a single (optional) parameter for configuring a rate limiter. For example, users can configure the entropy device with a bandwidth rate limiter of 10KB/sec like this: ```console curl --unix-socket $socket_location -i \\ -X PUT 'http://localhost/entropy' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d \"{ \\\"rate_limiter\\\": { \\\"bandwidth\\\": { \\\"size\\\": 1000, \\\"onetimeburst\\\": 0, \\\"refill_time\\\": 100 } } }\" ``` If a configuration file is used for configuring a microVM, the same setup can be achieved by adding a section like this: ```json \"entropy\": { \"rate_limiter\": { \"bandwidth\" { \"size\": 1000, \"onetimeburst\": 0, \"refill_time\": 100 } } } ``` On the host side, Firecracker relies on to retrieve the random bytes. `aws-lc-rs` uses the . In order to use the entropy device, users must use a kernel with the `virtio-rng` front-end driver compiled in or loaded as a module. The relevant kernel configuration option is `CONFIGHWRANDOM_VIRTIO` (which depends on `CONFIGHWRANDOM` and `CONFIG_VIRTIO`)."
}
] |
{
"category": "Runtime",
"file_name": "entropy.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "| Type | Name | Since | Website | Use-Case | |:-|:-|:-|:-|:-| | Vendor | Red Hat, Inc. | 2019 | | Building Submariner into multicluster solutions like . |"
}
] |
{
"category": "Runtime",
"file_name": "ADOPTERS.md",
"project_name": "Submariner",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "https://github.com/vmware-tanzu/velero/releases/tag/v1.11.0 `velero/velero:v1.11.0` https://velero.io/docs/v1.11/ https://velero.io/docs/v1.11/upgrade-to-1.11/ This feature implements the BackupItemAction v2. BIA v2 has two new methods: Progress() and Cancel() and modifies the Execute() return value. The API change is needed to facilitate long-running BackupItemAction plugin actions that may not be complete when the Execute() method returns. This will allow long-running BackupItemAction plugin actions to continue in the background while the Velero moves to the following plugin or the next item. This feature implemented the RestoreItemAction v2. RIA v2 has three new methods: Progress(), Cancel(), and AreAdditionalItemsReady(), and it modifies RestoreItemActionExecuteOutput() structure in the RIA return value. The Progress() and Cancel() methods are needed to facilitate long-running RestoreItemAction plugin actions that may not be complete when the Execute() method returns. This will allow long-running RestoreItemAction plugin actions to continue in the background while the Velero moves to the following plugin or the next item. The AreAdditionalItemsReady() method is needed to allow plugins to tell Velero to wait until the returned additional items have been restored and are ready for use in the cluster before restoring the current item. This is intended as a replacement for the previously-approved Upload Progress Monitoring design () to expand the supported use cases beyond snapshot upload to include what was previously called Async Backup/Restore Item Actions. This feature provides a flexible policy to filter volumes in the backup without requiring patching any labels or annotations to the pods or volumes. This policy is configured as k8s ConfigMap and maintained by the users themselves, and it can be extended to more scenarios in the future. By now, the policy rules out volumes from backup depending on the CSI driver, NFS setting, volume size, and StorageClass setting. Please refer to for the policy's ConifgMap format. It is not guaranteed to work on unofficial third-party plugins as it may not follow the existing backup workflow code logic of Velero. This feature adds four new resource filters for backup. The new filters are separated into cluster scope and namespace scope. Before this feature, Velero could not filter cluster scope resources precisely. This feature provides the ability and refactors existing resource filter parameters. In Velero, some code pieces need to communicate with the k8s API server. Before v1.11, these code pieces used hard-code timeout settings. This feature adds a resource-timeout parameter in the velero server binary to make it configurable. Before this feature, Velero restore didn't have a restored resources list as the Velero backup. It's not convenient for users to learn what is"
},
{
"data": "This feature adds the resources list and the handling result of the resources (including created, updated, failed, and skipped). In v1.11, Backup Controller and Restore controller are refactored with controller-runtime. Till v1.11, all Velero controllers use the controller-runtime framework. To fix CVEs and keep pace with Golang, Velero made changes as follows: Bump Golang runtime to v1.19.8. Bump several dependent libraries to new versions. Compile Restic (v0.15.0) with Golang v1.19.8 instead of packaging the official binary. The Velero CSI plugin now determines whether to restore Volume's data from snapshots on the restore's restorePVs setting. Before v1.11, the CSI plugin doesn't check the restorePVs parameter setting. The Flexible resource policy that can filter volumes to skip in the backup is not guaranteed to work on unofficial third-party plugins because the plugins may not follow the existing backup workflow code logic of Velero. The ConfigMap used as the policy is supposed to be maintained by users. Modify new scope resource filters name. (#6089, @blackpiglet) Make Velero not exits when EnableCSI is on and CSI snapshot not installed (#6062, @blackpiglet) Restore Services before Clusters (#6057, @ywk253100) Fixed backup deletion bug related to async operations (#6041, @sseago) Update Golang version to v1.19 for branch main. (#6039, @blackpiglet) Fix issue #5972, don't assume errorField as error type when dealing with logger.WithError (#6028, @Lyndon-Li) distinguish between New and InProgress operations (#6012, @sseago) Modify golangci.yaml file. Resolve found lint issues. (#6008, @blackpiglet) Remove Reference of itemsnapshotter (#5997, @reasonerjt) minor fixes for backupoperationscontroller (#5996, @sseago) RIAv2 async operations controller work (#5993, @sseago) Follow-on fixes for BIAv2 controller work (#5971, @sseago) Refactor backup controller based on the controller-runtime framework. (#5969, @qiuming-best) Fix client wait problem after async operation change, velero backup/restore --wait should check a full list of the terminal status (#5964, @Lyndon-Li) Fix issue #5935, refactor the logics for backup/restore persistent log, so as to remove the contest to gzip writer (#5956, @Lyndon-Li) Switch the base image to distroless/base-nossl-debian11 to reduce the CVE triage efforts (#5939, @ywk253100) Wait for additional items to be ready before restoring current item (#5933, @sseago) Add configurable server setting for default timeouts (#5926, @eemcmullan) Add warning/error result to cmd `velero backup describe` (#5916, @allenxu404) Fix Dependabot alerts. Use 1.18 and 1.19 golang instead of patch image in dockerfile. Add release-1.10 and release-1.9 in Trivy daily scan. (#5911, @blackpiglet) Update client-go to v0.25.6 (#5907, @kaovilai) Limit the concurrent number for backup's VolumeSnapshot operation. (#5900, @blackpiglet) Fix goreleaser issue for resolving tags and updated it's version. (#5899, @anshulahuja98) This is to fix issue 5881, enhance the PVB tracker in two modes, Track and Taken (#5894, @Lyndon-Li) Add labels for velero installed namespace to support"
},
{
"data": "(#5873, @blackpiglet) Add restored resource list in the restore describe command (#5867, @ywk253100) Add a json output to cmd velero backup describe (#5865, @allenxu404) Make restore controller adopting the controller-runtime framework. (#5864, @blackpiglet) Replace k8s.io/apimachinery/pkg/util/clock with k8s.io/utils/clock (#5859, @hezhizhen) Restore finalizer and managedFields of metadata during the restoration (#5853, @ywk253100) BIAv2 async operations controller work (#5849, @sseago) Add secret restore item action to handle service account token secret (#5843, @ywk253100) Add new resource filters can separate cluster and namespace scope resources. (#5838, @blackpiglet) Correct PVB/PVR Failed Phase patching during startup (#5828, @kaovilai) bump up golang net to fix CVE-2022-41721 (#5812, @Lyndon-Li) Update CRD descriptions for SnapshotVolumes and restorePVs (#5807, @shubham-pampattiwar) Add mapped selected-node existence check (#5806, @blackpiglet) Add option \"--service-account-name\" to install cmd (#5802, @reasonerjt) Enable staticcheck linter. (#5788, @blackpiglet) Set Kopia IgnoreUnknownTypes in ErrorHandlingPolicy to True for ignoring backup unknown file type (#5786, @qiuming-best) Bump up Restic version to 0.15.0 (#5784, @qiuming-best) Add File system backup related metrics to Grafana dashboard Add metrics backupwarningtotal for record of total warnings Add metrics backuplaststatus for record of last status of the backup (#5779, @allenxu404) Design for Handling backup of volumes by resources filters (#5773, @qiuming-best) Add PR container build action, which will not push image. Add GOARM parameter. (#5771, @blackpiglet) Fix issue 5458, track pod volume backup until the CR is submitted in case it is skipped half way (#5769, @Lyndon-Li) Fix issue 5226, invalidate the related backup repositories whenever the backup storage info change in BSL (#5768, @Lyndon-Li) Add Restic builder in Dockerfile, and keep the used built Golang image version in accordance with upstream Restic. (#5764, @blackpiglet) Fix issue 5043, after the restore pod is scheduled, check if the node-agent pod is running in the same node. (#5760, @Lyndon-Li) Remove restore controller's redundant client. (#5759, @blackpiglet) Define itemoperations.json format and update DownloadRequest API (#5752, @sseago) Add Trivy nightly scan. (#5740, @jxun) Fix issue 5696, check if the repo is still openable before running the prune and forget operation, if not, try to reconnect the repo (#5715, @Lyndon-Li) Fix error with Restic backup empty volumes (#5713, @qiuming-best) new backup and restore phases to support async plugin operations: WaitingForPluginOperations WaitingForPluginOperationsPartiallyFailed (#5710, @sseago) Prevent nil panic on exec restore hooks (#5675, @dymurray) Fix CVEs scanned by trivy (#5653, @qiuming-best) Publish backupresults json to enhance error info during backups. (#5576, @anshulahuja98) RestoreItemAction v2 API implementation (#5569, @sseago) add new RestoreItemAction of \"velero.io/change-image-name\" to handle the issue mentioned at #5519 (#5540, @wenterjoy) BackupItemAction v2 API implementation (#5442, @sseago) Proposal to separate resource filter into cluster scope and namespace scope (#5333, @blackpiglet)"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.11.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Deploy JuiceFS S3 Gateway sidebar_position: 4 slug: /s3_gateway JuiceFS , applications often use the exposed POSIX API. But if you ever need to use S3-compatible API to access JuiceFS files, S3 Gateway comes in handy, its architecture: JuiceFS Gateway implements its functionalities through . By implementing its and using the JuiceFS file system as the backend storage for its server, JuiceFS has achieved a use experience almost the same as using native MinIO and inherited many advanced features from MinIO. In this architecture, JuiceFS acts as a local disk for MinIO's server command, similar to `minio server /data1` in principle. Common application scenarios for JuiceFS Gateway include: Expose S3 API for JuiceFS file system, so that applications may access JuiceFS via S3 SDK Use tools like s3cmd, AWS CLI and MinIO Client to access and modify files stored in JuiceFS S3 gateway also provides a file manager that allows users to manage JuiceFS file system directly in web browsers When transferring data across regions, use S3 Gateway as an unified data export endpoint, this eliminates metadata latency and improve performance. See The S3 gateway can be enabled on the current host using the `gateway` subcommand of JuiceFS. Before enabling the feature, you need to set the environment variables `MINIOROOTUSER` and `MINIOROOTPASSWORD`. These are the Access Key and Secret Key for authenticating when accessing the S3 API, and can be simply considered as the username and password of the S3 gateway. For example. ```shell export MINIOROOTUSER=admin export MINIOROOTPASSWORD=12345678 ``` ```shell juicefs gateway redis://localhost:6379 localhost:9000 ``` The first two commands of the above three are used to set environment variables. Note that the length of `MINIOROOTUSER` is at least 3 characters, and the length of `MINIOROOTPASSWORD` is at least 8 characters. If you are a Windows user, replace `export` with `set` in the above commands to set the environment variable. i.e., `set MINIOROOTUSER=admin`. The last command is used to enable the S3 gateway. The `gateway` subcommand requires at least two parameters. The first is the URL of the database where the metadata is stored, and the second is the address and port on which the S3 gateway is listening. You can add to the `gateway` subcommand to optimize the S3 gateway as needed, for example, to set the default local cache to 20 GiB. ```shell juicefs gateway --cache-size 20480 redis://localhost:6379 localhost:9000 ``` In this example, we assume that the JuiceFS file system is using a local Redis database. When the S3 gateway is enabled, the administrative interface of the S3 gateway can be accessed from the current host using the address"
},
{
"data": "If you want to access the S3 gateway from other hosts on the LAN or over the Internet, you need to change the listening address, e.g. ```shell juicefs gateway redis://localhost:6379 0.0.0.0:9000 ``` In this way, the S3 gateway will accept all network requests by default. S3 clients in different locations can access the S3 gateway using different addresses, e.g. A third-party client in the host where the S3 gateway is located can use `http://127.0.0.1:9000` or `http://localhost:9000` for access. A third-party client on the same LAN as the host where the S3 gateway is located can access it using `http://192.168.1.8:9000` (assuming the intranet IP address of the S3 gateway-enabled host is 192.168.1.8). The S3 gateway can be accessed over the Internet using `http://110.220.110.220:9000` (assuming that the public IP address of the S3 gateway-enabled host is 110.220.110.220). Starting from version 1.2, JuiceFS Gateway supports running in the background. Simply add the `-d`` parameter when starting: ``` juicefs gateway redis://localhost:6379 localhost:9000 -d ``` When running in the background, you can specify the log output file path using `--log`. The S3 gateway can be configured as a `daemon service` with `systemd`. ```shell cat > /lib/systemd/system/juicefs-gateway.service<<EOF [Unit] Description=Juicefs S3 Gateway Requires=network.target After=multi-user.target StartLimitIntervalSec=0 [Service] Type=simple User=root Environment=\"MINIOROOTUSER=admin\" Environment=\"MINIOROOTPASSWORD=12345678\" ExecStart=/usr/local/bin/juicefs gateway redis://localhost:6379 localhost:9000 Restart=on-failure RestartSec=60 [Install] WantedBy=multi-user.target EOF ``` To enable the service at startup ```shell sudo systemctl daemon-reload sudo systemctl enable juicefs-gateway --now sudo systemctl status juicefs-gateway ``` To inspect logs ```bash sudo journalctl -xefu juicefs-gateway.service ``` Create a secret (take Amazon S3 as an example): ```shell export NAMESPACE=default ``` ```shell kubectl -n ${NAMESPACE} create secret generic juicefs-secret \\ --from-literal=name=<NAME> \\ --from-literal=metaurl=redis://[:<PASSWORD>]@<HOST>:6379[/<DB>] \\ --from-literal=storage=s3 \\ --from-literal=bucket=https://<BUCKET>.s3.<REGION>.amazonaws.com \\ --from-literal=access-key=<ACCESS_KEY> \\ --from-literal=secret-key=<SECRET_KEY> ``` Here we have: `name`: name of the JuiceFS file system. `metaurl`: URL of the metadata engine (e.g. Redis). Read for more information. `storage`: Object storage type, such as `s3`, `gs`, `oss`. Read to find all supported object storages. `bucket`: Bucket URL. Read to learn how to set up different object storage. `access-key`: Access key of object storage. Read for more information. `secret-key`: Secret key of object storage. Read for more information. Then download the S3 gateway and create the `Deployment` and `Service` resources with `kubectl`. The following points require special attention: Please replace `${NAMESPACE}` in the following command with the Kubernetes namespace of the actual S3 gateway deployment, which defaults to `kube-system`. The `replicas` for `Deployment` defaults to 1. Please adjust as needed. The latest version of `juicedata/juicefs-csi-driver` image is used by default, which has already integrated the latest version of JuiceFS client. Please check for the specific integrated JuiceFS client version. 
The `initContainers` of `Deployment` will first try to format the JuiceFS file system, if you have already formatted it in advance, this step will not affect the existing JuiceFS file"
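Before creating the Deployment, it can help to double-check that the secret contains the expected keys. This is an illustrative command, not part of the original instructions:

```shell
kubectl -n ${NAMESPACE} describe secret juicefs-secret
```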
},
{
"data": "The default port number that the S3 gateway listens on is 9000 The of S3 gateway will use default values if not specified. The value of `MINIOROOTUSER` environment variable is `access-key` in Secret, and the value of `MINIOROOTPASSWORD` environment variable is `secret-key` in Secret. ```shell curl -sSL https://raw.githubusercontent.com/juicedata/juicefs/main/deploy/juicefs-s3-gateway.yaml | sed \"s@kube-system@${NAMESPACE}@g\" | kubectl apply -f - ``` Check if it's deployed successfully: ```shell $ kubectl -n $NAMESPACE get po -o wide -l app.kubernetes.io/name=juicefs-s3-gateway juicefs-s3-gateway-5c7d65c77f-gj69l 1/1 Running 0 37m 10.244.2.238 kube-node-3 <none> <none> ``` ```shell $ kubectl -n $NAMESPACE get svc -l app.kubernetes.io/name=juicefs-s3-gateway NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE juicefs-s3-gateway ClusterIP 10.101.108.42 <none> 9000/TCP 142m ``` You can use `juicefs-s3-gateway.${NAMESPACE}.svc.cluster.local:9000` or pod IP and port number of `juicefs-s3-gateway` (e.g. `10.244.2.238:9000`) in the application pod to access JuiceFS S3 Gateway. If you want to access through Ingress, you need to ensure that the Ingress Controller has been deployed in the cluster. Refer to . Then create an `Ingress` resource: ```yaml kubectl apply -f - <<EOF apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: juicefs-s3-gateway namespace: ${NAMESPACE} spec: ingressClassName: nginx rules: http: paths: path: / pathType: Prefix backend: service: name: juicefs-s3-gateway port: number: 9000 EOF ``` The S3 gateway can be accessed through `<external IP>` of ingress controller as follows (no need to include the 9000 port number): ```shell kubectl get services -n ingress-nginx ``` There are some differences between the various versions of Ingress. For more usage methods, please refer to . Prepare a YAML file Create a configuration file, for example: `values.yaml`. Copy and fill in the following configuration information. Among them, the `secret` part is the information related to the JuiceFS file system, and you can refer to for more information. ```yaml title=\"values.yaml\" secret: name: \"<name>\" metaurl: \"<meta-url>\" storage: \"<storage-type>\" accessKey: \"<access-key>\" secretKey: \"<secret-key>\" bucket: \"<bucket>\" ``` If you want to deploy Ingress, add the following snippet into `values.yaml`: ```yaml title=\"values.yaml\" ingress: enabled: true ``` Deploy Execute the following three commands in sequence to deploy the JuiceFS S3 gateway with Helm (note that the following example is deployed to the `kube-system` namespace). ```sh helm repo add juicefs-s3-gateway https://juicedata.github.io/charts/ helm repo update helm install juicefs-s3-gateway juicefs-s3-gateway/juicefs-s3-gateway -n kube-system -f ./values.yaml ``` Check the deployment Check pods are running: the deployment will launch a `Deployment` named `juicefs-s3-gateway`, so run `kubectl -n kube-system get po -l app.kubernetes.io/name=juicefs-s3-gateway` should see all running pods. 
For example: ```sh $ kubectl -n kube-system get po -l app.kubernetes.io/name=juicefs-s3-gateway NAME READY STATUS RESTARTS AGE juicefs-s3-gateway-5c69d574cc-t92b6 1/1 Running 0 136m ``` Check Service: run `kubectl -n kube-system get svc -l app.kubernetes.io/name=juicefs-s3-gateway` to check Service: ```shell $ kubectl -n kube-system get svc -l app.kubernetes.io/name=juicefs-s3-gateway NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE juicefs-s3-gateway ClusterIP 10.101.108.42 <none> 9000/TCP 142m ``` Please see the documentation to learn how to collect and display JuiceFS monitoring metrics."
}
] |
{
"category": "Runtime",
"file_name": "s3_gateway.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for bash Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager. To load completions in your current shell session: source <(cilium-operator-azure completion bash) To load completions for every new session, execute once: cilium-operator-azure completion bash > /etc/bash_completion.d/cilium-operator-azure cilium-operator-azure completion bash > $(brew --prefix)/etc/bash_completion.d/cilium-operator-azure You will need to start a new shell for this setup to take effect. ``` cilium-operator-azure completion bash ``` ``` -h, --help help for bash --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure_completion_bash.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Handle and block node drains that would cause data unavailability and loss. Unblock drains dynamically so that a rolling upgrade is made possible. Allow for rolling upgrade of nodes in automated kubernetes environments like OSDs do not fit under the single PodDisruptionBudget pattern. Ceph's ability to tolerate pod disruptions in one failure domain is dependent on the overall health of the cluster. Even if an upgrade agent were only to drain one node at a time, Ceph would have to wait until there were no undersized PGs before moving on the next. The failure domain will be determined by the smallest failure domain of all the Ceph Pools in that cluster. We begin with creating a single PodDisruptionBudget for all the OSD with maxUnavailable=1. This will allow one OSD to go down anytime. Once the user drains a node and an OSD goes down, we determine the failure domain for the draining OSD (using the OSD deployment labels). Then we create blocking PodDisruptionBudgets (maxUnavailable=0) for all other failure domains and delete the main PodDisruptionBudget. This blocks OSDs from going down in multiple failure domains simultaneously. Once the drained OSDs are back and all the pgs are active+clean, that is, the cluster is healed, the default PodDisruptionBudget (with maxUnavailable=1) is added back and the blocking ones are deleted. User can also add a timeout for the pgs to become healthy. If the timeout exceeds, the operator will ignore the pg health, add the main PodDisruptionBudget and delete the blocking ones. Detecting drains is not easy as they are a client side operation. The client cordons the node and continuously attempts to evict all pods from the node until it succeeds. Whenever an OSD goes into pending state, that is, `ReadyReplicas` count is 0, we assume that some drain operation is happening. Example scenario: Zone x Node a osd.0 osd.1 Zone y Node b osd.2 osd.3 Zone z Node c osd.4 osd.5 Rook Operator creates a single PDB that covers all OSDs with"
},
{
"data": "When Rook Operator sees an OSD go down (for example, osd.0 goes down): Create a PDB for each failure domain (zones y and z) with maxUnavailable=0 where the OSD did not go down. Delete the original PDB that covers all OSDs Now all remaining OSDs in zone x would be allowed to be drained When Rook sees the OSDs are back up and all PGs are clean Restore the PDB that covers all OSDs with maxUnavailable=1 Delete the PDBs (in zone y and z) where maxUnavailable=0 An example of an operator that will attempt to do rolling upgrades of nodes is the Machine Config Operator in openshift. Based on what I have seen in , kubernetes deployments based on cluster-api approach will be a common way of deploying kubernetes. This will also work to mitigate manual drains from accidentally disrupting storage. When an node is drained, we will also delay it's DOWN/OUT process by placing a noout on that node. We will remove that noout after a timeout. An OSD can be down due to reasons other than node drain, say, disk failure. In such a situation, if the pgs are unhealthy then rook will create a blocking PodDisruptionBudget on other failure domains to prevent further node drains on them. `noout` flag won't be set on node this is case. If the OSD is down but all the pgs are `active+clean`, the cluster will be treated as fully healthy. The default PodDisruptionBudget (with maxUnavailable=1) will be added back and the blocking ones will be deleted. Since there is no strict failure domain requirement for each of these, and they are not logically grouped, a static PDB will suffice. A single PodDisruptionBudget is created and owned by the respective controllers, and updated only according to changes in the CRDs that change the amount of pods. Eg: For a 3 Mon configuration, we can have PDB with the same labelSelector as the Deployment and have maxUnavailable as 1. If the mon count is increased to 5, we can replace it with a PDB that has maxUnavailable set to 2."
}
] |
{
"category": "Runtime",
"file_name": "ceph-managed-disruptionbudgets.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| Software | License | Repo Link |||| |k8s.io/api | Apache License 2.0 |https://github.com/kubernetes/api |k8s.io/apimachinery | Apache License 2.0 |https://github.com/kubernetes/apimachinery |k8s.io/client-go | Apache License 2.0 |https://github.com/kubernetes/client-go |k8s.io/kube-openapi | Apache License 2.0 |https://github.com/kubernetes/kube-openapi |fsnotify |BSD-3-Clause | https://github.com/fsnotify/fsnotify |inf | BSD-3-Clause |https://github.com/go-inf/inf |tomb.v1 | BSD-3-Clause |https://github.com/go-tomb/tomb/tree/v1 |yaml.v2 | Apache License 2.0 |https://github.com/go-yaml/yaml/tree/v2 |crypto | BSD-3-Clause | https://github.com/golang/crypto |exp | BSD-3-Clause | https://github.com/golang/exp |net | BSD-3-Clause | https://github.com/golang/net |sys | BSD-3-Clause | https://github.com/golang/sys |text | BSD-3-Clause | https://github.com/golang/text |time | BSD-3-Clause | https://github.com/golang/time |cni | Apache License 2.0 | https://github.com/containernetworking/cni |go-iptables| Apache License 2.0 | https://github.com/coreos/go-iptables |go-spew |ISC | https://github.com/davecgh/go-spew |yaml |MIT | https://github.com/ghodss/yaml |protobuf |BSD-3-Clause | https://github.com/gogo/protobuf golang/glog |Apache License 2.0 | https://github.com/golang/glog |golang/groupcache |Apache License 2.0 | https://github.com/golang/groupcache |golang/protobuf |Apache License 2.0 | https://github.com/golang/protobuf |cadvisor |Apache License 2.0 | https://github.com/google/cadvisor |gofuzz |Apache License 2.0 | https://github.com/google/gofuzz |btree |Apache License 2.0 | https://github.com/google/btree |gnostic |Apache License 2.0 | https://github.com/googleapis/gnostic |httpcache | MIT| https://github.com/gregjones/httpcache |golang-lru |MLP-2.0 | https://github.com/hashicorp/golang-lru |tail |MIT | https://github.com/hpcloud/tail |mergo |BSD-3-Clause | https://github.com/imdario/mergo |json-iterator/go |MIT | https://github.com/json-iterator/go |concurrent |Apache License 2.0 | https://github.com/modern-go/concurrent |reflect2 |Apache License 2.0 | https://github.com/modern-go/reflect2 |ginkgo |MIT | https://github.com/onsi/ginkgo |gomega |MIT | https://github.com/onsi/gomega |GoLLRB |BSD-3-Clause | https://github.com/petar/GoLLRB |diskv | MIT| https://github.com/peterbourgon/diskv |spf13/pflag |MIT | https://github.com/spf13/pflag"
}
] |
{
"category": "Runtime",
"file_name": "external-build-dependencies.md",
"project_name": "CNI-Genie",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "FUSE mounting is a little bit tricky. There's a userspace helper tool that performs the handshake with the kernel, and then steps out of the way. This helper behaves differently on different platforms, forcing a more complex API on us. On Linux, the mount is immediate and file system accesses wait until the requests are served. On OS X, the mount becomes visible only after `InitRequest` (and maybe more) have been served. Let's see what happens if `InitRequest` gets an error response. On Linux, the mountpoint is there but all operations will fail: On OS X, the mount never happened:"
}
] |
{
"category": "Runtime",
"file_name": "mount-sequence.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "We've added a new command, `velero install`, to make it easier to get up and running with Velero. This CLI command replaces the static YAML installation files that were previously part of release tarballs. See the updated for more information. We've made a number of improvements to the plugin framework: we've reorganized the relevant packages to minimize the import surface for plugin authors all plugins are now wrapped in panic handlers that will report information on panics back to Velero Velero's `--log-level` flag is now passed to plugin implementations Errors logged within plugins are now annotated with the file/line of where the error occurred Restore item actions can now optionally return a list of additional related items that should be restored Restore item actions can now indicate that an item should not be restored For Azure installation, the `cloud-credentials` secret can now be created from a file containing a list of environment variables. Note that `velero install` always uses this method of providing credentials for Azure. For more details, see . We've added a new phase, `PartiallyFailed`, for both backups and restores. This new phase is used for backups/restores that successfully process some but not all of their items. We removed all legacy Ark references, including API types, prometheus metrics, restic & hook annotations, etc. The restic integration remains a beta feature. Please continue to try it out and provide feedback, and we'll be working over the next couple of releases to bring it to GA. All legacy Ark data types and pre-1.0 compatibility code has been removed. Users should migrate any backups created pre-v0.11.0 with the `velero migrate-backups` command, available in . The base container image has been switched to `ubuntu:bionic` The \"ark\" annotations for specifying hooks are no longer supported, and have been replaced with \"velero\"-based equivalents. The \"ark\" annotation for specifying restic backups is no longer supported, and has been replaced with a \"velero\"-based equivalent. The \"ark\" prometheus metrics no longer exist, and have been replaced with \"velero\"-based equivalents. `BlockStore` plugins are now named `VolumeSnapshotter` plugins Plugin APIs have moved to reduce the import surface: Plugin gRPC servers live in `github.com/heptio/velero/pkg/plugin/framework` Plugin interface types live in `github.com/heptio/velero/pkg/plugin/velero` RestoreItemAction interface now takes the original item from the backup as a parameter RestoreItemAction plugins can now return additional items to restore RestoreItemAction plugins can now skip restoring an item Plugins may now send stack traces with errors to the Velero server, so that the errors may be put into the server log Plugins must now be \"namespaced,\" using `example.domain.com/plugin-name` format For external ObjectStore and VolumeSnapshotter"
},
{
"data": "this name will also be the provider name in BackupStorageLoction and VolumeSnapshotLocation objects `--log-level` flag is now passed to all plugins Configs for Azure, AWS, and GCP are now checked for invalid or extra keys, and the server is halted if any are found https://github.com/heptio/velero/releases/tag/v1.0.0 `gcr.io/heptio-images/velero:v1.0.0` https://velero.io/docs/v1.0.0/ To upgrade from a previous version of Velero, see our . Change base images to ubuntu:bionic (#1488, @skriss) Expose the timestamp of the last successful backup in a gauge (#1448, @fabito) check backup existence before download (#1447, @fabito) Use `latest` image tag if no version information is provided at build time (#1439, @nrb) switch from `restic stats` to `restic snapshots` for checking restic repository existence (#1416, @skriss) GCP: add optional 'project' config to volume snapshot location for if snapshots are in a different project than the IAM account (#1405, @skriss) Disallow bucket names starting with '-' (#1407, @nrb) Shorten label values when they're longer than 63 characters (#1392, @anshulc) Fail backup if it already exists in object storage. (#1390, @ncdc,carlisia) Add PartiallyFailed phase for backups, log + continue on errors during backup process (#1386, @skriss) Remove deprecated \"hooks\" for backups (they've been replaced by \"pre hooks\") (#1384, @skriss) Restic repo ensurer: return error if new repository does not become ready within a minute, and fix channel closing/deletion (#1367, @skriss) Support non-namespaced names for built-in plugins (#1366, @nrb) Change container base images to debian:stretch-slim and upgrade to go 1.12 (#1365, @skriss) Azure: allow credentials to be provided in a .env file (path specified by $AZURECREDENTIALSFILE), formatted like (#1364, @skriss): ``` AZURETENANTID=${AZURETENANTID} AZURESUBSCRIPTIONID=${AZURESUBSCRIPTIONID} AZURECLIENTID=${AZURECLIENTID} AZURECLIENTSECRET=${AZURECLIENTSECRET} AZURERESOURCEGROUP=${AZURERESOURCEGROUP} ``` Instantiate the plugin manager with the per-restore logger so plugin logs are captured in the per-restore log (#1358, @skriss) Add gauge metrics for number of existing backups and restores (#1353, @fabito) Set default TTL for backups (#1352, @vorar) Validate that there can't be any duplicate plugin name, and that the name format is `example.io/name`. (#1339, @carlisia) AWS/Azure/GCP: fail fast if unsupported keys are provided in BackupStorageLocation/VolumeSnapshotLocation config (#1338, @skriss) `velero backup logs` & `velero restore logs`: show helpful error message if backup/restore does not exist or is not finished processing (#1337, @skriss) Add support for allowing a RestoreItemAction to skip item restore. (#1336, @sseago) Improve error message around invalid S3 URLs, and gracefully handle trailing backslashes. (#1331, @skriss) Set backup's start timestamp before patching it to InProgress so start times display in `velero backup get` while in progress (#1330, @skriss) Added ability to dynamically disable controllers (#1326, @amanw) Remove deprecated code in preparation for v1.0 release (#1323, @skriss): remove ark.heptio.com API group remove support for reading ark-backup.json files from object storage remove Ark field from RestoreResult type remove support for \"hook.backup.ark.heptio.com/...\" annotations for specifying hooks remove support for"
},
{
"data": "client config directory remove support for restoring Azure snapshots using short snapshot ID formats in backup metadata stop applying \"velero-restore\" label to restored resources and remove it from the API pkg remove code that strips the \"gc.ark.heptio.com\" finalizer from backups remove support for \"backup.ark.heptio.com/...\" annotations for requesting restic backups remove \"ark\"-prefixed prometheus metrics remove VolumeBackups field and related code from Backup's status Rename BlockStore plugin to VolumeSnapshotter (#1321, @skriss) Bump plugin ProtocolVersion to version 2 (#1319, @carlisia) Remove Warning field from restore item action output (#1318, @skriss) Fix for #1312, use describe to determine if AWS EBS snapshot is encrypted and explicitly pass that value in EC2 CreateVolume call. (#1316, @mstump) Allow restic restore helper image name to be optionally specified via ConfigMap (#1311, @skriss) Compile only once to lower the initialization cost for regexp.MustCompile. (#1306, @pei0804) Enable restore item actions to return additional related items to be restored; have pods return PVCs and PVCs return PVs (#1304, @skriss) Log error locations from plugin logger, and don't overwrite them in the client logger if they exist already (#1301, @skriss) Send stack traces from plugin errors to Velero via gRPC so error location info can be logged (#1300, @skriss) Azure: restore volumes in the original region's zone (#1298, @sylr) Check for and exclude hostPath-based persistent volumes from restic backup (#1297, @skriss) Make resticrepositories non-restorable resources (#1296, @skriss) Gracefully handle failed API groups from the discovery API (#1293, @fabito) Add `velero install` command for basic use cases. (#1287, @nrb) Collect 3 new metrics: backupdeletion{attempt|failure|success}_total (#1280, @fabito) Pass --log-level flag to internal/external plugins, matching Velero server's log level (#1278, @skriss) AWS EBS Volume IDs now contain AZ (#1274, @tsturzl) Add panic handlers to all server-side plugin methods (#1270, @skriss) Move all the interfaces and associated types necessary to implement all of the Velero plugins to under the new package `velero`. (#1264, @carlisia) Update `velero restore` to not open every single file open during extraction of the data (#1261, @asaf) Remove restore code that waits for a PV to become Available (#1254, @skriss) Improve `describe` output Move Phase to right under Metadata(name/namespace/label/annotations) Move Validation errors: section right after Phase: section and only show it if the item has a phase of FailedValidation For restores move Warnings and Errors under Validation errors. Leave their display as is. (#1248, @DheerajSShetty) Don't remove storage class from a persistent volume when restoring it (#1246, @skriss) Need to defer closing the the ReadCloser in ObjectStoreGRPCServer.GetObject (#1236, @DheerajSShetty) Update Kubernetes dependencies to match v1.12, and update Azure SDK to v19.0.0 (GA) (#1231, @skriss) Remove pkg/util/collections/map_utils.go, replace with structured API types and apimachinery's unstructured helpers (#1146, @skriss) Add original resource (from backup) to restore item action interface (#1123, @mwieczorek)"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.0.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "VMCache is a new function that creates VMs as caches before using it. It helps speed up new container creation. The function consists of a server and some clients communicating through Unix socket. The protocol is gRPC in . The VMCache server will create some VMs and cache them by factory cache. It will convert the VM to gRPC format and transport it when gets requested from clients. Factory `grpccache` is the VMCache client. It will request gRPC format VM and convert it back to a VM. If VMCache function is enabled, `kata-runtime` will request VM from factory `grpccache` when it creates a new sandbox. Both and VMCache help speed up new container creation. When VM templating enabled, new VMs are created by cloning from a pre-created template VM, and they will share the same initramfs, kernel and agent memory in readonly mode. So it saves a lot of memory if there are many Kata Containers running on the same host. VMCache is not vulnerable to because each VM doesn't share the memory. VMCache can be enabled by changing your Kata Containers config file (`/usr/share/defaults/kata-containers/configuration.toml`, overridden by `/etc/kata-containers/configuration.toml` if provided) such that: `vmcachenumber` specifies the number of caches of VMCache: unspecified or == 0 VMCache is disabled `> 0` will be set to the specified number `vmcacheendpoint` specifies the address of the Unix socket. Then you can create a VM templating for later usage by calling: ``` $ sudo kata-runtime factory init ``` and purge it by `ctrl-c` it. Cannot work with VM templating. Only supports the QEMU hypervisor."
}
] |
{
"category": "Runtime",
"file_name": "what-is-vm-cache-and-how-do-I-use-it.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "This document describes implementation details of the fast datapath encryption. At the high level, we use the ESP protocol () in the Transport mode. Each packet is encrypted with AES in GCM mode (), with 32 byte key and 4 byte salt. This combo provides the following security properties: Data confidentiality. Data origin authentication. Integrity. Anti-replay. Limited traffic flow confidentiality as fast datapath VXLAN packets are fully encrypted. `SAin`: IPsec security association for inbound connections. `SAout`: IPsec security association for outbound connections. `SPout`: IPsec security policy for outbound connections. Used to match outbound flows to SAout. For each connection direction, a different AES-GCM key and salt is used. The pairs are derived with HKDF (): ``` SAin[KeyAndSalt] = HKDF(sha256, ikm=sessionKey, salt=nonceIn, info=localPeerName) SAout[KeyAndSalt] = HKDF(sha256, ikm=sessionKey, salt=nonceOut, info=remotePeerName) ``` Where: the mutual `sessionKey` is derived by the library during the control plane connection establishment between the local peer and the remote peer. `nonceIn` and `nonceOut` are randomly generated 32byte nonces which are exchanged over the encrypted control plane channel. A directional secure connection between two peers is identified with SPI. The kernel requires the pair of SPI and dst IP to be unique among security associations. Thus, to avoid collisions, we generate outbound SPI on a remote peer. ``` Peer A Peer B -- fastdp.fwd.Confirm(): install iptables blocking rules, fastdp.fwd.Confirm(): nonce_BA = rand(), install iptables blocking rules, {key,salt}BA = hkdf(sessionKey, nonceBA, A), nonce_AB = rand(), spiBA = allocspi(), {key,salt}AB = hkdf(sessionKey, nonce_AB, B), create SABA(B<-A, spiBA, keyBA, saltBA), spi_AB = allocspi(), send InitSARemote(spiBA, nonceBA). --> create SAAB(A<-B, spiAB, keyAB, saltAB), <-- send InitSARemote(spiAB, nonceAB). --> recv InitSARemote(spiBA, nonceBA): recv InitSARemote(spiAB, nonceAB): <-- {key,salt}BA = hkdf(sessionKey, nonceBA, A), {key,salt}AB = hkdf(sessionKey, nonceAB, B), create SABA(B<-A, spiBA, keyBA, saltBA), create SAAB(A<-B, spiAB, keyAB, saltAB), create SPBA(B<-A, spiBA), create SPAB(A<-B, spiAB), install marking rule. install marking rule. ``` The implementation is based on the kernel IP packet transformation framework called XFRM. Unfortunately, docs are barely existing and the complexity of the framework is high. The best resource I found is Chapter 10 in \"Linux Kernel Networking: Implementation and Theory\" by Rami"
},
{
"data": "The kernel VXLAN driver does not set a dst port of a tunnel in the ip flow descriptor, thus XFRM policy lookup cannot match a policy (SPout) which includes the port. This makes impossible to encrypt only tunneled traffic between peers. To work around, we mark such outgoing packets with iptables and set the same mark in the policy selector (funnily enough, `iptables_mangle` module eventually sets the missing dst port in the flow descriptor). The challenge here is to pick a mark that it would not interfere with other networking applications before OUTPUT'ing a packet. For example, Kubernetes by default uses 1<<14 and 1<<15 marks and we choose 1<<17 (0x20000). Additionally, such workaround brings the requirement for at least the 4.2 kernel. The marking rules are the following: ``` iptables -t mangle -A OUTPUT -j WEAVE-IPSEC-OUT iptables -t mangle -A WEAVE-IPSEC-OUT -s ${LOCALPEERIP} -d ${REMOTEPEERIP} \\ -p udp --dport ${TUNNEL_PORT} -j WEAVE-IPSEC-OUT-MARK iptables -t mangle -A WEAVE-IPSEC-OUT-MARK --set-xmark ${MARK} -j MARK ``` As Linux does not implement , we install additional iptables rules to prevent from accidentally sending unencrypted traffic between peers which have previously established the secure connection. For inbound traffic, we mark each ESP packet with the mark and drop non-marked tunnel traffic: ``` iptables -t mangle -A INPUT -j WEAVE-IPSEC-IN iptables -t mangle -A WEAVE-IPSEC-IN -s ${REMOTEPEERIP} -d ${LOCALPEERIP} \\ -m esp --espspi ${SPIin} -j WEAVE-IPSEC-IN-MARK iptables -t mangle -A WEAVE-IPSEC-IN-MARK --set-xmark ${MARK} -j MARK iptables -t filter -A INPUT -j WEAVE-IPSEC-IN iptables -t filter -A WEAVE-IPSEC-IN -s ${REMOTEPEERIP} -d ${LOCALPEERIP} \\ -p udp --dport ${TUNNEL_PORT} -m mark ! --mark ${MARK} -j DROP ``` For outbound traffic, we drop marked traffic which does not match any SPout: ``` iptables -t filter -A OUTPUT ! -p esp -m policy --dir out --pol none \\ -m mark --mark ${MARK} -j DROP ``` To prevent from cycling SeqNo which makes replay attacks possible, we use 64-bit extended sequence numbers known as . In addition to the VXLAN overhead, the MTU calculation should take into account the ESP overhead which is 34-37 bytes (encrypted Payload is 4 bytes aligned) and consists of: 4 bytes (SPI). 4 bytes (Sequence Number). 8 bytes (ESP IV). 1 byte (Pad Length). 1 byte (NextHeader). 16 bytes (ICV). 0-3 bytes (Padding)."
}
] |
{
"category": "Runtime",
"file_name": "fastdp-crypto.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "layout: global title: Configuration Settings An Alluxio cluster can be configured by setting the values of Alluxio within `${ALLUXIO_HOME}/conf/alluxio-site.properties`. When different client applications (Alluxio Shell CLI, Spark jobs, MapReduce jobs) or Alluxio workers connect to an Alluxio master, they will initialize their own Alluxio configuration properties with the default values supplied by the masters based on the master-side `${ALLUXIO_HOME}/conf/alluxio-site.properties` files. As a result, cluster admins can set default client-side settings (e.g., `alluxio.user.*`), or network transport settings (e.g., `alluxio.security.authentication.type`) in `${ALLUXIO_HOME}/conf/alluxio-site.properties` on all the masters, which will be distributed and become cluster-wide default values when clients and workers connect. For example, the property `alluxio.user.file.writetype.default` defaults to `ASYNC_THROUGH`, which first writes to Alluxio and then asynchronously writes to the UFS. In an Alluxio cluster where data persistence is preferred and all jobs need to write to both the UFS and Alluxio, the administrator can add `alluxio.user.file.writetype.default=CACHE_THROUGH` in each master's `alluxio-site.properties` file. After restarting the cluster, all jobs will automatically set `alluxio.user.file.writetype.default` to `CACHE_THROUGH`. Clients can ignore or overwrite the cluster-wide default values by following the approaches described in to overwrite the same properties. Alluxio properties can be configured from multiple sources. A property's final value is determined by the following priority list, from highest priority to lowest: : When an Alluxio cluster starts, each server process including master and worker searches for `alluxio-site.properties` within the following directories in the given order, stopping when a match is found: `${CLASSPATH}`, `${HOME}/.alluxio/`, `/etc/alluxio/`, and `${ALLUXIO_HOME}/conf` : An Alluxio client may initialize its configuration based on the cluster-wide default configuration served by the masters. If no user-specified configuration is found for a property, Alluxio will fall back to its . To check the value of a specific configuration property and the source of its value, users can run the following command: ```shell $ ./bin/alluxio conf get alluxio.worker.rpc.port 29998 $ ./bin/alluxio conf get --source alluxio.worker.rpc.port DEFAULT ``` To list all of the configuration properties with sources: ```shell $ ./bin/alluxio conf get --source alluxio.conf.dir=/Users/bob/alluxio/conf (SYSTEM_PROPERTY) alluxio.debug=false (DEFAULT) ... ``` Users can also specify the `--master` option to list all of the cluster-wide configuration properties served by the masters. Note that with the `--master` option, `bin/alluxio conf get` will query the master which requires the master process to be running. Otherwise, without `--master` option, this command only checks the local configuration. ```shell $ ./bin/alluxio conf get --master --source alluxio.conf.dir=/Users/bob/alluxio/conf (SYSTEM_PROPERTY) alluxio.debug=false (DEFAULT) ... ``` Alluxio also supports Java 11. To run Alluxio on Java 11, configure the `JAVA_HOME` environment variable to point to a Java 11 installation directory. If you only want to use Java 11 for Alluxio, you can set the `JAVA_HOME` environment variable in the `alluxio-env.sh` file. Setting the `JAVA_HOME` in `alluxio-env.sh` will not affect the Java version which may be used by other applications running in the same environment. 
The server-side configuration checker helps discover configuration errors and warnings. Suspected configuration errors are reported through the web UI, `info doctor` CLI, and master logs. The web UI shows the result of the server configuration check. Users can also run the `info doctor` command to get the same results. ```shell $ ./bin/alluxio info doctor configuration ``` Configuration warnings can also be found in the master logs."
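Following the example above, a cluster-wide default is just one line in each master's `alluxio-site.properties`; the property and value below repeat the CACHE_THROUGH example from the text, and the result can be verified with `conf get`.

```shell
echo "alluxio.user.file.writetype.default=CACHE_THROUGH" >> ${ALLUXIO_HOME}/conf/alluxio-site.properties
./bin/alluxio conf get --source alluxio.user.file.writetype.default
```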
}
] |
{
"category": "Runtime",
"file_name": "Configuration.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Enhancement Request about: Suggest an enhancement to multus <!-- Please only use this template for submitting enhancement requests --> What would you like to be added: Why is this needed:"
}
] |
{
"category": "Runtime",
"file_name": "enhancement.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Flect does not try to reinvent the wheel! Instead, it uses the already great wheels developed by the Go community and puts them all together in the best way possible. Without these giants, this project would not be possible. Please make sure to check them out and thank them for all of their hard work. Thank you to the following GIANTS:"
}
] |
{
"category": "Runtime",
"file_name": "SHOULDERS.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `[email protected]` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] |
{
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The OVS software based solution is CPU intensive, affecting system performance and preventing fully utilizing available bandwidth. OVS 2.8 and above support a feature called OVS Hardware Offload which improves performance significantly. This feature allows offloading the OVS data-plane to the NIC while maintaining OVS control-plane unmodified. It is using SR-IOV technology with VF representor host net-device. The VF representor plays the same role as TAP devices in Para-Virtual (PV) setup. A packet sent through the VF representor on the host arrives to the VF, and a packet sent through the VF is received by its representor. The following manufacturers are known to work: Mellanox ConnectX-5 and above Antrea v0.9.0 or greater Linux Kernel 5.7 or greater iproute 4.12 or greater In order to enable Open vSwitch hardware offload, the following steps are required. Please make sure you have root privileges to run the commands below. Check the Number of VF Supported on the NIC ```bash cat /sys/class/net/enp3s0f0/device/sriov_totalvfs 8 ``` Create the VFs ```bash echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs ``` Verify that the VFs are created ```bash ip link show enp3s0f0 8: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT qlen 1000 link/ether a0:36:9f:8f:3f:b8 brd ff:ff:ff:ff:ff:ff vf 0 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 1 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 2 MAC 00:00:00:00:00:00, spoof checking on, link-state auto vf 3 MAC 00:00:00:00:00:00, spoof checking on, link-state auto ``` Set up the PF to be up ```bash ip link set enp3s0f0 up ``` Unbind the VFs from the driver ```bash echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/unbind echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/unbind echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/unbind ``` Configure SR-IOV VFs to switchdev mode ```bash devlink dev eswitch set pci/0000:03:00.0 mode switchdev ethtool -K enp3s0f0 hw-tc-offload on ``` Bind the VFs to the driver ```bash echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/bind echo 0000:03:00.3 > /sys/bus/pci/drivers/mlx5_core/bind echo 0000:03:00.4 > /sys/bus/pci/drivers/mlx5_core/bind echo 0000:03:00.5 > /sys/bus/pci/drivers/mlx5_core/bind ``` Create a ConfigMap that defines SR-IOV resource pool configuration ```yaml apiVersion: v1 kind: ConfigMap metadata: name: sriovdp-config namespace: kube-system data: config.json: | { \"resourceList\": [{ \"resourcePrefix\": \"mellanox.com\", \"resourceName\": \"cx5sriovswitchdev\", \"isRdma\": true, \"selectors\": { \"vendors\": [\"15b3\"], \"devices\": [\"1018\"], \"drivers\": [\"mlx5_core\"] } } ] } ``` Deploy SR-IOV network device plugin as DaemonSet. See <https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin>. Deploy multus CNI as DaemonSet. See"
},
{
"data": "Create NetworkAttachementDefinition CRD with Antrea CNI config. ```yaml apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: default namespace: kube-system annotations: k8s.v1.cni.cncf.io/resourceName: mellanox.com/cx5sriovswitchdev spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"antrea\", \"plugins\": [ { \"type\": \"antrea\", \"ipam\": { \"type\": \"host-local\" } }, { \"type\": \"portmap\", \"capabilities\": {\"portMappings\": true} }, { \"type\": \"bandwidth\", \"capabilities\": {\"bandwidth\": true} }] }' ``` Modify the build/yamls/antrea.yml with offload flag ```yaml command: start_ovs --hw-offload ``` Create POD spec and request a VF ```yaml apiVersion: v1 kind: Pod metadata: name: ovs-offload-pod1 annotations: v1.multus-cni.io/default-network: default spec: containers: name: ovs-offload-app image: networkstatic/iperf3 command: sh -c | sleep 1000000 resources: requests: mellanox.com/cx5sriovswitchdev: '1' limits: mellanox.com/cx5sriovswitchdev: '1' ``` Run iperf3 server on POD 1 ```bash kubectl exec -it ovs-offload-pod1 -- iperf3 -s ``` Run iperf3 client on POD 2 ```bash kubectl exec -it ovs-offload-pod2 -- iperf3 -c 192.168.1.17 -t 100 ``` Check traffic on the VF representor port. Verify only TCP connection establishment appears ```text tcpdump -i mofed-te-b5583b tcp listening on mofed-te-b5583b, link-type EN10MB (Ethernet), capture size 262144 bytes 22:24:44.969516 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [S], seq 89800743, win 64860, options [mss 1410,sackOK,TS val 491087056 ecr 0,nop,wscale 7], length 0 22:24:44.969773 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43558: Flags [S.], seq 1312764151, ack 89800744, win 64308, options [mss 1410,sackOK,TS val 4095895608 ecr 491087056,nop,wscale 7], length 0 22:24:45.085558 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 0 22:24:45.085592 IP 192.168.1.16.43558 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087222 ecr 4095895608], length 37 22:24:45.086311 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [S], seq 3802331506, win 64860, options [mss 1410,sackOK,TS val 491087279 ecr 0,nop,wscale 7], length 0 22:24:45.086462 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [S.], seq 441940709, ack 3802331507, win 64308, options [mss 1410,sackOK,TS val 4095895725 ecr 491087279,nop,wscale 7], length 0 22:24:45.086624 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [.], ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 0 22:24:45.086654 IP 192.168.1.16.43560 > 192.168.1.17.targus-getdata1: Flags [P.], seq 1:38, ack 1, win 507, options [nop,nop,TS val 491087279 ecr 4095895725], length 37 22:24:45.086715 IP 192.168.1.17.targus-getdata1 > 192.168.1.16.43560: Flags [.], ack 38, win 503, options [nop,nop,TS val 4095895725 ecr 491087279], length 0 ``` Check datapath rules are offloaded ```text ovs-appctl dpctl/dump-flows --names type=offloaded recircid(0),inport(eth0),eth(src=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(src=192.168.1.17,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:ct(zone=65520),recirc(0x18) ctstate(+est+trk),ctmark(0),recircid(0x18),inport(eth0),eth(dst=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:2235857, bytes:147599302, used:0.550s, actions:eth1 
recirc_id(0),in_port(eth1),eth(src=42:66:d7:45:0d:7e),eth_type(0x0800),ipv4(src=192.168.1.16,frag=no), packets:133410141, bytes:195255745684, used:0.550s, actions:ct(zone=65520),recirc(0x16) ct_state(+est+trk),ct_mark(0),recirc_id(0x16),in_port(eth1),eth(dst=16:fd:c6:0b:60:52),eth_type(0x0800),ipv4(dst=192.168.1.0/255.255.255.0,frag=no), packets:133410138, bytes:195255745483, used:0.550s, actions:eth0 ```"
}
] |
{
"category": "Runtime",
"file_name": "ovs-offload.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This website is built using , a modern static website generator. ``` $ yarn ``` ``` $ yarn start ``` This command starts a local development server and open up a browser window. Most changes are reflected live without having to restart the server. ``` $ yarn build ``` This command generates static content into the `build` directory and can be served using any static contents hosting service. ``` $ GITUSER=<Your GitHub username> USESSH=true yarn deploy ``` If you are using GitHub pages for hosting, this command is a convenient way to build the website and push to the `gh-pages` branch. iled overview of the rationale behind this method, . This package uses and therefore supports . Caveat #1: When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example: ``` BAD: exampleKey: !!binary gIGC GOOD: exampleKey: gIGC ... and decode the base64 data in your code. ``` Caveat #2: When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys. To install, run: ``` $ go get sigs.k8s.io/yaml ``` And import using: ``` import \"sigs.k8s.io/yaml\" ``` Usage is very similar to the JSON library: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) type Person struct { Name string `json:\"name\"` // Affects YAML field names too. Age int `json:\"age\"` } func main() { // Marshal a Person struct to YAML. p := Person{\"John\", 30} y, err := yaml.Marshal(p) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ // Unmarshal the YAML back into a Person struct. var p2 Person err = yaml.Unmarshal(y, &p2) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(p2) /* Output: {John 30} */ } ``` `yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) func main() { j := []byte(`{\"name\": \"John\", \"age\": 30}`) y, err := yaml.JSONToYAML(j) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ j2, err := yaml.YAMLToJSON(y) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(j2)) /* Output: {\"age\":30,\"name\":\"John\"} */ } ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "MIT License =========== Copyright (C) 2015 Matt Layher Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "% crio 8 crio - OCI-based implementation of Kubernetes Container Runtime Interface crio ``` [--absent-mount-sources-to-reject]=[value] [--add-inheritable-capabilities] [--additional-devices]=[value] [--allowed-devices]=[value] [--apparmor-profile]=[value] [--auto-reload-registries] [--big-files-temporary-dir]=[value] [--bind-mount-prefix]=[value] [--blockio-config-file]=[value] [--blockio-reload] [--cdi-spec-dirs]=[value] [--cgroup-manager]=[value] [--clean-shutdown-file]=[value] [--cni-config-dir]=[value] [--cni-default-network]=[value] [--cni-plugin-dir]=[value] [--collection-period]=[value] [--config-dir|-d]=[value] [--config|-c]=[value] [--conmon-cgroup]=[value] [--conmon-env]=[value] [--conmon]=[value] [--container-attach-socket-dir]=[value] [--container-exits-dir]=[value] [--ctr-stop-timeout]=[value] [--decryption-keys-path]=[value] [--default-capabilities]=[value] [--default-env]=[value] [--default-mounts-file]=[value] [--default-runtime]=[value] [--default-sysctls]=[value] [--default-transport]=[value] [--default-ulimits]=[value] [--device-ownership-from-security-context] [--disable-hostport-mapping] [--drop-infra-ctr] [--enable-criu-support] [--enable-metrics] [--enable-nri] [--enable-pod-events] [--enable-profile-unix-socket] [--enable-tracing] [--gid-mappings]=[value] [--global-auth-file]=[value] [--grpc-max-recv-msg-size]=[value] [--grpc-max-send-msg-size]=[value] [--help|-h] [--hooks-dir]=[value] [--hostnetwork-disable-selinux] [--image-volumes]=[value] [--imagestore]=[value] [--included-pod-metrics]=[value] [--infra-ctr-cpuset]=[value] [--insecure-registry]=[value] [--internal-repair] [--internal-wipe] [--irqbalance-config-file]=[value] [--irqbalance-config-restore-file]=[value] [--listen]=[value] [--log-dir]=[value] [--log-filter]=[value] [--log-format]=[value] [--log-journald] [--log-level|-l]=[value] [--log-size-max]=[value] [--log]=[value] [--metrics-cert]=[value] [--metrics-collectors]=[value] [--metrics-host]=[value] [--metrics-key]=[value] [--metrics-port]=[value] [--metrics-socket]=[value] [--minimum-mappable-gid]=[value] [--minimum-mappable-uid]=[value] [--namespaces-dir]=[value] [--no-pivot] [--nri-disable-connections]=[value] [--nri-listen]=[value] [--nri-plugin-config-dir]=[value] [--nri-plugin-dir]=[value] [--nri-plugin-registration-timeout]=[value] [--nri-plugin-request-timeout]=[value] [--pause-command]=[value] [--pause-image-auth-file]=[value] [--pause-image]=[value] [--pids-limit]=[value] [--pinned-images]=[value] [--pinns-path]=[value] [--profile-cpu]=[value] [--profile-mem]=[value] [--profile-port]=[value] [--profile] [--rdt-config-file]=[value] [--read-only] [--registry]=[value] [--root|-r]=[value] [--runroot]=[value] [--runtimes]=[value] [--seccomp-profile]=[value] [--selinux] [--separate-pull-cgroup]=[value] [--shared-cpuset]=[value] [--signature-policy-dir]=[value] [--signature-policy]=[value] [--stats-collection-period]=[value] [--storage-driver|-s]=[value] [--storage-opt]=[value] [--stream-address]=[value] [--stream-enable-tls] [--stream-idle-timeout]=[value] [--stream-port]=[value] [--stream-tls-ca]=[value] [--stream-tls-cert]=[value] [--stream-tls-key]=[value] [--timezone|--tz]=[value] [--tracing-endpoint]=[value] [--tracing-sampling-rate-per-million]=[value] [--uid-mappings]=[value] [--version-file-persist]=[value] [--version-file]=[value] [--version|-v] ``` OCI-based implementation of Kubernetes Container Runtime Interface Daemon crio is meant to provide an integration path between OCI conformant runtimes and the kubelet. 
Specifically, it implements the Kubelet Container Runtime Interface (CRI) using OCI conformant runtimes. The scope of crio is tied to the scope of the CRI. Support multiple image formats including the existing Docker and OCI image formats. Support for multiple means to download images including trust & image verification. Container image management (managing image layers, overlay filesystems, etc). Container process lifecycle management. Monitoring and logging required to satisfy the CRI. Resource isolation as required by the CRI. Usage: ``` crio [GLOBAL OPTIONS] command [COMMAND OPTIONS] [ARGUMENTS...] ``` --absent-mount-sources-to-reject=\"\": A list of paths that, when absent from the host, will cause a container creation to fail (as opposed to the current behavior of creating a directory). --add-inheritable-capabilities: Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective. --additional-devices=\"\": Devices to add to the containers. --allowed-devices=\"\": Devices a user is allowed to specify with the \"io.kubernetes.cri-o.Devices\" allowed annotation. (default: \"/dev/fuse\") --apparmor-profile=\"\": Name of the apparmor profile to be used as the runtime's default. This only takes effect if the user does not specify a profile via the Kubernetes Pod's metadata annotation. (default: \"crio-default\") --auto-reload-registries: If true, CRI-O will automatically reload the mirror registry when there is an update to the 'registries.conf.d' directory. Default value is set to 'false'. --big-files-temporary-dir=\"\": Path to the temporary directory to use for storing big files, used to store image blobs and data streams related to containers image management. --bind-mount-prefix=\"\": A prefix to use for the source of the bind mounts. This option would be useful if you were running CRI-O in a container. And had '/' mounted on '/host' in your container. Then if you ran CRI-O with the '--bind-mount-prefix=/host' option, CRI-O would add /host to any bind mounts it is handed over CRI. If Kubernetes asked to have '/var/lib/foobar' bind mounted into the container, then CRI-O would bind mount '/host/var/lib/foobar'. Since CRI-O itself is running in a container with '/' or the host mounted on '/host', the container would end up with '/var/lib/foobar' from the host mounted in the container rather then '/var/lib/foobar' from the CRI-O"
},
{
"data": "--blockio-config-file=\"\": Path to the blockio class configuration file for configuring the cgroup blockio controller. --blockio-reload: Reload blockio-config-file and rescan blockio devices in the system before applying blockio parameters. --cdi-spec-dirs=\"\": Directories to scan for CDI Spec files. (default: \"/etc/cdi\", \"/var/run/cdi\") --cgroup-manager=\"\": cgroup manager (cgroupfs or systemd). (default: \"systemd\") --clean-shutdown-file=\"\": Location for CRI-O to lay down the clean shutdown file. It indicates whether we've had time to sync changes to disk before shutting down. If not found, crio wipe will clear the storage directory. (default: \"/var/lib/crio/clean.shutdown\") --cni-config-dir=\"\": CNI configuration files directory. (default: \"/etc/cni/net.d/\") --cni-default-network=\"\": Name of the default CNI network to select. If not set or \"\", then CRI-O will pick-up the first one found in --cni-config-dir. --cni-plugin-dir=\"\": CNI plugin binaries directory. --collection-period=\"\": The number of seconds between collecting pod/container stats and pod sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead. (default: 0) --config, -c=\"\": Path to configuration file (default: \"/etc/crio/crio.conf\") --config-dir, -d=\"\": Path to the configuration drop-in directory. This directory will be recursively iterated and each file gets applied to the configuration in their processing order. This means that a configuration file named '00-default' has a lower priority than a file named '01-my-overwrite'. The global config file, provided via '--config,-c' or per default in /etc/crio/crio.conf, always has a lower priority than the files in the directory specified by '--config-dir,-d'. Besides that, provided command line parameters have a higher priority than any configuration file. (default: \"/etc/crio/crio.conf.d\") --conmon=\"\": Path to the conmon binary, used for monitoring the OCI runtime. Will be searched for using $PATH if empty. This option is deprecated, and will be removed in the future. --conmon-cgroup=\"\": cgroup to be used for conmon process. This option is deprecated and will be removed in the future. --conmon-env=\"\": Environment variable list for the conmon process, used for passing necessary environment variables to conmon or the runtime. This option is deprecated and will be removed in the future. --container-attach-socket-dir=\"\": Path to directory for container attach sockets. (default: \"/var/run/crio\") --container-exits-dir=\"\": Path to directory in which container exit files are written to by conmon. (default: \"/var/run/crio/exits\") --ctr-stop-timeout=\"\": The minimal amount of time in seconds to wait before issuing a timeout regarding the proper termination of the container. The lowest possible value is 30s, whereas lower values are not considered by CRI-O. (default: 30) --decryption-keys-path=\"\": Path to load keys for image decryption. (default: \"/etc/crio/keys/\") --default-capabilities=\"\": Capabilities to add to the containers. (default: \"CHOWN\", \"DACOVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NETBIND_SERVICE\", \"KILL\") --default-env=\"\": Additional environment variables to set for all containers. --default-mounts-file=\"\": Path to default mounts file. --default-runtime=\"\": Default OCI runtime from the runtimes config. (default: \"runc\") --default-sysctls=\"\": Sysctls to add to the containers. 
--default-transport=\"\": A prefix to prepend to image names that cannot be pulled as-is. (default: \"docker://\") --default-ulimits=\"\": Ulimits to apply to containers by default (name=soft:hard). --device-ownership-from-security-context: Set devices' uid/gid ownership from runAsUser/runAsGroup. --disable-hostport-mapping: If true, CRI-O would disable the hostport mapping. --drop-infra-ctr: Determines whether pods are created without an infra container, when the pod is not using a pod level PID namespace. --enable-criu-support: Enable CRIU integration, requires that the criu binary is available in"
},
{
"data": "--enable-metrics: Enable metrics endpoint for the server. --enable-nri: Enable NRI (Node Resource Interface) support. (default: true) --enable-pod-events: If true, CRI-O starts sending the container events to the kubelet --enable-profile-unix-socket: Enable pprof profiler on crio unix domain socket. --enable-tracing: Enable OpenTelemetry trace data exporting. --gid-mappings=\"\": Specify the GID mappings to use for the user namespace. This option is deprecated, and will be replaced with Kubernetes user namespace (KEP-127) support in the future. --global-auth-file=\"\": Path to a file like /var/lib/kubelet/config.json holding credentials necessary for pulling images from secure registries. --grpc-max-recv-msg-size=\"\": Maximum grpc receive message size in bytes. (default: 83886080) --grpc-max-send-msg-size=\"\": Maximum grpc receive message size. (default: 83886080) --help, -h: show help --hooks-dir=\"\": Set the OCI hooks directory path (may be set multiple times) If one of the directories does not exist, then CRI-O will automatically skip them. Each '\\*.json' file in the path configures a hook for CRI-O containers. For more details on the syntax of the JSON files and the semantics of hook injection, see 'oci-hooks(5)'. CRI-O currently support both the 1.0.0 and 0.1.0 hook schemas, although the 0.1.0 schema is deprecated. This option may be set multiple times; paths from later options have higher precedence ('oci-hooks(5)' discusses directory precedence). For the annotation conditions, CRI-O uses the Kubernetes annotations, which are a subset of the annotations passed to the OCI runtime. For example, 'io.kubernetes.cri-o.Volumes' is part of the OCI runtime configuration annotations, but it is not part of the Kubernetes annotations being matched for hooks. For the bind-mount conditions, only mounts explicitly requested by Kubernetes configuration are considered. Bind mounts that CRI-O inserts by default (e.g. '/dev/shm') are not considered. (default: \"/usr/share/containers/oci/hooks.d\") --hostnetwork-disable-selinux: Determines whether SELinux should be disabled within a pod when it is running in the host network namespace. --image-volumes=\"\": Image volume handling ('mkdir', 'bind', or 'ignore') mkdir: A directory is created inside the container root filesystem for the volumes. bind: A directory is created inside container state directory and bind mounted into the container for the volumes. ignore: All volumes are just ignored and no action is taken. (default: \"mkdir\") --imagestore=\"\": Store newly pulled images in the specified path, rather than the path provided by --root. --included-pod-metrics=\"\": A list of pod metrics to include. Specify the names of the metrics to include in this list. --infra-ctr-cpuset=\"\": CPU set to run infra containers, if not specified CRI-O will use all online CPUs to run infra containers. --insecure-registry=\"\": Enable insecure registry communication, i.e., enable un-encrypted and/or untrusted communication. List of insecure registries can contain an element with CIDR notation to specify a whole subnet. Insecure registries accept HTTP or accept HTTPS with certificates from unknown CAs. Enabling '--insecure-registry' is useful when running a local registry. However, because its use creates security vulnerabilities, it should ONLY be enabled for testing purposes. For increased security, users should add their CA to their system's list of trusted CAs instead of using '--insecure-registry'. 
--internal-repair: If true, CRI-O will check if the container and image storage was corrupted after a sudden restart, and attempt to repair the storage if it"
},
{
"data": "--internal-wipe: Whether CRI-O should wipe containers after a reboot and images after an upgrade when the server starts. If set to false, one must run 'crio wipe' to wipe the containers and images in these situations. This option is deprecated, and will be removed in the future. --irqbalance-config-file=\"\": The irqbalance service config file which is used by CRI-O. (default: \"/etc/sysconfig/irqbalance\") --irqbalance-config-restore-file=\"\": Determines if CRI-O should attempt to restore the irqbalance config at startup with the mask in this file. Use the 'disable' value to disable the restore flow entirely. (default: \"/etc/sysconfig/origirqbanned_cpus\") --listen=\"\": Path to the CRI-O socket. (default: \"/var/run/crio/crio.sock\") --log=\"\": Set the log file path where internal debug information is written. --log-dir=\"\": Default log directory where all logs will go unless directly specified by the kubelet. (default: \"/var/log/crio/pods\") --log-filter=\"\": Filter the log messages by the provided regular expression. For example 'request.\\*' filters all gRPC requests. --log-format=\"\": Set the format used by logs: 'text' or 'json'. (default: \"text\") --log-journald: Log to systemd journal (journald) in addition to kubernetes log file. --log-level, -l=\"\": Log messages above specified level: trace, debug, info, warn, error, fatal or panic. (default: \"info\") --log-size-max=\"\": Maximum log size in bytes for a container. If it is positive, it must be >= 8192 to match/exceed conmon read buffer. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead. (default: -1) --metrics-cert=\"\": Certificate for the secure metrics endpoint. --metrics-collectors=\"\": Enabled metrics collectors. (default: \"imagepullslayersize\", \"containerseventsdroppedtotal\", \"containersoomtotal\", \"processesdefunct\", \"operationstotal\", \"operationslatencyseconds\", \"operationslatencysecondstotal\", \"operationserrorstotal\", \"imagepullsbytestotal\", \"imagepullsskippedbytestotal\", \"imagepullsfailuretotal\", \"imagepullssuccesstotal\", \"imagelayerreusetotal\", \"containersoomcounttotal\", \"containersseccompnotifiercounttotal\", \"resourcesstalledat_stage\") --metrics-host=\"\": Host for the metrics endpoint. (default: \"127.0.0.1\") --metrics-key=\"\": Certificate key for the secure metrics endpoint. --metrics-port=\"\": Port for the metrics endpoint. (default: 9090) --metrics-socket=\"\": Socket for the metrics endpoint. --minimum-mappable-gid=\"\": Specify the lowest host GID which can be specified in mappings for a pod that will be run as a UID other than 0. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future. (default: -1) --minimum-mappable-uid=\"\": Specify the lowest host UID which can be specified in mappings for a pod that will be run as a UID other than 0. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future. (default: -1) --namespaces-dir=\"\": The directory where the state of the managed namespaces gets tracked. Only used when manage-ns-lifecycle is true. (default: \"/var/run\") --no-pivot: If true, the runtime will not use 'pivotroot', but instead use 'MSMOVE'. --nri-disable-connections=\"\": Disable connections from externally started NRI plugins. (default: false) --nri-listen=\"\": Socket to listen on for externally started NRI plugins to connect to. 
(default: \"/var/run/nri/nri.sock\") --nri-plugin-config-dir=\"\": Directory to scan for configuration of pre-installed NRI plugins. (default: \"/etc/nri/conf.d\") --nri-plugin-dir=\"\": Directory to scan for pre-installed NRI plugins to start automatically. (default: \"/opt/nri/plugins\") --nri-plugin-registration-timeout=\"\": Timeout for a plugin to register itself with NRI. (default: 5s) --nri-plugin-request-timeout=\"\": Timeout for a plugin to handle an NRI request. (default: 2s) --pause-command=\"\": Path to the pause executable in the pause image. (default: \"/pause\") --pause-image=\"\": Image which contains the pause executable. (default: \"registry.k8s.io/pause:3.9\") --pause-image-auth-file=\"\": Path to a config file containing credentials for --pause-image. --pids-limit=\"\": Maximum number of processes allowed in a container. This option is"
},
{
"data": "The Kubelet flag '--pod-pids-limit' should be used instead. (default: -1) --pinned-images=\"\": A list of images that will be excluded from the kubelet's garbage collection. --pinns-path=\"\": The path to find the pinns binary, which is needed to manage namespace lifecycle. Will be searched for in $PATH if empty. --profile: Enable pprof remote profiler on localhost:6060. --profile-cpu=\"\": Write a pprof CPU profile to the provided path. --profile-mem=\"\": Write a pprof memory profile to the provided path. --profile-port=\"\": Port for the pprof profiler. (default: 6060) --rdt-config-file=\"\": Path to the RDT configuration file for configuring the resctrl pseudo-filesystem. --read-only: Setup all unprivileged containers to run as read-only. Automatically mounts the containers' tmpfs on '/run', '/tmp' and '/var/tmp'. --registry=\"\": Registry to be prepended when pulling unqualified images. Can be specified multiple times. --root, -r=\"\": The CRI-O root directory. (default: \"/var/lib/containers/storage\") --runroot=\"\": The CRI-O state directory. (default: \"/run/containers/storage\") --runtimes=\"\": OCI runtimes, format is 'runtimename:runtimepath:runtimeroot:runtimetype:privilegedwithouthostdevices:runtimeconfigpath:containermin_memory'. --seccomp-profile=\"\": Path to the seccomp.json profile to be used as the runtime's default. If not specified, then the internal default seccomp profile will be used. --selinux: Enable selinux support. This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future. --separate-pull-cgroup=\"\": [EXPERIMENTAL] Pull in new cgroup. --shared-cpuset=\"\": CPUs set that will be used for guaranteed containers that want access to shared cpus --signature-policy=\"\": Path to signature policy JSON file. --signature-policy-dir=\"\": Path to the root directory for namespaced signature policies. Must be an absolute path. (default: \"/etc/crio/policies\") --stats-collection-period=\"\": The number of seconds between collecting pod and container stats. If set to 0, the stats are collected on-demand instead. DEPRECATED: This option will be removed in the future. (default: 0) --storage-driver, -s=\"\": OCI storage driver. --storage-opt=\"\": OCI storage driver option. --stream-address=\"\": Bind address for streaming socket. (default: \"127.0.0.1\") --stream-enable-tls: Enable encrypted TLS transport of the stream server. --stream-idle-timeout=\"\": Length of time until open streams terminate due to lack of activity. --stream-port=\"\": Bind port for streaming socket. If the port is set to '0', then CRI-O will allocate a random free port number. (default: \"0\") --stream-tls-ca=\"\": Path to the x509 CA(s) file used to verify and authenticate client communication with the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. --stream-tls-cert=\"\": Path to the x509 certificate file used to serve the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. --stream-tls-key=\"\": Path to the key file used to serve the encrypted stream. This file can change and CRI-O will automatically pick up the changes within 5 minutes. --timezone, --tz=\"\": To set the timezone for a container in CRI-O. If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine. --tracing-endpoint=\"\": Address on which the gRPC tracing collector will listen. 
(default: \"0.0.0.0:4317\") --tracing-sampling-rate-per-million=\"\": Number of samples to collect per million OpenTelemetry spans. Set to 1000000 to always sample. (default: 0) --uid-mappings=\"\": Specify the UID mappings to use for the user namespace. This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the"
},
{
"data": "--version, -v: print the version --version-file=\"\": Location for CRI-O to lay down the temporary version file. It is used to check if crio wipe should wipe containers, which should always happen on a node reboot. (default: \"/var/run/crio/version\") --version-file-persist=\"\": Location for CRI-O to lay down the persistent version file. It is used to check if crio wipe should wipe images, which should only happen when CRI-O has been upgraded. Generate bash, fish or zsh completions. Generate the man page documentation. Generate the markdown documentation. --help, -h: show help Shows a list of commands or help for one command Outputs a commented version of the configuration file that could be used by CRI-O. This allows you to save you current configuration setup and then load it later with --config. Global options will modify the output. --default: Output the default configuration (without taking into account any configuration options). --migrate-defaults, -m=\"\": Migrate the default config from a specified version. The migrate-defaults command has been deprecated and will be removed in the future. To run a config migration, just select the input config via the global '--config,-c' command line argument, for example: ``` crio -c /etc/crio/crio.conf.d/00-default.conf config -m 1.17 ``` The migration will print converted configuration options to stderr and will output the resulting configuration to stdout. Please note that the migration will overwrite any fields that have changed defaults between versions. To save a custom configuration change, it should be in a drop-in configuration file instead. Possible values: \"1.17\" (default: \"1.17\") display detailed version information --json, -j: print JSON instead of text --verbose, -v: print verbose information (for example all golang dependencies) wipe CRI-O's container and image storage --force, -f: force wipe by skipping the version check Display status information --socket, -s=\"\": absolute path to the unix socket (default: \"/var/run/crio/crio.sock\") Show the configuration of CRI-O as a TOML string. Display detailed information about the provided container ID. --id, -i=\"\": the container ID Retrieve generic information about CRI-O, such as the cgroup and storage driver. Shows a list of commands or help for one command crio.conf (/etc/crio/crio.conf) cri-o configuration file for all of the available command-line options for the crio(8) program, but in a TOML format that can be more easily modified and versioned. policy.json (/etc/containers/policy.json) Signature verification policy files are used to specify policy, e.g. trusted keys, applicable when deciding whether to accept an image, or individual signatures of that image, as valid. registries.conf (/etc/containers/registries.conf) Registry configuration file specifies registries which are consulted when completing image names that do not include a registry or domain portion. storage.conf (/etc/containers/storage.conf) Storage configuration file specifies all of the available container storage options for tools using shared container storage. All command-line options may also be specified as environment variables. The options detailed in this section, however, can only be set via environment variables. KUBENSMNT: Path to a bind-mounted mount namespace that CRI-O should join before launching any containers. 
If the path does not exist, or does not point to a mount namespace bindmount, CRI-O will run in its parent's mount namespace and log a warning that the requested namespace was not joined. crio.conf(5), crio.conf.d(5), oci-hooks(5), policy.json(5), registries.conf(5), storage.conf(5)"
}
] |
{
"category": "Runtime",
"file_name": "crio.8.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "(metrics)= <!-- Include start metrics intro --> Incus collects metrics for all running instances as well as some internal metrics. These metrics cover the CPU, memory, network, disk and process usage. They are meant to be consumed by Prometheus, and you can use Grafana to display the metrics as graphs. See {ref}`provided-metrics` for lists of available metrics. <!-- Include end metrics intro --> In a cluster environment, Incus returns only the values for instances running on the server that is being accessed. Therefore, you must scrape each cluster member separately. The instance metrics are updated when calling the `/1.0/metrics` endpoint. To handle multiple scrapers, they are cached for 8 seconds. Fetching metrics is a relatively expensive operation for Incus to perform, so if the impact is too high, consider scraping at a higher than default interval. To view the raw data that Incus collects, use the command to query the `/1.0/metrics` endpoint: ```{terminal} :input: incus query /1.0/metrics incuscpuseconds_total{cpu=\"0\",mode=\"system\",name=\"u1\",project=\"default\",type=\"container\"} 60.304517 incuscpuseconds_total{cpu=\"0\",mode=\"user\",name=\"u1\",project=\"default\",type=\"container\"} 145.647502 incuscpuseconds_total{cpu=\"0\",mode=\"iowait\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 4614.78 incuscpuseconds_total{cpu=\"0\",mode=\"irq\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 0 incuscpuseconds_total{cpu=\"0\",mode=\"idle\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 412762 incuscpuseconds_total{cpu=\"0\",mode=\"nice\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 35.06 incuscpuseconds_total{cpu=\"0\",mode=\"softirq\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 2.41 incuscpuseconds_total{cpu=\"0\",mode=\"steal\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 9.84 incuscpuseconds_total{cpu=\"0\",mode=\"system\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 340.84 incuscpuseconds_total{cpu=\"0\",mode=\"user\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 261.25 incuscpueffective_total{name=\"u1\",project=\"default\",type=\"container\"} 4 incuscpueffective_total{name=\"vm\",project=\"default\",type=\"virtual-machine\"} 0 incusdiskreadbytestotal{device=\"loop5\",name=\"u1\",project=\"default\",type=\"container\"} 2048 incusdiskreadbytestotal{device=\"loop3\",name=\"vm\",project=\"default\",type=\"virtual-machine\"} 353280 ... ``` To gather and store the raw metrics, you should set up . You can then configure it to scrape the metrics through the metrics API endpoint. To expose the `/1.0/metrics` API endpoint, you must set the address on which it should be available. To do so, you can set either the {config:option}`server-core:core.metricsaddress` server configuration option or the {config:option}`server-core:core.httpsaddress` server configuration option. The `core.metricsaddress` option is intended for metrics only, while the `core.httpsaddress` option exposes the full API. So if you want to use a different address for the metrics API than for the full API, or if you want to expose only the metrics endpoint but not the full API, you should set the `core.metrics_address` option. 
For example, to expose the full API on the `8443` port, enter the following command: incus config set core.https_address \":8443\" To expose only the metrics API endpoint on the `8444` port, enter the following command: incus config set core.metrics_address \":8444\" To expose only the metrics API endpoint on a specific IP address and port, enter a command similar to the following: incus config set core.metrics_address \"192.0.2.101:8444\" Authentication for the `/1.0/metrics` API endpoint is done through a metrics certificate. A metrics certificate (type `metrics`) is different from a client certificate (type `client`) in that it is meant for metrics only and doesn't work for interaction with instances or any other Incus entities. To create a certificate, enter the following command: openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:secp384r1 -sha384 -keyout metrics.key -nodes -out metrics.crt -days 3650 -subj \"/CN=metrics.local\" ```{note} The command requires OpenSSL version 1.1.0 or later. ``` Then add this certificate to the list of trusted clients, specifying the type as `metrics`: incus config trust add-certificate metrics.crt --type=metrics If requiring TLS client authentication isn't possible in your environment, the `/1.0/metrics` API endpoint can be made available to unauthenticated clients. While not recommended, this might be acceptable if you have other controls in place to restrict who can reach that API endpoint. To disable the authentication on the metrics API: ```bash incus config set core.metrics_authentication false ``` If you run Prometheus on a different machine than your Incus server, you must copy the required certificates to the Prometheus machine: The metrics certificate (`metrics.crt`) and key (`metrics.key`) that you created The Incus server certificate (`server.crt`) located in `/var/lib/incus/` Copy these files into a `tls` directory that is accessible to Prometheus, for example,"
},
{
"data": "See the following example commands: ```bash mkdir /etc/prometheus/tls/ cp metrics.crt metrics.key /etc/prometheus/tls/ cp /var/lib/incus/server.crt /etc/prometheus/tls/ chown -R prometheus:prometheus /etc/prometheus/tls ``` Finally, you must add Incus as a target to the Prometheus configuration. To do so, edit `/etc/prometheus/prometheus.yaml` and add a job for Incus. Here's what the configuration needs to look like: ```yaml global: scrape_interval: 15s scrape_configs: job_name: incus metrics_path: '/1.0/metrics' scheme: 'https' static_configs: targets: ['foo.example.com:8443'] tls_config: ca_file: 'tls/server.crt' cert_file: 'tls/metrics.crt' key_file: 'tls/metrics.key' server_name: 'foo' ``` ````{note} The `scrape_interval` is assumed to be 15s by the Grafana Prometheus data source by default. If you decide to use a different `scrape_interval` value, you must change it in both the Prometheus configuration and the Grafana Prometheus data source configuration. Otherwise the Grafana `$rate_interval` value will be calculated incorrectly and possibly cause a `no data` response in queries using it. The `server_name` must be specified if the Incus server certificate does not contain the same host name as used in the `targets` list. To verify this, open `server.crt` and check the Subject Alternative Name (SAN) section. For example, assume that `server.crt` has the following content: ```{terminal} :input: openssl x509 -noout -text -in /etc/prometheus/tls/server.crt ... X509v3 Subject Alternative Name: DNS:foo, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1 ... ``` Since the Subject Alternative Name (SAN) list doesn't include the host name provided in the `targets` list (`foo.example.com`), you must override the name used for comparison using the `server_name` directive. ```` Here is an example of a `prometheus.yml` configuration where multiple jobs are used to scrape the metrics of multiple Incus servers: ```yaml global: scrape_interval: 15s scrape_configs: job_name: \"incus-hdc\" metrics_path: '/1.0/metrics' params: project: ['jdoe'] scheme: 'https' static_configs: targets: 'abydos.hosts.example.net:8444' 'langara.hosts.example.net:8444' 'orilla.hosts.example.net:8444' tls_config: ca_file: 'tls/abydos.crt' cert_file: 'tls/metrics.crt' key_file: 'tls/metrics.key' server_name: 'abydos' job_name: \"incus-jupiter\" metrics_path: '/1.0/metrics' scheme: 'https' static_configs: targets: ['jupiter.example.com:9101'] tls_config: ca_file: 'tls/jupiter.crt' cert_file: 'tls/metrics.crt' key_file: 'tls/metrics.key' server_name: 'jupiter' job_name: \"incus-mars\" metrics_path: '/1.0/metrics' scheme: 'https' static_configs: targets: ['mars.example.com:9101'] tls_config: ca_file: 'tls/mars.crt' cert_file: 'tls/metrics.crt' key_file: 'tls/metrics.key' server_name: 'mars' job_name: \"incus-saturn\" metrics_path: '/1.0/metrics' scheme: 'https' static_configs: targets: ['saturn.example.com:9101'] tls_config: ca_file: 'tls/saturn.crt' cert_file: 'tls/metrics.crt' key_file: 'tls/metrics.key' server_name: 'saturn' ``` After editing the configuration, restart Prometheus (for example, `systemctl restart prometheus`) to start scraping. To visualize the metrics data, set up . Incus provides a that is configured to display the Incus metrics scraped by Prometheus and log entries from Loki. ```{note} The dashboard requires Grafana 8.4 or later. 
``` See the Grafana documentation for instructions on installing and signing in: Complete the following steps to import the : Configure Prometheus as a data source: Go to {guilabel}`Configuration` > {guilabel}`Data sources`. Click {guilabel}`Add data source`. Select {guilabel}`Prometheus`. In the {guilabel}`URL` field, enter `http://localhost:9090/` if running Prometheus locally. Keep the default configuration for the other fields and click {guilabel}`Save & test`. Configure Loki as a data source: Go to {guilabel}`Configuration` > {guilabel}`Data sources`. Click {guilabel}`Add data source`. Select {guilabel}`Loki`. In the {guilabel}`URL` field, enter `http://localhost:3100/` if running Loki locally. Keep the default configuration for the other fields and click {guilabel}`Save & test`. Import the Incus dashboard: Go to {guilabel}`Dashboards` > {guilabel}`Browse`. Click {guilabel}`New` and select {guilabel}`Import`. In the {guilabel}`Import via grafana.com` field, enter the dashboard ID `19727`. Click {guilabel}`Load`. In the {guilabel}`Incus` drop-down menu, select the Prometheus and Loki data sources that you configured. Click {guilabel}`Import`. You should now see the Incus dashboard. You can select the project and filter by instances. At the bottom of the page, you can see data for each instance. ```{note} For proper operation of the Loki part of the dashboard, you need to ensure that the `instance` field matches the Prometheus job name. You can change the `instance` field through the `loki.instance` configuration key. ```"
}
] |
{
"category": "Runtime",
"file_name": "metrics.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "SAIO is a containerized instance of Openstack Swift object storage. It is running the main services of Swift, designed to provide an endpoint for application developers to test against both the Swift and AWS S3 API. It can also be used when integrating with a CI/CD system. These images are not configured to provide data durability and are not intended for production use. ``` docker pull openstackswift/saio docker run -d -p 8080:8080 openstackswift/saio ``` Example using swift client to target endpoint: ``` swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat ``` Example using s3cmd to test AWS S3: Create config file: ``` [default] access_key = test:tester secret_key = testing host_base = localhost:8080 host_bucket = localhost:8080 use_https = False ``` Test with s3cmd: ``` s3cmd -c s3cfg_saio mb s3://bucket ``` Image tags: `latest` automatically built/published by Zuul, follows master branch. Releases are also tagged in case you want to test against a specific release. Source Code: github.com/openstack/swift Maintained by: Openstack Swift community Feedback/Questions: #openstack-swift on OFTC"
}
] |
{
"category": "Runtime",
"file_name": "dockerhub_description.md",
"project_name": "Swift",
"subcategory": "Cloud Native Storage"
}
|
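The s3cmd test above can also be reproduced from Python. The following is a minimal sketch (not part of the original description) that assumes the SAIO container is running as shown and that the `boto3` package is installed; the endpoint, access key, and secret key are simply the defaults listed above.

```python
import boto3

# Point boto3 at the SAIO container's S3-compatible endpoint (defaults from above).
s3 = boto3.client(
    "s3",
    endpoint_url="http://127.0.0.1:8080",
    aws_access_key_id="test:tester",
    aws_secret_access_key="testing",
)

# Create a bucket, upload an object, and list it back.
s3.create_bucket(Bucket="bucket")
s3.put_object(Bucket="bucket", Key="hello.txt", Body=b"hello from SAIO")
for obj in s3.list_objects_v2(Bucket="bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```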
[
{
"data": "This page demonstrates some of the built-in markdown extensions provided by VitePress. VitePress provides Syntax Highlighting powered by , with additional features like line-highlighting: Input ````md ```js{4} export default { data () { return { msg: 'Highlighted!' } } } ``` ```` Output ```js{4} export default { data () { return { msg: 'Highlighted!' } } } ``` Input ```md ::: info This is an info box. ::: ::: tip This is a tip. ::: ::: warning This is a warning. ::: ::: danger This is a dangerous warning. ::: ::: details This is a details block. ::: ``` Output ::: info This is an info box. ::: ::: tip This is a tip. ::: ::: warning This is a warning. ::: ::: danger This is a dangerous warning. ::: ::: details This is a details block. ::: Check out the documentation for the ."
}
] |
{
"category": "Runtime",
"file_name": "markdown-examples.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: FUSE Mount Options sidebar_position: 5 slug: /fusemountoptions JuiceFS provides several access methods, FUSE is the common one, which is the way to mount the file system locally using the `juicefs mount` command. Users can add FUSE mount options for more granular control. This guide describes the common FUSE mount options for JuiceFS, with two ways to add mount options: Run , and use `-o` to specify multiple options separated by commas. ```bash juicefs mount -d -o allowother,writebackcache sqlite3://myjfs.db ~/jfs ``` When writing `/etc/fstab` items, add FUSE options directly to the `options` field, with multiple options separated by commas. ``` redis://localhost:6379/1 /jfs juicefs netdev,writebackcache 0 0 ``` This option is automatically enabled when JuiceFS is mounted and does not need to be explicitly specified. It will enable the kernel's file access checks, which are performed outside the filesystem. When enabled, both the kernel checks and the file system checks must succeed before further operations. :::tip The kernel performs standard Unix permission checks based on mode bits, UID/GID, and directory entry ownership. ::: By default FUSE only allows access to the user mounting the file system. `allowother` option overrides this behavior to allow access for other users. When mounting JuiceFS using root, `allowother` is automatically assumed (search for `AllowOther` in ). When mounting by non-root users, you'll need to first modify `/etc/fuse.conf` and enable `userallowother`, and then add `allow_other` to the mount command. :::note This mount option requires at least version 3.15 Linux kernel ::: FUSE supports , which means the `write()` syscall can often complete rapidly. It's recommended to enable this mount option when write small data (e.g. 100 bytes) frequently. These two options are used to specify the owner ID and owner group ID of the mount point, but only allow to execute the mount command as root, e.g. `sudo juicefs mount -o userid=100,groupid=100`. This option will output Debug information from the low-level library (`go-fuse`) to `juicefs.log`. :::note This option will output debug information for the low-level library (`go-fuse`) to `juicefs.log`. Note that this option is different from the global `-debug` option for the JuiceFS client, where the former outputs debug information for the `go-fuse` library and the latter outputs debug information for the JuiceFS client. see the documentation . :::"
}
] |
{
"category": "Runtime",
"file_name": "fuse_mount_options.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document describes steps to deploy Antrea in `networkPolicyOnly` mode or `encap` mode to an AWS EKS cluster. In `networkPolicyOnly` mode, Antrea implements NetworkPolicy and other services for an EKS cluster, while Amazon VPC CNI takes care of IPAM and Pod traffic routing across Nodes. Refer to for more information about `networkPolicyOnly` mode. This document assumes you already have an EKS cluster, and have the `KUBECONFIG` environment variable point to the kubeconfig file of that cluster. You can follow to create the cluster. With Antrea >=v0.9.0 release, you should apply `antrea-eks-node-init.yaml` before deploying Antrea. This will restart existing Pods (except those in host network), so that Antrea can also manage them (i.e. enforce NetworkPolicies on them) once it is installed. ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks-node-init.yml ``` To deploy a released version of Antrea, pick a deployment manifest from the . Note that EKS support was added in release 0.5.0, which means you cannot pick a release older than 0.5.0. For any given release `<TAG>` (e.g. `v0.5.0`), you can deploy Antrea as follows: ```bash kubectl apply -f https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-eks.yml ``` To deploy the latest version of Antrea (built from the main branch), use the checked-in : ```bash kubectl apply -f https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-eks.yml ``` Now Antrea should be plugged into the EKS CNI and is ready to enforce NetworkPolicy. In `encap` mode, Antrea acts as the primary CNI of an EKS cluster, and implements all Pod networking functionalities, including IPAM and routing across Nodes. The major benefit of Antrea as the primary CNI is that it can get rid of the Pods per Node limits with Amazon VPC CNI. For example, the default mode of VPC CNI allocates a secondary IP for each Pod, and the maximum number of Pods that can be created on a Node is decided by the maximum number of elastic network interfaces and secondary IPs per interface that can be attached to an EC2 instance type. When Antrea is the primary CNI, Pods are connected to the Antrea overlay network and Pod IPs are allocated from the private CIDRs configured for an EKS cluster, and so the number of Pods per Node is no longer limited by the number of secondary IPs per"
},
{
"data": "Note: as a general limitation when using custom CNIs with EKS, Antrea cannot be installed to the EKS control plane Nodes. As a result, EKS control plane cannot initiate a connection to a Pod in Antrea overlay network, when Antrea runs in `encap` mode, and so applications that require control plane to Pod connections might not work properly. For example, , , or , will not work with `encap` mode on EKS, when the Services are provided by Pods in overlay network. A workaround is to run such Pods in `hostNetwork`. This guide uses `eksctl` to create an EKS cluster, but you can also follow the to create an EKS cluster. `eksctl` can be installed following the . Run the following `eksctl` command to create a cluster named `antrea-eks-cluster`: ```bash eksctl create cluster --name antrea-eks-cluster --without-nodegroup ``` After the command runs successfully, you should be able to access the cluster using `kubectl`, for example: ```bash kubectl get node ``` Note, as the cluster does not have a node group configured yet, no Node will be returned by the command. As Antrea is the primary CNI in `encap` mode, the VPC CNI (`aws-node` DaemonSet) installed with the EKS cluster needs to be deleted: ```bash kubectl -n kube-system delete daemonset aws-node ``` First, download the Antrea deployment yaml. Note that `encap` mode support for EKS was added in release 1.4.0, which means you cannot pick a release older than 1.4.0. For any given release `<TAG>` (e.g. `v1.4.0`), get the Antrea deployment yaml at: ```text https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml ``` To deploy the latest version of Antrea (built from the main branch), get the deployment yaml at: ```text https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml ``` `encap` mode on EKS requires Antrea's built-in Node IPAM feature to be enabled. For information about how to configure Antrea Node IPAM, please refer to . After enabling Antrea Node IPAM in the deployment yaml, deploy Antrea with: ```bash kubectl apply -f antrea.yml ``` For example, you can run the following command to create a node group of two Nodes: ```bash eksctl create nodegroup --cluster antrea-eks-cluster --nodes 2 ``` After the EKS Nodes are successfully created and booted, you can verify that Antrea Controller and Agent Pods are running on the Nodes: ```bash $ kubectl get pods --namespace kube-system -l app=antrea NAME READY STATUS RESTARTS AGE antrea-agent-bpj72 2/2 Running 0 40s antrea-agent-j2sjz 2/2 Running 0 40s antrea-controller-6f7468cbff-5sk4t 1/1 Running 0 43s ```"
}
] |
{
"category": "Runtime",
"file_name": "eks-installation.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "performance/write-behind translator =================================== Basic working -- Write behind is basically a translator to lie to the application that the write-requests are finished, even before it is actually finished. On a regular translator tree without write-behind, control flow is like this: application makes a `write()` system call. VFS ==> FUSE ==> `/dev/fuse`. fuse-bridge initiates a glusterfs `writev()` call. `writev()` is `STACK_WIND()`ed up to client-protocol or storage translator. client-protocol, on receiving reply from server, starts `STACK_UNWIND()` towards the fuse-bridge. On a translator tree with write-behind, control flow is like this: application makes a `write()` system call. VFS ==> FUSE ==> `/dev/fuse`. fuse-bridge initiates a glusterfs `writev()` call. `writev()` is `STACK_WIND()`ed up to write-behind translator. write-behind adds the write buffer to its internal queue and does a `STACK_UNWIND()` towards the fuse-bridge. write call is completed in application's percepective. after `STACK_UNWIND()`ing towards the fuse-bridge, write-behind initiates a fresh writev() call to its child translator, whose replies will be consumed by write-behind itself. Write-behind doesn't cache the write buffer, unless `option flush-behind on` is specified in volume specification file. Windowing With respect to write-behind, each write-buffer has three flags: `stackwound`, `writebehind` and `got_reply`. `stackwound`: if set, indicates that write-behind has initiated `STACKWIND()` towards child translator. `writebehind`: if set, indicates that write-behind has done `STACKUNWIND()` towards fuse-bridge. `gotreply`: if set, indicates that write-behind has received reply from child translator for a `writev()` `STACKWIND()`. a request will be destroyed by write-behind only if this flag is set. Currently pending write requests = aggregate size of requests with writebehind = 1 and gotreply = 0. window size limits the aggregate size of currently pending write requests. once the pending requests' size has reached the window size, write-behind blocks writev() calls from fuse-bridge. Blocking is only from application's perspective. Write-behind does `STACK_WIND()` to child translator straight-away, but hold behind the `STACK_UNWIND()` towards fuse-bridge. `STACK_UNWIND()` is done only once write-behind gets enough replies to accommodate for currently blocked request. Flush behind If `option flush-behind on` is specified in volume specification file, then write-behind sends aggregate write requests to child translator, instead of regular per request `STACK_WIND()`s."
}
] |
{
"category": "Runtime",
"file_name": "write-behind.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "vineyard-ml: Accelerating Data Science Pipelines ================================================ Vineyard has been tightly integrated with the data preprocessing pipelines in widely-adopted machine learning frameworks like PyTorch, TensorFlow, and MXNet. Shared objects in vineyard, e.g., `vineyard::Tensor`, `vineyard::DataFrame`, `vineyard::Table`, etc., can be directly used as the inputs of the training and inference tasks in these frameworks. Examples -- The following examples shows how `DataFrame` in vineyard can be used as the input of Dataset for PyTorch: ```python import os import numpy as np import pandas as pd import torch import vineyard client = vineyard.connect(os.environ['VINEYARDIPCSOCKET']) df = pd.DataFrame({ 'data': vineyard.data.dataframe.NDArrayArray(np.random.rand(1000, 10)), 'label': np.random.rand(1000) }) object_id = client.put(df) from vineyard.contrib.ml.torch import torch_context with torch_context(): ds = client.get(object_id) from vineyard.contrib.ml.torch import datapipe pipe = datapipe(ds) for data, label in pipe: pass ``` The following example shows how to use vineyard to share pytorch modules between processes: ```python import torch import vineyard client = vineyard.connect(os.environ['VINEYARDIPCSOCKET']) class Model(nn.Module): def init(self): super().init() self.conv1 = nn.Conv2d(1, 20, 5) self.conv2 = nn.Conv2d(20, 20, 5) def forward(self, x): x = F.relu(self.conv1(x)) return F.relu(self.conv2(x)) model = Model() from vineyard.contrib.ml.torch import torch_context with torch_context(): object_id = client.put(model) model = Model() with torch_context(): statedict = client.get(objectid) model.loadstatedict(state_dict, assign=True) ``` Reference and Implementation : including PyTorch datasets, torcharrow and torchdata. For more details about vineyard itself, please refer to the project."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Vineyard",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "An Intel Graphics device can be passed to a Kata Containers container using GPU passthrough (Intel GVT-d) as well as GPU mediated passthrough (Intel GVT-g). Intel GVT-d (one VM to one physical GPU) also named as Intel-Graphics-Device passthrough feature is one flavor of graphics virtualization approach. This flavor allows direct assignment of an entire GPU to a single user, passing the native driver capabilities through the hypervisor without any limitations. Intel GVT-g (multiple VMs to one physical GPU) is a full GPU virtualization solution with mediated pass-through.<br/> A virtual GPU instance is maintained for each VM, with part of performance critical resources, directly assigned. The ability to run a native graphics driver inside a VM without hypervisor intervention in performance critical paths, achieves a good balance among performance, feature, and sharing capability. | Technology | Description | Behaviour | Detail | |-|-|-|-| | Intel GVT-d | GPU passthrough | Physical GPU assigned to a single VM | Direct GPU assignment to VM without limitation | | Intel GVT-g | GPU sharing | Physical GPU shared by multiple VMs | Mediated passthrough | For client platforms, 5th generation Intel Core Processor Graphics or higher are required. For server platforms, E3_v4 or higher Xeon Processor Graphics are required. The following steps outline the workflow for using an Intel Graphics device with Kata. The following configurations need to be enabled on your host kernel: ``` CONFIGVFIOIOMMU_TYPE1=m CONFIG_VFIO=m CONFIGVFIOPCI=m CONFIGVFIOMDEV=m CONFIGVFIOMDEV_DEVICE=m CONFIGDRMI915_GVT=m CONFIGDRMI915GVTKVMGT=m ``` Your host kernel needs to be booted with `intel_iommu=on` on the kernel command line. To use this feature, you need Kata version 1.3.0 or above. Follow the to install the latest version of Kata. In order to pass a GPU to a Kata Container, you need to enable the `hotplugvfioonrootbus` configuration in the Kata `configuration.toml` file as shown below. ``` $ sudo sed -i -e 's/^# \\(hotplug_vfio_on_root_bus\\).=.*$/\\1 = true/g' /usr/share/defaults/kata-containers/configuration.toml ``` Make sure you are using the `q35` machine type by verifying `machine_type = \"q35\"` is set in the `configuration.toml`. Make sure `pcierootport` is set to a positive value. The default guest kernel installed with Kata Containers does not provide GPU support. To use an Intel GPU with Kata Containers, you need to build a kernel with the necessary GPU support. The following i915 kernel config options need to be enabled: ``` CONFIG_DRM=y CONFIGDRMI915=y CONFIGDRMI915_USERPTR=y ``` Build the Kata Containers kernel with the previous config options, using the instructions described in . For further details on building and installing guest kernels, see"
},
{
"data": "There is an easy way to build a guest kernel that supports Intel GPU: ``` $ ./build-kernel.sh -g intel -f setup $ ./build-kernel.sh -g intel build $ sudo -E ./build-kernel.sh -g intel install /usr/share/kata-containers/vmlinux-intel-gpu.container -> vmlinux-5.4.15-70-intel-gpu /usr/share/kata-containers/vmlinuz-intel-gpu.container -> vmlinuz-5.4.15-70-intel-gpu ``` Before using the new guest kernel, please update the `kernel` parameters in `configuration.toml`. ``` kernel = \"/usr/share/kata-containers/vmlinuz-intel-gpu.container\" ``` Use the following steps to pass an Intel Graphics device in GVT-d mode with Kata: Find the Bus-Device-Function (BDF) for GPU device: ``` $ sudo lspci -nn -D | grep Graphics 0000:00:02.0 VGA compatible controller [0300]: Intel Corporation Broadwell-U Integrated Graphics [8086:1616] (rev 09) ``` Run the previous command to determine the BDF for the GPU device on host.<br/> From the previous output, PCI address `0000:00:02.0` is assigned to the hardware GPU device.<br/> This BDF is used later to unbind the GPU device from the host.<br/> \"8086 1616\" is the device ID of the hardware GPU device. It is used later to rebind the GPU device to `vfio-pci` driver. Find the IOMMU group for the GPU device: ``` $ BDF=\"0000:00:02.0\" $ readlink -e /sys/bus/pci/devices/$BDF/iommu_group /sys/kernel/iommu_groups/1 ``` The previous output shows that the GPU belongs to IOMMU group 1. Unbind the GPU: ``` $ echo $BDF | sudo tee /sys/bus/pci/devices/$BDF/driver/unbind ``` Bind the GPU to the `vfio-pci` device driver: ``` $ sudo modprobe vfio-pci $ echo 8086 1616 | sudo tee /sys/bus/pci/drivers/vfio-pci/new_id $ echo $BDF | sudo tee --append /sys/bus/pci/drivers/vfio-pci/bind ``` After you run the previous commands, the GPU is bound to `vfio-pci` driver.<br/> A new directory with the IOMMU group number is created under `/dev/vfio`: ``` $ ls -l /dev/vfio total 0 crw- 1 root root 241, 0 May 18 15:38 1 crw-rw-rw- 1 root root 10, 196 May 18 15:37 vfio ``` Start a Kata container with GPU device: ``` $ sudo docker run -it --runtime=kata-runtime --rm --device /dev/vfio/1 -v /dev:/dev debian /bin/bash ``` Run `lspci` within the container to verify the GPU device is seen in the list of the PCI devices. Note the vendor-device id of the GPU (\"8086:1616\") in the `lspci` output. ``` $ lspci -nn -D 0000:00:00.0 Class [0600]: Device [8086:1237] (rev 02) 0000:00:01.0 Class [0601]: Device [8086:7000] 0000:00:01.1 Class [0101]: Device [8086:7010] 0000:00:01.3 Class [0680]: Device [8086:7113] (rev 03) 0000:00:02.0 Class [0604]: Device [1b36:0001] 0000:00:03.0 Class [0780]: Device [1af4:1003] 0000:00:04.0 Class [0100]: Device [1af4:1004] 0000:00:05.0 Class [0002]: Device [1af4:1009] 0000:00:06.0 Class [0200]: Device [1af4:1000] 0000:00:0f.0 Class [0300]: Device [8086:1616] (rev 09) ``` Additionally, you can access the device node for the graphics device: ``` $ ls /dev/dri card0 renderD128 ``` For GVT-g, you append"
},
{
"data": "in addition to `inteliommu=on` on your host kernel command line and then reboot your host. Use the following steps to pass an Intel Graphics device in GVT-g mode to a Kata Container: Find the BDF for GPU device: ``` $ sudo lspci -nn -D | grep Graphics 0000:00:02.0 VGA compatible controller [0300]: Intel Corporation Broadwell-U Integrated Graphics [8086:1616] (rev 09) ``` Run the previous command to find out the BDF for the GPU device on host. The previous output shows PCI address \"0000:00:02.0\" is assigned to the GPU device. Choose the MDEV (Mediated Device) type for VGPU (Virtual GPU): For background on `mdev` types, please follow this . List out the `mdev` types for the VGPU: ``` $ BDF=\"0000:00:02.0\" $ ls /sys/devices/pci0000:00/$BDF/mdevsupportedtypes i915-GVTgV41 i915-GVTgV42 i915-GVTgV44 i915-GVTgV48 ``` Inspect the `mdev` types and choose one that fits your requirement: ``` $ cd /sys/devices/pci0000:00/0000:00:02.0/mdevsupportedtypes/i915-GVTgV48 && ls availableinstances create description deviceapi devices $ cat description lowgmsize: 64MB highgmsize: 384MB fence: 4 resolution: 1024x768 weight: 2 $ cat available_instances 7 ``` The output of file `description` represents the GPU resources that are assigned to the VGPU with specified MDEV type.The output of file `available_instances` represents the remaining amount of VGPUs you can create with specified MDEV type. Create a VGPU: Generate a UUID: ``` $ gpu_uuid=$(uuid) ``` Write the UUID to the `create` file under the chosen `mdev` type: ``` $ echo $(gpuuuid) | sudo tee /sys/devices/pci0000:00/0000:00:02.0/mdevsupportedtypes/i915-GVTgV4_8/create ``` Find the IOMMU group for the VGPU: ``` $ ls -la /sys/devices/pci0000:00/0000:00:02.0/mdevsupportedtypes/i915-GVTgV48/devices/${gpuuuid}/iommugroup lrwxrwxrwx 1 root root 0 May 18 14:35 devices/bbc4aafe-5807-11e8-a43e-03533cceae7d/iommugroup -> ../../../../kernel/iommugroups/0 $ ls -l /dev/vfio total 0 crw- 1 root root 241, 0 May 18 11:30 0 crw-rw-rw- 1 root root 10, 196 May 18 11:29 vfio ``` The IOMMU group \"0\" is created from the previous output.<br/> Now you can use the device node `/dev/vfio/0` in docker command line to pass the VGPU to a Kata Container. Start Kata container with GPU device enabled: ``` $ sudo docker run -it --runtime=kata-runtime --rm --device /dev/vfio/0 -v /dev:/dev debian /bin/bash $ lspci -nn -D 0000:00:00.0 Class [0600]: Device [8086:1237] (rev 02) 0000:00:01.0 Class [0601]: Device [8086:7000] 0000:00:01.1 Class [0101]: Device [8086:7010] 0000:00:01.3 Class [0680]: Device [8086:7113] (rev 03) 0000:00:02.0 Class [0604]: Device [1b36:0001] 0000:00:03.0 Class [0780]: Device [1af4:1003] 0000:00:04.0 Class [0100]: Device [1af4:1004] 0000:00:05.0 Class [0002]: Device [1af4:1009] 0000:00:06.0 Class [0200]: Device [1af4:1000] 0000:00:0f.0 Class [0300]: Device [8086:1616] (rev 09) ``` BDF \"0000:00:0f.0\" is assigned to the VGPU device. Additionally, you can access the device node for the graphics device: ``` $ ls /dev/dri card0 renderD128 ```"
}
] |
{
"category": "Runtime",
"file_name": "Intel-GPU-passthrough-and-Kata.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: \"Upgrading to Velero 1.2\" layout: docs Velero or installed. Note: if you're upgrading from v1.0, follow the instructions first. Install the Velero v1.2 command-line interface (CLI) by following the . Verify that you've properly installed it by running: ```bash velero version --client-only ``` You should see the following output: ```bash Client: Version: v1.2.0 Git commit: <git SHA> ``` Scale down the existing Velero deployment: ```bash kubectl scale deployment/velero \\ --namespace velero \\ --replicas 0 ``` Update the container image used by the Velero deployment and, optionally, the restic daemon set: ```bash kubectl set image deployment/velero \\ velero=velero/velero:v1.2.0 \\ --namespace velero kubectl set image daemonset/restic \\ restic=velero/velero:v1.2.0 \\ --namespace velero ``` If using AWS, Azure, or GCP, add the respective plugin to your Velero deployment: For AWS: ```bash velero plugin add velero/velero-plugin-for-aws:v1.0.0 ``` For Azure: ```bash velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.0 ``` For GCP: ```bash velero plugin add velero/velero-plugin-for-gcp:v1.0.0 ``` Update the Velero custom resource definitions (CRDs) to include the structural schemas: ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` Scale back up the existing Velero deployment: ```bash kubectl scale deployment/velero \\ --namespace velero \\ --replicas 1 ``` Confirm that the deployment is up and running with the correct version by running: ```bash velero version ``` You should see the following output: ```bash Client: Version: v1.2.0 Git commit: <git SHA> Server: Version: v1.2.0 ```"
}
] |
{
"category": "Runtime",
"file_name": "upgrade-to-1.2.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Gluster library libglusterfs.so provides message logging abstractions that are intended to be used across all code/components within gluster. There could be potentially 2 major cases how the logging infrastructure is used, A new gluster service daemon or end point is created The service daemon infrastructure itself initlializes the logging infrastructure (i.e calling gfloginit and related set functions) See, glusterfsd.c:logging_init Alternatively there could be a case where an end point service (say gfapi) may need to do the required initialization This document does not (yet?) cover guidelines for these cases. Best bet would be to look at code in glusterfsd.c:logging_init (or equivalent) in case a need arises and you reach this document. A new xlator or subcomponent is written as a part of the stack Primarily in this case, the consumer of the logging APIs would only invoke an API to log a particular message at a certain severity This document elaborates on this use of the message logging framework in this context There are 3 interfaces provided to log messages: GF_LOG* structured message interface All new messages should be defined using this interface. More details about it in the next section. gf_msg* interface This interface is deprecated now. New log messages should use the new structured interface. gf_log* interface This interface was deprecated long ago and it must not be used anymore. This interface is designed to be easy to use, flexible and consistent. The main advantages are: Centralized message definition All messages are defined in a unique location. If a message text needs to be updated, only one place has to be changed, even if the same log message is used in many places. Customizable list of additional data per message Each message can contain a list of additional info that will be logged as part of the message itself. This extra data is: Declared once It's defined as part of the centralized message definition itself Typed Each value has a type that is checked by the C compiler at build time to ensure correctness. Enforced Each extra data field needs to be specified when a message of that type is logged. If the fields passed when a message is logged doesn't match the definition, the compiler will generate an error. This way it's easy to identify all places where a message has been used and update them. Better uniformity in data type representation Each data types are represented in the same way in all messages, increasing the consistency of the logs. Compile-time generation of messages The text and the extra data is formatted at compile time to reduce run time cost. All argument preparation is done only if the message will be logged Data types that need some preprocessing to be logged, are not computed until we are sure that the message needs to be logged based on the current log level. Very easy to use Definition of messages and its utilization is quite simple. There are some predefined types, but it's easy to create new data types if needed. Code auto-completion friendly Once a message is defined, logging it is very simple when an IDE with code auto-completion is used. The code auto-completion will help to find the name of the message and the list of arguments it"
},
{
"data": "All extra overhead is optimally optimized by gcc/clang The additional code and structures required to make all this possible are easily optimized by compilers, so resulting code is equivalent to directly logging the message. All messages at log level INFO or above need to be declared inside a header file. They will be assigned a unique identifier that will appear in the logs so that specific messages can be easily located even if the text description changes. For DEBUG and TRACE messages, we don't assign a unique identifier to them and the message is defined in-place where it's used with a very similar format. If a new xlator or component is created that requires some messages, the first thing to do is to reserve a component ID in file glusterfs/glfs-message-id.h. This is done by adding a new `GLFSMSGIDCOMP()` entry at the end of the `enum msgidcomp`. A unique name and a number of blocks to reserve must be specified (each block can contain up to 1000 messages). Example: ```c GLFSMSGIDCOMP(EXAMPLE, 1), / new segments for messages goes above this line / GLFSMSGIDEND ``` Once created, a copy of glusterfs/template-component-messages.h can be used as a starting point for the messages of the new component. Check the comments of that file for more information, but basically you need to use the macro `GLFS_COMPONENT()` before starting defining the messages. Example: ```c GLFS_COMPONENT(EXAMPLE); ``` Each message is automatically assigned a unique sequential number and it should remain the same once created. This means that you must create new messages at the end of the file, after any other message. This way the newly created message will take the next free sequential id, without touching any previously assigned id. To define a message, the macro `GLFS_NEW()` must be used. It requires four mandatory arguments: The name of the component. This is the one created in the previous section. The name of the message. This is the name to use when you want to log the message. The text associated to the message. This must be a fixed string without any formatting. The number of extra data fields to include to the message. If there are extra data fields, for each field you must add field definition inside the macro. For debug and trace logs, messages are not predefined. Wherever a these messages are used, the definition of the message itself is used instead of the name of the message. Each field consists of five arguments, written between parenthesis: Data type This is a regular C type that will be used to manipulate the data. It can be anything valid. Field name This is the name that will be used to reference the data and to show it in the log message. It must be a valid C identifier. Data source This is only used for in-place messages. It's a simple piece of code to access the data. It can be just a variable name or something a bit more complex like a structure access or even a function call returning a value. Format string This is a string representing the way in which this data will be shown in the log. It can be something as simple as '%u' or a bit more elaborated like '%d (%s)', depending on how we want to show something. Format data This must be a list of expressions to generate each of the arguments needed for the format"
},
{
"data": "In most cases this will be just the name of the field, but it could be something else if the data needs to be processed. Preparation code This is optional. If present it must contain any additional variable definition and code to prepare the format data. Examples for message definitions: ```c (uint32_t, value, , \"%u\", (value)) ``` ```c (int32_t, error, , \"%d (%s)\", (error, strerror(error))) ``` ```c (uuidt *, gfid, , \"%s\", (gfidstr), char gfidstr[48]; uuidunparse(*gfid, gfid_str)) ``` Examples for in-place messages: ```c (uint32_t, value, data->count, \"%u\", (value)) ``` ```c (int32_t, error, errno, \"%d (%s)\", (error, strerror(error))) ``` ```c (uuidt *, gfid, &inode->gfid, \"%s\", (gfidstr), char gfidstr[48]; uuidunparse(*gfid, gfid_str)) ``` Some macros are available to declare typical data types and make them easier to use: Signed integers: `GLFS_INT(name [, src])` Unsigned integers: `GLFS_UINT(name [, src])` Errors: Positive errors: `GLFS_ERR(name [, src])` Negative errors: `GLFS_RES(name [, src])` Strings: `GLFS_STR(name [, src])` UUIDs: `GLFS_UUID(name [, src])` Pointers: `GLFS_PTR(name [, src])` The `src` argument is only used for in-place messages. This is a full example that defines a new message using the previous macros: ```c GLFSNEW(EXAMPLE, MSGTEST, \"This is a test message\", 3, GLFS_UINT(number), GLFS_STR(name), GLFS_ERR(error) ) ``` This will generate a log message with the following format: ```c \"This is a test message <{number=%u}, {name='%s'}, {error=%d (%s)}>\" ``` Once a message is defined, it can be logged using the following macros: `GFLOGC()`: log a critical message `GFLOGE()`: log an error message `GFLOGW()`: log a warning message `GFLOGI()`: log an info message `GFLOGD()`: log a debug message `GFLOGT()`: log a trace message All macros receive a string, representing the domain of the log message. For INFO or higher messages, the name of the messages is passed, including all additional data between parenthesis. In case of DEBUG and TRACE messages, a message definition follows. Example: ```c GFLOGI(this->name, MSG_TEST(10, \"something\", ENOENT)); ``` The resulting logging message would be similar to this: ```c \"This is a test message <{number=10}, {name='something'}, {error=2 (File not found)}>\" ``` A similar example with a debug message: ```c GFLOGD(this->name, \"Debug message\", GLFS_UINT(number, data->value), GLFS_STR(name), GLFSERR(error, operrno) ); Note that if the field name matches the source of the data as in the case of the second field, the source argument can be omitted. Given the amount of existing messages, it's not feasible to migrate all of them at once, so a special macro is provided to allow incremental migration of existing log messages. Migrate header file The first step is to update the header file where all message IDs are defined. Initialize the component You need to add the `GLFS_COMPONENT()` macro at the beginning with the appropriate component name. This name can be found in the first argument of the existing `GLFS_MSGID()` macro. Replace message definitions All existing messages inside `GLFS_MSGID()` need to be converted to: ```c GLFS_MIG(component, id, \"\", 0) ``` Where `component` is the name of the component used in `GLFS_COMPONENT()`, and `id` is each of the existing IDs inside `GLFS_MSGID()`. This step will use the new way of defining messages, but is compatible with the old logging interface, so once this is done, the code should compile fine. 
Migrate a message It's possible to migrate the messages one by one without breaking anything. For each message to migrate: Choose one message. Replace `GLFS_MIG` by `GLFS_NEW`. Add a meaningful message text as the third argument. Update the number of fields if necessary. Add the required field definition. Look for each instance of the log message in the code. Replace the existing log macro by one of the `GF_LOG_*()` macros."
}
] |
{
"category": "Runtime",
"file_name": "logging-guidelines.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "sidebar_label: Integrations sidebar_position: 2 slug: /integrations team contributed . team participated in the development of JuiceFSRuntime cache engine, please refer to . team has integrated JuiceFS into , please refer to . Build a distributed cluster based on JuiceFS, the Milvus team wrote a and . that is a OLAP engine could deploy with the JuiceFS in dissaggregated storage and compute architecture on every public cloud platform, there is (in Chinese) and for this use case. supports JuiceFS since v0.10.0, you can refer to to learn how to configure JuiceFS. by Toowoxx IT GmbH, an IT service company from Germany"
}
] |
{
"category": "Runtime",
"file_name": "integrations.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List PCAP recorder entries ``` cilium-dbg bpf recorder list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - PCAP recorder"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_recorder_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "vminfo <SUBCOMMAND> [OPTIONS] `vminfo(8)` is a tool to interface with the `vminfod` service on a machine using the exposed HTTP interface. It can be used for administrators to determine service level health and status. help Print a help message and exit. ping Ping the server (GET /ping) and return the output with the appropriate exit status code set. status [-j] [-f] Show server status (GET /status). Supply `-f` for \"full\" output (more internal details about vminfod's state) and `-j` for JSON output. vms Return a JSON array of all VMs known by vminfod (GET /vms). To use this programatically prefer `vmadm lookp -j`. vm [uuid] Return a JSON object for the VM uuid given known by vminfod (GET /vms/:uuid). To use this programatically prefer `vmadm get :uuid`. events Connect to the events stream (GET /events) and print events as they come in. To use this programatically prefer `vmadm events`. `vminfo ping` Check if the service is up. `vminfo status -f` Print full status. This tool should be used for interactive output only, and is not meant to provide a stable interface to use for vminfod. If you are trying to interface with `vminfod` use the `vmadm(8)` command (especially `vmadm events`) and, for internal platform code, the `vminfod/client` Node.js library."
}
] |
{
"category": "Runtime",
"file_name": "vminfo.8.md",
"project_name": "SmartOS",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: How to Set Up Object Storage sidebar_position: 3 description: This article introduces the object storages supported by JuiceFS and how to configure and use it. import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; As you can learn from , JuiceFS is a distributed file system with data and metadata stored separately. JuiceFS uses object storage as the main data storage and uses databases such as Redis, PostgreSQL and MySQL as metadata storage. When creating a JuiceFS file system, there are following options to set up the storage: `--storage`: Specify the type of storage to be used by the file system, e.g. `--storage s3` `--bucket`: Specify the storage access address, e.g. `--bucket https://myjuicefs.s3.us-east-2.amazonaws.com` `--access-key` and `--secret-key`: Specify the authentication information when accessing the storage For example, the following command uses Amazon S3 object storage to create a file system: ```shell juicefs format --storage s3 \\ --bucket https://myjuicefs.s3.us-east-2.amazonaws.com \\ --access-key abcdefghijklmn \\ --secret-key nmlkjihgfedAcBdEfg \\ redis://192.168.1.6/1 \\ myjfs ``` When executing the `juicefs format` or `juicefs mount` command, you can set some special options in the form of URL parameters in the `--bucket` option, such as `tls-insecure-skip-verify=true` in `https://myjuicefs.s3.us-east-2.amazonaws.com?tls-insecure-skip-verify=true` is to skip the certificate verification of HTTPS requests. When creating a file system, multiple buckets can be defined as the underlying storage of the file system through the option. In this way, the system will distribute the files to multiple buckets based on the hashed value of the file name. Data sharding technology can distribute the load of concurrent writing of large-scale data to multiple buckets, thereby improving the writing performance. The following are points to note when using the data sharding function: The `--shards` option accepts an integer between 0 and 256, indicating how many Buckets the files will be scattered into. The default value is 0, indicating that the data sharding function is not enabled. Only multiple buckets under the same object storage can be used. The integer wildcard `%d` needs to be used to specify the buckets, for example, `\"http://192.168.1.18:9000/myjfs-%d\"`. Buckets can be created in advance in this format, or automatically created by the JuiceFS client when creating a file system. The data sharding is set at the time of creation and cannot be modified after creation. You cannot increase or decrease the number of buckets, nor cancel the shards function. For example, the following command creates a file system with 4 shards. ```shell juicefs format --storage s3 \\ --shards 4 \\ --bucket \"https://myjfs-%d.s3.us-east-2.amazonaws.com\" \\ ... ``` After executing the above command, the JuiceFS client will create 4 buckets named `myjfs-0`, `myjfs-1`, `myjfs-2`, and `myjfs-3`. In general, object storages are authenticated with Access Key ID and Access Key Secret. For JuiceFS file system, they are provided by options `--access-key` and `--secret-key` (or AK, SK for short). 
It is more secure to pass credentials via the environment variables `ACCESS_KEY` and `SECRET_KEY` instead of explicitly specifying the options `--access-key` and `--secret-key` on the command line when creating a file system, e.g., ```shell export ACCESS_KEY=abcdefghijklmn export SECRET_KEY=nmlkjihgfedAcBdEfg juicefs format --storage s3 \\ --bucket https://myjuicefs.s3.us-east-2.amazonaws.com \\ redis://192.168.1.6/1 \\ myjfs ``` Public clouds typically allow users to create IAM (Identity and Access Management) roles, such as or , which can be assigned to VM instances. If the cloud server instance already has read and write access to the object storage, there is no need to specify `--access-key` and `--secret-key`."
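},
{
"data": "As an optional illustration of the data-sharding example above, the four shard buckets can also be created ahead of time instead of letting the JuiceFS client create them. The sketch below assumes the AWS CLI is installed and configured for the same account and region; the `myjfs-0` to `myjfs-3` names follow the `--shards 4` example: ```bash
for i in 0 1 2 3; do
  aws s3 mb s3://myjfs-$i --region us-east-2
done
``` Pre-creating the buckets can be useful when bucket creation is restricted to a separate administrative account."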
},
{
"data": "Permanent access credentials generally have two parts, Access Key, Secret Key, while temporary access credentials generally include three parts, Access Key, Secret Key and token, and temporary access credentials have an expiration time, usually between a few minutes and a few hours. Different cloud vendors have different acquisition methods. Generally, the Access Key, Secret Key and ARN representing the permission boundary of the temporary access credential are required as parameters to request access to the STS server of the cloud service vendor to obtain the temporary access credential. This process can generally be simplified by the SDK provided by the cloud vendor. For example, Amazon S3 can refer to this to obtain temporary credentials, and Alibaba Cloud OSS can refer to this . The way of using temporary credentials is not much different from using permanent credentials. When formatting the file system, pass the Access Key, Secret Key, and token of the temporary credentials through `--access-key`, `--secret-key`, `--session-token` can set the value. E.g: ```bash juicefs format \\ --storage oss \\ --access-key xxxx \\ --secret-key xxxx \\ --session-token xxxx \\ --bucket https://bucketName.oss-cn-hangzhou.aliyuncs.com \\ redis://localhost:6379/1 \\ test1 ``` Since temporary credentials expire quickly, the key is how to update the temporary credentials that JuiceFS uses after `format` the file system before the temporary credentials expire. The credential update process is divided into two steps: Before the temporary certificate expires, apply for a new temporary certificate; Without stopping the running JuiceFS, use the `juicefs config Meta-URL --access-key xxxx --secret-key xxxx --session-token xxxx` command to hot update the access credentials. Newly mounted clients will use the new credentials directly, and all clients already running will also update their credentials within a minute. The entire update process will not affect the running business. Due to the short expiration time of the temporary credentials, the above steps need to be executed in a long-term loop to ensure that the JuiceFS service can access the object storage normally. Typically, object storage services provide a unified URL for access, but the cloud platform usually provides both internal and external endpoints. For example, the platform cloud services that meet the criteria will automatically resolve requests to the internal endpoint of the object storage. This offers you a lower latency, and internal network traffic is free. Some cloud computing platforms also distinguish between internal and public networks, but instead of providing a unified access URL, they provide separate internal Endpoint and public Endpoint addresses. JuiceFS also provides flexible support for this object storage service that distinguishes between internal and public addresses. For scenarios where the same file system is shared, the object storage is accessed through internal Endpoint on the servers that meet the criteria, and other computers are accessed through public Endpoint, which can be used as follows: When creating a file system: It is recommended to use internal Endpoint address for `--bucket` When mounting a file system: For clients that do not satisfy the internal line, you can specify a public Endpoint address to `--bucket`. 
Creating a file system using an internal Endpoint ensures better performance and lower latency, and for clients that cannot be accessed through an internal address, you can specify a public Endpoint to mount with the option `--bucket`. Object storage usually supports multiple storage classes, such as standard storage, infrequent access storage, and archive storage. Different storage classes will have different prices and availability, you can set the default storage class with the option when creating the JuiceFS file system, or set a new storage class with the option when mounting the JuiceFS file"
},
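{
"data": "The refresh workflow described above can be scripted. The sketch below is only an illustration: `get_new_sts_credential` is a placeholder for your own logic that requests fresh STS credentials and prints the Access Key, Secret Key and token; the `juicefs config` invocation is the hot-update command shown above, using the same Meta-URL as the earlier example: ```bash
while true; do
  read -r AK SK TOKEN < <(get_new_sts_credential)
  juicefs config redis://localhost:6379/1 \\
    --access-key $AK --secret-key $SK --session-token $TOKEN
  sleep 1800   # refresh well before the credential expiry time
done
``` Running such a loop (for example under a process supervisor) keeps mounted clients supplied with valid temporary credentials without interrupting the service."
},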
{
"data": "Please refer to the user manual of the object storage you are using to see how to set the value of the `--storage-class` option (such as ). :::note When using certain storage classes (such as archive and deep archive), the data cannot be accessed immediately, and the data needs to be restored in advance and accessed after a period of time. ::: :::note When using certain storage classes (such as infrequent access), there are minimum bill units, and additional charges may be incurred for reading data. Please refer to the user manual of the object storage you are using for details. ::: If the network environment where the client is located is affected by firewall policies or other factors that require access to external object storage services through a proxy, the corresponding proxy settings are different for different operating systems. Please refer to the corresponding user manual for settings. On Linux, for example, the proxy can be set by creating `httpproxy` and `httpsproxy` environment variables. ```shell export http_proxy=http://localhost:8035/ export https_proxy=http://localhost:8035/ juicefs format \\ --storage s3 \\ ... \\ myjfs ``` If you wish to use a storage system that is not listed, feel free to submit a requirement . | Name | Value | |:--:|:-:| | | `s3` | | | `gs` | | | `wasb` | | | `b2` | | | `ibmcos` | | | `s3` | | | `scw` | | | `space` | | | `wasabi` | | | `s3` | | | `s3` | | | `s3` | | | `s3` | | | `bunny` | | | `oss` | | | `cos` | | | `obs` | | | `bos` | | | `tos` | | | `ks3` | | | `qingstor` | | | `qiniu` | | | `scs` | | | `oos` | | | `eos` | | | `s3` | | | `ufile` | | | `ceph` | | | `s3` | | | `gluster` | | | `swift` | | | `minio` | | | `webdav` | | | `hdfs` | | | `s3` | | | `redis` | | | `tikv` | | | `etcd` | | | `sqlite3` | | | `mysql` | | | `postgres` | | | `file` | | | `sftp` | S3 supports : virtual hosted-style and path-style. The difference is: Virtual-hosted-style: `https://<bucket>.s3.<region>.amazonaws.com` Path-style: `https://s3.<region>.amazonaws.com/<bucket>` The `<region>` should be replaced with specific region code, e.g. the region code of US East (N. Virginia) is `us-east-1`. All the available region codes can be found . :::note For AWS users in China, you need add `.cn` to the host, i.e. `amazonaws.com.cn`, and check for region code. ::: :::note If the S3 bucket has public access (anonymous access is supported), please set `--access-key` to `anonymous`. ::: In JuiceFS both the two styles are supported to specify the bucket address, for example: <Tabs groupId=\"amazon-s3-endpoint\"> <TabItem value=\"virtual-hosted-style\" label=\"Virtual-hosted-style\"> ```bash juicefs format \\ --storage s3 \\ --bucket https://<bucket>.s3.<region>.amazonaws.com \\ ... \\ myjfs ``` </TabItem> <TabItem value=\"path-style\" label=\"Path-style\"> ```bash juicefs format \\ --storage s3 \\ --bucket https://s3.<region>.amazonaws.com/<bucket> \\ ... \\ myjfs ``` </TabItem> </Tabs> You can also set `--storage` to `s3` to connect to S3-compatible object storage, e.g.: <Tabs groupId=\"amazon-s3-endpoint\"> <TabItem value=\"virtual-hosted-style\" label=\"Virtual-hosted-style\"> ```bash juicefs format \\ --storage s3 \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` </TabItem> <TabItem value=\"path-style\" label=\"Path-style\"> ```bash juicefs format \\ --storage s3 \\ --bucket https://<endpoint>/<bucket> \\ ... 
\\ myjfs ``` </TabItem> </Tabs> :::tip The format of the option `--bucket` for all S3 compatible object storage services is `https://<bucket>.<endpoint>` or `https://<endpoint>/<bucket>`. The default `region` is"
},
{
"data": "When a different `region` is required, it can be set manually via the environment variable `AWSREGION` or `AWSDEFAULT_REGION`. ::: Google Cloud uses to manage permissions for accessing resources. Through authorizing , you can have a fine-grained control of the access rights of cloud servers and object storage. For cloud servers and object storage that belong to the same service account, as long as the account grants access to the relevant resources, there is no need to provide authentication information when creating a JuiceFS file system, and the cloud platform will automatically complete authentication. For cases where you want to access the object storage from outside the Google Cloud Platform, for example, to create a JuiceFS file system on your local computer using Google Cloud Storage, you need to configure authentication information. Since Google Cloud Storage does not use Access Key ID and Access Key Secret, but rather the JSON key file of the service account to authenticate the identity. Please refer to to create JSON key file for the service account and download it to the local computer, and define the path to the key file via the environment variable `GOOGLEAPPLICATION CREDENTIALS`, e.g.: ```shell export GOOGLEAPPLICATIONCREDENTIALS=\"$HOME/service-account-file.json\" ``` You can write the command to create environment variables to `~/.bashrc` or `~/.profile` and have the shell set it automatically every time you start. Once you have configured the environment variables for passing key information, the commands to create a file system locally and on Google Cloud Server are identical. For example, ```bash juicefs format \\ --storage gs \\ --bucket <bucket>[.region] \\ ... \\ myjfs ``` As you can see, there is no need to include authentication information in the command, and the client will authenticate the access to the object storage through the JSON key file set in the previous environment variable. Also, since the bucket name is , when creating a file system, you only need to specify the bucket name in the option `--bucket`. To use Azure Blob Storage as data storage of JuiceFS, please to learn how to view the storage account name and access key, which correspond to the values of the `--access-key` and `--secret-key` options, respectively. The `--bucket` option is set in the format `https://<container>.<endpoint>`, please replace `<container>` with the name of the actual blob container and `<endpoint>` with `core.windows.net` (Azure Global) or `core.chinacloudapi.cn` (Azure China). For example: ```bash juicefs format \\ --storage wasb \\ --bucket https://<container>.<endpoint> \\ --access-key <storage-account-name> \\ --secret-key <storage-account-access-key> \\ ... \\ myjfs ``` In addition to providing authorization information through the options `--access-key` and `--secret-key`, you could also create a and set the environment variable `AZURESTORAGECONNECTION_STRING`. For example: ```bash export AZURESTORAGECONNECTION_STRING=\"DefaultEndpointsProtocol=https;AccountName=XXX;AccountKey=XXX;EndpointSuffix=core.windows.net\" juicefs format \\ --storage wasb \\ --bucket https://<container> \\ ... \\ myjfs ``` :::note For Azure users in China, the value of `EndpointSuffix` is `core.chinacloudapi.cn`. ::: To use Backblaze B2 as a data storage for JuiceFS, you need to create first. Application Key ID and Application Key corresponds to Access Key and Secret Key, respectively. Backblaze B2 supports two access interfaces: the B2 native API and the S3-compatible API. 
The storage type should be set to `b2`, and only the bucket name needs to be set in the option `--bucket`. For example: ```bash juicefs format \\ --storage b2 \\ --bucket <bucket> \\ --access-key <application-key-ID> \\ --secret-key <application-key> \\ ... \\ myjfs ``` The storage type should be set to `s3`, and the full bucket address in the option `bucket` needs to be specified. For example: ```bash juicefs format \\ --storage s3 \\ --bucket https://s3.eu-central-003.backblazeb2.com/<bucket> \\ --access-key <application-key-ID> \\ --secret-key <application-key> \\"
},
{
"data": "\\ myjfs ``` When creating JuiceFS file system using IBM Cloud Object Storage, you first need to create an and an . The \"API key\" and \"instance ID\" are the equivalent of access key and secret key, respectively. IBM Cloud Object Storage provides for each region, depending on your network (e.g. public or private). Thus, please choose an appropriate endpoint. For example: ```bash juicefs format \\ --storage ibmcos \\ --bucket https://<bucket>.<endpoint> \\ --access-key <API-key> \\ --secret-key <instance-ID> \\ ... \\ myjfs ``` Oracle Cloud Object Storage supports S3 compatible access. Please refer to for more information. The `endpoint` format for this object storage is: `${namespace}.compat.objectstorage.${region}.oraclecloud.com`, for example: ```bash juicefs format \\ --storage s3 \\ --bucket https://<bucket>.<endpoint> \\ --access-key <your-access-key> \\ --secret-key <your-sceret-key> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.s3.<region>.scw.cloud`. Remember to replace `<region>` with specific region code, e.g. the region code of \"Amsterdam, The Netherlands\" is `nl-ams`. All available region codes can be found . For example: ```bash juicefs format \\ --storage scw \\ --bucket https://<bucket>.s3.<region>.scw.cloud \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<space-name>.<region>.digitaloceanspaces.com`. Please replace `<region>` with specific region code, e.g. `nyc3`. All available region codes can be found . For example: ```bash juicefs format \\ --storage space \\ --bucket https://<space-name>.<region>.digitaloceanspaces.com \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.s3.<region>.wasabisys.com`, replace `<region>` with specific region code, e.g. the region code of US East 1 (N. Virginia) is `us-east-1`. All available region codes can be found . For example: ```bash juicefs format \\ --storage wasabi \\ --bucket https://<bucket>.s3.<region>.wasabisys.com \\ ... \\ myjfs ``` :::note For users in Tokyo (ap-northeast-1) region, please refer to to learn how to get appropriate endpoint URI.* ::: Prerequisites A this will be used as both `access-key` and `secret-key` Set up JuiceFS: ```bash juicefs format \\ --storage s3 \\ --bucket https://<regional-endpoint>.telnyxstorage.com/<bucket> \\ --access-key <api-key> \\ --secret-key <api-key> \\ ... \\ myjfs ``` Available regional endpoints are . Please refer to to learn how to create access key and secret key. Storj DCS is an S3-compatible storage, using `s3` for option `--storage`. The setting format of the option `--bucket` is `https://gateway.<region>.storjshare.io/<bucket>`, and please replace `<region>` with the corresponding region code you need. There are currently three available regions: `us1`, `ap1` and `eu1`. For example: ```shell juicefs format \\ --storage s3 \\ --bucket https://gateway.<region>.storjshare.io/<bucket> \\ --access-key <your-access-key> \\ --secret-key <your-sceret-key> \\ ... \\ myjfs ``` :::caution Storj DCS API is not fully S3 compatible (result list is not sorted), so some features of JuiceFS do not work. For example, `juicefs gc`, `juicefs fsck`, `juicefs sync`, `juicefs destroy`. And when using `juicefs mount`, you need to disable function by adding `--backup-meta 0`. 
::: Vultr Object Storage is an S3-compatible storage, using `s3` for the `--storage` option. The format of the option `--bucket` is `https://<bucket>.<region>.vultrobjects.com/`. For example: ```shell juicefs format \\ --storage s3 \\ --bucket https://<bucket>.ewr1.vultrobjects.com/ \\ --access-key <your-access-key> \\ --secret-key <your-secret-key> \\ ... \\ myjfs ``` Please find the access and secret keys for object storage . R2 is Cloudflare's object storage service and provides an S3-compatible API, so usage is the same as Amazon S3. Please refer to to learn how to create Access Key and Secret Key. ```shell juicefs format \\ --storage s3 \\ --bucket https://<ACCOUNT_ID>.r2.cloudflarestorage.com/myjfs \\ --access-key <your-access-key> \\ --secret-key <your-secret-key> \\"
},
{
"data": "\\ myjfs ``` For production, it is recommended to pass key information via the `ACCESSKEY` and `SECRETKEY` environment variables, e.g. ```shell export ACCESS_KEY=<your-access-key> export SECRET_KEY=<your-sceret-key> juicefs format \\ --storage s3 \\ --bucket https://<ACCOUNT_ID>.r2.cloudflarestorage.com/myjfs \\ ... \\ myjfs ``` :::caution Cloudflare R2 `ListObjects` API is not fully S3 compatible (result list is not sorted), so some features of JuiceFS do not work. For example, `juicefs gc`, `juicefs fsck`, `juicefs sync`, `juicefs destroy`. And when using `juicefs mount`, you need to disable function by adding `--backup-meta 0`. ::: Bunny Storage offers a non-S3 compatible object storage with multiple performance tiers and many storage regions. It uses . This is not included by default, please build it with tag `bunny` Create a Storage Zone and use the Zone Name with the Hostname of the Location seperated by a dot as Bucket name and the `Write Password` as Secret Key. ```shell juicefs format \\ --storage bunny \\ --secret-key \"write-password\" \\ --bucket \"https://uk.storage.bunnycdn.com/myzone\" \\ # https://<Endpoint>/<Zonename> myjfs ``` Please follow to learn how to get access key and secret key. If you have already created and assigned it to a VM instance, you could omit the options `--access-key` and `--secret-key`. Alibaba Cloud also supports using to authorize temporary access to OSS. If you wanna use STS, you should omit the options `--access-key` and `--secret-key` and set environment variables `ALICLOUDACCESSKEYID`, `ALICLOUDACCESSKEYSECRET` and `SECURITY_TOKEN`instead, for example: ```bash export ALICLOUDACCESSKEY_ID=XXX export ALICLOUDACCESSKEY_SECRET=XXX export SECURITY_TOKEN=XXX juicefs format \\ --storage oss \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` OSS provides for each region, depending on your network (e.g. public or internal network). Please choose an appropriate endpoint. If you are creating a file system on AliCloud's server, you can specify the bucket name directly in the option `--bucket`. For example. ```bash juicefs format \\ --storage oss \\ --bucket <bucket> \\ ... \\ myjfs ``` The naming rule of bucket in Tencent Cloud is `<bucket>-<APPID>`, so you must append `APPID` to the bucket name. Please follow to learn how to get `APPID`. The full format of `--bucket` option is `https://<bucket>-<APPID>.cos.<region>.myqcloud.com`, and please replace `<region>` with specific region code. E.g. the region code of Shanghai is `ap-shanghai`. You could find all available region codes . For example: ```bash juicefs format \\ --storage cos \\ --bucket https://<bucket>-<APPID>.cos.<region>.myqcloud.com \\ ... \\ myjfs ``` If you are creating a file system on Tencent Cloud's server, you can specify the bucket name directly in the option `--bucket`. For example. ```bash juicefs format \\ --storage cos \\ --bucket <bucket>-<APPID> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.obs.<region>.myhuaweicloud.com`, and please replace `<region>` with specific region code. E.g. the region code of Beijing 1 is `cn-north-1`. You could find all available region codes . For example: ```bash juicefs format \\ --storage obs \\ --bucket https://<bucket>.obs.<region>.myhuaweicloud.com \\ ... \\ myjfs ``` If you are creating a file system on Huawei Cloud's server, you can specify the bucket name directly in the option `--bucket`. 
For example, ```bash juicefs format \\ --storage obs \\ --bucket <bucket> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.<region>.bcebos.com`, and please replace `<region>` with specific region code. E.g. the region code of Beijing is `bj`. You could find all available region codes . For example: ```bash juicefs format \\ --storage bos \\ --bucket https://<bucket>.<region>.bcebos.com \\"
},
{
"data": "\\ myjfs ``` If you are creating a file system on Baidu Cloud's server, you can specify the bucket name directly in the option `--bucket`. For example, ```bash juicefs format \\ --storage bos \\ --bucket <bucket> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The TOS provides for each region, depending on your network (e.g. public or internal). Please choose an appropriate endpoint. For example: ```bash juicefs format \\ --storage tos \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. KS3 provides for each region, depending on your network (e.g. public or internal). Please choose an appropriate endpoint. For example: ```bash juicefs format \\ --storage ks3 \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.<region>.qingstor.com`, replace `<region>` with specific region code. E.g. the region code of Beijing 3-A is `pek3a`. You could find all available region codes . For example: ```bash juicefs format \\ --storage qingstor \\ --bucket https://<bucket>.<region>.qingstor.com \\ ... \\ myjfs ``` :::note The format of `--bucket` option for all QingStor compatible object storage services is `http://<bucket>.<endpoint>`. ::: Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.s3-<region>.qiniucs.com`, replace `<region>` with specific region code. E.g. the region code of China East is `cn-east-1`. You could find all available region codes . For example: ```bash juicefs format \\ --storage qiniu \\ --bucket https://<bucket>.s3-<region>.qiniucs.com \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.stor.sinaapp.com`. For example: ```bash juicefs format \\ --storage scs \\ --bucket https://<bucket>.stor.sinaapp.com \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.<endpoint>`, For example: ```bash juicefs format \\ --storage oos \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. ECloud Object Storage provides for each region, depending on your network (e.g. public or internal). Please choose an appropriate endpoint. For example: ```bash juicefs format \\ --storage eos \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. The `--bucket` option format is `https://<bucket>.<region>.jdcloud-oss.com`and please replace `<region>` with specific region code. You could find all available region codes . For example: ```bash juicefs format \\ --storage s3 \\ --bucket https://<bucket>.<region>.jdcloud-oss.com \\ ... \\ myjfs ``` Please follow to learn how to get access key and secret key. US3 (formerly UFile) provides for each region, depending on your network (e.g. public or internal). Please choose an appropriate endpoint. For example: ```bash juicefs format \\ --storage ufile \\ --bucket https://<bucket>.<endpoint> \\ ... \\ myjfs ``` :::note JuiceFS v1.0 uses `go-ceph` v0.4.0, which supports Ceph Luminous (v12.2.x) and above. JuiceFS v1.1 uses `go-ceph` v0.18.0, which supports Ceph Octopus (v15.2.x) and above. Make sure that JuiceFS matches your Ceph and `librados` version, see . 
::: The has a messaging layer protocol that enables clients to interact with a Ceph Monitor and a Ceph OSD Daemon. The API enables you to interact with the two types of daemons: The , which maintains a master copy of the cluster map. The , which stores data as objects on a storage node. JuiceFS supports the use of native Ceph APIs based on `librados`. You need to install `librados` library and build `juicefs` binary"
},
{
"data": "First, install a `librados` that matches the version of your Ceph installation, For example, if Ceph version is Octopus (v15.2.x), then it is recommended to use `librados` v15.2.x. <Tabs> <TabItem value=\"debian\" label=\"Debian and derivatives\"> ```bash sudo apt-get install librados-dev ``` </TabItem> <TabItem value=\"centos\" label=\"RHEL and derivatives\"> ```bash sudo yum install librados2-devel ``` </TabItem> </Tabs> Then compile JuiceFS for Ceph (make sure you have Go 1.20+ and GCC 5.4+ installed): ```bash make juicefs.ceph ``` When using with Ceph, the JuiceFS Client object storage related options are interpreted differently: `--bucket` stands for the Ceph storage pool, the format is `ceph://<pool-name>`. A is a logical partition for storing objects. Create a pool before use. `--access-key` stands for the Ceph cluster name, the default value is `ceph`. `--secret-key` option is , the default user name is `client.admin`. In order to reach Ceph Monitor, `librados` reads Ceph configuration file by searching default locations and the first found will be used. The locations are: `CEPH_CONF` environment variable `/etc/ceph/ceph.conf` `~/.ceph/config` `ceph.conf` in the current working directory Since these additional Ceph configuration files are needed during the mount, CSI Driver users need to . To format a volume, run: ```bash juicefs.ceph format \\ --storage ceph \\ --bucket ceph://<pool-name> \\ --access-key <cluster-name> \\ --secret-key <user-name> \\ ... \\ myjfs ``` is an object storage interface built on top of `librados` to provide applications with a RESTful gateway to Ceph Storage Clusters. Ceph Object Gateway supports S3-compatible interface, so we could set `--storage` to `s3` directly. The `--bucket` option format is `http://<bucket>.<endpoint>` (virtual hosted-style). For example: ```bash juicefs format \\ --storage s3 \\ --bucket http://<bucket>.<endpoint> \\ ... \\ myjfs ``` is a software defined distributed storage that can scale to several petabytes. JuiceFS communicates with Gluster via the `libgfapi` library, so it needs to be built separately before used. First, install `libgfapi` (version 6.0 - 10.1, ) <Tabs> <TabItem value=\"debian\" label=\"Debian and derivatives\"> ```bash sudo apt-get install uuid-dev libglusterfs-dev glusterfs-common ``` </TabItem> <TabItem value=\"centos\" label=\"RHEL and derivatives\"> ```bash sudo yum install glusterfs glusterfs-api-devel glusterfs-libs ``` </TabItem> </Tabs> Then compile JuiceFS supporting Gluster: ```bash make juicefs.gluster ``` Now we can create a JuiceFS volume on Gluster: ```bash juicefs format \\ --storage gluster \\ --bucket host1,host2,host3/gv0 \\ ... \\ myjfs ``` The format of `--bucket` option is `<host[,host...]>/<volumename>`. Please note the `volumename` here is the name of Gluster volume, and has nothing to do with the name of JuiceFS volume. is a distributed object storage system designed to scale from a single machine to thousands of servers. Swift is optimized for multi-tenancy and high concurrency. Swift is ideal for backups, web and mobile content, and any other unstructured data that can grow without bound. The `--bucket` option format is `http://<container>.<endpoint>`. A container defines a namespace for objects. Currently, JuiceFS only supports . The value of `--access-key` option is username. The value of `--secret-key` option is password. 
For example: ```bash juicefs format \\ --storage swift \\ --bucket http://<container>.<endpoint> \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` MinIO is an open source, lightweight object storage service compatible with the Amazon S3 API. It is easy to run a MinIO instance locally using Docker. For example, the following command sets and maps port `9900` for the console with `--console-address \":9900\"`, and also maps the MinIO data path to the `minio-data` folder in the current directory, which can be modified if"
},
{
"data": "```shell sudo docker run -d --name minio \\ -p 9000:9000 \\ -p 9900:9900 \\ -e \"MINIOROOTUSER=minioadmin\" \\ -e \"MINIOROOTPASSWORD=minioadmin\" \\ -v $PWD/minio-data:/data \\ --restart unless-stopped \\ minio/minio server /data --console-address \":9900\" ``` After container is up and running, you can access: MinIO API: , this is the object storage service address used by JuiceFS MinIO UI: , this is used to manage the object storage itself, not related to JuiceFS The initial Access Key and Secret Key of the object storage are both `minioadmin`. When using MinIO as data storage for JuiceFS, set the option `--storage` to `minio`. ```bash juicefs format \\ --storage minio \\ --bucket http://127.0.0.1:9000/<bucket> \\ --access-key minioadmin \\ --secret-key minioadmin \\ ... \\ myjfs ``` :::note Currently, JuiceFS only supports path-style MinIO URI addresses, e.g., `http://127.0.0.1:9000/myjfs`. The `MINIO_REGION` environment variable can be used to set the region of MinIO, if not set, the default is `us-east-1`. When using Multi-Node MinIO deployment, consider setting using a DNS address in the service endpoint, resolving to all MinIO Node IPs, as a simple load-balancer, e.g. `http://minio.example.com:9000/myjfs` ::: is an extension of the Hypertext Transfer Protocol (HTTP) that facilitates collaborative editing and management of documents stored on the WWW server among users. From JuiceFS v0.15+, JuiceFS can use a storage that speaks WebDAV as a data storage. You need to set `--storage` to `webdav`, and `--bucket` to the endpoint of WebDAV. If basic authorization is enabled, username and password should be provided as `--access-key` and `--secret-key`, for example: ```bash juicefs format \\ --storage webdav \\ --bucket http://<endpoint>/ \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` is the file system for Hadoop, which can be used as the object storage for JuiceFS. When HDFS is used, `--access-key` can be used to specify the `username`, and `hdfs` is usually the default superuser. For example: ```bash juicefs format \\ --storage hdfs \\ --bucket namenode1:8020 \\ --access-key hdfs \\ ... \\ myjfs ``` When `--access-key` is not specified on formatting, JuiceFS will use the current user of `juicefs mount` or Hadoop SDK to access HDFS. It will hang and fail with IO error eventually, if the current user don't have enough permission to read/write the blocks in HDFS. JuiceFS will try to load configurations for HDFS client based on `$HADOOPCONFDIR` or `$HADOOP_HOME`. If an empty value is provided to `--bucket`, the default HDFS found in Hadoop configurations will be used. bucket format: `[hdfs://]namenode:port[/path]` for HA cluster: `[hdfs://]namenode1:port,namenode2:port[/path]` `[hdfs://]nameservice[/path]` For HDFS which enable Kerberos, `KRB5KEYTAB` and `KRB5PRINCIPAL` environment var can be used to set keytab and principal. Apache Ozone is a scalable, redundant, and distributed object storage for Hadoop. It supports S3-compatible interface, so we could set `--storage` to `s3` directly. ```bash juicefs format \\ --storage s3 \\ --bucket http://<endpoint>/<bucket>\\ --access-key <your-access-key> \\ --secret-key <your-sceret-key> \\ ... \\ myjfs ``` can be used as both metadata storage for JuiceFS and as data storage, but when using Redis as a data storage, it is recommended not to store large-scale data. The `--bucket` option format is `redis://<host>:<port>/<db>`. The value of `--access-key` option is username. 
The value of the `--secret-key` option is the password. For example: ```bash juicefs format \\ --storage redis \\ --bucket redis://<host>:<port>/<db> \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` In Redis Sentinel mode, the format of the `--bucket` option is `redis[s]://MASTER_NAME,SENTINEL_ADDR[,SENTINEL_ADDR]:SENTINEL_PORT[/DB]`. Sentinel's password needs to be declared through the `SENTINEL_PASSWORD_FOR_OBJ` environment variable. For example: ```bash export SENTINEL_PASSWORD_FOR_OBJ=sentinel_password juicefs format \\ --storage redis \\ --bucket redis://masterName,1.2.3.4,1.2.5.6:26379/2 \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` In Redis Cluster mode, the format of the `--bucket` option is `redis[s]://ADDR:PORT,[ADDR:PORT],[ADDR:PORT]`. For example: ```bash juicefs format \\ --storage redis \\ --bucket"
},
{
"data": "\\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` is a highly scalable, low latency, and easy to use key-value database. It provides both raw and ACID-compliant transactional key-value API. TiKV can be used as both metadata storage and data storage for JuiceFS. :::note It's recommended to use dedicated TiKV 5.0+ cluster as the data storage for JuiceFS. ::: The `--bucket` option format is `<host>:<port>,<host>:<port>,<host>:<port>`, and `<host>` is the address of Placement Driver (PD). The options `--access-key` and `--secret-key` have no effect and can be omitted. For example: ```bash juicefs format \\ --storage tikv \\ --bucket \"<host>:<port>,<host>:<port>,<host>:<port>\" \\ ... \\ myjfs ``` :::note Don't use the same TiKV cluster for both metadata and data, because JuiceFS uses non-transactional protocol (RawKV) for objects and transactional protocol (TnxKV) for metadata. The TxnKV protocol has special encoding for keys, so they may overlap with keys even they has different prefixes. BTW, it's recommmended to enable in TiKV for data cluster. ::: If you need to enable TLS, you can set the TLS configuration item by adding the query parameter after the bucket URL. Currently supported configuration items: | Name | Value | |-|| | `ca` | CA root certificate, used to connect TiKV/PD with TLS | | `cert` | certificate file path, used to connect TiKV/PD with TLS | | `key` | private key file path, used to connect TiKV/PD with TLS | | `verify-cn` | verify component caller's identity, | For example: ```bash juicefs format \\ --storage tikv \\ --bucket \"<host>:<port>,<host>:<port>,<host>:<port>?ca=/path/to/ca.pem&cert=/path/to/tikv-server.pem&key=/path/to/tikv-server-key.pem&verify-cn=CN1,CN2\" \\ ... \\ myjfs ``` is a small-scale key-value database with high availability and reliability, which can be used as both the metadata storage of JuiceFS and the data storage of JuiceFS. etcd will a single request to no more than 1.5MB by default, you need to change the block size (`--block-size` option) of JuiceFS to 1MB or even lower. The `--bucket` option needs to fill in the etcd address, the format is similar to `<host1>:<port>,<host2>:<port>,<host3>:<port>`. The `--access-key` and `--secret-key` options are filled with username and password, which can be omitted when etcd does not enable user authentication. E.g: ```bash juicefs format \\ --storage etcd \\ --block-size 1024 \\ # This option is very important --bucket \"<host1>:<port>,<host2>:<port>,<host3>:<port>/prefix\" \\ --access-key myname \\ --secret-key mypass \\ ... \\ myjfs ``` If you need to enable TLS, you can set the TLS configuration item by adding the query parameter after the bucket URL. Currently supported configuration items: | Name | Value | ||--| | `cacert` | CA root certificate | | `cert` | certificate file path | | `key` | private key file path | | `server-name` | name of server | | `insecure-skip-verify` | 1 | For example: ```bash juicefs format \\ --storage etcd \\ --bucket \"<host>:<port>,<host>:<port>,<host>:<port>?cacert=/path/to/ca.pem&cert=/path/to/server.pem&key=/path/to/key.pem&server-name=etcd\" \\ ... \\ myjfs ``` :::note The path to the certificate needs to be an absolute path, and make sure that all machines that need to mount can use this path to access them. ::: is a small, fast, single-file, reliable, full-featured single-file SQL database engine widely used around the world. When using SQLite as a data store, you only need to specify its absolute path. 
```shell juicefs format \\ --storage sqlite3 \\ --bucket /path/to/sqlite3.db \\ ... \\ myjfs ``` :::note Since SQLite is an embedded database, only the host where the database file is located can access it, so it cannot be used in multi-machine sharing scenarios. If a relative path is used when formatting, it will cause problems when mounting; please use an absolute"
},
{
"data": "::: is one of the popular open source relational databases, often used as the database of choice for web applications, both as a metadata engine for JuiceFS and for storing files data. MySQL-compatible , , etc. can be used as data storage. When using MySQL as a data storage, you need to create a database in advance and add the desired permissions, specify the access address through the `--bucket` option, specify the user name through the `--access-key` option, and specify the password through the `--secret-key` option. An example is as follows: ```shell juicefs format \\ --storage mysql \\ --bucket (<host>:3306)/<database-name> \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` After the file system is created, JuiceFS creates a table named `jfs_blob` in the database to store the data. :::note Don't miss the parentheses `()` in the `--bucket` parameter. ::: is a powerful open source relational database with a complete ecology and rich application scenarios. It can be used as both the metadata engine of JuiceFS and the data storage. Other databases compatible with the PostgreSQL protocol (such as , etc.) can also be used as data storage. When creating a file system, you need to create a database and add the corresponding read and write permissions. Use the `--bucket` option to specify the address of the data, use the `--access-key` option to specify the username, and use the `--secret-key` option to specify the password. An example is as follows: ```shell juicefs format \\ --storage postgres \\ --bucket <host>:<port>/<db>[?parameters] \\ --access-key <username> \\ --secret-key <password> \\ ... \\ myjfs ``` After the file system is created, JuiceFS creates a table named `jfs_blob` in the database to store the data. The JuiceFS client uses SSL encryption to connect to PostgreSQL by default. If the connection error `pq: SSL is not enabled on the server` indicates that the database does not have SSL enabled. You can enable SSL encryption for PostgreSQL according to your business scenario, or you can add the parameter `sslmode=disable` to the bucket URL to disable encryption verification. When creating JuiceFS storage, if no storage type is specified, the local disk will be used to store data by default. The default storage path for root user is `/var/jfs`, and `~/.juicefs/local` is for ordinary users. For example, using the local Redis database and local disk to create a JuiceFS storage named `test`: ```shell juicefs format redis://localhost:6379/1 test ``` Local storage is usually only used to help users understand how JuiceFS works and to give users an experience on the basic features of JuiceFS. The created JuiceFS storage cannot be mounted by other clients within the network and can only be used on a single machine. SFTP - Secure File Transfer Protocol, It is not a type of storage. To be precise, JuiceFS reads and writes to disks on remote hosts via SFTP/SSH, thus allowing any SSH-enabled operating system to be used as a data storage for JuiceFS. For example, the following command uses the SFTP protocol to connect to the remote server `192.168.1.11` and creates the `myjfs/` folder in the `$HOME` directory of user `tom` as the data storage of JuiceFS. ```shell juicefs format \\ --storage sftp \\ --bucket 192.168.1.11:myjfs/ \\ --access-key tom \\ --secret-key 123456 \\ ... redis://localhost:6379/1 myjfs ``` `--bucket` is used to set the server address and storage path in the format `[sftp://]<IP/Domain>:[port]:<Path>`. 
Note that the directory name should end with `/`, and the port number is optional and defaults to `22`, e.g. `192.168.1.11:22:myjfs/`. `--access-key` sets the username of the remote server, and `--secret-key` sets the password of the remote server."
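Before formatting, it can be helpful to confirm that the SSH/SFTP credentials and target directory actually work. A minimal check, reusing the example host and user above (adjust to your environment):
```bash
# Verify SSH login and that the target directory can be created on the remote host
ssh [email protected] \"mkdir -p ~/myjfs && ls -ld ~/myjfs\"
```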
}
] |
{
"category": "Runtime",
"file_name": "how_to_set_up_object_storage.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(image-servers)= The CLI command comes pre-configured with the following default remote image server: `images:` : This server provides unofficial images for a variety of Linux distributions. The images are maintained by the team and are built to be compact and minimal. See for an overview of available images. Additional image servers can be added through `incus remote add`. (image-server-types)= Incus supports the following types of remote image servers: Simple streams servers : Pure image servers that use the . No special software is required to run such a server as it's only made of static files. The default `images:` server uses simplestreams. Public Incus servers : Incus servers that are used solely to serve images and do not run instances themselves. To make an Incus server publicly available over the network on port 8443, set the {config:option}`server-core:core.https_address` configuration option to `:8443` and do not configure any authentication methods (see {ref}`server-expose` for more information). Then set the images that you want to share to `public`. Incus servers : Regular Incus servers that you can manage over a network, and that can also be used as image servers. For security reasons, you should restrict the access to the remote API and configure an authentication method to control access. See {ref}`server-expose` and {ref}`authentication` for more information. (image-server-tooling)= Incus includes a tool called `incus-simplestreams` which can be used to manage a file system tree using the Simple streams format. It supports importing either a container (`squashfs`) or virtual-machine (`qcow2`) image with `incus-simplestreams add`, list all images available as well as their fingerprints with `incus-simplestreams list` and remove images from the server with `incus-simplestreams remove`. That file system tree must then be placed on a regular web server which supports HTTPS with a valid certificate. When importing an image that doesn't come with an Incus metadata tarball, the `incus-simplestreams generate-metadata` command can be used to generate a new basic metadata tarball from a few questions."
}
] |
{
"category": "Runtime",
"file_name": "image_servers.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "``` bash curl -v \"http://10.196.59.198:17010/metaPartition/create?name=test&start=10000\" ``` Manually splits the metadata shard. If the maximum metadata shard inode range of the volume is `[begin, end)`: If start is greater than begin and less than end, the inode range of the original maximum metadata shard becomes `[begin, start]`, and the range of the newly created metadata shard is `[start+1,+inf)`. If start is less than begin, max is the maximum inode number on the current shard, and the inode range becomes `[begin, max+16777216]`, and the range of the newly created metadata shard is `[max+16777217,+inf)`. If start is greater than end, max is the maximum inode number on the current shard, and the inode range becomes `[begin, start]`, and the range of the newly created metadata shard is `[start+1, +inf)`. ::: warning Note A large start value will cause a large inode on a single shard, occupying a large amount of memory. When there are too many inodes on the last shard, automatic splitting of the metadata partition will also be triggered. ::: Parameter List | Parameter | Type | Description | |--|--|-| | name | string | Volume name | | start | uint64 | Split the metadata shard based on this value | ``` bash curl -v \"http://10.196.59.198:17010/metaPartition/get?id=1\" | python -m json.tool ``` Displays detailed information about the metadata shard, including the shard ID, the starting range of the shard, etc. Parameter List | Parameter | Type | Description | |--|--|-| | id | uint64 | Metadata shard ID | Response Example ``` json { \"PartitionID\": 1, \"Start\": 0, \"End\": 9223372036854776000, \"MaxNodeID\": 1, \"VolName\": \"test\", \"Replicas\": {}, \"ReplicaNum\": 3, \"Status\": 2, \"IsRecover\": true, \"Hosts\": {}, \"Peers\": {}, \"Zones\": {}, \"MissNodes\": {}, \"LoadResponse\": {} } ``` ``` bash curl -v \"http://10.196.59.198:17010/metaPartition/decommission?id=13&addr=10.196.59.202:17210\" ``` Removes a replica of the metadata shard and creates a new replica. Parameter List | Parameter | Type | Description | |--|--|--| | id | uint64 | Metadata partition ID | | addr | string | Address of the replica to be removed | ``` bash curl -v \"http://10.196.59.198:17010/metaPartition/load?id=1\" ``` Sends a task to compare the replica to each replica, and then checks whether the CRC of each replica is consistent. Parameter List | Parameter | Type | Description | |--|--|--| | id | uint64 | Metadata partition ID |"
}
] |
{
"category": "Runtime",
"file_name": "meta-partition.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Datadog, Inc. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC"
}
] |
{
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Disconnect an endpoint from the network ``` cilium-dbg endpoint disconnect <endpoint-id> [flags] ``` ``` -h, --help help for disconnect ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage endpoints"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_endpoint_disconnect.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. --> ontributor-cheatsheet) - Common resources for existing developers You can reach the maintainers of this project via the . - We have a diverse set of mentorship programs available that are always looking for volunteers!"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "New versions of the [OpenTelemetry Semantic Conventions] mean new versions of the `semconv` package need to be generated. The `semconv-generate` make target is used for this. Checkout a local copy of the [OpenTelemetry Semantic Conventions] to the desired release tag. Pull the latest `otel/semconvgen` image: `docker pull otel/semconvgen:latest` Run the `make semconv-generate ...` target from this repository. For example, ```sh export TAG=\"v1.21.0\" # Change to the release version you are generating. export OTELSEMCONVREPO=\"/absolute/path/to/opentelemetry/semantic-conventions\" docker pull otel/semconvgen:latest make semconv-generate # Uses the exported TAG and OTELSEMCONVREPO. ``` This should create a new sub-package of . Ensure things look correct before submitting a pull request to include the addition. You can run `make gorelease` that runs to ensure that there are no unwanted changes done in the public API. You can check/report problems with `gorelease` . First, decide which module sets will be released and update their versions in `versions.yaml`. Commit this change to a new branch. Update go.mod for submodules to depend on the new release which will happen in the next step. Run the `prerelease` make target. It creates a branch `prerelease<module set><new tag>` that will contain all release changes. ``` make prerelease MODSET=<module set> ``` Verify the changes. ``` git diff ...prerelease<module set><new tag> ``` This should have changed the version for all modules to be `<new tag>`. If these changes look correct, merge them into your pre-release branch: ```go git merge prerelease<module set><new tag> ``` Update the . Make sure all relevant changes for this release are included and are in language that non-contributors to the project can understand. To verify this, you can look directly at the commits since the `<last tag>`. ``` git --no-pager log --pretty=oneline \"<last tag>..HEAD\" ``` Move all the `Unreleased` changes into a new section following the title scheme (`[<new tag>] - <date of release>`). Update all the appropriate links at the"
},
{
"data": "Push the changes to upstream and create a Pull Request on GitHub. Be sure to include the curated changes from the in the description. Once the Pull Request with all the version changes has been approved and merged it is time to tag the merged commit. *IMPORTANT*: It is critical you use the same tag that you used in the Pre-Release step! Failure to do so will leave things in a broken state. As long as you do not change `versions.yaml` between pre-release and this step, things should be fine. *IMPORTANT*: . It is critical you make sure the version you push upstream is correct. . For each module set that will be released, run the `add-tags` make target using the `<commit-hash>` of the commit on the main branch for the merged Pull Request. ``` make add-tags MODSET=<module set> COMMIT=<commit hash> ``` It should only be necessary to provide an explicit `COMMIT` value if the current `HEAD` of your working directory is not the correct commit. Push tags to the upstream remote (not your fork: `github.com/open-telemetry/opentelemetry-go.git`). Make sure you push all sub-modules as well. ``` git push upstream <new tag> git push upstream <submodules-path/new tag> ... ``` Finally create a Release for the new `<new tag>` on GitHub. The release body should include all the release notes from the Changelog for this release. After releasing verify that examples build outside of the repository. ``` ./verify_examples.sh ``` The script copies examples into a different directory removes any `replace` declarations in `go.mod` and builds them. This ensures they build with the published release, not the local copy. Once verified be sure to that uses this release. Update the [Go instrumentation documentation] in the OpenTelemetry website under [content/en/docs/languages/go]. Importantly, bump any package versions referenced to be the latest one you just released and ensure all code examples still compile and are accurate. Bump the dependencies in the following Go services:"
}
] |
{
"category": "Runtime",
"file_name": "RELEASING.md",
"project_name": "CRI-O",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Metadata Backup & Recovery sidebar_position: 2 slug: /metadatadumpload :::tip JuiceFS v1.0.0 starts to support automatic metadata backup. JuiceFS v1.0.4 starts to support importing an encrypted backup. ::: JuiceFS supports , and each engine stores and manages data in a different format internally. JuiceFS provides the command to export metadata in a uniform JSON format, also there's the command to restore or migrate backups to any metadata storage engine. This dump / load process can also be used to migrate a community edition file system to enterprise edition (read for more), and vice versa. :::note `juicefs dump` does not provide snapshot consistency. If files are modified during the export, the final backup file will contain information from different points in time, which might prove unusable for some applications (like databases). If you have higher standards for consistency, you should suspend all writes to the system before exporting. For large scale file systems, dumping directly from online database may prove risks to system reliability, use with caution. ::: Using the `dump` command provided by JuiceFS client, you can export metadata to a JSON file, for example: ```shell juicefs dump redis://192.168.1.6:6379 meta-dump.json ``` The JSON file exported by using the `dump` command provided by the JuiceFS client can have any filename and extension that you prefer, as shown in the example above. In particular, if the file extension is `.gz` (e.g. `meta-dump.json.gz`), the exported data will be compressed using the Gzip algorithm. By default, the `dump` command starts from the root directory `/` and iterates recursively through all the files in the directory tree, and writes the metadata of each file to a JSON output. The object storage credentials will be omitted for data security, but it can be preserved using the `--keep-secret-key` option. The value of `juicefs dump` is that it can export complete metadata information in a uniform JSON format for easy management and preservation, and it can be recognized and imported by different metadata storage engines. In practice, the `dump` command should be used in conjunction with the backup tool that comes with the database to complement each other, such as and , etc. Starting with JuiceFS v1.0.0, the client automatically backs up metadata and copies it to the object storage every hour, regardless of whether the file system is mounted via the `mount` command or accessed via the JuiceFS S3 gateway and Hadoop Java SDK. The backup files are stored in the `meta` directory of the object storage. It is a separate directory from the data store and not visible in the mount point and does not interact with the data store, and the directory can be viewed and managed using the file browser of the object storage. By default, the JuiceFS client backs up metadata once an hour. The frequency of automatic backups can be adjusted by the `--backup-meta` option when mounting the filesystem, for example, to set the auto-backup to be performed every 8 hours. ```shell juicefs mount -d --backup-meta 8h redis://127.0.0.1:6379/1 /mnt ``` The backup frequency can be accurate to the second and it supports the following units. `h`: accurate to the hour, e.g. `1h`. `m`: accurate to the minute, e.g. `30m`,"
},
{
"data": "`s`: accurate to the second, such as `50s`, `30m50s`, `1h30m50s`; It is worth mentioning that the time cost of backup will increase with the number of files in the filesystem. Hence, when the number is too large (by default 1 million) with the automatic backup frequency 1 hour (by default), JuiceFS will automatically skip backup and print the corresponding warning log. At this point you may mount a new client with a bigger `--backup-meta` option value to re-enable automatic backups. For reference, when using Redis as the metadata engine, backing up the metadata for one million files takes about 1 minute and consumes about 1GB of memory. Although automatic metadata backup becomes a default action for clients, backup conflicts do not occur when multiple hosts share the same file system mount. JuiceFS maintains a global timestamp to ensure that only one client performs the backup operation at the same time. When different backup periods are set between clients, then it will back up based on the shortest period setting. JuiceFS periodically cleans up backups according to the following rules. Keep all backups up to 2 days. For backups older than 2 days and less than 2 weeks, keep 1 backup for each day. For backups older than 2 weeks and less than 2 months, keep 1 backup for each week. For backups older than 2 months, keep 1 backup for each month. Use the command to restore the metadata dump file into an empty database, for example: ```shell juicefs load redis://192.168.1.6:6379 meta-dump.json ``` Once imported, JuiceFS will recalculate the file system statistics including space usage, inode counters, and eventually generates a globally consistent metadata in the database. If you have a deep understanding of the metadata design of JuiceFS, you can also modify the metadata backup file before restoring to debug. The dump file is written in an uniform format, which can be recognized and imported by all metadata engines, making it easy to migrate to other types of metadata engines. For instance, to migrate from a Redis database to MySQL: Exporting metadata backup from Redis: ```shell juicefs dump redis://192.168.1.6:6379 meta-dump.json ``` Restoring metadata to a new MySQL database: ```shell juicefs load mysql://user:password@(192.168.1.6:3306)/juicefs meta-dump.json ``` It is also possible to migrate directly through the system's pipe: ```shell juicefs dump redis://192.168.1.6:6379 | juicefs load mysql://user:password@(192.168.1.6:3306)/juicefs ``` Note that since the API access key for object storage is excluded by default from the backup, when loading metadata, you need to use the command to reconfigure the object storage credentials. For example: ```shell juicefs config --secret-key xxxxx mysql://user:password@(192.168.1.6:3306)/juicefs ``` For , all data is encrypted before uploading to the object storage, including automatic metadata backups. This is different from the `dump` command, which only output metadata in plain text. For an encrypted file system, it is necessary to additionally set the `JFSRSAPASSPHRASE` environment variable and specify the RSA private key and encryption algorithm when restoring the automatically backed-up metadata: ```shell export JFSRSAPASSPHRASE=xxxxxx juicefs load \\ --encrypt-rsa-key my-private.pem \\ --encrypt-algo aes256gcm-rsa \\ redis://192.168.1.6:6379/1 \\ dump-2023-03-16-090750.json.gz ``` In addition to completely exporting metadata, you can also export specific subdirectories. You can intuitively inspect the metadata in the directory tree. 
```shell juicefs dump redis://192.168.1.6:6379 meta-dump.json --subdir /path/in/juicefs ``` Using tools like `jq` to analyze the exported file is also an option."
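For example, a quick way to get a feel for a dump file (these queries rely only on generic JSON structure, since the exact field names can vary between JuiceFS versions):
```bash
# List the top-level sections of the dump
jq 'keys' meta-dump.json

# Rough count of entries that carry an \"inode\" attribute
jq '[.. | objects | select(has(\"inode\"))] | length' meta-dump.json
```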
}
] |
{
"category": "Runtime",
"file_name": "metadata_dump_load.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "CubeFS supports atomic operation. After turning on the atomic function, the metadata atomicity of the file operation is met, and the metadata uses the final consistency, that is, the metadata modification of the file operation is ether successfully committed, or rolled back. Create Mkdir Remove Rename Mknod Symlink Link ::: tip tip Atomicity supports the Fuse interface, and S3 interface is not supported for now. ::: CubeFS implements part of the characteristics of transaction, but it is different from the traditional strict sense of transaction. For the convenience, the operation of files is referred to as transaction. The transaction is effective in the volume, and a cluster can have multiple volumes. Each volume can switch on or off transaction separately. The client get configuration of transaction from the master when started. The switch can control all supported interfaces, or only open some interfaces, the default transaction time out is 1 minute. ``` curl \"192.168.0.11:17010/vol/update?name=ltptest&enableTxMask=all&txForceReset=true&txConflictRetryNum=13&txConflictRetryInterval=30&txTimeout=1&authKey=0e20229116d5a9a4a9e876806b514a85\" ``` ::: tip tip After modifying the parameter of the transaction, new config will be synchronized from Master within 2 minutes. ::: | Parameters | Type | Description | |-|--|-| | enableTxMask | String | Value can be: Create, MKDIR, Remove, RENAME, MKNod, Symlink, Link, off, all. off and all and other values are mutually exclusive | | txtimeout | uint32 | the default transaction time out is 1 minute, the maximum 60 minutes | | txForceReset | bool | The difference from enableTxMask is that value of enableTxMask will be merged, and txForceReset is forced to reset to the specified value | | txConflictRetryNum | uint32 | Value range [1-100], defaults to 10 | | txConflictRetryInterval | uint32 | Unit: milliseconds, the range of value [10-1000], the default is 20ms | You can check transaction config by getting the volume information ``` curl \"192.168.0.11:17010/admin/getvol?name=ltptest\" ``` You can also specify transaction parameters when creating the volume ``` curl \"192.168.0.11:17010/admin/createVol?name=test&capacity=100&owner=cfs&mpCount=3&enableTxMask=all&txTimeout=5\" ```"
}
] |
{
"category": "Runtime",
"file_name": "atomicity.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `[email protected]` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] |
{
"category": "Runtime",
"file_name": "RELEASE.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Core scheduling is a Linux kernel feature that allows only trusted tasks to run concurrently on CPUs sharing compute resources (for example, hyper-threads on a core). Containerd versions >= 1.6.4 leverage this to treat all of the processes associated with a given pod or container to be a single group of trusted tasks. To indicate this should be carried out, containerd sets the `SCHED_CORE` environment variable for each shim it spawns. When this is set, the Kata Containers shim implementation uses the `prctl` syscall to create a new core scheduling domain for the shim process itself as well as future VMM processes it will start. For more details on the core scheduling feature, see the ."
}
] |
{
"category": "Runtime",
"file_name": "core-scheduling.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: Use JuiceFS on KubeSphere sidebar_position: 3 slug: /juicefsonkubesphere is an application-centric multi-tenant container platform built on Kubernetes. It provides full-stack IT automated operation and maintenance capabilities and simplifies the DevOps workflow of the enterprise. KubeSphere provides a friendly wizard-style operation interface for operation and maintenance, even users who are not experienced in Kubernetes can start management and use relatively easily. It provides a Helm-based application market that can easily install various Kubernetes applications under a graphical interface. This article will introduce how to deploy JuiceFS CSI Driver in KubeSphere with one click to provide data persistence for various applications on the cluster. Install KubeSphere There are two ways to install KubeSphere. One is installing in Linux, you can refer to the document: , One is installing in Kubernetes, you can refer to the document: Enable app store in KubeSphere You can refer to the documentation for enabling the app store in KubeSphere: If the version of KubeSphere is v3.2.0 and above, you can install CSI Driver directly in the app store, skip the \"Configure Application Template/Application Repository\" step, and go directly to the \"Install\" step; if the KubeSphere version is lower than v3.2.0, follow the steps below to configure application templates/application repository. To install JuiceFS CSI Driver, you first need to create an application template. There are two methods. Click in the workspace to enter the application management, select \"App Repositories\", click the create button to add JuiceFS CSI Repository, fill in: Repository name: `juicefs-csi-driver` Index URL: `https://juicedata.github.io/charts/` Download the chart compressed package from the JuiceFS CSI Driver warehouse: <https://github.com/juicedata/juicefs-csi-driver/releases>. In the \"Workspace\", click to enter the \"App Management\", select \"App Templates\", click \"create\", upload the chart compression package: Select \"Project\" where you want to deploy in the \"Workspace\" (the project in KubeSphere is the namespace in K8s), select \"Apps\", click the \"create\" button, select \"From App Store\", and then Select `juicefs`: If KubeSphere version is lower than v3.2.0, select button \"From App Template\" according to the application template configured in the previous step: It's the same after entering the configuration modification page, modify the following two places: namespace: Change to the corresponding project name storageClass.backend: The `backend` part is used to define the backend database and object storage of the file system. Refer to for related content. You can also quickly create databases (such as Redis) and object storage (such as MinIO) by KubeSphere's app store. For example, build on the KubeSphere platform Redis: Select \"Apps\" in the current project, click the \"create\" button, select \"From App Store\", select \"Redis\", and then quickly"
},
{
"data": "The access URL of Redis can be the service name of the deployed application, as follows: Deploying MinIO on the KubeSphere platform is a similar process, but you can modify the accessKey and secretKey of MinIO before deploying MinIO, and you need to remember the configured values. As shown below: Attention: If there are permissions error when deploying MinIO, you can set the `securityContext.enables` in the configuration to false. MinIO's access URL can be the service name of the deployed application, as follows: After both Redis and MinIO are set up, you can fill in the `backend` value of JuiceFS CSI Driver. `metaurl` is the database address of Redis just created, the access address of Redis can be the service name corresponding to the Redis application, such as `redis://redis-rzxoz6:6379/1` `storage` is type of storage for the object, such as `minio` `bucket` is the available bucket of MinIO just created (JuiceFS will automatically create it, no need to create it manually), the access address of MinIO can be the service name corresponding to the MinIO application, such as `http://minio-qkp9my:9000/minio/test` `accessKey` and `secretKey` are the accessKey and secretKey of MinIO just created After the configuration is modified, click \"Install\". The JuiceFS CSI Driver installed above has created a `StorageClass`, for example, the `StorageClass` created above is `juicefs-sc` , Can be used directly. Then you need to create a PVC. In \"Project\", select \"Storage Management\", then select \"Storage Volume\", click the \" Create\" button to create a PVC, and select `juicefs-sc` for the \"StorageClass\", as follows: After the PVC is created, in the \"Apps\" of \"Project\", select \"Workloads\", click \"Create\" button to deploy the workload, and fill in your favorite name on the \"Basic Information\" page; the \"Container Image\" page can fill in the mirror image `centos`; Start command `sh,-c,while true; do echo $(date -u) >> /data/out.txt; sleep 5; done`; \"Mount Volume\" select \"Existing Volume\", and then select PVC created in one step, fill in the path in the container with `/data` as follows: After the deployment completed, you can see the running pod: If you did not create a `StorageClass` when installing JuiceFS CSI Driver, or you need to create a new one, you can follow the steps below: After preparing the metadata service and object storage service, create a new `Secret`. On the \"Platform Management\" page, select \"Configuration\", select \"Secret\", and click the \"Create\" button to create a new one: Fill in the metadata service and object storage information in \"Data Settings\", as follows: After creating `Secret`, create `StorageClass`, select \"Storage\" on the \"Platform Management\" page, select \"Storage Classes\", click the \"Create\" button to create a new one, and select \"Custom\" for \"Storage Class\": The setting page information is as follows, where \"Storage System\" fills in `csi.juicefs.com`, and 4 more parameters are set: `csi.storage.k8s.io/provisioner-secret-name`: secret name `csi.storage.k8s.io/provisioner-secret-namespace`: project of secret `csi.storage.k8s.io/node-publish-secret-name`: secret name `csi.storage.k8s.io/node-publish-secret-namespace`: project of secret After clicking the \"Create\" button, the `StorageClass` is created."
}
] |
{
"category": "Runtime",
"file_name": "juicefs_on_kubesphere.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Longhorn supports multiple disks per node, but there is currently no way to ensure that two replicas for the same volume that schedule to the same node end up on different disks. In fact, the replica scheduler currently doesn't make any attempt achieve this goal, even when it is possible to do so. With the addition of a Disk Anti-Affinity feature, the Longhorn replica scheduler will attempt to schedule two replicas for the same volume to different disks when possible. Optionally, the scheduler will refuse to schedule a replica to a disk that has another replica for the same volume. Although the comparison is not perfect, this enhancement can be thought of as enabling RAID 1 for Longhorn (mirroring across multiple disks on the same node). See the for potential benefits. https://github.com/longhorn/longhorn/issues/3823 https://github.com/longhorn/longhorn/issues/5149 Disabled by default. When disabled, prevents the scheduling of a replica to a node with an existing healthy replica of the same volume. Can also be set at the volume level to override the global default. Enabled by default. When disabled, prevents the scheduling of a replica to a zone with an existing healthy replica of the same volume. Can also be set at the volume level to override the global default. Large, multi-node clusters will likely not benefit from this enhancement. Single-node clusters and small, multi-node clusters (on which the number of replicas per volume exceeds the number of available nodes) will experience: Increased data durability. If a single disk fails, a healthy replica will still exist on an disk that has not failed. Increased data availability. If a single disk on a node becomes unavailable, but the node itself remains healthy, at least one replica remains healthy. On a single-node cluster, this can directly prevent a volume crash. On a small, multi-node cluster, this can prevent a future volume crash due to the loss of a different node. In all situations, the Longhorn replica scheduler will make a best effort to ensure two replicas for the same volume do not schedule to the same disk. Optionally, the scheduler will refuse to schedule a replica to a disk that has another replica of the same volume. My cluster consists of a single node with multiple attached SSDs. When I create any new volume, I want replicas to distribute across these disks so that I can recover from n - 1 disk failures. If there are not as many available disks as desired replicas, I want Longhorn to do the best it can. My cluster consists of a single node with multiple attached SSDs. When I create any new volume, I want replicas to distribute across these disks so that I can recover from n - 1 disk failure. If there are not as many available disks as desired replicas, I want scheduling to fail obviously. It is important that I know my volumes aren't being protected so I can take action. My cluster consists of a single node with multiple attached"
},
{
"data": "When I create a specific, high-priority volume, I want replicas to distribute across these disks so that I can recover from n - 1 disk failure. If there are not as many available disks as desired replicas, I want scheduling to fail obviously. It is important that I know high-priority volume isn't being protected so I can take action. Introduce a new Replica Disk Level Soft Anti-Affinity setting with the following definition. By default, set it to `true`. While it is generally desirable to schedule replicas to different disks, it would break with existing behavior to refuse to schedule replicas when different disks are not available. ```golang SettingDefinitionReplicaDiskSoftAntiAffinity = SettingDefinition{ DisplayName: \"Replica Disk Level Soft Anti-Affinity\", Description: \"Allow scheduling on disks with existing healthy replicas of the same volume\", Category: SettingCategoryScheduling, Type: SettingTypeBool, Required: true, ReadOnly: false, Default: \"true\", } ``` Introduce a new `spec.replicaDiskSoftAntiAffinity` volume field. By default, set it to `ignored`. Similar to the existing `spec.replicaSoftAntiAffinity` and `spec.replicaSoftZoneAntiAffinityFields`, override the global setting if this field is set to `enabled` or `disabled`. ```yaml replicaDiskSoftAntiAffinity: description: Replica disk soft anti affinity of the volume. Set enabled to allow replicas to be scheduled in the same disk. enum: ignored enabled disabled type: string ``` The current replica scheduler does the following: Determines which nodes a replica can be scheduled to based on node condition and the `ReplicaSoftAntiAffinity` and `ReplicaZoneSoftAntiAffinity` settings. Creates a list of all schedulable disks on these nodes. Chooses the disk with the most available space for scheduling. Add a step so that the replica scheduler: Determines which nodes a replica can be scheduled to based on node condition and the `ReplicaSoftAntiAffinity` and `ReplicaZoneSoftAntiAffinity` settings. Creates a list of all schedulable disks on these nodes. Filters the list to include only disks with the least number of existing matching replicas and optionally only disks with no existing matching replicas. Chooses the disk from the filtered list with the most available space for scheduling. Minimally implement two new test cases: In a cluster that includes nodes with multiple available disks, create a volume with `spec.replicaSoftAntiAffinity = true`, `spec.replicaDiskSoftAntiAffinity = true`, and `numberOfReplicas` equal to the total number of disks in the cluster. Confirm that each replica schedules to a different disk. It may be necessary to tweak additional factors. For example, ensure that one disk has enough free space that the old scheduling behavior would assign two replicas to it instead of distributing the replicas evenly among the disks. In a cluster that includes nodes with multiple available disks, create a volume with `spec.replicaSoftAntiAffinity = true`, `spec.replicaDiskSoftAntiAffinity = false`, and `numberOfReplicas` equal to one more than the total number of disks in the cluster. Confirm that a replica fails to schedule. Previously, multiple replicas would have scheduled to the same disk and no error would have occurred. The Replica Disk Level Soft Anti-Affinity setting defaults to `true` to maintain backwards compatibility. It if is set to `false``, new replicas that require scheduling will follow the new behavior. 
The `spec.replicaDiskSoftAntiAffinity` volume field defaults to `ignored` to maintain backwards compatibility. If it is set to `enabled` on a volume, new replicas for that volume that require scheduling will follow the new behavior."
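As an illustration of the per-volume override (assuming the field is exposed on the Longhorn volume custom resource as described above; the volume name is a placeholder):
```bash
# Opt a single volume out of disk-level soft anti-affinity, overriding the global setting
kubectl -n longhorn-system patch volumes.longhorn.io <volume-name> \\
  --type merge -p '{\"spec\":{\"replicaDiskSoftAntiAffinity\":\"disabled\"}}'
```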
}
] |
{
"category": "Runtime",
"file_name": "20230718-disk-anti-affinity.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List all runtime config entries List all runtime config entries ``` cilium-dbg bpf config list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage runtime config"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_config_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The following annotations can be added to any Kubernetes Node object to configure the Kilo network. |Name|type|examples| |-|-|-| ||host:port|`55.55.55.55:51820`, `example.com:1337`| ||CIDR|`55.55.55.55/32`, `\"-\"`,`\"\"`| ||string|`\"\"`, `true`| ||string|`gcp-east`, `lab`| ||uint|`10`| ||CIDR|`66.66.66.66/32`| In order to create links between locations, Kilo requires at least one node in each location to have an endpoint, ie a `host:port` combination, that is routable from the other locations. If the locations are in different cloud providers or in different private networks, then the `host` portion of the endpoint should be a publicly accessible IP address, or a DNS name that resolves to a public IP, so that the other locations can route packets to it. The Kilo agent running on each node will use heuristics to automatically detect an external IP address for the node and correctly configure its endpoint; however, in some circumstances it may be necessary to explicitly configure the endpoint to use, for example: no automatic public IP on ethernet device: on some cloud providers it is common for nodes to be allocated a public IP address but for the Ethernet devices to only be automatically configured with the private network address; in this case the allocated public IP address should be specified; multiple public IP addresses: if a node has multiple public IPs but one is preferred, then the preferred IP address should be specified; IPv6: if a node has both public IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified; dynamic IP address: if a node has a dynamically allocated public IP address, for example an IP leased from a network provider, then a dynamic DNS name can be given can be given and Kilo will periodically lookup the IP to keep the endpoint up-to-date; override port: if a node should listen on a specific port that is different from the mesh's default WireGuard port, then this annotation can be used to override the port; this can be useful, for example, to ensure that two nodes operating behind the same port-forwarded NAT gateway can each be allocated a different port. Kilo routes packets destined for nodes inside the same logical location using the node's internal IP address. The Kilo agent running on each node will use heuristics to automatically detect a private IP address for the node; however, in some circumstances it may be necessary to explicitly configure the IP address, for example: multiple private IP addresses: if a node has multiple private IPs but one is preferred, then the preferred IP address should be specified; IPv6: if a node has both private IPv4 and IPv6 addresses and the Kilo network should operate over IPv6, then the IPv6 address should be specified. disable private IP with \"-\" or \"\": a node has a private and public address, but the private address ought to be"
},
{
"data": "By default, Kilo creates a network mesh at the data-center granularity. This means that one leader node is selected from each location to be an edge server and act as the gateway to other locations; the network topology will be a full mesh between leaders. Kilo automatically selects the leader for each location in a stable and deterministic manner to avoid churn in the network configuration, while giving preference to nodes that are known to have public IP addresses. In some situations it may be desirable to manually select the leader for a location, for example: firewall: Kilo requires an open UDP port, which defaults to 51820, to communicate between locations; if only one node is configured to have that port open, then that node should be given the leader annotation; bandwidth: if certain nodes in the cluster have a higher bandwidth or lower latency Internet connection, then those nodes should be given the leader annotation. Note: multiple nodes within a single location can be given the leader annotation; in this case, Kilo will select one leader from the set of annotated nodes. Kilo allows nodes in different logical or physical locations to route packets to one-another. In order to know what connections to create, Kilo needs to know which nodes are in each location. Kilo will try to infer each node's location from the node label. If the label is not present for a node, for example if running a bare-metal cluster or on an unsupported cloud provider, then the location annotation should be specified. Note: all nodes without a defined location will be considered to be in the default location `\"\"`. In certain deployments, cluster nodes may be located behind NAT or a firewall, e.g. edge nodes located behind a commodity router. In these scenarios, the nodes behind NAT can send packets to the nodes outside of the NATed network, however the outside nodes can only send packets into the NATed network as long as the NAT mapping remains valid. In order for a node behind NAT to receive packets from nodes outside of the NATed network, it must maintain the NAT mapping by regularly sending packets to those nodes, ie by sending keepalives. The frequency of emission of these keepalive packets can be controlled by setting the persistent-keepalive annotation on the node behind NAT. The annotated node will use the specified value will as the persistent-keepalive interval for all of its peers. For more background, . It is possible to add allowed-location-ips to a location by annotating any node within that location. Adding allowed-location-ips to a location makes these IPs routable from other locations as well. In an example deployment of Kilo with two locations A and B, a printer in location A can be accessible from nodes and pods in location B. Additionally, Kilo Peers can use the printer in location A."
}
] |
{
"category": "Runtime",
"file_name": "annotations.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List load-balancing configuration ``` cilium-dbg bpf lb list [flags] ``` ``` --backends List all service backend entries --frontends List all service frontend entries -h, --help help for list -o, --output string json| yaml| jsonpath='{}' --revnat List reverse NAT entries --source-ranges List all source range entries ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Load-balancing configuration"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_lb_list.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Design Title: Authors: Date: This section explains the motivation of the work, providing the status quo and the problems associated with it. This section also proposes the high level solution to the problem. This section explains the detailed goals and minimum requirements for the design. It should also include any non-goals of the design which would be expected from a reader. This also includes some high level functionality or performance gains. This section explains the use cases which will be enabled by the implementation of this design. This section should be from the perspective of a user and how they would be able to interact with the feature described. This section explains the high level changes proposed to achieve the design. Changes should be grouped by project components or in another logical way which makes the design easy to follow. If applicable, the design should be broken down into separate phases which each delivers incremental value. The first phase should achieve the minimum requirements outlined in the goals. If applicable, alternative approaches should be noted and evaluated for different sections of the design. This is a section detailing any user facing or internal APIs added to components, including configuration changes. Any changes to user facing components are mentioned here (ie. WebUI, command line). If no changes are made, it should be stated in this section. Changes should be formatted in a reader friendly manner. This section explains any compatibility assumptions or limitations. This section also analyzes any dependency changes (ie. pom file changes). In particular, the addition of new external libraries should be justified. This section explains any security considerations which must be taken into account. If there are no security implications, it should be stated explicitly here. This is an OPTIONAL and possibly REPEATED section which dives into a specific implementation detail in the design. Diagrams, code snippets, and other figures are encouraged. This section explains the various failure cases which the design must tolerate as well as failure cases the design neglects to address. Each failure should be noted with the expected behavior as well as ideal behavior, and if different, what effort is required to bridge the gap. This section should also outline any potential risks due to the implementation of this design such as performance loss, undesired behavior, and system instability. This section details the tests which will be added to ensure the quality and maintainability of the implementation. Unit tests should be added under the corresponding Alluxio module. Integration tests should be added to alluxio/tests module. System tests performed manually can be mentioned in the PR. In addition, any new functionality to support testing should be noted. This section covers the schedule of the design, implementation and testing phases. This will help us estimate the complete time of the feature and align with releases."
}
] |
{
"category": "Runtime",
"file_name": "Design-Document-Template.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Kata Containers supports multiple hypervisors. This document provides a very high level overview of the available hypervisors, giving suggestions as to which hypervisors you may wish to investigate further. Note: This document is not prescriptive or authoritative: It is up to you to decide which hypervisors may be most appropriate for your use-case. Refer to the official documentation for each hypervisor for further details. | Hypervisor | Written in | Architectures | Type | |-|-|-|-| |[ACRN] | C | `x86_64` | Type 1 (bare metal) | |[Cloud Hypervisor] | rust | `aarch64`, `x86_64` | Type 2 ([KVM]) | |[Firecracker] | rust | `aarch64`, `x86_64` | Type 2 ([KVM]) | |[QEMU] | C | all | Type 2 ([KVM]) |"
},
{
"data": "| |[`Dragonball`] | rust | `aarch64`, `x86_64` | Type 2 ([KVM]) | |[StratoVirt] | rust | `aarch64`, `x86_64` | Type 2 ([KVM]) | ```bash $ kata-runtime kata-env | awk -v RS= '/\\[Hypervisor\\]/' | grep Path ``` The table below provides a brief summary of some of the differences between the hypervisors: | Hypervisor | Summary | Features | Limitations | Container Creation speed | Memory density | Use cases | Comment | |-|-|-|-|-|-|-|-| |[ACRN] | Safety critical and real-time workloads | | | excellent | excellent | Embedded and IOT systems | For advanced users | |[Cloud Hypervisor] | Low latency, small memory footprint, small attack surface | Minimal | | excellent | excellent | High performance modern cloud workloads | | |[Firecracker] | Very slimline | Extremely minimal | Doesn't support all device types | excellent | excellent | Serverless / FaaS | | |[QEMU] | Lots of features | Lots | | good | good | Good option for most users | | |[`Dragonball`] | Built-in VMM, low CPU and memory overhead| Minimal | | excellent | excellent | Optimized for most container workloads | `out-of-the-box` Kata Containers experience | |[StratoVirt] | Unified architecture supporting three scenarios: VM, container, and serverless | Extremely minimal(`MicroVM`) to Lots(`StandardVM`) | | excellent | excellent | Common container workloads | `StandardVM` type of StratoVirt for Kata is under development | For further details, see the document and the official documentation for each hypervisor. Since each hypervisor offers different features and options, Kata Containers provides a separate for each. The configuration files contain comments explaining which options are available, their default values and how each setting can be used. | Hypervisor | Golang runtime config file | golang runtime short name | golang runtime default | rust runtime config file | rust runtime short name | rust runtime default | |-|-|-|-|-|-|-| | | `acrn` | | | | | | | `clh` | | | `cloud-hypervisor` | | | | `fc` | | | | | | | `qemu` | yes | | `qemu` | | | | `dragonball` | yes | | | `stratovirt` | | | | | Notes: The short names specified are used by the tool. As shown by the default columns, each runtime type has its own default hypervisor. The is the current default runtime. The , also known as `runtime-rs`, is the newer runtime written in the rust language. See the for further details. The configuration file links in the table link to the \"source\" versions: these are not usable configuration files as they contain variables that need to be expanded: The links are provided for reference only. The final (installed) versions, where all variables have been expanded, are built from these source configuration files. The pristine configuration files are usually installed in the `/opt/kata/share/defaults/kata-containers/` or `/usr/share/defaults/kata-containers/` directories. Some hypervisors may have the same name for both golang and rust runtimes, but the file contents may differ. If there is no configuration file listed for the golang or rust runtimes, this either means the hypervisor cannot be run with a particular runtime, or that a driver has not yet been made available for that runtime. To switch the configured hypervisor, you only need to run a single command. See for further details."
}
] |
{
"category": "Runtime",
"file_name": "hypervisors.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
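As a small, hedged illustration of the configuration-file discussion above: the snippet below checks which hypervisor binary the runtime currently points at and lists the installed per-hypervisor configuration files. The two directories are the usual install locations mentioned above; your packaging method may use a different prefix.

```bash
# Show the hypervisor path the runtime is configured to use
# (same command as shown in the document above).
kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/' | grep Path

# List the installed per-hypervisor configuration files.
ls /opt/kata/share/defaults/kata-containers/ 2>/dev/null \
  || ls /usr/share/defaults/kata-containers/
```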
[
{
"data": "Rook plans to release a new minor version three times a year, or about every four months. The most recent two minor Rook releases are actively maintained. Patch releases for the latest minor release are typically bi-weekly. Urgent patches may be released sooner. Patch releases for the previous minor release are commonly monthly, though this will vary depending on the urgency of fixes. The Rook community defines maintenance to mean that relevant bug fixes merged to the main development branch will be eligible to be back-ported to the release branch of any currently maintained version. Patches will be released as needed. It is also possible that a fix may be merged directly to the release branch if it is no longer applicable on the main development branch. While Rook maintainers make significant efforts to release urgent issues in a timely manner, maintenance does not indicate any SLA on response time. The minimum version supported by a Rook release is specified in the . Rook expects to support the most recent six versions of Kubernetes. While these K8s versions may not all be supported by the K8s release cycle, we understand that clusters may take time to update."
}
] |
{
"category": "Runtime",
"file_name": "release-cycle.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "| Author | | | | | | Date | 2021-02-19 | | Email | | Native-adaptor mainly includes two parts: config and container network. The config module mainly implements the user's management functions for the network, including the creation, query, and deletion of the network. The container network module mainly performs network creation and deletion operations for a container. ````c struct subnet_scope { char *begin; char *end; }; / Reserved IPv4 address ranges for private networks / const struct subnetscope gprivate_networks[] = { / Class C network 192.168.0.0/16 / { \"192.168.0.0/24\", \"192.168.255.0/24\" }, / Class B network 172.16.0.0/12 / { \"172.16.0.0/24\", \"172.31.255.0/24\" }, / Class A network 10.0.0.0/8 / { \"10.0.0.0/24\","
},
{
"data": "}, }; typedef struct nativenewtorkt { // network conflist struct cninetworklist_conf *conflist; // containers linked to network struct linkedlist containerslist; pthreadrwlockt rwlock; } native_network; typedef struct nativestoret { // string -> ptr(native_newtork) mapt *nameto_network; sizet networklen; char *conf_dir; char bin_paths; sizet binpaths_len; // do not need write lock in nativeinit and nativedestory pthreadrwlockt rwlock; } native_store; struct plugin_op { const char *plugin; cninetconf (op)(const networkcreaterequest *request); }; struct netdriverops { cninetconflist * (*conf)(const networkcreate_request *request); int (check)(const network_create_request request); int (detect)(const char *cnibindir, int bindirlen); int (remove)(cni_net_conf_list list); }; struct net_driver { const char *driver; const struct netdriverops *ops; }; ```` gprivatenetworks records the address of the recognized private network segment, which is used to assign the subnet to create the network nativestoret records the currently saved network information plugin_op records plugin (bridge/portmap/firewall) and its corresponding operation netdriverops and net_driver record the driver (bridge) and its corresponding operation ````c /* Description: Initialize, read the locally stored network conflist, set the directory of the configuration file and bin file; conf_dir: cni configuration file storage directory; bin_paths: cni plugin storage directory list; binpathslen: directory listing length; Return value: return 0 on success, non-zero on failure */ int nativeinit(const char *confdir, const char binpaths, const sizet binpathslen); /* Description: Check if there is an available network; Return value: Returns true if it exists, returns non-false if it does not exist */ bool native_ready(); /* Description: Destroy the network information stored in memory; return value: none */ void native_destory(); /* Description: Add one or more networks to the container; conf: prepared network, container and other information; result: the returned attach result; Return value: return 0 on success, non-zero on failure */ int nativeattachnetworks(const networkapiconf conf, network_api_result_list result); /* Description: remove one or more networks from the container; conf: prepared network, container and other information; result: the returned detach result; Return value: return 0 on success, non-zero on failure */ int nativedetachnetworks(const networkapiconf conf, network_api_result_list result); /* Description: Check whether a network exists; name: the name of the network; Return value: Returns true if it exists, returns non-false if it does not exist */ bool nativenetworkexist(const char *name); /* Description: Create a network; request: request to create a network; response: the return information for creating a network request; Return value: return 0 on success, non-zero on failure */ int nativeconfigcreate(const networkcreaterequest request, network_create_response *response); /* Description: query a network; name: query network name; network_json: the queried network json; Return value: return 0 on success, non-zero on failure */ int nativeconfiginspect(const char name, char *network_json); /* Description: Query all networks; filters: filter conditions; networks: the network information queried; networks_len: the number of queried networks; Return value: return 0 on success, non-zero on failure */ int nativeconfiglist(const struct filtersargs *filters, networknetworkinfo ***networks, sizet *networks_len); /* 
Description: delete network; name: delete the name of the network; res_name: Returns the deleted network name; Return value: return 0 on success, non-zero on failure */ int nativeconfigremove(const char name, char *res_name); /* Description: Add the container to the container list of a network; network_name: the name of the network; cont_id: the id of the container; Return value: return 0 on success, non-zero on failure */ int nativenetworkaddcontainerlist(const char network_name, const char cont_id); ```` Determine whether the network mode of the container is bridge and the container is not a system container. If it does not meet, exit directly, no need to prepare network for the container. Determine whether the container network has been started, and exit directly if it has been started. Check whether the container network is legal, if it is illegal, exit and report an error. Prepare the network namespace. Prepare attach network, port port mapping data. First attach the loopback device to the container. Attach the specified network plane to the container in turn, and record the result. If it fails, detach the network, delete the network namespace. Update the container's network information, port mapping information, and place it on the disk. Update the hosts and resolve.conf files in the container. Determine whether the network mode of the container is bridge and the container is not a system container. If it does not meet, exit directly, no need to remove network for the container. If the container is in the restart phase, skip the remove network phase. Prepare detached network and port mapping data. First detach the loopback device for the container. The container detaches the network plane. Update the hosts and resolve.conf files in the container. Update the container's network information, port mapping information, and place it on the disk. Delete the container network namespace. Client: Parse the parameters passed in by the user. Verify the incoming parameters, including: Only one network is allowed to be created at a time, that is, at most one name can be specified. If name is specified, check whether the length of name exceeds MAXNETWORKNAME_LEN(128). Send the request to the server Server: Check the received parameters, including If name is specified, check the validity of the name, including whether the length of the name exceeds MAXNETWORKNAME_LEN, and whether the name matches the regular expression ^* $. If the subnet or gateway is specified, check whether the user only specifies the gateway without specifying the subnet, check whether the format of the subnet and gateway is correct, and check whether the subnet and gateway match. If the user specifies a driver, check if the driver is a bridge If the user specifies a name, check whether the name conflicts with the name of the configured native network; if it is not specified, the generated bridge name will be used as the name of the network. The bridge name ensures that it does not conflict with the existing network name, bridge name and network device name on the"
},
{
"data": "If the user specifies a subnet, check whether the subnet network segment is in conflict with the configured network subnet and the host's IP; if it is not specified, find an idle private network segment as the subnet network segment If the user specifies a gateway, set the gateway IP as the IP specified by the user; if not specified, use the first IP in the subnet network segment as the gateway IP Check whether the CNI network plug-in exists on the host Generate network configuration Write the network configuration file Command Line ````sh ~ isula network create --help Usage: isula network create [OPTIONS] [NETWORK] Create a network -d, --driver Driver to manager the network (default \"bridge\") --gateway IPv4 or IPv6 gateway for the subnet --internal Restrict external access from this network --subnet Subnet in CIDR format ```` grpc interface ````c service NetworkService { rpc Create(NetworkCreateRequest) returns (NetworkCreateResponse); } message NetworkCreateRequest { string name = 1; string driver = 2; string gateway = 3; bool internal = 4; string subnet = 5; } message NetworkCreateResponse { string name = 1; uint32 cc = 2; string errmsg = 3; } ```` rest interface ````c ```` ````sh ~ cat /etc/cni/net.d/isulacni-isula-br0.conflist { \"cniVersion\": \"0.4.0\", \"name\": \"isula-br0\", \"plugins\": [ { \"type\": \"bridge\", \"bridge\": \"isula-br0\", \"isGateway\": true, \"ipMasq\": true, \"hairpinMode\": true, \"ipam\": { \"type\": \"host-local\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ], \"ranges\": [ [ { \"subnet\": \"192.168.0.0/24\", \"gateway\": \"192.168.0.1\" } ] ] } }, { \"type\": \"portmap\", \"capabilities\": { \"portMappings\": true } }, { \"type\": \"firewall\" } ] } ```` Client: Parse the parameters passed in by the user. Verify the incoming parameters, including Specify at least one name that needs to be queried. If format is specified, check whether the format is legal. Send the request to the server. Server: Verify the received network name. Query the corresponding network in memory. If present, network information json will be returned. If not, return not found. Command Line ````sh isula network inspect [OPTIONS] NETWORK [NETWORK...] -f, --format Format the output using the given go template ```` grpc interface ````c service NetworkService { rpc Inspect(NetworkInspectRequest) returns (NetworkInspectResponse); } message NetworkInspectRequest { string name = 1; } message NetworkInspectResponse { string NetworkJSON = 1; uint32 cc = 2; string errmsg = 3; } ```` rest interface ````c ```` Client: Parse the parameters passed in by the user Send the request to the server Server: Read the request information sent by the client Check whether the condition specified by the filter is legal Filter out the appropriate network according to the filter condition specified by the user and return it to the client Command Line ````sh isula network ls [OPTIONS] -q, --quiet Only display network Names -f, --filter Filter output based on conditions provided ```` grpc interface ````c service NetworkService { rpc List(NetworkListRequest) returns (NetworkListResponse); } message Network { string name = 1; string version = 2; repeated string plugins = 3; } message NetworkListRequest { map<string, string> filters = 1; } message NetworkListResponse { repeated Network networks = 1; uint32 cc = 2; string errmsg = 3; ```` rest interface ````c ```` Client: Parse the parameters passed in by the user. Send a request to the server. Server: Check whether the name is legal. 
Find the corresponding network. Determine whether any containers use the network. If there is, the network cannot be deleted. Remove the bridge device on the host. Delete the network configuration file. Delete network information in memory. Command Line ````sh isula network rm [OPTIONS] NETWORK [NETWORK...] ```` grpc interface ````c service NetworkService { rpc Remove(NetworkRemoveRequest) returns (NetworkRemoveResponse); } message NetworkRemoveRequest { string name ="
}
] |
{
"category": "Runtime",
"file_name": "native_network_adapter_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
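To tie the CLI, gRPC and workflow descriptions above together, here is a hedged shell walk-through of the network lifecycle using only the documented `isula network` flags. The network name, subnet and gateway are arbitrary examples; per the design, omitting `--subnet` makes iSulad pick a free private subnet automatically.

```bash
# Create a bridge-backed native network with an explicit subnet and gateway.
isula network create --driver bridge --subnet 192.168.100.0/24 --gateway 192.168.100.1 demo-net

# List configured network names and inspect the generated CNI conflist.
isula network ls -q
isula network inspect demo-net

# Remove the network; as described above, removal is rejected while any
# container is still attached to it.
isula network rm demo-net
```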
[
{
"data": "This branch of runc implements the for the `linux` platform. The following features are not implemented yet: Spec version | Feature | PR -||- v1.1.0 | `SECCOMPFILTERFLAGWAITKILLABLE_RECV` | The following architectures are supported: runc binary | seccomp -|- `amd64` | `SCMPARCHX86`, `SCMPARCHX8664`, `SCMPARCH_X32` `arm64` | `SCMPARCHARM`, `SCMPARCHAARCH64` `armel` | `SCMPARCHARM` `armhf` | `SCMPARCHARM` `ppc64le` | `SCMPARCHPPC64LE` `riscv64` | `SCMPARCHRISCV64` `s390x` | `SCMPARCHS390`, `SCMPARCHS390X` The runc binary might be compilable for i386, big-endian PPC64, and several MIPS variants too, but these architectures are not officially supported."
}
] |
{
"category": "Runtime",
"file_name": "spec-conformance.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
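One way to compare the architecture table above with a concrete binary is the `runc features` subcommand of recent runc releases, which prints a JSON feature document including the seccomp support the binary was built with. The `jq` filter below is an assumption about the field layout and may need adjusting per runc version.

```bash
# Print the feature document of the local runc binary (runc >= 1.1).
runc features

# Assumed field path; inspect the full output if this filter returns null.
runc features | jq '.linux.seccomp'
```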
[
{
"data": "StratoVirt is an open-source lightweight virtualization technology based on Linux Kernel-based Virtual Machine(KVM), which reduces memory resource consumption and improves VM startup speed while retaining the isolation capability and security capability of traditional virtualization. StratoVirt can be applied to microservices or serverless scenarios such as function computing, and reserves interface and design for importing more features, even standard virtualization. The following figure shows StratoVirt's core architecture, which consists of three layers from top to bottom. External API: StratoVirt uses the QMP protocol to communicate with external systems and is compatible with OCI. Meanwhile, StratoVirt can be managed by libvirt too. BootLoader: abandons the traditional BIOS+GRUB boot mode to achieve fast boot in lightweight scenarios, and provides UEFI boot support for standard VM. Emulated mainboard: microvm: To improve performance as well as reduce the attack surface, StratoVirt minimizes the simulation of user-mode devices. KVM simulation devices and paravirtualization devices, such as GIC, serial, RTC and virtio-mmio devices, are implemented; standard VM: realizes UEFI boot with constructed ACPI tables. Virtio-pci and VFIO devices can be attached to greatly improve the I/O performance; High isolation ability based on hardware; Fast cold boot: Benefiting from the minimalist design, microvm can be started within 50ms; Low memory overhead: StratoVirt works with a memory footprint of 4MB; IO enhancement: StratoVirt offers normal IO ability with minimalist IO device emulation; OCI compatibility: StratoVirt works with isula and kata container, and can be integrated into the Kubernetes ecosystem perfectly; Multi-platform support: Full support for Intel and Arm platforms; Expansibility: StratoVirt reserves interface and design for importing more features, even expanding to standard virtualization support; Security: less than 55 syscalls while running; StratoVirt VM is an independent process in Linux. The process has three types of threads: main thread, VCPU thread and I/O thread: The main thread is a cycle for asynchronously collecting and processing events from external modules, such as a VCPU thread; Each VCPU has a thread to process trap events of this VCPU; Iothreads can be configured for I/O devices to improve I/O performance; Only the Linux operating system is supported as the host; The recommended kernel version is 4.19; Only Linux is supported as the guest operating system, and the recommended kernel version is 4.19; StratoVirt is fully tested on openEuler; Supports a maximum of 254 CPUs;"
}
] |
{
"category": "Runtime",
"file_name": "design.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
}
|
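Because the architecture overview above names QMP as the external API, the following hedged sketch shows how a management script could talk to a running StratoVirt VM over a QMP UNIX socket. The socket path is an assumption (it is whatever the VM was started with), `nc -U` is assumed to be available, and the two requests are the standard QMP capability handshake plus a status query; consult the StratoVirt QMP documentation for the commands it actually supports.

```bash
# Assumed socket path: replace with the QMP socket the VM was launched with.
QMP_SOCK=/tmp/stratovirt-qmp.sock

# QMP requires the capabilities handshake before any other command is accepted.
printf '%s\n' \
  '{"execute": "qmp_capabilities"}' \
  '{"execute": "query-status"}' \
  | nc -U "$QMP_SOCK"
```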
[
{
"data": "layout: global title: List of Configuration Properties An Alluxio cluster can be configured by setting the values of Alluxio configuration properties within `${ALLUXIOHOME}/conf/alluxio-site.properties`. If this file does not exist, it can be copied from the template file under `${ALLUXIOHOME}/conf`: ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Make sure that this file is distributed to `${ALLUXIO_HOME}/conf` on every Alluxio master and worker before starting the cluster. Restarting Alluxio processes is the safest way to ensure any configuration updates are applied. All Alluxio configuration settings fall into one of the five categories: (shared by Master and Worker), , , , and (shared by Master, Worker, and User). properties prefixed with `alluxio.master` affect the Alluxio master processes properties prefixed with `alluxio.worker` affect the Alluxio worker processes properties prefixed with `alluxio.user` affect Alluxio client operations (e.g. compute applications) The common configuration contains constants shared by different components. <table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.generated.common-configuration %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> `{{ item.propertyName }}`</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.generated.en.common-configuration[item.propertyName] }}</td> </tr> {% endfor %} </table> The master configuration specifies information regarding the master node, such as the address and the port number. <table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.generated.master-configuration %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> `{{ item.propertyName }}`</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.generated.en.master-configuration[item.propertyName] }}</td> </tr> {% endfor %} </table> The worker configuration specifies information regarding the worker nodes, such as the address and the port number. <table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.generated.worker-configuration %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> `{{ item.propertyName }}`</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.generated.en.worker-configuration[item.propertyName] }}</td> </tr> {% endfor %} </table> The user configuration specifies values regarding file system access. <table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.generated.user-configuration %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> `{{ item.propertyName }}`</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.generated.en.user-configuration[item.propertyName] }}</td> </tr> {% endfor %} </table> The security configuration specifies information regarding the security features, such as authentication and file permission. Settings for authentication take effect for master, worker, and user. Settings for file permission only take effect for master. 
<table class=\"table table-striped\"> <tr><th>Property Name</th><th>Default</th><th>Description</th></tr> {% for item in site.data.generated.security-configuration %} <tr> <td markdown=\"span\"><a class=\"anchor\" name=\"{{ item.propertyName }}\"></a> `{{ item.propertyName }}`</td> <td>{{ item.defaultValue }}</td> <td>{{ site.data.generated.en.security-configuration[item.propertyName] }}</td> </tr> {% endfor %} </table>"
}
] |
{
"category": "Runtime",
"file_name": "Properties-List.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
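A short shell sketch of the workflow described above (copy the template, set properties, distribute, restart); the hostname and the two chosen properties are illustrative examples only, and any property from the tables above can be set the same way.

```bash
# Start from the shipped template if alluxio-site.properties does not exist yet.
cp conf/alluxio-site.properties.template conf/alluxio-site.properties

# Append example settings; values are placeholders to adapt to your cluster.
cat >> conf/alluxio-site.properties <<'EOF'
alluxio.master.hostname=alluxio-master.example.com
alluxio.user.file.writetype.default=CACHE_THROUGH
EOF

# Distribute the file to ${ALLUXIO_HOME}/conf on every master and worker,
# then restart the Alluxio processes so the new values take effect.
```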
[
{
"data": "title: \"CSI Snapshot Data Movement\" layout: docs CSI Snapshot Data Movement is built according to the and is specifically designed to move CSI snapshot data to a backup storage location. CSI Snapshot Data Movement takes CSI snapshots through the CSI plugin in nearly the same way as . However, it doesn't stop after a snapshot is taken. Instead, it tries to access the snapshot data through various data movers and back up the data to a backup storage connected to the data movers. Consequently, the volume data is backed up to a pre-defined backup storage in a consistent manner. After the backup completes, the CSI snapshot will be removed by Velero and the snapshot data space will be released on the storage side. CSI Snapshot Data Movement is useful in below scenarios: For on-premises users, the storage usually doesn't support durable snapshots, so it is impossible/less efficient/cost ineffective to keep volume snapshots by the storage, as required by the . This feature helps to move the snapshot data to a storage with lower cost and larger scale for long time preservation. For public cloud users, this feature helps users to fulfil the multiple cloud strategy. It allows users to back up volume snapshots from one cloud provider and preserve or restore the data to another cloud provider. Then users will be free to flow their business data across cloud providers based on Velero backup and restore. Besides, Velero which could also back up the volume data to a pre-defined backup storage. CSI Snapshot Data Movement works together with to satisfy different requirements for the above scenarios. And whenever available, CSI Snapshot Data Movement should be used in preference since the reads data from the live PV, in which way the data is not captured at the same point in time, so is less consistent. Moreover, CSI Snapshot Data Movement brings more possible ways of data access, i.e., accessing the data from the block level, either fully or incrementally. On the other hand, there are quite some cases that CSI snapshot is not available (i.e., you need a volume snapshot plugin for your storage platform, or you're using EFS, NFS, emptyDir, local, or any other volume type that doesn't have a native snapshot), then will be the only option. CSI Snapshot Data Movement supports both built-in data mover and customized data movers. For the details of how Velero works with customized data movers, check the . Velero provides a built-in data mover which uses Velero built-in uploaders (at present the available uploader is Kopia uploader) to read the snapshot data and write to the Unified Repository (by default implemented by Kopia repository). The way for Velero built-in data mover to access the snapshot data is based on the hostpath access by Velero node-agent, so the node-agent pods need to run as root user and even under privileged mode in some environments, as same as . The source cluster is Kubernetes version 1.20 or greater. The source cluster is running a CSI driver capable of support volume snapshots at the . CSI Snapshot Data Movement requires the Kubernetes . Velero Node Agent is a Kubernetes daemonset that hosts Velero data movement modules, i.e., data mover controller, uploader & repository. If you are using Velero built-in data mover, Node Agent must be installed. To install Node Agent, use the `--use-node-agent` flag. ``` velero install --use-node-agent ``` After installation, some PaaS/CaaS platforms based on Kubernetes also require modifications the node-agent DaemonSet"
},
{
"data": "The steps in this section are only needed if you are installing on RancherOS, Nutanix, OpenShift, VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS), or Microsoft Azure. RancherOS Update the host path for volumes in the node-agent DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/opt/rke/var/lib/kubelet/pods`. ```yaml hostPath: path: /var/lib/kubelet/pods ``` to ```yaml hostPath: path: /opt/rke/var/lib/kubelet/pods ``` Nutanix Update the host path for volumes in the node-agent DaemonSet in the Velero namespace from `/var/lib/kubelet/pods` to `/var/nutanix/var/lib/kubelet`. ```yaml hostPath: path: /var/lib/kubelet/pods ``` to ```yaml hostPath: path: /var/nutanix/var/lib/kubelet ``` OpenShift To mount the correct hostpath to pods volumes, run the node-agent pod in `privileged` mode. Add the `velero` ServiceAccount to the `privileged` SCC: ``` oc adm policy add-scc-to-user privileged -z velero -n velero ``` Install Velero with the '--privileged-node-agent' option to request a privileged mode: ``` velero install --use-node-agent --privileged-node-agent ``` If node-agent is not running in a privileged mode, it will not be able to access snapshot volumes within the mounted hostpath directory because of the default enforced SELinux mode configured in the host system level. You can to relax the security in your cluster so that node-agent pods are allowed to use the hostPath volume plugin without granting them access to the `privileged` SCC. By default a userland openshift namespace will not schedule pods on all nodes in the cluster. To schedule on all nodes the namespace needs an annotation: ``` oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" ``` This should be done before velero installation. Or the ds needs to be deleted and recreated: ``` oc get ds node-agent -o yaml -n <velero namespace> > ds.yaml oc annotate namespace <velero namespace> openshift.io/node-selector=\"\" oc create -n <velero namespace> -f ds.yaml ``` VMware Tanzu Kubernetes Grid Integrated Edition (formerly VMware Enterprise PKS) You need to enable the `Allow Privileged` option in your plan configuration so that Velero is able to mount the hostpath. The hostPath should be changed from `/var/lib/kubelet/pods` to `/var/vcap/data/kubelet/pods` ```yaml hostPath: path: /var/vcap/data/kubelet/pods ``` At present, Velero backup repository supports object storage as the backup storage. Velero gets the parameters from the to compose the URL to the backup storage. Velero's known object storage providers are included here , for which, Velero pre-defines the endpoints. If you want to use a different backup storage, make sure it is S3 compatible and you provide the correct bucket name and endpoint in BackupStorageLocation. Velero handles the creation of the backup repo prefix in the backup storage, so make sure it is specified in BackupStorageLocation correctly. Velero creates one backup repository per namespace. For example, if backing up 2 namespaces, namespace1 and namespace2, using kopia repository on AWS S3, the full backup repo path for namespace1 would be `https://s3-us-west-2.amazonaws.com/bucket/kopia/ns1` and for namespace2 would be `https://s3-us-west-2.amazonaws.com/bucket/kopia/ns2`. There may be additional installation steps depending on the cloud provider plugin you are using. You should refer to the for the must up to date information. 
Note: Currently, Velero creates a secret named `velero-repo-credentials` in the velero install namespace, containing a default backup repository password. You can update the secret with your own password encoded as base64 prior to the first backup (i.e., snapshot data movements) targeting the backup repository. The value of the key to update is ``` data: repository-password: <custom-password> ``` The backup repository is created during the first execution of a backup targeting it after installing Velero with node agent. If you update the secret password after the first backup which created the backup repository, then Velero will not be able to connect with the older"
},
{
"data": "On the source cluster, Velero needs to manipulate CSI snapshots through the CSI volume snapshot APIs, so you must enable the `EnableCSI` feature flag and install the Velero CSI plugin on the Velero server. Both of these can be added with the `velero install` command. ```bash velero install \\ --features=EnableCSI \\ --plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.6.0 \\ ... ``` For Velero built-in data movement, CSI facilities are not necessarily required in the target cluster. On the other hand, Velero built-in data movement creates a PVC with the same specification as in the source cluster and expects the volume to be provisioned similarly. For example, the same storage class should be working in the target cluster. By default, Velero won't restore storage class resources from the backup since they are cluster scope resources. However, if you specify the `--include-cluster-resources` restore flag, they will be restored. For a cross provider scenario, the storage class from the source cluster is probably not usable in the target cluster. In either of the above cases, the best practice is to create a working storage class in the target cluster with the same name as in the source cluster. In this way, even though `--include-cluster-resources` is specified, Velero restore will skip restoring the storage class since it finds an existing one. Otherwise, if the storage class name in the target cluster is different, you can change the PVC's storage class name during restore by the method. You can also configure to skip restoring the storage class resources from the backup since they are not usable. If you are using a customized data mover, follow the data mover's instructions for any further prerequisites. For Velero side configurations mentioned above, the installation and configuration of node-agent may not be required. Velero uses a new custom resource `DataUpload` to drive the data movement. The selected data mover will watch and reconcile the CRs. Velero allows users to decide whether the CSI snapshot data should be moved per backup. Velero also allows users to select the data mover to move the CSI snapshot data per backup. Both selections are simply made by parameters when running the backup. To take a backup with Velero's built-in data mover: ```bash velero backup create NAME --snapshot-move-data OPTIONS... ``` Or if you want to use a customized data mover: ```bash velero backup create NAME --snapshot-move-data --data-mover DATA-MOVER-NAME OPTIONS... ``` When the backup starts, you will see the `VolumeSnapshot` and `VolumeSnapshotContent` objects created, but after the backup finishes, the objects will disappear. After snapshots are created, you will see one or more `DataUpload` CRs created. You may also see some intermediate objects (i.e., pods, PVCs, PVs) created in the Velero namespace or the cluster scope; they are there to help data movers move data, and they will be removed after the backup completes. The phase of a `DataUpload` CR changes several times during the backup process and finally goes to one of the terminal statuses, `Completed`, `Failed` or `Cancelled`. 
You can see the phase changes as well as the data upload progress by watching the `DataUpload` CRs: ```bash kubectl -n velero get datauploads -l velero.io/backup-name=YOURBACKUPNAME -w ``` When the backup completes, you can view information about the backups: ```bash velero backup describe YOURBACKUPNAME ``` ```bash kubectl -n velero get datauploads -l velero.io/backup-name=YOURBACKUPNAME -o yaml ``` You don't need to set any additional information when creating a data mover restore. The configurations are automatically retrieved from the backup, i.e., whether data movement should be involved and which data mover conducts the data movement. To restore from your Velero backup: ```bash velero restore create --from-backup BACKUP_NAME"
},
{
"data": "``` When the restore starts, you will see one or more `DataDownload` CRs created. You may also see some intermediate objects (i.e., pods, PVCs, PVs) created in Velero namespace or the cluster scope, they are to help data movers to move data. And they will be removed after the restore completes. The phase of a `DataDownload` CR changes several times during the restore process and finally goes to one of the terminal status, `Completed`, `Failed` or `Cancelled`. You can see the phase changes as well as the data download progress by watching the DataDownload CRs: ```bash kubectl -n velero get datadownloads -l velero.io/restore-name=YOURRESTORENAME -w ``` When the restore completes, view information about your restores: ```bash velero restore describe YOURRESTORENAME ``` ```bash kubectl -n velero get datadownloads -l velero.io/restore-name=YOURRESTORENAME -o yaml ``` CSI and CSI snapshot support both file system volume mode and block volume mode. At present, Velero built-in data mover doesn't support block mode volume or volume snapshot. [Velero built-in data mover] At present, Velero uses a static, common encryption key for all backup repositories it creates. This means that anyone who has access to your backup storage can decrypt your backup data. Make sure that you limit access to the backup storage appropriately. [Velero built-in data mover] Even though the backup data could be incrementally preserved, for a single file data, Velero built-in data mover leverages on deduplication to find the difference to be saved. This means that large files (such as ones storing a database) will take a long time to scan for data deduplication, even if the actual difference is small. to make sure backups complete successfully for massive small files or large backup size cases, for more details refer to . The block mode is supported by the Kopia uploader, but it only supports non-Windows platforms, because the block mode code invokes some system calls that are not present in the Windows platform. Run the following checks: Are your Velero server and daemonset pods running? ```bash kubectl get pods -n velero ``` Does your backup repository exist, and is it ready? ```bash velero repo get velero repo get REPO_NAME -o yaml ``` Are there any errors in your Velero backup/restore? ```bash velero backup describe BACKUP_NAME velero backup logs BACKUP_NAME velero restore describe RESTORE_NAME velero restore logs RESTORE_NAME ``` What is the status of your `DataUpload` and `DataDownload`? ```bash kubectl -n velero get datauploads -l velero.io/backup-name=BACKUP_NAME -o yaml kubectl -n velero get datadownloads -l velero.io/restore-name=RESTORE_NAME -o yaml ``` Is there any useful information in the Velero server or daemonset pod logs? ```bash kubectl -n velero logs deploy/velero kubectl -n velero logs DAEMONPODNAME ``` NOTE: You can increase the verbosity of the pod logs by adding `--log-level=debug` as an argument to the container command in the deployment/daemonset pod template spec. If you are using a customized data mover, follow the data mover's instruction for additional troubleshooting methods. CSI snapshot data movement is a combination of CSI snapshot and data movement, which is jointly executed by Velero server, CSI plugin and the data mover. This section lists some general concept of how CSI snapshot data movement backup and restore work. For the detailed mechanisms and workflows, you can check the . 
Velero has three custom resource definitions and associated controllers: `DataUpload` - represents a data upload of a volume snapshot. The CSI plugin creates one `DataUpload` per CSI snapshot. Data movers need to handle these CRs to finish the data upload process. Velero built-in data mover runs a controller for this resource on each node (in node-agent"
},
{
"data": "Controllers from different nodes may handle one CR in different phases, but finally the data transfer is done by one single controller which will call uploaders from the backend. `DataDownload` - represents a data download of a volume snapshot. The CSI plugin creates one `DataDownload` per volume to be restored. Data movers need to handle these CRs to finish the data upload process. Velero built-in data mover runs a controller for this resource on each node (in node-agent daemonset). Controllers from different nodes may handle one CR in different phases, but finally the data transfer is done by one single controller which will call uploaders from the backend. `BackupRepository` - represents/manages the lifecycle of Velero's backup repositories. Velero creates a backup repository per namespace when the first CSI snapshot backup/restore for a namespace is requested. You can see information about your Velero's backup repositories by running `velero repo get`. This CR is used by Velero built-in data movers, customized data movers may or may not use it. For other resources or controllers involved by customized data movers, check the data mover's instructions. Velero backs up resources for CSI snapshot data movement backup in the same way as other backup types. When it encounters a PVC, particular logics will be conducted: When it finds a PVC object, Velero calls CSI plugin through a Backup Item Action. CSI plugin first takes a CSI snapshot to the PVC by creating the `VolumeSnapshot` and `VolumeSnapshotContent`. CSI plugin checks if a data movement is required, if so it creates a `DataUpload` CR and then returns to Velero backup. Velero now is able to back up other resources, including other PVC objects. Velero backup controller periodically queries the data movement status from CSI plugin, the period is configurable through the Velero server parameter `--item-operation-sync-frequency`, by default it is 10s. On the call, CSI plugin turns to check the phase of the `DataUpload` CRs. When all the `DataUpload` CRs come to a terminal state (i.e., `Completed`, `Failed` or `Cancelled`), Velero backup persists all the necessary information and finish the backup. CSI plugin expects a data mover to handle the `DataUpload` CR. If no data mover is configured for the backup, Velero built-in data mover will handle it. If the `DataUpload` CR does not reach to the terminal state with in the given time, the `DataUpload` CR will be cancelled. You can set the timeout value per backup through the `--item-operation-timeout` parameter, the default value is `4 hours`. Velero built-in data mover creates a volume from the CSI snapshot and transfer the data to the backup storage according to the backup storage location defined by users. After the volume is created from the CSI snapshot, Velero built-in data mover waits for Kubernetes to provision the volume, this may take some time varying from storage providers, but if the provision cannot be finished in a given time, Velero built-in data mover will cancel this `DataUpload` CR. The timeout is configurable through a node-agent's parameter `data-mover-prepare-timeout`, the default value is 30 minutes. When the data transfer completes or any error happens, Velero built-in data mover sets the `DataUpload` CR to the terminal state, either `Completed` or `Failed`. 
Velero built-in data mover also monitors the cancellation request to the `DataUpload` CR; once that happens, it cancels its ongoing activities, cleans up the intermediate resources and sets the `DataUpload` CR to `Cancelled`. Velero restores resources for CSI snapshot data movement restore in the same way as other restore"
},
{
"data": "When it encounters a PVC, particular logics will be conducted: When it finds a PVC object, Velero calls CSI plugin through a Restore Item Action. CSI plugin checks the backup information, if a data movement was involved, it creates a `DataDownload` CR and then returns to Velero restore. Velero is now able to restore other resources, including other PVC objects. Velero restore controller periodically queries the data movement status from CSI plugin, the period is configurable through the Velero server parameter `--item-operation-sync-frequency`, by default it is 10s. On the call, CSI plugin turns to check the phase of the `DataDownload` CRs. When all `DataDownload` CRs come to a terminal state (i.e., `Completed`, `Failed` or `Cancelled`), Velero restore will finish. CSI plugin expects the same data mover for the backup to handle the `DataDownload` CR. If no data mover was configured for the backup, Velero built-in data mover will handle it. If the `DataDownload` CR does not reach to the terminal state with in the given time, the `DataDownload` CR will be cancelled. You can set the timeout value per backup through the same `--item-operation-timeout` parameter. Velero built-in data mover creates a volume with the same specification of the source volume. Velero built-in data mover waits for Kubernetes to provision the volume, this may take some time varying from storage providers, but if the provision cannot be finished in a given time, Velero built-in data mover will cancel this `DataDownload` CR. The timeout is configurable through the same node-agent's parameter `data-mover-prepare-timeout`. After the volume is provisioned, Velero built-in data mover starts to transfer the data from the backup storage according to the backup storage location defined by users. When the data transfer completes or any error happens, Velero built-in data mover sets the `DataDownload` CR to the terminal state, either `Completed` or `Failed`. Velero built-in data mover also monitors the cancellation request to the `DataDownload` CR, once that happens, it cancels its ongoing activities, cleans up the intermediate resources and set the `DataDownload` CR to `Cancelled`. Velero calls the CSI plugin concurrently for the volume, so `DataUpload`/`DataDownload` CRs are created concurrently by the CSI plugin. For more details about the call between Velero and CSI plugin, check the . In which manner the `DataUpload`/`DataDownload` CRs are processed is totally decided by the data mover you select for the backup/restore. For Velero built-in data mover, it uses Kubernetes' scheduler to mount a snapshot volume/restore volume associated to a `DataUpload`/`DataDownload` CR into a specific node, and then the `DataUpload`/`DataDownload` controller (in node-agent daemonset) in that node will handle the `DataUpload`/`DataDownload`. At present, a `DataUpload`/`DataDownload` controller in one node handles one request at a time. That is to say, the snapshot volumes/restore volumes may spread in different nodes, then their associated `DataUpload`/`DataDownload` CRs will be processed in parallel; while for the snapshot volumes/restore volumes in the same node, their associated `DataUpload`/`DataDownload` CRs are processed sequentially. 
You can check in which node the `DataUpload`/`DataDownload` CRs are processed and their parallelism by watching the `DataUpload`/`DataDownload` CRs: ```bash kubectl -n velero get datauploads -l velero.io/backup-name=YOURBACKUPNAME -w ``` ```bash kubectl -n velero get datadownloads -l velero.io/restore-name=YOURRESTORENAME -w ``` At present, Velero backup and restore doesn't support end to end cancellation that is launched by users. However, Velero cancels the `DataUpload`/`DataDownload` in below scenarios automatically: When Velero server is restarted When node-agent is restarted When an ongoing backup/restore is deleted When a backup/restore does not finish before the item operation timeout (default value is `4 hours`) Customized data movers that support cancellation could cancel their ongoing tasks and clean up any intermediate resources. If you are using Velero built-in"
}
] |
{
"category": "Runtime",
"file_name": "csi-snapshot-data-movement.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
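Gathering the individual commands from the document above into one place, the sketch below shows a minimal end-to-end flow with the built-in data mover. The namespace, backup name and plugin placeholder are assumptions to replace for a real cluster, and the object storage plugin/credentials flags are omitted for brevity.

```bash
# Install Velero with the node agent, CSI feature flag and CSI plugin.
velero install \
  --use-node-agent \
  --features=EnableCSI \
  --plugins=<object storage plugin>,velero/velero-plugin-for-csi:v0.6.0

# Back up a namespace and move its CSI snapshot data with the built-in data mover.
velero backup create nginx-backup --include-namespaces nginx --snapshot-move-data

# Watch the data movement progress, then check the backup result.
kubectl -n velero get datauploads -l velero.io/backup-name=nginx-backup -w
velero backup describe nginx-backup

# Restore on the target cluster and watch the corresponding DataDownload CRs.
velero restore create --from-backup nginx-backup
kubectl -n velero get datadownloads -w
```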
[
{
"data": "layout: global title: Running Apache Spark on Alluxio This guide describes how to configure to access Alluxio. Applications using Spark 1.1 or later can access Alluxio through its HDFS-compatible interface. Using Alluxio as the data access layer, Spark applications can transparently access data in many different types of persistent storage services (e.g., AWS S3 buckets, Azure Object Store buckets, remote HDFS deployments and etc). Data can be actively fetched or transparently cached into Alluxio to speed up I/O performance especially when the Spark deployment is remote to the data. In addition, Alluxio can help simplify the architecture by decoupling compute and physical storage. When the data path in persistent under storage is hidden from Spark, changes to under storage can be independent from application logic; meanwhile, as a near-compute cache, Alluxio can still provide compute frameworks data-locality. Java 8 Update 60 or higher (8u60+), 64-bit. An Alluxio cluster is set up and is running. This guide assumes the persistent under storage is a local HDFS deployment. E.g., a line of `alluxio.dora.client.ufs.root=hdfs://localhost:9000/alluxio/` is included in `${ALLUXIO_HOME}/conf/alluxio-site.properties`. Note that Alluxio supports many other under storage systems in addition to HDFS. Make sure that the Alluxio client jar is available. This Alluxio client jar file can be found at `{{site.ALLUXIOCLIENTJAR_PATH}}` in the tarball distribution downloaded from Alluxio . The Alluxio client jar must be distributed across the all nodes where Spark drivers or executors are running. Place the client jar on the same local path (e.g. `{{site.ALLUXIOCLIENTJAR_PATH}}`) on each node. The Alluxio client jar must be in the classpath of all Spark drivers and executors in order for Spark applications to access Alluxio. Add the following line to `spark/conf/spark-defaults.conf` on every node running Spark. Also, make sure the client jar is copied to every node running Spark. ``` spark.driver.extraClassPath {{site.ALLUXIOCLIENTJAR_PATH}} spark.executor.extraClassPath {{site.ALLUXIOCLIENTJAR_PATH}} ``` This section shows how to use Alluxio as input and output sources for your Spark applications. Copy local data to the Alluxio file system. Put the `LICENSE` file into Alluxio, assuming you are in the Alluxio installation directory: ```shell $ ./bin/alluxio fs cp file://LICENSE /Input ``` Run the following commands from `spark-shell`, assuming the Alluxio Master is running on `localhost`: ```scala val s = sc.textFile(\"alluxio://localhost:19998/Input\") val double = s.map(line => line + line) double.saveAsTextFile(\"alluxio://localhost:19998/Output\") ``` You may also open your browser and check . There should be an output directory `/Output` which contains the doubled content of the input file `Input`. Alluxio transparently fetches data from the under storage system, given the exact path. For this section, HDFS is used as an example of a distributed under storage system. Put a file `Input_HDFS` into HDFS: ```shell $ hdfs dfs -copyFromLocal -f ${ALLUXIOHOME}/LICENSE hdfs://localhost:9000/alluxio/InputHDFS ``` At this point, Alluxio does not know about this file since it was added to HDFS directly. Verify this by going to the web UI. 
Run the following commands from `spark-shell` assuming Alluxio Master is running on `localhost`: ```scala val s = sc.textFile(\"alluxio://localhost:19998/Input_HDFS\") val double = s.map(line => line + line) double.saveAsTextFile(\"alluxio://localhost:19998/Output_HDFS\") ``` Open your browser and check"
},
{
"data": "There should be an output directory `Output_HDFS` which contains the doubled content of the input file `Input_HDFS`. Also, the input file `Input_HDFS` now will be 100% loaded in the Alluxio file system space. When connecting to an HA-enabled Alluxio cluster using internal leader election, set the `alluxio.master.rpc.addresses` property via the Java options in `${SPARK_HOME}/conf/spark-defaults.conf` so Spark applications know which Alluxio masters to connect to and how to identify the leader. For example: ```properties spark.driver.extraJavaOptions -Dalluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 spark.executor.extraJavaOptions -Dalluxio.master.rpc.addresses=masterhostname1:19998,masterhostname2:19998,masterhostname3:19998 ``` Alternatively, add the property to the Hadoop configuration file `${SPARK_HOME}/conf/core-site.xml`: ```xml <configuration> <property> <name>alluxio.master.rpc.addresses</name> <value>masterhostname1:19998,masterhostname2:19998,masterhostname3:19998</value> </property> </configuration> ``` Users can also configure Spark to connect to an Alluxio HA cluster using Zookeeper-based leader election. Refer to . Spark users can use pass JVM system properties to set Alluxio properties on to Spark jobs by adding `\"-Dproperty=value\"` to `spark.executor.extraJavaOptions` for Spark executors and `spark.driver.extraJavaOptions` for Spark drivers. For example, to submit a Spark job with that uses the Alluxio `CACHE_THROUGH` write type: ```shell $ spark-submit \\ --conf 'spark.driver.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH' \\ --conf 'spark.executor.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH' \\ ... ``` To customize Alluxio client-side properties for a Spark job, see . Note that in client mode you need to set `--driver-java-options \"-Dalluxio.user.file.writetype.default=CACHE_THROUGH\"` instead of `--conf spark.driver.extraJavaOptions=-Dalluxio.user.file.writetype.default=CACHE_THROUGH` (see ). If Spark configured using the instructions in , you can write URIs using the `alluxio:///` scheme without specifying cluster information in the authority. This is because in HA mode, the address of leader Alluxio master will be served by the internal leader election or by the configured Zookeeper service. ```scala val s = sc.textFile(\"alluxio:///Input\") val double = s.map(line => line + line) double.saveAsTextFile(\"alluxio:///Output\") ``` Alternatively, users may specify the HA authority directly in the URI without any configuration setup. For example, specify the master rpc addresses in the URI to connect to Alluxio configured for HA using internal leader election: ```scala val s = sc.textFile(\"alluxio://masterhostname1:19998;masterhostname2:19998;masterhostname3:19998/Input\") val double = s.map(line => line + line) double.saveAsTextFile(\"alluxio://masterhostname1:19998;masterhostname2:19998;masterhostname3:19998/Output\") ``` Note that you must use semicolons rather than commas to separate different addresses to refer a URI of Alluxio in HA mode in Spark. Otherwise, the URI will be considered invalid by Spark. Please refer to the instructions in . Storing RDDs in Alluxio memory is as simple as saving the RDD file to Alluxio. Two common ways to save RDDs as files in Alluxio are `saveAsTextFile`: writes the RDD as a text file, where each element is a line in the file, `saveAsObjectFile`: writes the RDD out to a file, by using Java serialization on each element. 
The saved RDDs in Alluxio can be read again (from memory) by using `sc.textFile` or `sc.objectFile` respectively. ```scala // as text file rdd.saveAsTextFile(\"alluxio://localhost:19998/rdd1\") rdd = sc.textFile(\"alluxio://localhost:19998/rdd1\") ``` ```scala // as object file rdd.saveAsObjectFile(\"alluxio://localhost:19998/rdd2\") rdd = sc.objectFile(\"alluxio://localhost:19998/rdd2\") ``` See the blog article \"\". Storing Spark DataFrames in Alluxio memory is as simple as saving the DataFrame as a file to Alluxio. DataFrames are commonly written as parquet files, with `df.write.parquet()`. After the parquet is written to Alluxio, it can be read from memory by using `spark.read.parquet()` (or `sqlContext.read.parquet()` for older versions of Spark). ```scala df.write.parquet(\"alluxio://localhost:19998/data.parquet\") df ="
},
{
"data": "``` See the blog article \"\". You may configure Spark's application logging for debugging purposes. The Spark documentation explains If you are using YARN then there is a separate section which explains If Spark task locality is `ANY` while it should be `NODE_LOCAL`, it is probably because Alluxio and Spark use different network address representations. One of them may use a hostname while the other uses an IP address. Refer to JIRA ticket for more details, where you can find solutions from the Spark community. Note: Alluxio workers use hostnames to represent network addresses to be consistent with HDFS. There is a workaround when launching Spark to achieve data locality. Users can explicitly specify hostnames by using the following script offered in Spark. Start the Spark Worker in each slave node with slave-hostname: ```shell $ ${SPARK_HOME}/sbin/start-worker.sh -h <slave-hostname> <spark master uri> ``` Note for older versions of Spark the script is called `start-slave.sh`. For example: ```shell $ ${SPARK_HOME}/sbin/start-worker.sh -h simple30 spark://simple27:7077 ``` You can also set the `SPARK_LOCAL_HOSTNAME` in `$SPARK_HOME/conf/spark-env.sh` to achieve this. For example: ```properties SPARK_LOCAL_HOSTNAME=simple30 ``` Either way, the Spark Worker addresses become hostnames and Locality Level becomes `NODE_LOCAL` as shown in Spark WebUI below. To maximize the amount of locality your Spark jobs attain, you should use as many executors as possible, hopefully at least one executor per node. It is recommended to co-locate Alluxio workers with the Spark executors. When a Spark job is run on YARN, Spark launches its executors without taking data locality into account. Spark will then correctly take data locality into account when deciding how to distribute tasks to its executors. For example, if `host1` contains `blockA` and a job using `blockA` is launched on the YARN cluster with `--num-executors=1`, Spark might place the only executor on `host2` and have poor locality. However, if `--num-executors=2` and executors are started on `host1` and `host2`, Spark will be smart enough to prioritize placing the job on `host1`. To run the `spark-shell` with the Alluxio client, the Alluxio client jar must be added to the classpath of the Spark driver and Spark executors, as . However, sometimes SparkSQL may fail to save tables to the Hive Metastore (location in Alluxio), with an error message similar to the following: ``` org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.RuntimeException: java.lang.ClassNotFoundException: Class alluxio.hadoop.FileSystem not found) ``` The recommended solution is to configure . In Spark 1.4.0 and later, Spark uses an isolated classloader to load java classes for accessing the Hive Metastore. The isolated classloader ignores certain packages and allows the main classloader to load \"shared\" classes (the Hadoop HDFS client is one of these \"shared\" classes). The Alluxio client should also be loaded by the main classloader, and you can append the `alluxio` package to the configuration parameter `spark.sql.hive.metastore.sharedPrefixes` to inform Spark to load Alluxio with the main classloader. 
For example, the parameter may be set in `spark/conf/spark-defaults.conf`: ```properties spark.sql.hive.metastore.sharedPrefixes=com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,alluxio ``` If you use Spark on YARN with Alluxio and run into the exception `java.io.IOException: No FileSystem for scheme: alluxio`, please add the following content to `${SPARK_HOME}/conf/core-site.xml`: ```xml <configuration> <property> <name>fs.alluxio.impl</name> <value>alluxio.hadoop.FileSystem</value> </property> </configuration> ```"
}
] |
{
"category": "Runtime",
"file_name": "Spark.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
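For the Spark-on-Alluxio notes above: the entry states that the Alluxio client jar must be on the classpath of both the Spark driver and the executors, and that `spark.sql.hive.metastore.sharedPrefixes` should include `alluxio`. A minimal sketch of how these pieces might be combined in a single `spark-shell` invocation is shown below; the jar location is an assumption (substitute the client jar shipped with your Alluxio release), and the entry itself does not prescribe this exact command line.

```shell
# Assumed location of the Alluxio client jar -- adjust to your installation.
ALLUXIO_JAR=/opt/alluxio/client/alluxio-client.jar

${SPARK_HOME}/bin/spark-shell \
  --driver-class-path "${ALLUXIO_JAR}" \
  --conf spark.executor.extraClassPath="${ALLUXIO_JAR}" \
  --conf spark.sql.hive.metastore.sharedPrefixes=com.mysql.jdbc,org.postgresql,com.microsoft.sqlserver,oracle.jdbc,alluxio
```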
[
{
"data": "(instances-limit-units)= Any value that represents bytes or bits can make use of a number of suffixes to make it easier to understand what a particular limit is. Both decimal and binary (kibi) units are supported, with the latter mostly making sense for storage limits. The full list of bit suffixes currently supported is: bit (1) kbit (1000) Mbit (1000^2) Gbit (1000^3) Tbit (1000^4) Pbit (1000^5) Ebit (1000^6) Kibit (1024) Mibit (1024^2) Gibit (1024^3) Tibit (1024^4) Pibit (1024^5) Eibit (1024^6) The full list of byte suffixes currently supported is: B or bytes (1) kB (1000) MB (1000^2) GB (1000^3) TB (1000^4) PB (1000^5) EB (1000^6) KiB (1024) MiB (1024^2) GiB (1024^3) TiB (1024^4) PiB (1024^5) EiB (1024^6)"
}
] |
{
"category": "Runtime",
"file_name": "instance_units.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
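To make the unit list above concrete, here is a hedged sketch of how such suffixes are typically passed to instance limits. The instance name `c1` and the device name `eth0` are placeholders, and the device-level command assumes that device is inherited from a profile and can be overridden on the instance.

```shell
# Byte (kibi) suffix for a memory limit.
lxc config set c1 limits.memory=4GiB

# Bit suffixes for network throughput limits on an assumed NIC device "eth0".
lxc config device override c1 eth0 limits.ingress=100Mbit limits.egress=50Mbit
```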
[
{
"data": "Thank you for taking the time out to contribute to project Antrea! This guide will walk you through the process of making your first commit and how to effectively get it merged upstream. <!-- toc --> - - - - - - - - <!-- /toc --> To get started, let's ensure you have completed the following prerequisites for contributing to project Antrea: Read and observe the . Check out the for the Antrea architecture and design. Set up necessary . Now that you're setup, skip ahead to learn how to . At minimum, you need the following accounts for effective participation: Github: Committing any change requires you to have a [github account](https://github.com/join). Slack: Join the and look for our channel. Google Group: Join our . There are multiple ways in which you can contribute, either by contributing code in the form of new features or bug-fixes or non-code contributions like helping with code reviews, triaging of bugs, documentation updates, filing or writing blogs/manuals etc. In order to help you get your hands \"dirty\", there is a list of issues from which you can choose. There are a few recommended git client hooks which we advise you to use. You can find them here: . You can run `make install-hooks` to copy them to your local `.git/hooks/` folder, and remove them via `make uninstall-hooks` Developers work in their own forked copy of the repository and when ready, submit pull requests to have their changes considered and merged into the project's repository. Fork your own copy of the repository to your GitHub account by clicking on `Fork` button on . Clone the forked repository on your local setup. ```bash git clone https://github.com/$user/antrea ``` Add a remote upstream to track upstream Antrea repository. ```bash git remote add upstream https://github.com/antrea-io/antrea ``` Never push to upstream remote ```bash git remote set-url --push upstream no_push ``` Create a topic branch. ```bash git checkout -b branchName ``` Make changes and commit it locally. Make sure that your commit is . ```bash git add <modifiedFile> git commit -s ``` Keeping branch in sync with upstream. ```bash git checkout branchName git fetch upstream git rebase upstream/main ``` Push local branch to your forked repository. ```bash git push -f $remoteBranchName branchName ``` Create a Pull request on GitHub. Visit your fork at `https://github.com/antrea-io/antrea` and click `Compare & Pull Request` button next to your `remoteBranchName` branch. Once you have opened a Pull Request (PR), reviewers will be assigned to your PR and they may provide review comments which you need to address. Commit changes made in response to review comments to the same branch on your fork. Once a PR is ready to merge, squash any fix review feedback, typo and merged sorts of commits. To make it easier for reviewers to review your PR, consider the following: Follow the golang and check out this for common comments we made during reviews and suggestions for fixing them. Format your code with `make golangci-fix`; if the flag an issue that cannot be fixed automatically, an error message will be displayed so you can address the issue. Follow guidelines. Follow guidelines. Please refer to for spelling conventions when writing documentation or commenting"
},
{
"data": "If your PR fixes a bug or implements a new feature, add the appropriate test cases to our to guarantee enough coverage. A PR that makes significant code changes without contributing new test cases will be flagged by reviewers and will not be accepted. It is a requirement to get your PR verified with CI checks before it gets merged. Also, it helps to find possible bugs before the review work starts. Once you create a PR, or you push new commits, CI checks at the bottom of a PR page will be refreshed. Checks include Github Action ones and Jenkins ones. Github Action ones will be triggered automatically when you push to the head branch of the PR but Jenkins ones need to be triggered manually with comments. Please note that if you are a first-time contributor, the Github workflows need approval from someone with write access to the repo. It's a Github security mechanism. Here are the trigger phrases for individual checks: `/test-e2e`: Linux IPv4 e2e tests `/test-conformance`: Linux IPv4 conformance tests `/test-networkpolicy`: Linux IPv4 networkpolicy tests `/test-all-features-conformance`: Linux IPv4 conformance tests with all features enabled `/test-windows-e2e`: Windows IPv4 e2e tests `/test-windows-conformance`: Windows IPv4 conformance tests `/test-windows-networkpolicy`: Windows IPv4 networkpolicy tests `/test-ipv6-e2e`: Linux dual stack e2e tests `/test-ipv6-conformance`: Linux dual stack conformance tests `/test-ipv6-networkpolicy`: Linux dual stack networkpolicy tests `/test-ipv6-only-e2e`: Linux IPv6 only e2e tests `/test-ipv6-only-conformance`: Linux IPv6 only conformance tests `/test-ipv6-only-networkpolicy`: Linux IPv6 only networkpolicy tests `/test-flexible-ipam-e2e`: Flexible IPAM e2e tests `/test-multicast-e2e`: Multicast e2e tests `/test-multicluster-e2e`: Multicluster e2e tests `/test-vm-e2e`: ExternalNode e2e tests `/test-whole-conformance`: All conformance tests on Linux `/test-hw-offload`: Hardware offloading e2e tests `/test-rancher-e2e`: Linux IPv4 e2e tests on Rancher clusters. `/test-rancher-conformance`: Linux IPv4 conformance tests on Rancher clusters. `/test-rancher-networkpolicy`: Linux IPv4 networkpolicy tests on Rancher clusters. `/test-kind-ipv6-e2e`: Linux dual stack e2e tests on Kind cluster. `/test-kind-ipv6-only-e2e`: Linux IPv6 only e2e tests on Kind cluster. `/test-kind-conformance`: Linux IPv4 conformance tests on Kind cluster. `/test-kind-ipv6-only-conformance`: Linux IPv6 only conformance tests on Kind cluster. `/test-kind-ipv6-conformance`: Linux dual stack conformance tests on Kind cluster. `/test-kind-networkpolicy`: Linux IPv4 networkpolicy tests on Kind cluster. `/test-kind-ipv6-only-networkpolicy`: Linux IPv6 only networkpolicy tests on Kind cluster. `/test-kind-ipv6-networkpolicy`: Linux dual stack networkpolicy tests on Kind cluster. Here are the trigger phrases for groups of checks: `/test-all`: Linux IPv4 tests `/test-windows-all`: Windows IPv4 tests, including e2e tests with proxyAll enabled. It also includes all containerd runtime based Windows tests since 1.10.0. `/test-ipv6-all`: Linux dual stack tests `/test-ipv6-only-all`: Linux IPv6 only tests `/test-kind-ipv6-only-all`: Linux IPv6 only tests on Kind cluster. `/test-kind-ipv6-all`: Linux dual stack tests on Kind cluster. Besides, you can skip a check with `/skip-*`, e.g. `/skip-e2e`: skip Linux IPv4 e2e tests. Skipping a check should be used only when the change doesn't influence the specific function. 
For example: doc change: skip all checks comment change: skip all checks test/e2e/ change: skip conformance and networkpolicy checks _windows.go change: skip Linux checks Besides skipping specific checks, you can also cancel all stale running or waiting capv jenkins jobs related to your PR with `/stop-all-jobs`. For more information about the tests we run as part of CI, please refer to . If your PR fixes a critical bug, it may need to be backported to older release branches which are still maintained. If this is the case, one of the Antrea maintainers will let you know once your PR is merged."
},
{
"data": "Please refer to the documentation on for more information. Short name of `IP Security` should be `IPsec` as per . Any Kubernetes object in log/comment should start with upper case, eg: Namespace, Pod, Service. For symbol names and documentation, do not introduce new usage of harmful language such as 'master / slave' (or 'slave' independent of 'master') and 'blacklist / whitelist'. For more information about what constitutes harmful language and for a reference word replacement list, please refer to the . We are committed to removing all harmful language from the project. If you detect existing usage of harmful language in code or documentation, please report the issue to us or open a Pull Request to address it directly. Thanks! To build the Antrea Docker image together with all Antrea bits, you can simply do: Checkout your feature branch and `cd` into it. Run `make` The second step will compile the Antrea code in a `golang` container, and build an Ubuntu-based Docker image that includes all the generated binaries. must be installed on your local machine in advance. If you are a macOS user and cannot use to contribute to Antrea for licensing reasons, check out this for possible alternatives. Alternatively, you can build the Antrea code in your local Go environment. The Antrea project uses the which was introduced in Go 1.11. It facilitates dependency tracking and no longer requires projects to live inside the `$GOPATH`. To develop locally, you can follow these steps: 2. Checkout your feature branch and `cd` into it. To build all Go files and install them under `bin`, run `make bin` To run all Go unit tests, run `make test-unit` To build the Antrea Ubuntu Docker image separately with the binaries generated in step 2, run `make ubuntu` Create a branch in your forked repo ```bash git checkout -b revertName ``` Sync the branch with upstream ```bash git fetch upstream git rebase upstream/main ``` Create a revert based on the SHA of the commit. The commit needs to be . ```bash git revert -s SHA ``` Push this new commit. ```bash git push $remoteRevertName revertName ``` Create a Pull Request on GitHub. Visit your fork at `https://github.com/antrea-io/antrea` and click `Compare & Pull Request` button next to your `remoteRevertName` branch. As a CNCF project, Antrea must enforce the [Developer Certificate of Origin](https://developercertificate.org/) (DCO) on all Pull Requests. We require that for all commits constituting the Pull Request, the commit message contains the `Signed-off-by` line with an email address that matches the commit author. By adding this line to their commit messages, contributors sign-off that they adhere to the requirements of the DCO. Git provides the `-s` command-line option to append the required line automatically to the commit message: ```bash git commit -s -m 'This is my commit message' ``` For an existing commit, you can also use this option with `--amend`: ```bash git commit -s --amend ``` If more than one person works on something it's possible for more than one person to sign-off on it. For example: ```bash Signed-off-by: Some Developer [email protected] Signed-off-by: Another Developer [email protected] ``` We use the to enforce that all commits in a Pull Request include the required `Signed-off-by`"
},
{
"data": "If this is not the case, the app will report a failed status for the Pull Request and it will be blocked from being merged. Compared to our earlier CLA, DCO tends to make the experience simpler for new contributors. If you are contributing as an employee, there is no need for your employer to sign anything; the DCO assumes you are authorized to submit contributions (it's your responsibility to check with your employer). We use labels and workflows (some manual, some automated with GitHub Actions) to help us manage triage, prioritize, and track issue progress. For a detailed discussion, see . Help is always appreciated. If you find something that needs fixing, please file an issue . Please ensure that the issue is self explanatory and has enough information for an assignee to get started. Before picking up a task, go through the existing and make sure that your change is not already being worked on. If it does not exist, please create a new issue and discuss it with other members. For simple contributions to Antrea, please ensure that this minimum set of labels are included on your issue: kind* -- common ones are `kind/feature`, `kind/support`, `kind/bug`, `kind/documentation`, or `kind/design`. For an overview of the different types of issues that can be submitted, see [Issue and PR Kinds](#issue-and-pr-kinds). The kind of issue will determine the issue workflow. area* (optional) -- if you know the area the issue belongs in, you can assign it. Otherwise, another community member will label the issue during triage. The area label will identify the area of interest an issue or PR belongs in and will ensure the appropriate reviewers shepherd the issue or PR through to its closure. For an overview of areas, see the . size* (optional) -- if you have an idea of the size (lines of code, complexity, effort) of the issue, you can label it using a size label. The size can be updated during backlog grooming by contributors. This estimate is used to guide the number of features selected for a milestone. All other labels will be assigned during issue triage. Once an issue has been submitted, the CI (GitHub actions) or a human will automatically review the submitted issue or PR to ensure that it has all relevant information. If information is lacking or there is another problem with the submitted issue, an appropriate `triage/<?>` label will be applied. After an issue has been triaged, the maintainers can prioritize the issue with an appropriate `priority/<?>` label. Once an issue has been submitted, categorized, triaged, and prioritized it is marked as `ready-to-work`. A ready-to-work issue should have labels indicating assigned areas, prioritization, and should not have any remaining triage labels. Use a `kind` label to describe the kind of issue or PR you are submitting. Valid kinds include: -- for api changes -- for filing a bug -- for code cleanup and organization -- for deprecating a feature -- for proposing a design or architectural change -- for updating documentation -- for reporting a failed test (may create with automation in future) -- for proposing a feature -- to request support. You may also get support by using our channel for interactive help. If you have not set up the appropriate accounts, please follow the instructions in . For more details on how we manage issues, please read our ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
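The Antrea entry above lists the fork/branch workflow and the local `make` targets separately; the sketch below simply strings them together into one build-and-test loop. `$user` and the branch name are placeholders, and nothing here goes beyond the commands already named in the entry.

```shell
# Clone your fork, track upstream, and create a topic branch.
git clone https://github.com/$user/antrea && cd antrea
git remote add upstream https://github.com/antrea-io/antrea
git checkout -b my-feature

# Local development loop described in the entry.
make bin        # compile all Go binaries into bin/
make test-unit  # run the Go unit tests
make            # build the Ubuntu-based Antrea Docker image (requires Docker)
```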
[
{
"data": "Kata Containers is with existing standards and runtime. From the perspective of storage, this means no limits are placed on the amount of storage a container may use. Since cgroups are not able to set limits on storage allocation, if you wish to constrain the amount of storage a container uses, consider using an existing facility such as `quota(1)` limits or limits. If a block-based graph driver is , `virtio-scsi` is used to share the workload image (such as `busybox:latest`) into the container's environment inside the VM. If a block-based graph driver is not , a (`VIRTIO`) overlay filesystem mount point is used to share the workload image instead. The uses this mount point as the root filesystem for the container processes. For virtio-fs, the starts one `virtiofsd` daemon (that runs in the host context) for each VM created. The is a special case. The `snapshotter` uses dedicated block devices rather than formatted filesystems, and operates at the block level rather than the file level. This knowledge is used to directly use the underlying block device instead of the overlay file system for the container root file system. The block device maps to the top read-write layer for the overlay. This approach gives much better I/O performance compared to using `virtio-fs` to share the container file system. Kata Containers has the ability to hot plug add and hot plug remove block devices. This makes it possible to use block devices for containers started after the VM has been launched. Users can check to see if the container uses the `devicemapper` block device as its rootfs by calling `mount(8)` within the container. If the `devicemapper` block device is used, the root filesystem (`/`) will be mounted from `/dev/vda`. Users can disable direct mounting of the underlying block device through the runtime ."
}
] |
{
"category": "Runtime",
"file_name": "storage.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
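As a quick illustration of the `mount(8)` check mentioned in the Kata entry above, the following could be run inside a container to see which root filesystem source is in use. The grep pattern and the commented expectations are illustrative assumptions, not output captured from a real cluster.

```shell
# Run inside the container.
mount | grep ' / '
# With the devicemapper snapshotter, the entry says the root filesystem is
# mounted from /dev/vda; with virtio-fs sharing, an overlay/virtiofs mount
# is expected instead.
```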