[ { "data": "As we discussed, wrapping WebAssembly inside a Docker Linux container results in performance and security penalties. However, we cannot easily replace the OCI runtime (`runc`) in the Docker toolchain as well. In this chapter, we will discuss another approach to start and run WebAssembly bytecode applications directly from the Docker CLI. Coming soon" } ]
{ "category": "Runtime", "file_name": "containerd.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete local endpoint entries ``` cilium-dbg bpf endpoint delete [flags] ``` ``` -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Local endpoint map" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_endpoint_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for powershell Generate the autocompletion script for powershell. To load completions in your current shell session: cilium-operator-aws completion powershell | Out-String | Invoke-Expression To load completions for every new session, add the output of the above command to your powershell profile. ``` cilium-operator-aws completion powershell [flags] ``` ``` -h, --help help for powershell --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-aws_completion_powershell.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This document outlines the roadmap for the kube-vip project and only covers the technologies within this particular project, other projects that augment or provide additional functionality (such as cloud-providers) may have their own roadmaps in future. The functionality for kube-vip has grown either been developed organically or through real-world needs, and this is the first attempt to put into words a plan for the future of kube-vip and will additional evolve over time. This means that items listed or detailed here are not necessarily set in stone and the roadmap can grow/shrink as the project matures. We definitely welcome suggestions and ideas from everyone about the roadmap and kube-vip features. Reach us through Issues, Slack or email <catch-all>@kube-vip.io. The kube-vip project attempts to follow a tick-tock release cycle, this typically means that one release will come packed with new features where the following release will come with fixes, code sanitation and performance enhancements. The kube-vip project offers two main areas of functionality: HA Kubernetes clusters through a control-plane VIP Kubernetes `service type:LoadBalancer` Whilst both of these functions share underlying technologies and code they will have slightly differing roadmaps. Re-implement LoadBalancing - due to a previous request the HTTP loadbalancing was removed leaving just HA for the control plane. This functionality will be re-implemented either through the original round-robin HTTP requests or utilising IPVS. Utilise the Kubernetes API to determine additional Control Plane members - Once a single node cluster is running kube-vip could use the API to determine the additional members, at this time a Cluster-API provider needs to drop a static manifest per CP node. Re-evaluate raft - kube-vip is mainly designed to run within a Kubernetes cluster, however it's original design was a raft cluster external to Kubernetes. Unfortunately given some of the upgrade paths identified in things like CAPV moving to leaderElection within Kubernetes became a better idea. `ARP` LeaderElection per loadBalancer - Currently only one pod that is elected leader will field all traffic for a VIP.. extending this to generate a leaderElection token per service would allow services to proliferate across all pods across the cluster Aligning of `service` and `manager` - The move to allow hybrid (be both HA control plane and offer load-balancer services at the same time) introduced a duplicate code path.. these need to converge as it's currently confusing for contributors. Improved metrics - At this time the scaffolding for monitoring exists, however this needs drastically extending to provide greater observability to what is happening within kube-vip Windows support - The Go SDK didn't support the capability for low-levels sockets for ARP originally, this should be revisited. Additional BGP features : Communities BFD" } ]
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark\" layout: docs Back up and restore Kubernetes cluster resources. Heptio Ark is a tool for managing disaster recovery, specifically for Kubernetes cluster resources. It provides a simple, configurable, and operationally robust way to back up your application state and associated data. If you're familiar with kubectl, Ark supports a similar model, allowing you to execute commands such as 'ark get backup' and 'ark create schedule'. The same operations can also be performed as 'ark backup get' and 'ark schedule create'. ``` --alsologtostderr log to standard error as well as files -h, --help help for ark --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups - Ark client related commands - Output shell completion code for the specified shell (bash or zsh) - Create ark resources - Delete ark resources - Describe ark resources - Get ark resources - Work with plugins - Work with restic - Work with restores - Work with schedules - Run the ark server - Print the ark version and associated image" } ]
{ "category": "Runtime", "file_name": "ark.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List Maglev lookup tables ``` cilium-dbg bpf lb maglev list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Maglev lookup table" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_lb_maglev_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "github.com/gobuffalo/flect does not try to reinvent the wheel! Instead, it uses the already great wheels developed by the Go community and puts them all together in the best way possible. Without these giants, this project would not be possible. Please make sure to check them out and thank them for all of their hard work. Thank you to the following GIANTS:" } ]
{ "category": "Runtime", "file_name": "SHOULDERS.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "VM HOST: Travis Machine: Ubuntu 16.04.6 LTS x64 Date: May 04th, 2020 Version: Gin v1.6.3 Go Version:" }, { "data": "linux/amd64 Source: Result: or ```sh Gin: 34936 Bytes HttpServeMux: 14512 Bytes Ace: 30680 Bytes Aero: 34536 Bytes Bear: 30456 Bytes Beego: 98456 Bytes Bone: 40224 Bytes Chi: 83608 Bytes Denco: 10216 Bytes Echo: 80328 Bytes GocraftWeb: 55288 Bytes Goji: 29744 Bytes Gojiv2: 105840 Bytes GoJsonRest: 137496 Bytes GoRestful: 816936 Bytes GorillaMux: 585632 Bytes GowwwRouter: 24968 Bytes HttpRouter: 21712 Bytes HttpTreeMux: 73448 Bytes Kocha: 115472 Bytes LARS: 30640 Bytes Macaron: 38592 Bytes Martini: 310864 Bytes Pat: 19696 Bytes Possum: 89920 Bytes R2router: 23712 Bytes Rivet: 24608 Bytes Tango: 28264 Bytes TigerTonic: 78768 Bytes Traffic: 538976 Bytes Vulcan: 369960 Bytes ``` ```sh Gin: 58512 Bytes Ace: 48688 Bytes Aero: 318568 Bytes Bear: 84248 Bytes Beego: 150936 Bytes Bone: 100976 Bytes Chi: 95112 Bytes Denco: 36736 Bytes Echo: 100296 Bytes GocraftWeb: 95432 Bytes Goji: 49680 Bytes Gojiv2: 104704 Bytes GoJsonRest: 141976 Bytes GoRestful: 1241656 Bytes GorillaMux: 1322784 Bytes GowwwRouter: 80008 Bytes HttpRouter: 37144 Bytes HttpTreeMux: 78800 Bytes Kocha: 785120 Bytes LARS: 48600 Bytes Macaron: 92784 Bytes Martini: 485264 Bytes Pat: 21200 Bytes Possum: 85312 Bytes R2router: 47104 Bytes Rivet: 42840 Bytes Tango: 54840 Bytes TigerTonic: 95264 Bytes Traffic: 921744 Bytes Vulcan: 425992 Bytes ``` ```sh Gin: 4384 Bytes Ace: 3712 Bytes Aero: 26056 Bytes Bear: 7112 Bytes Beego: 10272 Bytes Bone: 6688 Bytes Chi: 8024 Bytes Denco: 3264 Bytes Echo: 9688 Bytes GocraftWeb: 7496 Bytes Goji: 3152 Bytes Gojiv2: 7376 Bytes GoJsonRest: 11400 Bytes GoRestful: 74328 Bytes GorillaMux: 66208 Bytes GowwwRouter: 5744 Bytes HttpRouter: 2808 Bytes HttpTreeMux: 7440 Bytes Kocha: 128880 Bytes LARS: 3656 Bytes Macaron: 8656 Bytes Martini: 23920 Bytes Pat: 1856 Bytes Possum: 7248 Bytes R2router: 3928 Bytes Rivet: 3064 Bytes Tango: 5168 Bytes TigerTonic: 9408 Bytes Traffic: 46400 Bytes Vulcan: 25544 Bytes ``` ```sh Gin: 7776 Bytes Ace: 6704 Bytes Aero: 28488 Bytes Bear: 12320 Bytes Beego: 19280 Bytes Bone: 11440 Bytes Chi: 9744 Bytes Denco: 4192 Bytes Echo: 11664 Bytes GocraftWeb: 12800 Bytes Goji: 5680 Bytes Gojiv2: 14464 Bytes GoJsonRest: 14072 Bytes GoRestful: 116264 Bytes GorillaMux: 105880 Bytes GowwwRouter: 9344 Bytes HttpRouter: 5072 Bytes HttpTreeMux: 7848 Bytes Kocha: 181712 Bytes LARS: 6632 Bytes Macaron: 13648 Bytes Martini: 45888 Bytes Pat: 2560 Bytes Possum: 9200 Bytes R2router: 7056 Bytes Rivet: 5680 Bytes Tango: 8920 Bytes TigerTonic: 9840 Bytes Traffic: 79096 Bytes Vulcan: 44504 Bytes ``` ```sh BenchmarkGin_StaticAll 62169 19319 ns/op 0 B/op 0 allocs/op BenchmarkAce_StaticAll 65428 18313 ns/op 0 B/op 0 allocs/op BenchmarkAero_StaticAll 121132 9632 ns/op 0 B/op 0 allocs/op BenchmarkHttpServeMux_StaticAll 52626 22758 ns/op 0 B/op 0 allocs/op BenchmarkBeego_StaticAll 9962 179058 ns/op 55264 B/op 471 allocs/op BenchmarkBear_StaticAll 14894 80966 ns/op 20272 B/op 469 allocs/op BenchmarkBone_StaticAll 18718 64065 ns/op 0 B/op 0 allocs/op BenchmarkChi_StaticAll 10000 149827 ns/op 67824 B/op 471 allocs/op BenchmarkDenco_StaticAll 211393 5680 ns/op 0 B/op 0 allocs/op BenchmarkEcho_StaticAll 49341 24343 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_StaticAll 10000 126209 ns/op 46312 B/op 785 allocs/op BenchmarkGoji_StaticAll 27956 43174 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_StaticAll 3430 370718 ns/op 205984 B/op 1570 allocs/op BenchmarkGoJsonRest_StaticAll 9134 188888 ns/op 51653 
B/op 1727 allocs/op BenchmarkGoRestful_StaticAll 706 1703330 ns/op 613280 B/op 2053 allocs/op BenchmarkGorillaMux_StaticAll 1268 924083 ns/op 153233 B/op 1413 allocs/op BenchmarkGowwwRouter_StaticAll 63374 18935 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_StaticAll 109938 10902 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_StaticAll 109166 10861 ns/op 0 B/op 0 allocs/op BenchmarkKocha_StaticAll 92258 12992 ns/op 0 B/op 0 allocs/op BenchmarkLARS_StaticAll 65200 18387 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_StaticAll 5671 291501 ns/op 115553 B/op 1256 allocs/op BenchmarkMartini_StaticAll 807 1460498 ns/op 125444 B/op 1717 allocs/op BenchmarkPat_StaticAll 513 2342396 ns/op 602832 B/op 12559 allocs/op BenchmarkPossum_StaticAll 10000 128270 ns/op 65312 B/op 471 allocs/op BenchmarkR2router_StaticAll 16726 71760 ns/op 22608 B/op 628 allocs/op BenchmarkRivet_StaticAll 41722 28723 ns/op 0 B/op 0 allocs/op BenchmarkTango_StaticAll 7606 205082 ns/op 39209 B/op 1256 allocs/op BenchmarkTigerTonic_StaticAll 26247 45806 ns/op 7376 B/op 157 allocs/op BenchmarkTraffic_StaticAll 550 2284518 ns/op 754864 B/op 14601 allocs/op BenchmarkVulcan_StaticAll 10000 131343 ns/op 15386 B/op 471 allocs/op ``` ```sh BenchmarkGin_Param 18785022 63.9 ns/op 0 B/op 0 allocs/op BenchmarkAce_Param 14689765 81.5 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param 23094770 51.2 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param 1417045 845 ns/op 456 B/op 5 allocs/op BenchmarkBeego_Param 1000000 1080 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param 1000000 1463 ns/op 816 B/op 6 allocs/op BenchmarkChi_Param 1378756 885 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param 8557899 143 ns/op 32 B/op 1 allocs/op BenchmarkEcho_Param 16433347 75.5 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param 1000000 1218 ns/op 648 B/op 8 allocs/op BenchmarkGoji_Param 1921248 617 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Param 561848 2156 ns/op 1328 B/op 11 allocs/op BenchmarkGoJsonRest_Param 1000000 1358 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_Param 224857 5307 ns/op 4192 B/op 14 allocs/op BenchmarkGorillaMux_Param 498313 2459 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_Param 1864354 654 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param 26269074 47.7 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param 2109829 557 ns/op 352 B/op 3 allocs/op BenchmarkKocha_Param 5050216 243 ns/op 56 B/op 3 allocs/op BenchmarkLARS_Param 19811712 59.9 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param 662746 2329 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Param 279902 4260 ns/op 1072 B/op 10 allocs/op BenchmarkPat_Param 1000000 1382 ns/op 536 B/op 11 allocs/op BenchmarkPossum_Param 1000000 1014 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param 1712559 707 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Param 6648086 182 ns/op 48 B/op 1 allocs/op BenchmarkTango_Param 1221504 994 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_Param 891661 2261 ns/op 776 B/op 16 allocs/op BenchmarkTraffic_Param 350059 3598 ns/op 1856 B/op 21 allocs/op BenchmarkVulcan_Param 2517823 472 ns/op 98 B/op 3 allocs/op BenchmarkAce_Param5 9214365 130 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param5 15369013 77.9 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param5 1000000 1113 ns/op 501 B/op 5 allocs/op BenchmarkBeego_Param5 1000000 1269 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param5 986820 1873 ns/op 864 B/op 6 allocs/op BenchmarkChi_Param5 1000000 1156 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param5 3036331 400 ns/op 160 B/op 1 allocs/op BenchmarkEcho_Param5 6447133 186 ns/op 0 B/op 0 allocs/op 
BenchmarkGin_Param5 10786068 110 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param5 844820 1944 ns/op 920 B/op 11 allocs/op BenchmarkGoji_Param5 1474965 827 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Param5 442820 2516 ns/op 1392 B/op 11 allocs/op BenchmarkGoJsonRest_Param5 507555 2711 ns/op 1097 B/op 16 allocs/op BenchmarkGoRestful_Param5 216481 6093 ns/op 4288 B/op 14 allocs/op BenchmarkGorillaMux_Param5 314402 3628 ns/op 1344 B/op 10 allocs/op BenchmarkGowwwRouter_Param5 1624660 733 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param5 13167324" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param5 1000000 1295 ns/op 576 B/op 6 allocs/op BenchmarkKocha_Param5 1000000 1138 ns/op 440 B/op 10 allocs/op BenchmarkLARS_Param5 11580613 105 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param5 473596 2755 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Param5 230756 5111 ns/op 1232 B/op 11 allocs/op BenchmarkPat_Param5 469190 3370 ns/op 888 B/op 29 allocs/op BenchmarkPossum_Param5 1000000 1002 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param5 1422129 844 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Param5 2263789 539 ns/op 240 B/op 1 allocs/op BenchmarkTango_Param5 1000000 1256 ns/op 360 B/op 8 allocs/op BenchmarkTigerTonic_Param5 175500 7492 ns/op 2279 B/op 39 allocs/op BenchmarkTraffic_Param5 233631 5816 ns/op 2208 B/op 27 allocs/op BenchmarkVulcan_Param5 1923416 629 ns/op 98 B/op 3 allocs/op BenchmarkAce_Param20 4321266 281 ns/op 0 B/op 0 allocs/op BenchmarkAero_Param20 31501641 35.2 ns/op 0 B/op 0 allocs/op BenchmarkBear_Param20 335204 3489 ns/op 1665 B/op 5 allocs/op BenchmarkBeego_Param20 503674 2860 ns/op 352 B/op 3 allocs/op BenchmarkBone_Param20 298922 4741 ns/op 2031 B/op 6 allocs/op BenchmarkChi_Param20 878181 1957 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Param20 1000000 1360 ns/op 640 B/op 1 allocs/op BenchmarkEcho_Param20 2104946 580 ns/op 0 B/op 0 allocs/op BenchmarkGin_Param20 4167204 290 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Param20 173064 7514 ns/op 3796 B/op 15 allocs/op BenchmarkGoji_Param20 458778 2651 ns/op 1247 B/op 2 allocs/op BenchmarkGojiv2_Param20 364862 3178 ns/op 1632 B/op 11 allocs/op BenchmarkGoJsonRest_Param20 125514 9760 ns/op 4485 B/op 20 allocs/op BenchmarkGoRestful_Param20 101217 11964 ns/op 6715 B/op 18 allocs/op BenchmarkGorillaMux_Param20 147654 8132 ns/op 3452 B/op 12 allocs/op BenchmarkGowwwRouter_Param20 1000000 1225 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_Param20 4920895 247 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Param20 173202 6605 ns/op 3196 B/op 10 allocs/op BenchmarkKocha_Param20 345988 3620 ns/op 1808 B/op 27 allocs/op BenchmarkLARS_Param20 4592326 262 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Param20 166492 7286 ns/op 2924 B/op 12 allocs/op BenchmarkMartini_Param20 122162 10653 ns/op 3595 B/op 13 allocs/op BenchmarkPat_Param20 78630 15239 ns/op 4424 B/op 93 allocs/op BenchmarkPossum_Param20 1000000 1008 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Param20 294981 4587 ns/op 2284 B/op 7 allocs/op BenchmarkRivet_Param20 691798 2090 ns/op 1024 B/op 1 allocs/op BenchmarkTango_Param20 842440 2505 ns/op 856 B/op 8 allocs/op BenchmarkTigerTonic_Param20 38614 31509 ns/op 9870 B/op 119 allocs/op BenchmarkTraffic_Param20 57633 21107 ns/op 7853 B/op 47 allocs/op BenchmarkVulcan_Param20 1000000 1178 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParamWrite 7330743 180 ns/op 8 B/op 1 allocs/op BenchmarkAero_ParamWrite 13833598 86.7 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParamWrite 1363321 867 ns/op 456 B/op 5 allocs/op 
BenchmarkBeego_ParamWrite 1000000 1104 ns/op 360 B/op 4 allocs/op BenchmarkBone_ParamWrite 1000000 1475 ns/op 816 B/op 6 allocs/op BenchmarkChi_ParamWrite 1320590 892 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParamWrite 7093605 172 ns/op 32 B/op 1 allocs/op BenchmarkEcho_ParamWrite 8434424 161 ns/op 8 B/op 1 allocs/op BenchmarkGin_ParamWrite 10377034 118 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParamWrite 1000000 1266 ns/op 656 B/op 9 allocs/op BenchmarkGoji_ParamWrite 1874168 654 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_ParamWrite 459032 2352 ns/op 1360 B/op 13 allocs/op BenchmarkGoJsonRest_ParamWrite 499434 2145 ns/op 1128 B/op 18 allocs/op BenchmarkGoRestful_ParamWrite 241087 5470 ns/op 4200 B/op 15 allocs/op BenchmarkGorillaMux_ParamWrite 425686 2522 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_ParamWrite 922172 1778 ns/op 976 B/op 8 allocs/op BenchmarkHttpRouter_ParamWrite 15392049" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParamWrite 1973385 597 ns/op 352 B/op 3 allocs/op BenchmarkKocha_ParamWrite 4262500 281 ns/op 56 B/op 3 allocs/op BenchmarkLARS_ParamWrite 10764410 113 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParamWrite 486769 2726 ns/op 1176 B/op 14 allocs/op BenchmarkMartini_ParamWrite 264804 4842 ns/op 1176 B/op 14 allocs/op BenchmarkPat_ParamWrite 735116 2047 ns/op 960 B/op 15 allocs/op BenchmarkPossum_ParamWrite 1000000 1004 ns/op 496 B/op 5 allocs/op BenchmarkR2router_ParamWrite 1592136 768 ns/op 432 B/op 5 allocs/op BenchmarkRivet_ParamWrite 3582051 339 ns/op 112 B/op 2 allocs/op BenchmarkTango_ParamWrite 2237337 534 ns/op 136 B/op 4 allocs/op BenchmarkTigerTonic_ParamWrite 439608 3136 ns/op 1216 B/op 21 allocs/op BenchmarkTraffic_ParamWrite 306979 4328 ns/op 2280 B/op 25 allocs/op BenchmarkVulcan_ParamWrite 2529973 472 ns/op 98 B/op 3 allocs/op ``` ```sh BenchmarkGin_GithubStatic 15629472 76.7 ns/op 0 B/op 0 allocs/op BenchmarkAce_GithubStatic 15542612 75.9 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubStatic 24777151 48.5 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubStatic 2788894 435 ns/op 120 B/op 3 allocs/op BenchmarkBeego_GithubStatic 1000000 1064 ns/op 352 B/op 3 allocs/op BenchmarkBone_GithubStatic 93507 12838 ns/op 2880 B/op 60 allocs/op BenchmarkChi_GithubStatic 1387743 860 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GithubStatic 39384996 30.4 ns/op 0 B/op 0 allocs/op BenchmarkEcho_GithubStatic 12076382 99.1 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubStatic 1596495 756 ns/op 296 B/op 5 allocs/op BenchmarkGoji_GithubStatic 6364876 189 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_GithubStatic 550202 2098 ns/op 1312 B/op 10 allocs/op BenchmarkGoRestful_GithubStatic 102183 12552 ns/op 4256 B/op 13 allocs/op BenchmarkGoJsonRest_GithubStatic 1000000 1029 ns/op 329 B/op 11 allocs/op BenchmarkGorillaMux_GithubStatic 255552 5190 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_GithubStatic 15531916 77.1 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_GithubStatic 27920724 43.1 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GithubStatic 21448953 55.8 ns/op 0 B/op 0 allocs/op BenchmarkKocha_GithubStatic 21405310 56.0 ns/op 0 B/op 0 allocs/op BenchmarkLARS_GithubStatic 13625156" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubStatic 1000000 1747 ns/op 736 B/op 8 allocs/op BenchmarkMartini_GithubStatic 187186 7326 ns/op 768 B/op 9 allocs/op BenchmarkPat_GithubStatic 109143 11563 ns/op 3648 B/op 76 allocs/op BenchmarkPossum_GithubStatic 1575898 770 ns/op 416 B/op 3 allocs/op BenchmarkR2router_GithubStatic 3046231 404 ns/op 144 B/op 4 
allocs/op BenchmarkRivet_GithubStatic 11484826 105 ns/op 0 B/op 0 allocs/op BenchmarkTango_GithubStatic 1000000 1153 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_GithubStatic 4929780 249 ns/op 48 B/op 1 allocs/op BenchmarkTraffic_GithubStatic 106351 11819 ns/op 4664 B/op 90 allocs/op BenchmarkVulcan_GithubStatic 1613271 722 ns/op 98 B/op 3 allocs/op BenchmarkAce_GithubParam 8386032 143 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubParam 11816200 102 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubParam 1000000 1012 ns/op 496 B/op 5 allocs/op BenchmarkBeego_GithubParam 1000000 1157 ns/op 352 B/op 3 allocs/op BenchmarkBone_GithubParam 184653 6912 ns/op 1888 B/op 19 allocs/op BenchmarkChi_GithubParam 1000000 1102 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GithubParam 3484798 352 ns/op 128 B/op 1 allocs/op BenchmarkEcho_GithubParam 6337380 189 ns/op 0 B/op 0 allocs/op BenchmarkGin_GithubParam 9132032 131 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubParam 1000000 1446 ns/op 712 B/op 9 allocs/op BenchmarkGoji_GithubParam 1248640 977 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GithubParam 383233 2784 ns/op 1408 B/op 13 allocs/op BenchmarkGoJsonRest_GithubParam 1000000 1991 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_GithubParam 76414 16015 ns/op 4352 B/op 16 allocs/op BenchmarkGorillaMux_GithubParam 150026 7663 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_GithubParam 1592044 751 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GithubParam 10420628 115 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GithubParam 1403755 835 ns/op 384 B/op 4 allocs/op BenchmarkKocha_GithubParam 2286170 533 ns/op 128 B/op 5 allocs/op BenchmarkLARS_GithubParam 9540374 129 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubParam 533154 2742 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GithubParam 119397 9638 ns/op 1152 B/op 11 allocs/op BenchmarkPat_GithubParam 150675 8858 ns/op 2408 B/op 48 allocs/op BenchmarkPossum_GithubParam 1000000 1001 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GithubParam 1602886 761 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GithubParam 2986579 409 ns/op 96 B/op 1 allocs/op BenchmarkTango_GithubParam 1000000 1356 ns/op 344 B/op 8 allocs/op BenchmarkTigerTonic_GithubParam 388899 3429 ns/op 1176 B/op 22 allocs/op BenchmarkTraffic_GithubParam 123160 9734 ns/op 2816 B/op 40 allocs/op BenchmarkVulcan_GithubParam 1000000 1138 ns/op 98 B/op 3 allocs/op BenchmarkAce_GithubAll 40543 29670 ns/op 0 B/op 0 allocs/op BenchmarkAero_GithubAll 57632 20648 ns/op 0 B/op 0 allocs/op BenchmarkBear_GithubAll 9234 216179 ns/op 86448 B/op 943 allocs/op BenchmarkBeego_GithubAll 7407 243496 ns/op 71456 B/op 609 allocs/op BenchmarkBone_GithubAll 420 2922835 ns/op 720160 B/op 8620 allocs/op BenchmarkChi_GithubAll 7620 238331 ns/op 87696 B/op 609 allocs/op BenchmarkDenco_GithubAll 18355 64494 ns/op 20224 B/op 167 allocs/op BenchmarkEcho_GithubAll 31251 38479 ns/op 0 B/op 0 allocs/op BenchmarkGin_GithubAll 43550 27364 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GithubAll 4117 300062 ns/op 131656 B/op 1686 allocs/op BenchmarkGoji_GithubAll 3274 416158 ns/op 56112 B/op 334 allocs/op BenchmarkGojiv2_GithubAll 1402 870518 ns/op 352720 B/op 4321 allocs/op BenchmarkGoJsonRest_GithubAll 2976 401507 ns/op 134371 B/op 2737 allocs/op BenchmarkGoRestful_GithubAll 410 2913158 ns/op 910144 B/op 2938 allocs/op BenchmarkGorillaMux_GithubAll 346 3384987 ns/op 251650 B/op 1994 allocs/op BenchmarkGowwwRouter_GithubAll 10000 143025 ns/op 72144 B/op 501 allocs/op BenchmarkHttpRouter_GithubAll 55938 21360 ns/op 0 B/op 0 allocs/op 
BenchmarkHttpTreeMux_GithubAll 10000 153944 ns/op 65856 B/op 671 allocs/op BenchmarkKocha_GithubAll 10000 106315 ns/op 23304 B/op 843 allocs/op BenchmarkLARS_GithubAll 47779 25084 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GithubAll 3266 371907 ns/op 149409 B/op 1624 allocs/op BenchmarkMartini_GithubAll 331 3444706 ns/op 226551 B/op 2325 allocs/op BenchmarkPat_GithubAll 273 4381818 ns/op 1483152 B/op 26963 allocs/op BenchmarkPossum_GithubAll 10000 164367 ns/op 84448 B/op 609 allocs/op BenchmarkR2router_GithubAll 10000 160220 ns/op 77328 B/op 979 allocs/op BenchmarkRivet_GithubAll 14625 82453 ns/op 16272 B/op 167 allocs/op BenchmarkTango_GithubAll 6255 279611 ns/op 63826 B/op 1618 allocs/op BenchmarkTigerTonic_GithubAll 2008 687874 ns/op 193856 B/op 4474 allocs/op BenchmarkTraffic_GithubAll 355 3478508 ns/op 820744 B/op 14114 allocs/op BenchmarkVulcan_GithubAll 6885 193333 ns/op 19894 B/op 609 allocs/op ``` ```sh BenchmarkGin_GPlusStatic 19247326 62.2 ns/op 0 B/op 0 allocs/op BenchmarkAce_GPlusStatic 20235060 59.2 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlusStatic 31978935 37.6 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusStatic 3516523 341 ns/op 104 B/op 3 allocs/op BenchmarkBeego_GPlusStatic 1212036 991 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlusStatic 6736242 183 ns/op 32 B/op 1 allocs/op BenchmarkChi_GPlusStatic 1490640 814 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlusStatic 55006856 21.8 ns/op 0 B/op 0 allocs/op BenchmarkEcho_GPlusStatic 17688258 67.9 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlusStatic 1829181 666 ns/op 280 B/op 5 allocs/op BenchmarkGoji_GPlusStatic 9147451 130 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_GPlusStatic 594015 2063 ns/op 1312 B/op 10 allocs/op BenchmarkGoJsonRest_GPlusStatic 1264906 950 ns/op 329 B/op 11 allocs/op BenchmarkGoRestful_GPlusStatic 231558 5341 ns/op 3872 B/op 13 allocs/op BenchmarkGorillaMux_GPlusStatic 908418 1809 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_GPlusStatic 40684604 29.5 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_GPlusStatic 46742804 25.7 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusStatic 32567161 36.9 ns/op 0 B/op 0 allocs/op BenchmarkKocha_GPlusStatic 33800060 35.3 ns/op 0 B/op 0 allocs/op BenchmarkLARS_GPlusStatic 20431858 60.0 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusStatic 1000000 1745 ns/op 736 B/op 8 allocs/op BenchmarkMartini_GPlusStatic 442248 3619 ns/op 768 B/op 9 allocs/op BenchmarkPat_GPlusStatic 4328004 292 ns/op 96 B/op 2 allocs/op BenchmarkPossum_GPlusStatic 1570753 763 ns/op 416 B/op 3 allocs/op BenchmarkR2router_GPlusStatic 3339474 355 ns/op 144 B/op 4 allocs/op BenchmarkRivet_GPlusStatic 18570961 64.7 ns/op 0 B/op 0 allocs/op BenchmarkTango_GPlusStatic 1388702 860 ns/op 200 B/op 8 allocs/op BenchmarkTigerTonic_GPlusStatic 7803543 159 ns/op 32 B/op 1 allocs/op BenchmarkTraffic_GPlusStatic 878605 2171 ns/op 1112 B/op 16 allocs/op BenchmarkVulcan_GPlusStatic 2742446 437 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlusParam 11626975 105 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlusParam 16914322 71.6 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusParam 1405173 832 ns/op 480 B/op 5 allocs/op BenchmarkBeego_GPlusParam 1000000 1075 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlusParam 1000000 1557 ns/op 816 B/op 6 allocs/op BenchmarkChi_GPlusParam 1347926 894 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlusParam 5513000 212 ns/op 64 B/op 1 allocs/op BenchmarkEcho_GPlusParam 11884383 101 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlusParam 12898952" }, { "data": "ns/op 0 B/op 0 allocs/op 
BenchmarkGocraftWeb_GPlusParam 1000000 1194 ns/op 648 B/op 8 allocs/op BenchmarkGoji_GPlusParam 1857229 645 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GPlusParam 520939 2322 ns/op 1328 B/op 11 allocs/op BenchmarkGoJsonRest_GPlusParam 1000000 1536 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_GPlusParam 205449 5800 ns/op 4192 B/op 14 allocs/op BenchmarkGorillaMux_GPlusParam 395310 3188 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_GPlusParam 1851798 667 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GPlusParam 18420789 65.2 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusParam 1878463 629 ns/op 352 B/op 3 allocs/op BenchmarkKocha_GPlusParam 4495610 273 ns/op 56 B/op 3 allocs/op BenchmarkLARS_GPlusParam 14615976 83.2 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusParam 584145 2549 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GPlusParam 250501 4583 ns/op 1072 B/op 10 allocs/op BenchmarkPat_GPlusParam 1000000 1645 ns/op 576 B/op 11 allocs/op BenchmarkPossum_GPlusParam 1000000 1008 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GPlusParam 1708191 688 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GPlusParam 5795014 211 ns/op 48 B/op 1 allocs/op BenchmarkTango_GPlusParam 1000000 1091 ns/op 264 B/op 8 allocs/op BenchmarkTigerTonic_GPlusParam 760221 2489 ns/op 856 B/op 16 allocs/op BenchmarkTraffic_GPlusParam 309774 4039 ns/op 1872 B/op 21 allocs/op BenchmarkVulcan_GPlusParam 1935730 623 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlus2Params 9158314 134 ns/op 0 B/op 0 allocs/op BenchmarkAero_GPlus2Params 11300517 107 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlus2Params 1239238 961 ns/op 496 B/op 5 allocs/op BenchmarkBeego_GPlus2Params 1000000 1202 ns/op 352 B/op 3 allocs/op BenchmarkBone_GPlus2Params 335576 3725 ns/op 1168 B/op 10 allocs/op BenchmarkChi_GPlus2Params 1000000 1014 ns/op 432 B/op 3 allocs/op BenchmarkDenco_GPlus2Params 4394598 280 ns/op 64 B/op 1 allocs/op BenchmarkEcho_GPlus2Params 7851861 154 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlus2Params 9958588 120 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlus2Params 1000000 1433 ns/op 712 B/op 9 allocs/op BenchmarkGoji_GPlus2Params 1325134 909 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_GPlus2Params 405955 2870 ns/op 1408 B/op 14 allocs/op BenchmarkGoJsonRest_GPlus2Params 977038 1987 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_GPlus2Params 205018 6142 ns/op 4384 B/op 16 allocs/op BenchmarkGorillaMux_GPlus2Params 205641 6015 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_GPlus2Params 1748542 684 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_GPlus2Params 14047102" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlus2Params 1418673 828 ns/op 384 B/op 4 allocs/op BenchmarkKocha_GPlus2Params 2334562 520 ns/op 128 B/op 5 allocs/op BenchmarkLARS_GPlus2Params 11954094 101 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlus2Params 491552 2890 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_GPlus2Params 120532 9545 ns/op 1200 B/op 13 allocs/op BenchmarkPat_GPlus2Params 194739 6766 ns/op 2168 B/op 33 allocs/op BenchmarkPossum_GPlus2Params 1201224 1009 ns/op 496 B/op 5 allocs/op BenchmarkR2router_GPlus2Params 1575535 756 ns/op 432 B/op 5 allocs/op BenchmarkRivet_GPlus2Params 3698930 325 ns/op 96 B/op 1 allocs/op BenchmarkTango_GPlus2Params 1000000 1212 ns/op 344 B/op 8 allocs/op BenchmarkTigerTonic_GPlus2Params 349350 3660 ns/op 1200 B/op 22 allocs/op BenchmarkTraffic_GPlus2Params 169714 7862 ns/op 2248 B/op 28 allocs/op BenchmarkVulcan_GPlus2Params 1222288 974 ns/op 98 B/op 3 allocs/op BenchmarkAce_GPlusAll 845606 1398 ns/op 0 B/op 0 
allocs/op BenchmarkAero_GPlusAll 1000000 1009 ns/op 0 B/op 0 allocs/op BenchmarkBear_GPlusAll 103830 11386 ns/op 5488 B/op 61 allocs/op BenchmarkBeego_GPlusAll 82653 14784 ns/op 4576 B/op 39 allocs/op BenchmarkBone_GPlusAll 36601 33123 ns/op 11744 B/op 109 allocs/op BenchmarkChi_GPlusAll 95264 12831 ns/op 5616 B/op 39 allocs/op BenchmarkDenco_GPlusAll 567681 2950 ns/op 672 B/op 11 allocs/op BenchmarkEcho_GPlusAll 720366 1665 ns/op 0 B/op 0 allocs/op BenchmarkGin_GPlusAll 1000000 1185 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_GPlusAll 71575 16365 ns/op 8040 B/op 103 allocs/op BenchmarkGoji_GPlusAll 136352 9191 ns/op 3696 B/op 22 allocs/op BenchmarkGojiv2_GPlusAll 38006 31802 ns/op 17616 B/op 154 allocs/op BenchmarkGoJsonRest_GPlusAll 57238 21561 ns/op 8117 B/op 170 allocs/op BenchmarkGoRestful_GPlusAll 15147 79276 ns/op 55520 B/op 192 allocs/op BenchmarkGorillaMux_GPlusAll 24446 48410 ns/op 16112 B/op 128 allocs/op BenchmarkGowwwRouter_GPlusAll 150112 7770 ns/op 4752 B/op 33 allocs/op BenchmarkHttpRouter_GPlusAll 1367820 878 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_GPlusAll 166628 8004 ns/op 4032 B/op 38 allocs/op BenchmarkKocha_GPlusAll 265694 4570 ns/op 976 B/op 43 allocs/op BenchmarkLARS_GPlusAll 1000000 1068 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_GPlusAll 54564 23305 ns/op 9568 B/op 104 allocs/op BenchmarkMartini_GPlusAll 16274 73845 ns/op 14016 B/op 145 allocs/op BenchmarkPat_GPlusAll 27181 44478 ns/op 15264 B/op 271 allocs/op BenchmarkPossum_GPlusAll 122587 10277 ns/op 5408 B/op 39 allocs/op BenchmarkR2router_GPlusAll 130137 9297 ns/op 5040 B/op 63 allocs/op BenchmarkRivet_GPlusAll 532438 3323 ns/op 768 B/op 11 allocs/op BenchmarkTango_GPlusAll 86054 14531 ns/op 3656 B/op 104 allocs/op BenchmarkTigerTonic_GPlusAll 33936 35356 ns/op 11600 B/op 242 allocs/op BenchmarkTraffic_GPlusAll 17833 68181 ns/op 26248 B/op 341 allocs/op BenchmarkVulcan_GPlusAll 120109 9861 ns/op 1274 B/op 39 allocs/op ``` ```sh BenchmarkGin_ParseStatic 18877833 63.5 ns/op 0 B/op 0 allocs/op BenchmarkAce_ParseStatic 19663731 60.8 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseStatic 28967341 41.5 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseStatic 3006984 402 ns/op 120 B/op 3 allocs/op BenchmarkBeego_ParseStatic 1000000 1031 ns/op 352 B/op 3 allocs/op BenchmarkBone_ParseStatic 1782482 675 ns/op 144 B/op 3 allocs/op BenchmarkChi_ParseStatic 1453261 819 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParseStatic 45023595 26.5 ns/op 0 B/op 0 allocs/op BenchmarkEcho_ParseStatic 17330470 69.3 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseStatic 1644006 731 ns/op 296 B/op 5 allocs/op BenchmarkGoji_ParseStatic 7026930 170 ns/op 0 B/op 0 allocs/op BenchmarkGojiv2_ParseStatic 517618 2037 ns/op 1312 B/op 10 allocs/op BenchmarkGoJsonRest_ParseStatic 1227080 975 ns/op 329 B/op 11 allocs/op BenchmarkGoRestful_ParseStatic 192458 6659 ns/op 4256 B/op 13 allocs/op BenchmarkGorillaMux_ParseStatic 744062 2109 ns/op 976 B/op 9 allocs/op BenchmarkGowwwRouter_ParseStatic 37781062 31.8 ns/op 0 B/op 0 allocs/op BenchmarkHttpRouter_ParseStatic 45311223 26.5 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseStatic 21383475 56.1 ns/op 0 B/op 0 allocs/op BenchmarkKocha_ParseStatic 29953290 40.1 ns/op 0 B/op 0 allocs/op BenchmarkLARS_ParseStatic 20036196 62.7 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseStatic 1000000 1740 ns/op 736 B/op 8 allocs/op BenchmarkMartini_ParseStatic 404156 3801 ns/op 768 B/op 9 allocs/op BenchmarkPat_ParseStatic 1547180 772 ns/op 240 B/op 5 allocs/op BenchmarkPossum_ParseStatic 1608991 757 ns/op 416 B/op 3 
allocs/op BenchmarkR2router_ParseStatic 3177936 385 ns/op 144 B/op 4 allocs/op BenchmarkRivet_ParseStatic 17783205 67.4 ns/op 0 B/op 0 allocs/op BenchmarkTango_ParseStatic 1210777 990 ns/op 248 B/op 8 allocs/op BenchmarkTigerTonic_ParseStatic 5316440 231 ns/op 48 B/op 1 allocs/op BenchmarkTraffic_ParseStatic 496050 2539 ns/op 1256 B/op 19 allocs/op BenchmarkVulcan_ParseStatic 2462798 488 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParseParam 13393669 89.6 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseParam 19836619 60.4 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseParam 1405954 864 ns/op 467 B/op 5 allocs/op BenchmarkBeego_ParseParam 1000000 1065 ns/op 352 B/op 3 allocs/op BenchmarkBone_ParseParam 1000000 1698 ns/op 896 B/op 7 allocs/op BenchmarkChi_ParseParam 1356037 873 ns/op 432 B/op 3 allocs/op BenchmarkDenco_ParseParam 6241392 204 ns/op 64 B/op 1 allocs/op BenchmarkEcho_ParseParam 14088100 85.1 ns/op 0 B/op 0 allocs/op BenchmarkGin_ParseParam 17426064 68.9 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseParam 1000000 1254 ns/op 664 B/op 8 allocs/op BenchmarkGoji_ParseParam 1682574 713 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_ParseParam 502224 2333 ns/op 1360 B/op 12 allocs/op BenchmarkGoJsonRest_ParseParam 1000000 1401 ns/op 649 B/op 13 allocs/op BenchmarkGoRestful_ParseParam 182623 7097 ns/op 4576 B/op 14 allocs/op BenchmarkGorillaMux_ParseParam 482332 2477 ns/op 1280 B/op 10 allocs/op BenchmarkGowwwRouter_ParseParam 1834873 657 ns/op 432 B/op 3 allocs/op BenchmarkHttpRouter_ParseParam 23593393 51.0 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseParam 2100160 574 ns/op 352 B/op 3 allocs/op BenchmarkKocha_ParseParam 4837220 252 ns/op 56 B/op 3 allocs/op BenchmarkLARS_ParseParam 18411192" }, { "data": "ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseParam 571870 2398 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_ParseParam 286262 4268 ns/op 1072 B/op 10 allocs/op BenchmarkPat_ParseParam 692906 2157 ns/op 992 B/op 15 allocs/op BenchmarkPossum_ParseParam 1000000 1011 ns/op 496 B/op 5 allocs/op BenchmarkR2router_ParseParam 1722735 697 ns/op 432 B/op 5 allocs/op BenchmarkRivet_ParseParam 6058054 203 ns/op 48 B/op 1 allocs/op BenchmarkTango_ParseParam 1000000 1061 ns/op 280 B/op 8 allocs/op BenchmarkTigerTonic_ParseParam 890275 2277 ns/op 784 B/op 15 allocs/op BenchmarkTraffic_ParseParam 351322 3543 ns/op 1896 B/op 21 allocs/op BenchmarkVulcan_ParseParam 2076544 572 ns/op 98 B/op 3 allocs/op BenchmarkAce_Parse2Params 11718074 101 ns/op 0 B/op 0 allocs/op BenchmarkAero_Parse2Params 16264988 73.4 ns/op 0 B/op 0 allocs/op BenchmarkBear_Parse2Params 1238322 973 ns/op 496 B/op 5 allocs/op BenchmarkBeego_Parse2Params 1000000 1120 ns/op 352 B/op 3 allocs/op BenchmarkBone_Parse2Params 1000000 1632 ns/op 848 B/op 6 allocs/op BenchmarkChi_Parse2Params 1239477 955 ns/op 432 B/op 3 allocs/op BenchmarkDenco_Parse2Params 4944133 245 ns/op 64 B/op 1 allocs/op BenchmarkEcho_Parse2Params 10518286 114 ns/op 0 B/op 0 allocs/op BenchmarkGin_Parse2Params 14505195 82.7 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_Parse2Params 1000000 1437 ns/op 712 B/op 9 allocs/op BenchmarkGoji_Parse2Params 1689883 707 ns/op 336 B/op 2 allocs/op BenchmarkGojiv2_Parse2Params 502334 2308 ns/op 1344 B/op 11 allocs/op BenchmarkGoJsonRest_Parse2Params 1000000 1771 ns/op 713 B/op 14 allocs/op BenchmarkGoRestful_Parse2Params 159092 7583 ns/op 4928 B/op 14 allocs/op BenchmarkGorillaMux_Parse2Params 417548 2980 ns/op 1296 B/op 10 allocs/op BenchmarkGowwwRouter_Parse2Params 1751737 686 ns/op 432 B/op 3 allocs/op 
BenchmarkHttpRouter_Parse2Params 18089204 66.3 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_Parse2Params 1556986 777 ns/op 384 B/op 4 allocs/op BenchmarkKocha_Parse2Params 2493082 485 ns/op 128 B/op 5 allocs/op BenchmarkLARS_Parse2Params 15350108 78.5 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_Parse2Params 530974 2605 ns/op 1072 B/op 10 allocs/op BenchmarkMartini_Parse2Params 247069 4673 ns/op 1152 B/op 11 allocs/op BenchmarkPat_Parse2Params 816295 2126 ns/op 752 B/op 16 allocs/op BenchmarkPossum_Parse2Params 1000000 1002 ns/op 496 B/op 5 allocs/op BenchmarkR2router_Parse2Params 1569771 733 ns/op 432 B/op 5 allocs/op BenchmarkRivet_Parse2Params 4080546 295 ns/op 96 B/op 1 allocs/op BenchmarkTango_Parse2Params 1000000 1121 ns/op 312 B/op 8 allocs/op BenchmarkTigerTonic_Parse2Params 399556 3470 ns/op 1168 B/op 22 allocs/op BenchmarkTraffic_Parse2Params 314194 4159 ns/op 1944 B/op 22 allocs/op BenchmarkVulcan_Parse2Params 1827559 664 ns/op 98 B/op 3 allocs/op BenchmarkAce_ParseAll 478395 2503 ns/op 0 B/op 0 allocs/op BenchmarkAero_ParseAll 715392 1658 ns/op 0 B/op 0 allocs/op BenchmarkBear_ParseAll 59191 20124 ns/op 8928 B/op 110 allocs/op BenchmarkBeego_ParseAll 45507 27266 ns/op 9152 B/op 78 allocs/op BenchmarkBone_ParseAll 29328 41459 ns/op 16208 B/op 147 allocs/op BenchmarkChi_ParseAll 48531 25053 ns/op 11232 B/op 78 allocs/op BenchmarkDenco_ParseAll 325532 4284 ns/op 928 B/op 16 allocs/op BenchmarkEcho_ParseAll 433771 2759 ns/op 0 B/op 0 allocs/op BenchmarkGin_ParseAll 576316 2082 ns/op 0 B/op 0 allocs/op BenchmarkGocraftWeb_ParseAll 41500 29692 ns/op 13728 B/op 181 allocs/op BenchmarkGoji_ParseAll 80833 15563 ns/op 5376 B/op 32 allocs/op BenchmarkGojiv2_ParseAll 19836 60335 ns/op 34448 B/op 277 allocs/op BenchmarkGoJsonRest_ParseAll 32210 38027 ns/op 13866 B/op 321 allocs/op BenchmarkGoRestful_ParseAll 6644 190842 ns/op 117600 B/op 354 allocs/op BenchmarkGorillaMux_ParseAll 12634 95894 ns/op 30288 B/op 250 allocs/op BenchmarkGowwwRouter_ParseAll 98152 12159 ns/op 6912 B/op 48 allocs/op BenchmarkHttpRouter_ParseAll 933208 1273 ns/op 0 B/op 0 allocs/op BenchmarkHttpTreeMux_ParseAll 107191 11554 ns/op 5728 B/op 51 allocs/op BenchmarkKocha_ParseAll 184862 6225 ns/op 1112 B/op 54 allocs/op BenchmarkLARS_ParseAll 644546 1858 ns/op 0 B/op 0 allocs/op BenchmarkMacaron_ParseAll 26145 46484 ns/op 19136 B/op 208 allocs/op BenchmarkMartini_ParseAll 10000 121838 ns/op 25072 B/op 253 allocs/op BenchmarkPat_ParseAll 25417 47196 ns/op 15216 B/op 308 allocs/op BenchmarkPossum_ParseAll 58550 20735 ns/op 10816 B/op 78 allocs/op BenchmarkR2router_ParseAll 72732 16584 ns/op 8352 B/op 120 allocs/op BenchmarkRivet_ParseAll 281365 4968 ns/op 912 B/op 16 allocs/op BenchmarkTango_ParseAll 42831 28668 ns/op 7168 B/op 208 allocs/op BenchmarkTigerTonic_ParseAll 23774 49972 ns/op 16048 B/op 332 allocs/op BenchmarkTraffic_ParseAll 10000 104679 ns/op 45520 B/op 605 allocs/op BenchmarkVulcan_ParseAll 64810 18108 ns/op 2548 B/op 78 allocs/op ```" } ]
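The figures above are standard Go `testing` benchmark output: the iteration count, nanoseconds per operation (ns/op), bytes allocated per operation (B/op) and heap allocations per operation (allocs/op). As a minimal sketch (not the actual suite that produced these numbers), a router benchmark reporting the same columns can be written as follows; the route and handler are hypothetical placeholders:

```go
// bench_test.go: minimal sketch of how ns/op, B/op and allocs/op figures are
// produced by Go's testing package. Not the benchmark suite used above.
package router

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// BenchmarkStaticRoute measures one static-route dispatch per iteration.
func BenchmarkStaticRoute(b *testing.B) {
	mux := http.NewServeMux()
	mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {})

	req := httptest.NewRequest(http.MethodGet, "/ping", nil)
	w := httptest.NewRecorder()

	b.ReportAllocs() // emit B/op and allocs/op alongside ns/op
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		mux.ServeHTTP(w, req)
	}
}
```

Running `go test -bench=. -benchmem` prints one line per benchmark in the same format as the tables above.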
{ "category": "Runtime", "file_name": "BENCHMARKS.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. Examples of behavior that contributes to creating a positive environment include: Using welcoming and inclusive language Being respectful of differing viewpoints and experiences Gracefully accepting constructive criticism Focusing on what is best for the community Showing empathy towards other community members Examples of unacceptable behavior by participants include: The use of sexualized language or imagery and unwelcome sexual attention or advances Trolling, insulting/derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or electronic address, without explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project leader ([email protected]). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers, contributors and users who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions with their involvement in the project as determined by the project's leader(s). This Code of Conduct is adapted from the , version 1.4, available at" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Singularity", "subcategory": "Container Runtime" }
[ { "data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Allocating IP Addresses menu_order: 5 search_type: Documentation Weave Net automatically assigns containers a unique IP address across the network, and also releases that address when the container exits. Unless you explicitly specify an address, this occurs for all invocations of the `attach`, `detach`, `expose`, and `hide` commands. Weave Net can also assign addresses in multiple subnets. The following automatic IP address management topics are discussed: * * Three initialization strategies are available: seed, consensus and observer. These options have different tradeoffs, so pick the one that suits your deployment best. Configuration via seed requires you to provide a list of peer names (via the `--ipalloc-init seed=` parameter) amongst which the address space will be shared initially. Normally weave derives a unique peer name automatically at launch, but since you need to know them ahead of time in this case you will need to name each peer explicitly via the `--name` parameter. Peers in the weave network are identified by a 48-bit value formatted like an ethernet MAC address (e.g. 01:23:45:67:89:ab) - you can either specify the name fully, or substitute a single run of zero-octets using the `::` notation, similar to : `00:00:00:00:00:01` can be written `::1` `01:00:00:00:00:00` can be written `1::` `01:00:00:00:00:01` can be written `1::1` host1$ weave launch --name ::1 --ipalloc-init seed=::1,::2,::3 host2$ weave launch --name ::2 --ipalloc-init seed=::1,::2,::3 host3$ weave launch --name ::3 --ipalloc-init seed=::1,::2,::3 In this configuration each peer knows in advance how the address space has been divided up, and will be able to perform allocations from the outset even under conditions of partition - no consensus is required. Alternatively, you can let Weave Net determine the seed automatically via a consensus algorithm. Since you don't need to provide it with a list of peer names anymore, you can let Weave Net derive those automatically for you as well. However, in order for Weave Net to form a single consensus reliably you must now instead tell each peer how many peers there are in total either by listing them as target peers or using the `--ipalloc-init consensus=` parameter. Just once, when the first automatic IP address allocation is requested in the whole network, Weave Net needs a majority of peers to be present in order to avoid formation of isolated groups, which can lead to inconsistency, for example, the same IP address being allocated to two different containers. Therefore, you must either supply the list of all peers in the network at `weave launch` or add the `--ipalloc-init consensus=` flag to specify how many peers there will be. To illustrate, suppose you have three hosts, accessible to each other as `$HOST1`, `$HOST2` and `$HOST3`. 
You can start Weave Net on those three hosts using these three commands: host1$ weave launch $HOST2 $HOST3 host2$ weave launch $HOST1 $HOST3 host3$ weave launch $HOST1 $HOST2 Or, if it is not convenient to name all the other hosts at launch time, you can pass the number of peers like this: host1$ weave launch --ipalloc-init consensus=3 host2$ weave launch --ipalloc-init consensus=3 $HOST3 host3$ weave launch --ipalloc-init consensus=3 $HOST2 The consensus mechanism used to determine a majority transitions through three states: 'deferred', 'waiting' and 'achieved': 'deferred' - no allocation requests or claims have been made yet; consensus is deferred until then 'waiting' - an attempt to achieve consensus is ongoing, triggered by an allocation or claim request; allocations will be delayed." }, { "data": "This state persists until a quorum of peers are able to communicate amongst themselves successfully 'achieved' - consensus achieved; allocations proceed normally Finally, some (but never all) peers can be launched as observers by specifying the `--ipalloc-init observer` option: host4$ weave launch --ipalloc-init observer $HOST3 You do not need to specify an initial peer count or seed to such peers. This can be useful to add peers to an existing fixed cluster (for example in response to a scale-out event) without worrying about adjusting initial peer counts accordingly. Normally it isn't a problem to over-estimate the value supplied to `--ipalloc-init consensus=`, but if you supply a number that is too small, then multiple independent groups may form. Weave Net uses the estimate of the number of peers at initialization to compute a majority or quorum number - specifically floor(n/2) + 1. If the actual number of peers is less than half the number stated, then they keep waiting for someone else to join in order to reach a quorum. But if the actual number is more than twice the quorum number, then you may end up with two sets of peers with each reaching a quorum and initializing independent data structures. You'd have to be quite unlucky for this to happen in practice, as they would have to go through the whole agreement process without learning about each other, but it's definitely possible. The quorum number is only used once at start-up (specifically, the first time someone tries to allocate or claim an IP address). Once a set of peers is initialized, you can add more and they will join on to the data structure used by the existing set. The one issue to watch is that if the earlier peers are restarted, you must restart them using the current number of peers. If they use the smaller number that was correct when they first started, then they could form an independent set again. To illustrate this last point, the following sequence of operations is safe with respect to Weave Net's startup quorum: host1$ weave launch ...time passes... host2$ weave launch $HOST1 ...time passes... host3$ weave launch $HOST1 $HOST2 ...time passes... ...host1 is rebooted... host1$ weave launch $HOST2 $HOST3 Under certain circumstances (for example when adding new peers to an existing network) it is desirable to ensure that a peer has successfully joined and is ready to allocate IP addresses. An administrative command is provided for this purpose: host1$ weave prime This operation will block until the peer on which it is run has joined successfully. By default, Weave Net allocates IP addresses in the 10.32.0.0/12 range.
This can be overridden with the `--ipalloc-range` option: host1$ weave launch --ipalloc-range 10.2.0.0/16 and must be the same on every host. The range parameter is written in CIDR notation; in this example \"/16\" means the first 16 bits of the address form the network address and the allocator is to allocate container addresses that all start with 10.2. See [IP addresses and routes](/site/concepts/ip-addresses.md) for more information. Weave shares the IP address range across all peers, dynamically according to their needs. If a group of peers becomes isolated from the rest (a partition), they can continue to work with the address ranges they had before isolation, and can subsequently be re-connected to the rest of the network without any conflicts arising. Key IPAM data is saved to disk, so that it is immediately available when the peer restarts: The division of the IP allocation range amongst peers Allocation of addresses to containers on the local peer A [data volume container](https://docs.docker.com/engine/userguide/containers/dockervolumes/#creating-and-mounting-a-data-volume-container) named `weavedb` is used to store this data." } ]
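The quorum rule described above, floor(n/2) + 1, can be illustrated with a short sketch. This is not Weave Net's implementation; it only encodes the arithmetic from the text to show why under-estimating the peer count passed to `--ipalloc-init consensus=` risks two isolated groups each reaching their own quorum.

```go
// Sketch of the quorum arithmetic described above (floor(n/2) + 1).
package main

import "fmt"

// quorum returns the majority threshold for an estimated peer count.
func quorum(estimatedPeers int) int {
	return estimatedPeers/2 + 1 // integer division is floor(n/2)
}

func main() {
	estimated, actual := 3, 10 // count given at launch vs. peers actually present
	q := quorum(estimated)
	fmt.Printf("quorum for estimate %d = %d\n", estimated, q)
	if actual >= 2*q {
		fmt.Println("risk: two disjoint groups of", q, "peers could each reach a quorum")
	}
}
```

With an estimate of 3 the quorum is 2, so a network that actually contains 10 peers could split into two groups that each believe they have a majority, which is exactly the inconsistency the documentation warns about.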
{ "category": "Runtime", "file_name": "ipam.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "The Virtual Kubelet accepts contributions via GitHub pull requests. This document outlines the process to help get your contribution accepted. If you are providing provider support for the Virtual Kubelet then we have to jump through some legal hurdles first. The must be signed by all contributors. Please fill out either the individual or corporate Contributor License Agreement (CLA). Once you are CLA'ed, we'll be able to accept your pull requests. *NOTE*: Only original source code from you and other people that have signed the CLA can be accepted into the repository. This is an open source project and as such no formal support is available. However, like all good open source projects we do offer \"best effort\" support through . Before opening a new issue or submitting a new pull request, it's helpful to search the project - it's likely that another user has already reported the issue you're facing, or it's a known issue that we're already aware of. Issues are used as the primary method for tracking anything to do with the Virtual Kubelet. The issue lifecycle is mainly driven by the core maintainers, but is good information for those contributing to Virtual Kubelet. All issue types follow the same general lifecycle. Differences are noted below. Issue creation Triage The maintainer in charge of triaging will apply the proper labels for the issue. This includes labels for priority, type, and metadata. If additional labels are needed in the future, we will add them. If needed, clean up the title to succinctly and clearly state the issue. Also ensure that proposals are prefaced with \"Proposal\". Add the issue to the correct milestone. If any questions come up, don't worry about adding the issue to a milestone until the questions are answered. We attempt to do this process at least once per work day. Discussion \"Feature\" and \"Bug\" issues should be connected to the PR that resolves it. Whoever is working on a \"Feature\" or \"Bug\" issue (whether a maintainer or someone from the community), should either assign the issue to themself or make a comment in the issue saying that they are taking it. \"Proposal\" and \"Question\" issues should remain open until they are either resolved or have remained inactive for more than 30 days. This will help keep the issue queue to a manageable size and reduce noise. Should the issue need to stay open, the `keep open` label can be added. Issue closure If you haven't already done so, sign a Contributor License Agreement (see details above). Fork the repository, develop and test your code changes. You can use the following command to clone your fork to your local ``` cd $GOPATH mkdir -p {src,bin,pkg} mkdir -p src/github.com/virtual-kubelet/ cd src/github.com/virtual-kubelet/ git clone [email protected]:<your-github-account-name>/virtual-kubelet.git # OR: git clone https://github.com/<your-github-account-name>/virtual-kubelet.git cd virtual-kubelet go get ./... git remote add upstream [email protected]:virtual-kubelet/virtual-kubelet.git ``` Submit a pull" }, { "data": "We welcome and appreciate everyone to submit and review changes. Here are some guidelines to follow for help ensure a successful contribution experience. Please note these are general guidelines, and while they are a good starting point, they are not specifically rules. If you have a question about something, feel free to ask: on Kubernetes Slack GitHub Issues Since Virtual Kubelet has reached 1.0 it is a major goal of the project to keep a stable API. 
Breaking changes must only be considered if a 2.0 release is on the table, which should only come with thoughtful consideration of the projects users as well as maintenance burden. Also note that behavior changes in the runtime can have cascading effects that cause unintended failures. Behavior changes should come well documented and with ample consideration for downstream effects. If possible, they should be opt-in. Public API's should be extendable and flexible without requiring breaking changes. While we can always add a new function (`Foo2()`), a new type, etc, doing so makes it harder for people to update to the new behavior. Build API interfaces that do not need to be changed to adopt new or improved functionality. Opinions on how a particular thing should work should be encoded by the user rather than implicit in the runtime. Defaults are fine, but defaults should be overridable. The smaller the surface area of an API, the easier it is to do more interesting things with it. Don't overload functionality. If something is complicated to setup we can provide helpers or wrappers to do that, but don't require users to do things a certain way because this tends to diminish the usefulness, especially as it relates to runtimes. We also do not want the maintenance burden of every users individual edge cases. Probably if it is a public/exported API, it should take a `context.Context`. Even if it doesn't need one today, it may need it tomorrow, and then we have a breaking API change. We use `context.Context` for storing loggers, tracing spans, and cancellation all across the project. Better safe than sorry: add a `context.Context`. Callers can't handle errors if they don't know what the error is, so make sure they can figure that out. We use a package `errdefs` to define the types of errors we currently look out for. We do not typically look for concrete error types, so check out `errdefs` and see if there is already an error type in there for your needs, or even create a new one. Ideally all behavior would be tested, in practice this is not the case. Unit tests are great, and fast. There is also an end-to-end test suite for testing the overall behavior of the system. Please add tests. This is also a great place to get started if you are new to the codebase. Virtual Kubelet follows the ." } ]
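The contribution steps above stop at "submit a pull request"; the sketch below shows one way the rest of that workflow typically looks. The branch name, commit message, and test commands are illustrative assumptions, not project requirements.

```bash
# Hypothetical continuation of the fork/clone steps above.
cd "$GOPATH/src/github.com/virtual-kubelet/virtual-kubelet"

git checkout -b my-feature            # develop on a topic branch
# ...edit code, add unit tests...

go build ./...                        # confirm the tree still builds
go test ./...                         # run the unit tests locally

git commit -a -m "Describe the change"
git push origin my-feature            # "origin" is your fork, per the clone above
# Finally, open a pull request from your fork's my-feature branch on GitHub.
```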
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Virtual Kubelet", "subcategory": "Container Runtime" }
[ { "data": "Starting from , the import path will be: \"github.com/golang-jwt/jwt/v4\" The `/v4` version will be backwards compatible with existing `v3.x.y` tags in this repo, as well as `github.com/dgrijalva/jwt-go`. For most users this should be a drop-in replacement, if you're having troubles migrating, please open an issue. You can replace all occurrences of `github.com/dgrijalva/jwt-go` or `github.com/golang-jwt/jwt` with `github.com/golang-jwt/jwt/v4`, either manually or by using tools such as `sed` or `gofmt`. And then you'd typically run: ``` go get github.com/golang-jwt/jwt/v4 go mod tidy ``` The original migration guide for older releases can be found at https://github.com/dgrijalva/jwt-go/blob/master/MIGRATION_GUIDE.md." } ]
{ "category": "Runtime", "file_name": "MIGRATION_GUIDE.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "We follow the , and use the corresponding tooling. For the purposes of the aforementioned guidelines, controller-runtime counts as a \"library project\", but otherwise follows the guidelines exactly. For release branches, we generally tend to support backporting one (1) major release (`release-{X-1}` or `release-0.{Y-1}`), but may go back further if the need arises and is very pressing (e.g. security updates). Note the . Particularly: We DO guarantee Kubernetes REST API compatibility -- if a given version of controller-runtime stops working with what should be a supported version of Kubernetes, this is almost certainly a bug. We DO NOT guarantee any particular compatibility matrix between kubernetes library dependencies (client-go, apimachinery, etc); Such compatibility is infeasible due to the way those libraries are versioned." } ]
{ "category": "Runtime", "file_name": "VERSIONING.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "(devices-usb)= ```{note} The `usb` device type is supported for both containers and VMs. It supports hotplugging for both containers and VMs. ``` USB devices make the specified USB device appear in the instance. For performance issues, avoid using devices that require high throughput or low latency. For containers, only `libusb` devices (at `/dev/bus/usb`) are passed to the instance. This method works for devices that have user-space drivers. For devices that require dedicated kernel drivers, use a or a instead. For virtual machines, the entire USB device is passed through, so any USB device is supported. When a device is passed to the instance, it vanishes from the host. `usb` devices have the following device options: % Include content from ```{include} ../config_options.txt :start-after: <!-- config group devices-usb start --> :end-before: <!-- config group devices-usb end --> ```" } ]
{ "category": "Runtime", "file_name": "devices_usb.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "A distribution point represents a method for fetching a container image from an input string. This string does not specify an image's type. A distribution point can provide one or more image formats. Some Docker registries also provide OCI Images. Rkt can fetch a Docker/OCI image from a registry and convert it on the fly to its native image format, ACI. The tool can perform this conversion in advance. Before distribution points, rkt used ImageTypes. These mapped a specifically formatted input string to things like the distribution, transport and image type. This information is hidden now since all images are appc ACIs. Distribution points are used as the primary expression of container image information in the different layers of rkt. This includes fetching and referencing in a CAS/ref store. Distribution points are either direct or indirect. Direct distribution points provide the final information needed to fetch the image. Indirect distribution points take some indirect steps, like discovery, before getting the final image location. An indirect distribution point may resolve to a direct distribution point. A distribution point is represented as a URI with the URI scheme as \"cimd\" and the remaining parts (URI opaque data and query/fragments parts) as the distribution point data. See for more information on this. Distribution points clearly map to a resource name, otherwise they will not fit inside a resource locator (URL). We will then use the term URIs instead of URNs because it's the suggested name from the rfc (and URNs are defined, by rfc2141, to have the `urn` scheme). Every distribution starts the same: `cimd:DISTTYPE:v=uint32(VERSION):` where `cimd` is the container image distribution scheme* `DISTTYPE` is the distribution type* `v=uint32(VERSION)` is the distribution type format version* Rkt has three types of distribution points: `Appc` `ACIArchive` `Docker` This is an indirect distribution point. Appc defines a distribution point using appc image discovery The format is: `cimd:appc:v=0:name?label01=....&label02=....` The distribution type is \"appc\" The labels values must be Query escaped Example: `cimd:appc:v=0:coreos.com/etcd?version=v3.0.3&os=linux&arch=amd64` This is a direct distribution point since it directly define the final image location. ACIArchive defines a distribution point using an archive file The format is: `cimd:aci-archive:v=0:ArchiveURL?query...` The distribution type is \"aci-archive\" ArchiveURL must be query escaped Examples: `cimd:aci-archive:v=0:file%3A%2F%2Fabsolute%2Fpath%2Fto%2Ffile` `cimd:aci-archive:v=0:https%3A%2F%2Fexample.com%2Fapp.aci` Docker is an indirect distribution point. This defines a distribution point using a docker registry The format is: `cimd:docker:v=0:[REGISTRYHOST[:REGISTRYPORT]/]NAME[:TAG|@DIGEST]` Removing the common distribution point section, the format is the same as the docker image string format (man docker-pull). Examples: `cimd:docker:v=0:busybox` `cimd:docker:v=0:busybox:latest` `cimd:docker:v=0:registry-1.docker.io/library/busybox@sha256:a59906e33509d14c036c8678d687bd4eec81ed7c4b8ce907b888c607f6a1e0e6` This is an Indirect distribution point. OCI images can be retrieved using a Docker registry but in future the OCI image spec will define one or more own kinds of distribution starting from an image name (with additional tags/labels). This is a Direct distribution point. 
This can fetch an image starting from a" }, { "data": "The 'location' can point to: A single file archive A local directory based layout A remote directory based layout Other types of locations This will probably end up being the final distribution used by the above OCI image distributions (like ACIArchive is the final distribution point for the Appc distribution point): `cimd:oci-image-layout:v=0:file%3A%2F%2Fabsolute%2Fpath%2Fto%2Ffile?ref=refname` `cimd:oci-image-layout:v=0:https%3A%2F%2Fdir%2F?ref=refname` Since the OCI image layout can provide multiple images selectable by a ref, one needs to specify which ref to use in the archive distribution URI (see the above ref query parameter). Since distribution only covers one image, it is not possible to import all refs with a single distribution URI. TODO(sgotti): Define if oci-image-layout. It should internally handle both archive and directory based layouts or use two different distributions or a query parameter the explicitly define the layout (to avoid guessing if the URL points to a single file or to a directory).* Note Considering , the final distribution format will probably be similar to the Appc distribution. There is a need to distinguish their User Friendly string (prepending an appc: or oci: ?). To sum it up: | Distribution Point | Type | URI Format | Final Distribution Point | |--|-||--| | Appc | Direct | `cimd:appc:v=0:name?label01=....&label02=...` | ACIArchive | | Docker | Direct | `cimd:docker:v=0:[REGISTRYHOST[:REGISTRYPORT]/]NAME[:TAG&#124;@DIGEST]` | <none> | | ACIArchive | Indirect | `cimd:aci-archive:v=0:ArchiveURL?query...` | | | OCI | Direct | `cimd:oci:v=0:TODO` | OCIImageLayout | | OCIImageLayout | Indirect | `cimd:oci-image-layout:v=0:URL?ref=...` | | The distribution URI can be long and complex. It is helpful to have a friendly string for users to request an image with. Rkt supports a couple of image string input styles. These are mapped to an `AppImageType`: Appc discovery string: `example.com/app01:v1.0.0,label01=value01,...` or `example.com/app01,version=v1.0.0,label01=value01,...` etc. File paths are absolute (`/full/path/to/file`) or relative. The above two may overlap so some heuristic is needed to distinguish them (removing this heuristic will break backward compatibility in the CLI). File URL: `file:///full/path/to/file` Http(s) URL: `http(s)://host:port/path` Docker URL: This is a strange URL since it the schemeful (`docker://`) version of the docker image string To maintain backward compatibility these image string will be converted to a distribution URI: | Current ImageType | Distribution Point URI | |-|| | appc string | `cimd:appc:v=0:name?label01=....&label02=...` | | file path | `cimd:aci-archive:v=0:ArchiveURL` | | file URL | `cimd:aci-archive:v=0:ArchiveURL` | | https URL | `cimd:aci-archive:v=0:ArchiveURL` | | docker URI/URL (docker: and docker://) | `cimd:docker:v=0:[REGISTRYHOST[:REGISTRYPORT]/]NAME[:TAG&#124;@DIGEST]` | The above table also adds Docker URI (`docker:`) as a user friendly string and its clearer than the URL version (`docker://`) The parsing and generation of user friendly string is done outside the distribution package (to let distribution pkg users implement their own user friendly strings). Rkt has two jobs: Parse a user friendly string to a distribution URI. Generate a user friendly string from a distribution URI. This is useful when showing the refs from a refs store. They can easily be understood and copy/pasted. 
A user can provide as an input image as a \"user friendly\" string or a complete distribution URI. A Distribution Point implementation will also provide a function to compare if Distribution Point URIs are the same (e.g. ordering the query parameters). A Distribution Point will be the base for a future refactor of the fetchers logic (see ) This also creates a better separation between the distribution points and the transport layers. For example there may exist multiple transport plugins (file, http, s3, bittorrent etc...) to be called by an ACIArchive distribution point." } ]
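To make the mapping between user-friendly strings and distribution points concrete, here is an illustrative set of fetches. The exact flags (for example `--insecure-options=image` for unsigned images) are assumptions about your trust configuration rather than part of the distribution-point design.

```bash
rkt fetch coreos.com/etcd:v3.0.3                      # Appc discovery string -> cimd:appc:v=0:...
rkt fetch docker://busybox --insecure-options=image   # Docker URL -> cimd:docker:v=0:busybox
rkt fetch https://example.com/app.aci                 # archive URL -> cimd:aci-archive:v=0:...

rkt image list   # fetched images are now referenced in the local CAS/ref store
```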
{ "category": "Runtime", "file_name": "distribution-point.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "title: \"Backup Reference\" layout: docs It is possible to exclude individual items from being backed up, even if they match the resource/namespace/label selectors defined in the backup spec. To do this, label the item as follows: ```bash kubectl label -n <ITEM_NAMESPACE> <RESOURCE>/<NAME> velero.io/exclude-from-backup=true ``` To backup resources of specific Kind in a specific order, use option --ordered-resources to specify a mapping Kinds to an ordered list of specific resources of that Kind. Resource names are separated by commas and their names are in format 'namespace/resourcename'. For cluster scope resource, simply use resource name. Key-value pairs in the mapping are separated by semi-colon. Kind name is in plural form. ```bash velero backup create backupName --include-cluster-resources=true --ordered-resources 'pods=ns1/pod1,ns1/pod2;persistentvolumes=pv4,pv8' --include-namespaces=ns1 velero backup create backupName --ordered-resources 'statefulsets=ns1/sts1,ns1/sts0' --include-namespaces=ns1 ``` The schedule operation allows you to create a backup of your data at a specified time, defined by a . ``` velero schedule create NAME --schedule=\" *\" [flags] ``` Cron schedules use the following format. ``` ``` For example, the command below creates a backup that runs every day at 3am. ``` velero schedule create example-schedule --schedule=\"0 3 *\" ``` This command will create the backup, `example-schedule`, within Velero, but the backup will not be taken until the next scheduled time, 3am. Backups created by a schedule are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. For a full list of available configuration flags use the Velero CLI help command. ``` velero schedule create --help ``` Once you create the scheduled backup, you can then trigger it manually using the `velero backup` command. ``` velero backup create --from-schedule example-schedule ``` This command will immediately trigger a new backup based on your template for `example-schedule`. This will not affect the backup schedule, and another backup will trigger at the scheduled time. Backups created from schedule can have owner reference to the schedule. This can be achieved by command: ``` velero schedule create --use-owner-references-in-backup <backup-name> ``` By this way, schedule is the owner of it created backups. This is useful for some GitOps scenarios, or the resource tree of k8s synchronized from other places. Please do notice there is also side effect that may not be expected. Because schedule is the owner, when the schedule is deleted, the related backups CR (Just backup CR is deleted. Backup data still exists in object store and snapshots) will be deleted by k8s GC controller, too, but Velero controller will sync these backups from object store's metadata into k8s. Then k8s GC controller and Velero controller will fight over whether these backups should exist all through. If there is possibility the schedule will be disable to not create backup anymore, and the created backups are still useful. Please do not enable this option. For detail, please reference to . By default, Velero will paginate the LIST API call for each resource type in the Kubernetes API when collecting items into a backup. The `--client-page-size` flag for the Velero server configures the size of each page. Depending on the cluster's scale, tuning the page size can improve backup performance. 
You can experiment with higher values, noting their impact on the relevant `apiserver_request_duration_seconds*` metrics from the Kubernetes apiserver. Pagination can be entirely disabled by setting `--client-page-size` to `0`. This will request all items in a single unpaginated LIST call. Use the following commands to delete Velero backups and data: `kubectl delete backup <backupName> -n <veleroNamespace>` will delete the backup custom resource only and will not delete any associated data from object/block storage `velero backup delete <backupName>` will delete the backup resource including all data in object/block storage" } ]
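Putting the pieces above together, a minimal end-to-end sketch might look like the following; the schedule name, namespace, and backup timestamp are placeholders.

```bash
# Nightly 3am backup of namespace ns1, driven by a schedule.
velero schedule create nightly-ns1 --schedule="0 3 * * *" --include-namespaces=ns1

# Trigger one backup immediately from the same template.
velero backup create --from-schedule nightly-ns1

# Later, remove a specific backup (and its data), then the schedule itself.
velero backup delete nightly-ns1-20240101030000
velero schedule delete nightly-ns1
```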
{ "category": "Runtime", "file_name": "backup-reference.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "There are three ways of passing information to plugins using the Container Network Interface (CNI), none of which require the to be updated. These are plugin specific fields in the JSON config `args` field in the JSON config `CNI_ARGS` environment variable This document aims to provide guidance on which method should be used and to provide a convention for how common information should be passed. Establishing these conventions allows plugins to work across multiple runtimes. This helps both plugins and the runtimes. Plugin authors should aim to support these conventions where it makes sense for their plugin. This means they are more likely to \"just work\" with a wider range of runtimes. Plugins should accept arguments according to these conventions if they implement the same basic functionality as other plugins. If plugins have shared functionality that isn't covered by these conventions then a PR should be opened against this document. Runtime authors should follow these conventions if they want to pass additional information to plugins. This will allow the extra information to be consumed by the widest range of plugins. These conventions serve as an abstraction for the runtime. For example, port forwarding is highly implementation specific, but users should be able to select the plugin of their choice without changing the runtime. Additional conventions can be created by creating PRs which modify this document. formed part of the original CNI spec and have been present since the initial release. Plugins may define additional fields that they accept and may generate an error if called with unknown fields. The exception to this is the args field may be used to pass arbitrary data which may be ignored by plugins. A plugin can define any additional fields it needs to work properly. It should return an error if it can't act on fields that were expected or where the field values were malformed. This method of passing information to a plugin is recommended when the following conditions hold: The configuration has specific meaning to the plugin (i.e. it's not just general meta data) the plugin is expected to act on the configuration or return an error if it can't Dynamic information (i.e. data that a runtime fills out) should be placed in a `runtimeConfig` section. Plugins can request that the runtime insert this dynamic configuration by explicitly listing their `capabilities` in the network configuration. For example, the configuration for a port mapping plugin might look like this to an operator (it should be included as part of a . ```json { \"name\" : \"ExamplePlugin\", \"type\" : \"port-mapper\", \"capabilities\": {\"portMappings\": true} } ``` But the runtime would fill in the mappings so the plugin itself would receive something like" }, { "data": "```json { \"name\" : \"ExamplePlugin\", \"type\" : \"port-mapper\", \"runtimeConfig\": { \"portMappings\": [ {\"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\"} ] } } ``` | Area | Purpose | Capability | Spec and Example | Runtime implementations | Plugin Implementations | | -- | - | --| - | -- | | | port mappings | Pass mapping from ports on the host to ports in the container network namespace. 
| `portMappings` | A list of portmapping entries.<br/> <pre>[<br/> { \"hostPort\": 8080, \"containerPort\": 80, \"protocol\": \"tcp\" },<br /> { \"hostPort\": 8000, \"containerPort\": 8001, \"protocol\": \"udp\" }<br /> ]<br /></pre> | kubernetes | CNI `portmap` plugin | | ip ranges | Dynamically configure the IP range(s) for address allocation. Runtimes that manage IP pools, but not individual IP addresses, can pass these to plugins. | `ipRanges` | The same as the `ranges` key for `host-local` - a list of lists of subnets. The outer list is the number of IPs to allocate, and the inner list is a pool of subnets for each allocation. <br/><pre>[<br/> [<br/> { \"subnet\": \"10.1.2.0/24\", \"rangeStart\": \"10.1.2.3\", \"rangeEnd\": 10.1.2.99\", \"gateway\": \"10.1.2.254\" } <br/> ]<br/>]</pre> | none | CNI `host-local` plugin | | bandwidth limits | Dynamically configure interface bandwidth limits | `bandwidth` | Desired bandwidth limits. Rates are in bits per second, burst values are in bits. <pre> { \"ingressRate\": 2048, \"ingressBurst\": 1600, \"egressRate\": 4096, \"egressBurst\": 1600 } </pre> | none | CNI `bandwidth` plugin | | dns | Dynamically configure dns according to runtime | `dns` | Dictionary containing a list of `servers` (string entries), a list of `searches` (string entries), a list of `options` (string entries). <pre>{ <br> \"searches\" : [ \"internal.yoyodyne.net\", \"corp.tyrell.net\" ] <br> \"servers\": [ \"8.8.8.8\", \"10.0.0.10\" ] <br />} </pre> | kubernetes | CNI `win-bridge` plugin, CNI `win-overlay` plugin | | ips | Dynamically allocate IPs for container interface. Runtime which has the ability of address allocation can pass these to plugins. | `ips` | A list of `IP` (\\<ip\\>\\[/\\<prefix\\>\\]). <pre> [ \"192.168.0.1\", 10.10.0.1/24\", \"3ffe:ffff:0:01ff::2\", \"3ffe:ffff:0:01ff::1/64\" ] </pre> The plugin may require the IP address to include a prefix length. | none | CNI `static` plugin, CNI `host-local` plugin | | mac | Dynamically assign MAC. Runtime can pass this to plugins which need MAC as input. | `mac` | `MAC` (string entry). <pre> \"c2:11:22:33:44:55\" </pre> | none | CNI `tuning` plugin | | infiniband guid | Dynamically assign Infiniband GUID to network interface. Runtime can pass this to plugins which need Infiniband GUID as input. | `infinibandGUID` | `GUID` (string entry). <pre> \"c2:11:22:33:44:55:66:77\" </pre> | none | CNI plugin | | device id | Provide device identifier which is associated with the network to allow the CNI plugin to perform device dependent network configurations. | `deviceID` | `deviceID` (string entry). <pre> \"0000:04:00.5\" </pre> | none | CNI `host-device` plugin | | aliases | Provide a list of names that will be mapped to the IP addresses assigned to this interface. Other containers on the same network may use one of these names to access the container.| `aliases` | List of `alias` (string entry). <pre> [\"my-container\", \"primary-db\"] </pre> | none | CNI `alias` plugin | | cgroup path | Provide the cgroup path for pod as requested by CNI plugins. | `cgroupPath` | `cgroupPath` (string entry). <pre>\"/kubelet.slice/kubelet-kubepods.slice/kubelet-kubepods-burstable.slice/kubelet-kubepods-burstable-pod28ce45bc63f848a3a99bcfb9e63c856c.slice\" </pre> | none | CNI `host-local` plugin | `args` in were reserved as a field in the" }, { "data": "release of the CNI spec. args (dictionary): Optional additional arguments provided by the container runtime. 
For example a dictionary of labels could be passed to CNI plugins by adding them to a labels field under args. `args` provide a way of providing more structured data than the flat strings that CNI_ARGS can support. `args` should be used for optional meta-data. Runtimes can place additional data in `args` and plugins that don't understand that data should just ignore it. Runtimes should not require that a plugin understands or consumes that data provided, and so a runtime should not expect to receive an error if the data could not be acted on. This method of passing information to a plugin is recommended when the information is optional and the plugin can choose to ignore it. It's often that case that such information is passed to all plugins by the runtime without regard for whether the plugin can understand it. The conventions documented here are all namespaced under `cni` so they don't conflict with any existing `args`. For example: ```jsonc { \"cniVersion\":\"0.2.0\", \"name\":\"net\", \"args\":{ \"cni\":{ \"labels\": [{\"key\": \"app\", \"value\": \"myapp\"}] } }, // <REST OF CNI CONFIG HERE> \"ipam\":{ // <IPAM CONFIG HERE> } } ``` | Area | Purpose| Spec and Example | Runtime implementations | Plugin Implementations | | -- | | | -- | - | | labels | Pass`key=value` labels to plugins | <pre>\"labels\" : [<br /> { \"key\" : \"app\", \"value\" : \"myapp\" },<br /> { \"key\" : \"env\", \"value\" : \"prod\" }<br />] </pre> | none | none | | ips | Request specific IPs | Spec:<pre>\"ips\": [\"\\<ip\\>[/\\<prefix\\>]\", ...]</pre>Examples:<pre>\"ips\": [\"10.2.2.42/24\", \"2001:db8::5\"]</pre> The plugin may require the IP address to include a prefix length. | none | host-local, static | CNI_ARGS formed part of the original CNI spec and have been present since the initial release. `CNI_ARGS`: Extra arguments passed in by the user at invocation time. Alphanumeric key-value pairs separated by semicolons; for example, \"FOO=BAR;ABC=123\" The use of `CNIARGS` is deprecated and \"args\" should be used instead. If a runtime passes an equivalent key via `args` (eg the `ips` `args` Area and the `CNIARGS` `IP` Field) and the plugin understands `args`, the plugin must ignore the CNI_ARGS Field. | Field | Purpose| Spec and Example | Runtime implementations | Plugin Implementations | | | | - | -- | - | | IP | Request a specific IP from IPAM plugins | Spec:<pre>IP=\\<ip\\> suggests IP can be used. | host-local, static | If plugins are agnostic about the type of interface created, they SHOULD work in a chained mode and configure existing interfaces. Plugins MAY also create the desired interface when not run in a chain. For example, the `bridge` plugin adds the host-side interface to a bridge. So, it should accept any previous result that includes a host-side interface, including `tap` devices. If not called as a chained plugin, it creates a `veth` pair first. Plugins that meet this convention are usable by a larger set of runtimes and interfaces, including hypervisors and DPDK providers." } ]
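For a concrete picture of how these channels reach a plugin, the sketch below invokes the `host-local` IPAM plugin directly with a config carrying an `args` request plus a `CNI_ARGS` variable. The paths, netns, and config details are assumptions for illustration; per the convention above, a plugin that understands `args` must ignore the equivalent `CNI_ARGS` field.

```bash
# Network config with an "args" IP request (a runtime would normally build this).
cat > /tmp/example-net.json <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "args": { "cni": { "ips": ["10.2.2.42/24"] } },
  "ipam": {
    "type": "host-local",
    "ranges": [[ { "subnet": "10.2.2.0/24" } ]],
    "dataDir": "/tmp/cni-example"
  }
}
EOF

# Environment-variable channel: CNI_ARGS carries key=value pairs.
CNI_COMMAND=ADD \
CNI_CONTAINERID=example-container \
CNI_NETNS=/var/run/netns/example \
CNI_IFNAME=eth0 \
CNI_PATH=/opt/cni/bin \
CNI_ARGS="IP=10.2.2.42" \
  /opt/cni/bin/host-local < /tmp/example-net.json
```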
{ "category": "Runtime", "file_name": "CONVENTIONS.md", "project_name": "Container Network Interface (CNI)", "subcategory": "Cloud Native Network" }
[ { "data": "Preview: and will help calculate `Sandbox Size` info and pass it to Kata Containers through annotations. In order to adapt to this beneficial change and be compatible with the past, we have implemented the new vCPUs handling way in `runtime-rs`, which is slightly different from the original `runtime-go`'s design. vCPUs sizing should be determined by the container workloads. So throughout the life cycle of Kata Containers, there are several points in time when we need to think about how many vCPUs should be at the time. Mainly including the time points of `CreateVM`, `CreateContainer`, `UpdateContainer`, and `DeleteContainer`. `CreateVM`: When creating a sandbox, we need to know how many vCPUs to start the VM with. `CreateContainer`: When creating a new container in the VM, we may need to hot-plug the vCPUs according to the requirements in container's spec. `UpdateContainer`: When receiving the `UpdateContainer` request, we may need to update the vCPU resources according to the new requirements of the container. `DeleteContainer`: When a container is removed from the VM, we may need to hot-unplug the vCPUs to reclaim the vCPU resources introduced by the container. When Kata calculate the number of vCPUs, We have three data sources, the `defaultvcpus` and `defaultmaxvcpus` specified in the configuration file (named `TomlConfig` later in the doc), the `io.kubernetes.cri.sandbox-cpu-quota` and `io.kubernetes.cri.sandbox-cpu-period` annotations passed by the upper layer runtime, and the corresponding CPU resource part in the container's spec for the container when `CreateContainer`/`UpdateContainer`/`DeleteContainer` is requested. Our understanding and priority of these resources are as follows, which will affect how we calculate the number of vCPUs later. From `TomlConfig`: `default_vcpus`: default number of vCPUs when starting a VM. `default_maxvcpus`: maximum number of vCPUs. From `Annotation`: `InitialSize`: we call the size of the resource passed from the annotations as `InitialSize`. Kubernetes will calculate the sandbox size according to the Pod's statement, which is the `InitialSize` here. This size should be the size we want to prioritize. From `Container Spec`: The amount of CPU resources that the Container wants to use will be declared through the spec. Including the aforementioned annotations, we mainly consider `cpu quota` and `cpuset` when calculating the number of vCPUs. `cpu quota`: `cpu quota` is the most common way to declare the amount of CPU resources. The number of vCPUs introduced by `cpu quota` declared in a container's spec is: `vCPUs = ceiling( quota / period )`. `cpuset`: `cpuset` is often used to bind the CPUs that tasks can run" }, { "data": "The number of vCPUs may introduced by `cpuset` declared in a container's spec is the number of CPUs specified in the set that do not overlap with other containers. There are two types of vCPUs that we need to consider, one is the number of vCPUs when starting the VM (named `Boot Size` in the doc). The second is the number of vCPUs when `CreateContainer`/`UpdateContainer`/`DeleteContainer` request is received (`Real-time Size` in the doc). The main considerations are `InitialSize` and `default_vcpus`. There are the following principles: `InitialSize` has priority over `default_vcpus` declared in `TomlConfig`. When there is such an annotation statement, the originally `defaultvcpus` will be modified to the number of vCPUs in the `InitialSize` as the `Boot Size`. 
(Because not all runtimes support this annotation for the time being, we still keep the `defaultcpus` in `TomlConfig`.) When the specs of all containers are aggregated for sandbox size calculation, the method is consistent with the calculation method of `InitialSize` here. When we receive an OCI request, it may be for a single container. But what we have to consider is the number of vCPUs for the entire VM. So we will maintain a list. Every time there is a demand for adjustment, the entire list will be traversed to calculate a value for the number of vCPUs. In addition, there are the following principles: Do not cut computing power and try to keep the number of vCPUs specified by `InitialSize`. So the number of vCPUs after will not be less than the `Boot Size`. `cpu quota` takes precedence over `cpuset` and the setting history are took into account. We think quota describes the CPU time slice that a cgroup can use, and `cpuset` describes the actual CPU number that a cgroup can use. Quota can better describe the size of the CPU time slice that a cgroup actually wants to use. The `cpuset` only describes which CPUs the cgroup can use, but the cgroup can use the specified CPU but consumes a smaller time slice, so the quota takes precedence over the `cpuset`. On the one hand, when both `cpu quota` and `cpuset` are specified, we will calculate the number of vCPUs based on `cpu quota` and ignore `cpuset`. On the other hand, if `cpu quota` was used to control the number of vCPUs in the past, and only `cpuset` was updated during `UpdateContainer`, we will not adjust the number of vCPUs at this time. `StaticSandboxResourceMgmt` controls hotplug. Some VMMs and kernels of some architectures do not support hotplugging. We can accommodate this situation through `StaticSandboxResourceMgmt`. When `StaticSandboxResourceMgmt = true` is set, we don't make any further attempts to update the number of vCPUs after booting." } ]
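As a quick, purely illustrative check of the `vCPUs = ceiling(quota / period)` rule above (the numbers are made up, not taken from a real annotation):

```bash
quota=250000    # e.g. io.kubernetes.cri.sandbox-cpu-quota
period=100000   # e.g. io.kubernetes.cri.sandbox-cpu-period

# ceiling(250000 / 100000) = ceiling(2.5) = 3 vCPUs
echo $(( (quota + period - 1) / period ))
```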
{ "category": "Runtime", "file_name": "vcpu-handling-runtime-rs.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "name: Task about: Create a general task title: \"[TASK] \" labels: kind/task assignees: '' <!--A clear and concise description of what the task is.--> <!-- Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists [ ] `item 1` --> <!--Add any other context or screenshots about the task request here.-->" } ]
{ "category": "Runtime", "file_name": "task.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This document assumes a trusted environment with a functioning kata container, as per the . The term \"trusted\" implies that the system is authorized, authenticated and attested to use your artifacts and secrets safely. Machine: IBM z16 LPAR OS: Ubuntu 22.04.1 LTS CPU: 16 vCPU Memory: 16G Host capable of Secure Execution To take advantage of the IBM Secure Execution capability, the host machine on which you intend to run workloads must be an IBM z15 (or a newer model) or an IBM LinuxONE III (or a newer model). In addition to the hardware requirement, you need to verify the CPU facility and kernel configuration, as outlined below: ``` $ # To check the protected virtualization support from kernel $ cat /sys/firmware/uv/protvirthost 1 $ # To check if an ultravisor reserves memory for the current boot $ sudo dmesg | grep -i ultravisor [ 0.063630] prot_virt.f9efb6: Reserving 98MB as ultravisor base storage $ # To check a facility bit for Secure Execution $ cat /proc/cpuinfo | grep 158 facilities : ... numbers ... 158 ... numbers ... ``` If any of the results are not identifiable, please reach out to the responsible cloud provider to enable the Secure Execution capability. Alternatively, if you possess administrative privileges and the facility bit is set, you can enable the Secure Execution capability by adding `prot_virt=1` to the kernel parameters and performing a system reboot like: ``` $ sudo sed -i 's/^\\(parameters.*\\)/\\1 prot_virt=1/g' /etc/zipl.conf $ sudo zipl -V $ sudo systemctl reboot ``` Please note that the method of enabling the Secure Execution capability may vary among Linux distributions. Artifacts from Kata Containers A secure image is constructed using the following artifacts A raw kernel An initial RAM disk The most straightforward approach to obtain these artifacts is by reusing kata-containers: ``` $ export PATH=\"$PATH:/opt/kata/bin\" $ ls -1 $(dirname $(kata-runtime env --json | jq -r '.Kernel.Path')) config-6.1.62-121 kata-containers.img kata-containers-confidential.img kata-containers-initrd.img kata-containers-initrd-confidential.img kata-ubuntu-20.04.initrd kata-ubuntu-20.04-confidential.initrd kata-ubuntu-latest.image kata-ubuntu-latest-confidential.image vmlinux-6.1.62-121 vmlinux-6.1.62-121-confidential vmlinux.container vmlinux-confidential.container vmlinuz-6.1.62-121 vmlinuz-6.1.62-121-confidential vmlinuz.container vmlinuz-confidential.container ``` The output indicates the deployment of the kernel (`vmlinux-6.1.62-121-confidential`, though the version may vary at the time of testing), rootfs-image (`kata-ubuntu-latest-confidential.image`), and rootfs-initrd (`kata-ubuntu-20.04-confidential.initrd`). In this scenario, the available kernel and initrd can be utilized for a secure image. 
However, if any of these components are absent, they must be built from the as follows: ``` $ # Assume that the project is cloned at $GOPATH/src/github.com/kata-containers $ cd $GOPATH/src/github.com/kata-containers/kata-containers $ sudo -E PATH=$PATH make kernel-confidential-tarball $ sudo -E PATH=$PATH make rootfs-initrd-confidential-tarball $ tar -tf build/kata-static-kernel-confidential.tar.xz | grep vmlinuz ./opt/kata/share/kata-containers/vmlinuz-confidential.container ./opt/kata/share/kata-containers/vmlinuz-6.1.62-121-confidential $ tar -tf build/kata-static-rootfs-initrd-confidential.tar.xz | grep initrd ./opt/kata/share/kata-containers/kata-containers-initrd-confidential.img ./opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd $ mkdir artifacts $ tar -xvf build/kata-static-kernel-confidential.tar.xz -C artifacts ./opt/kata/share/kata-containers/vmlinuz-6.1.62-121-confidential $ tar -xvf build/kata-static-rootfs-initrd-confidential.tar.xz -C artifacts ./opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd $ ls artifacts/opt/kata/share/kata-containers/ kata-ubuntu-20.04-confidential.initrd vmlinuz-6.1.62-121-confidential ``` Secure Image Generation Tool `genprotimg` is a utility designed to generate an IBM Secure Execution image. It can be installed either from the package manager of a distribution or from the source code. The tool is included in the `s390-tools` package. Please ensure that you have a version of the tool equal to or greater than `2.17.0`. If not, you will need to specify an additional argument, `--x-pcf '0xe0'`, when running the command. Here is an example of a native build from the source: ``` $ sudo apt-get install gcc libglib2.0-dev libssl-dev libcurl4-openssl-dev $ tool_version=v2.25.0 $ git clone -b $tool_version" }, { "data": "$ pushd s390-tools/genprotimg && make && sudo make install && popd $ rm -rf s390-tools ``` Host Key Document A host key document is a public key employed for encrypting a secure image, which is subsequently decrypted using a corresponding private key during the VM bootstrap process. You can obtain the host key document either through IBM's designated or by requesting it from the cloud provider responsible for the IBM Z and LinuxONE instances where your workloads are intended to run. To ensure security, it is essential to verify the authenticity and integrity of the host key document belonging to an authentic IBM machine. To achieve this, please additionally obtain the following certificates from the Resource Link: IBM Z signing key certificate `DigiCert` intermediate CA certificate These files will be used for verification during secure image construction in the next section. 
Assuming you have placed a host key document at `$HOME/host-key-document`: Host key document as `HKD-0000-0000000.crt` and two certificates at `$HOME/certificates`: `DigiCert` intermediate CA certificate as `DigiCertCA.crt` IBM Z signing-key certificate as `ibm-z-host-key-signing.crt` you can construct a secure image using the following procedure: ``` $ # Change a directory to the project root $ cd $GOPATH/src/github.com/kata-containers/kata-containers $ hostkeydocument=$HOME/host-key-document/HKD-0000-0000000.crt $ kernel_image=artifacts/opt/kata/share/kata-containers/vmlinuz-6.1.62-121-confidential $ initrd_image=artifacts/opt/kata/share/kata-containers/kata-ubuntu-20.04-confidential.initrd $ echo \"panic=1 scsi_mod.scan=none swiotlb=262144 agent.log=debug\" > parmfile $ genprotimg --host-key-document=${hostkeydocument} \\ --output=kata-containers-se.img --image=${kernelimage} --ramdisk=${initrdimage} \\ --parmfile=parmfile --no-verify WARNING: host-key document verification is disabled. Your workload is not secured. $ file kata-containers-se.img kata-containers-se.img: data $ sudo cp kata-containers-se.img /opt/kata/share/kata-containers/ ``` It is important to note that the `--no-verify` parameter, which allows skipping the key verification process, is intended to be used solely in a development or testing environment. In production, the image construction should incorporate the verification in the following manner: ``` $ cacert=$HOME/certificates/DigiCertCA.crt $ signcert=$HOME/certificates/ibm-z-host-key-signing.crt $ genprotimg --host-key-document=${hostkeydocument} \\ --output=kata-containers-se.img --image=${kernelimage} --ramdisk=${initrdimage} \\ --cert=${cacert} --cert=${signcert} --parmfile=parmfile ``` The steps with no verification, including the dependencies for the kernel and initrd, can be easily accomplished by issuing the following make target: ``` $ cd $GOPATH/src/github.com/kata-containers/kata-containers $ mkdir hkddir && cp $hostkeydocument hkddir $ sudo -E PATH=$PATH HKDPATH=hkddir SEKERNELPARAMS=\"agent.log=debug\" \\ make boot-image-se-tarball $ ls build/kata-static-boot-image-se.tar.xz build/kata-static-boot-image-se.tar.xz ``` `SEKERNELPARAMS` could be used to add any extra kernel parameters. If no additional kernel configuration is required, this can be omitted. 
In production, you could build an image by running the same command, but with two additional environment variables for key verification: ``` $ export SIGNINGKEYCERT_PATH=$HOME/certificates/ibm-z-host-key-signing.crt $ export INTERMEDIATECACERT_PATH=$HOME/certificates/DigiCertCA.crt ``` To build an image on the `x86_64` platform, set the following environment variables together with the variables above before `make boot-image-se-tarball`: ``` CROSSBUILD=true TARGETARCH=s390x ARCH=s390x ``` There still remains an opportunity to fine-tune the configuration file: ``` $ runtimeconfigpath=$(kata-runtime kata-env --json | jq -r '.Runtime.Config.Path') $ cp ${runtimeconfigpath} ${runtimeconfigpath}.old $ # Make the following adjustment to the original config file $ diff ${runtimeconfigpath}.old ${runtimeconfigpath} 16,17c16,17 < kernel = \"/opt/kata/share/kata-containers/vmlinux.container\" < image = \"/opt/kata/share/kata-containers/kata-containers.img\" kernel = \"/opt/kata/share/kata-containers/kata-containers-se.img\" # image = \"/opt/kata/share/kata-containers/kata-containers.img\" 41c41 < # confidential_guest = true confidential_guest = true 544c544 < dial_timeout = 45 dial_timeout = 90 ``` To verify the successful decryption and loading of the secure image within a test VM, please refer to the following commands: ``` $ cd $GOPATH/src/github.com/kata-containers/kata-containers $ hypervisor_command=$(kata-runtime kata-env --json | jq -r '.Hypervisor.Path') $ secure_kernel=kata-containers-se.img $ sudo $hypervisor_command -machine confidential-guest-support=pv0 \\ -object s390-pv-guest,id=pv0 -accel kvm -smp 2 --m 4096 -serial mon:stdio \\ --nographic --nodefaults --kernel \"${secure_kernel}\" [ 0.110277] Linux version 5.19.2 (root@637f067c5f7d) (gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0, GNU ld (GNU Binutils for Ubuntu) 2.38) #1 SMP Wed May 31 09:06:49 UTC 2023 [ 0.110279] setup: Linux is running under KVM in 64-bit mode ... log skipped ... [ 1.467228] Run /init as init process {\"msg\":\"baremount source=\\\"proc\\\", dest=\\\"/proc\\\", fstype=\\\"proc\\\", options=\\\"\\\", flags=MSNOSUID | MSNODEV | MSNOEXEC\",\"level\":\"INFO\",\"ts\":\"2023-06-07T10:17:23.537542429Z\",\"pid\":\"1\",\"subsystem\":\"baremount\",\"name\":\"kata-agent\",\"source\":\"agent \",\"version\":\"0.1.0\"} ... log skipped ..." }, { "data": "# Press ctrl + a + x to exit ``` If the hypervisor log does not indicate any errors, it provides assurance that the image has been successfully loaded, and a Virtual Machine (VM) initiated by the kata runtime will function properly. Let us proceed with the final verification by running a test container in a Kubernetes cluster. Please make user you have a running cluster like: ``` $ kubectl get node NAME STATUS ROLES AGE VERSION test-cluster Ready control-plane,master 7m28s v1.23.1 ``` Please execute the following command to run a container: ``` $ cat <<EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: nginx-kata spec: runtimeClassName: kata-qemu containers: name: nginx image: nginx EOF pod/nginx-kata created $ kubectl get po NAME READY STATUS RESTARTS AGE nginx-kata 1/1 Running 0 29s $ kubectl get po -oyaml | grep \"runtimeClassName:\" runtimeClassName: kata-qemu $ # Please make sure if confidential-guest-support is set and a secure image is used $ $ ps -ef | grep qemu | grep -v grep root 76972 76959 0 13:40 ? 00:00:02 /opt/kata/bin/qemu-system-s390x ... qemu arguments ... -machine s390-ccw-virtio,accel=kvm,confidential-guest-support=pv0 ... qemu arguments ... 
-kernel /opt/kata/share/kata-containers/kata-containers-se.img ... qemu arguments ... ``` Finally, an operational kata container with IBM Secure Execution is now running. It is reasonable to expect that the manual steps mentioned above can be easily executed. Typically, you can use to install Kata Containers on a Kubernetes cluster. However, when leveraging IBM Secure Execution, you need to employ the confidential container's . During this process, a `kata-deploy` container image serves as a payload image in a custom resource `ccruntime` for confidential containers, enabling the operator to install Kata binary artifacts such as kernel, shim-v2, and more. This section will explain how to build a payload image (i.e., `kata-deploy`) for confidential containers. For the remaining instructions, please refer to the for confidential containers. ``` $ cd $GOPATH/src/github.com/kata-containers/kata-containers $ hostkeydocument=$HOME/host-key-document/HKD-0000-0000000.crt $ mkdir hkddir && cp $hostkeydocument hkddir $ # kernel-confidential and rootfs-initrd-confidential are built automactially by the command below $ sudo -E PATH=$PATH HKDPATH=hkddir SEKERNELPARAMS=\"agent.log=debug\" \\ make boot-image-se-tarball $ sudo -E PATH=$PATH make qemu-tarball $ sudo -E PATH=$PATH make virtiofsd-tarball $ # shim-v2 should be built after kernel due to dependency $ sudo -E PATH=$PATH make shim-v2-tarball $ mkdir kata-artifacts $ build_dir=$(readlink -f build) $ cp -r $build_dir/*.tar.xz kata-artifacts $ ls -1 kata-artifacts kata-static-agent.tar.xz kata-static-boot-image-se.tar.xz kata-static-coco-guest-components.tar.xz kata-static-kernel-confidential.tar.xz kata-static-pause-image.tar.xz kata-static-qemu.tar.xz kata-static-rootfs-initrd-confidential.tar.xz kata-static-shim-v2.tar.xz kata-static-virtiofsd.tar.xz $ ./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts ``` In production, the environment variables `SIGNINGKEYCERT_PATH` and `INTERMEDIATECACERT_PATH` should be exported like the manual configuration. If a rootfs-image is required for other available runtime classes (e.g. `kata` and `kata-qemu`) without the Secure Execution functionality, please run the following command before running `kata-deploy-merge-builds.sh`: ``` $ sudo -E PATH=$PATH make rootfs-image-tarball ``` At this point, you should have an archive file named `kata-static.tar.xz` at the project root, which will be used to build a payload image. If you are using a local container registry at `localhost:5000`, proceed with the following: ``` $ docker run -d -p 5000:5000 --name local-registry registry:2.8.1 ``` Build and push a payload image with the name `localhost:5000/build-kata-deploy` and the tag `latest` using the following: ``` $ sudo -E PATH=$PATH ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh kata-static.tar.xz localhost:5000/build-kata-deploy latest ... logs ... Pushing the image localhost:5000/build-kata-deploy:latest to the registry The push refers to repository [localhost:5000/build-kata-deploy] 76c6644d9790: Layer already exists 2413aff53bb1: Layer already exists 91462f44bb06: Layer already exists 2ad49fac591a: Layer already exists 5c75aa64ef7a: Layer already exists test: digest: sha256:25825c7a4352f75403ee59a683eb122d5518e8ed6a244aacd869e41e2cafd385 size: 1369 ``` If you intend to integrate the aforementioned procedure with a CI system, configure the following setup for an environment variable. 
The setup helps speed up CI jobs by caching container images used during the" } ]
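Looping back to the image-construction step: the guide notes that a `genprotimg` older than 2.17.0 needs the extra `--x-pcf '0xe0'` argument. A hedged sketch of that fallback invocation, reusing the variables defined earlier (still with the development-only `--no-verify`):

```bash
# Same inputs as the earlier genprotimg command, plus the compatibility flag
# required by tool versions older than 2.17.0.
genprotimg --host-key-document=${hostkeydocument} \
  --output=kata-containers-se.img --image=${kernelimage} --ramdisk=${initrdimage} \
  --parmfile=parmfile --x-pcf '0xe0' --no-verify
```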
{ "category": "Runtime", "file_name": "how-to-run-kata-containers-with-SE-VMs.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "Join the [kubernetes-security-announce] group for security and vulnerability announcements. You can also subscribe to an RSS feed of the above using . Instructions for reporting a vulnerability can be found on the [Kubernetes Security and Disclosure Information] page. Information about supported Kubernetes versions can be found on the [Kubernetes version and version skew support policy] page on the Kubernetes website." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "To install CRI-O, check out our . Our is a good way to quickly get started running simple pods and containers. If you're more comfortable with kubeadm, use our . To transfer a cluster from old tools, see our . To enable support for image decryption, see our . For instructions on how to use CRI-O's experimental userns annotation, see the ." } ]
{ "category": "Runtime", "file_name": "tutorial.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "title: Status for CStor CRs authors: \"@shubham14bajpai\" owners: \"@vishnuitta\" \"@kmova\" \"@AmitKumarDas\" editor: \"@shubham14bajpai\" creation-date: 2020-02-24 last-updated: 2020-02-24 status: Implementable - - - - - This proposal includes design details of the status information provided by the CStor CRs. The status information is used to determine the state of the corresponding CR. The current CStor CRs don't provide appropriate information about the state of the CR and the reason for that state. Make CSPC and CSPI status informative enough to know the current state of the CR and the reason for the given state. As a OpenEBS user, the information regarding a CR should be present in the status or events of the CR itself rather than looking for it in controller logs. Kubernetes provides a framework for status conditions and events which can be used to pass relevant information from the controllers to the CRs. The current CSPI status gives us the following information via kubectl get command which are the zpool properties taken from the zfs commands and parsed to cspi status. ```diff mayadata:maya$ kubectl -n openebs get cspi NAME HOSTNAME ALLOCATED FREE CAPACITY STATUS AGE sparse-pool-1-7xq5 127.0.0.1 218K 9.94G 9.94G ONLINE 134m NAME HOSTNAME ALLOCATED FREE CAPACITY HEALTHYVOLUMES PROVISIONEDVOLUMES STATUS AGE sparse-pool-1-7xq5 127.0.0.1 218K 9.94G 9.94G 3 5 ONLINE 134m ``` As the CSPI and CVR have their controllers clubbed together as cspi-mgmt having CVR information on CSPI will help with the debugging process. The current CSPI gives the status as a form of Status.Phase which is the zpool state representation. This Phase is updated by the CSPI mgmt container. The mgmt container gets the health status from the zpool command and sets that to the phase of CSPI. The possible CSPI phases are: Online : The device or virtual device is in normal working order. Although some transient errors might still occur, the device is otherwise in working order. Degraded : The virtual device has experienced a failure but can still function. This state is most common when a mirror or RAID-Z device has lost one or more constituent devices. The fault tolerance of the pool might be compromised, as a subsequent fault in another device might be unrecoverable. Faulted : The device or virtual device is completely inaccessible. This status typically indicates the total failure of the device, such that ZFS is incapable of sending data to it or receiving data from it. If a top-level virtual device is in this state, then the pool is completely inaccessible. Offline : The device has been explicitly taken offline by the cspi-mgmt controller. Unavail : The device or virtual device cannot be" }, { "data": "In some cases, pools with UNAVAIL devices appear in DEGRADED mode. If a top-level virtual device is UNAVAIL, then nothing in the pool can be accessed. Removed : The device was physically removed while the system was running. Device removal detection is hardware-dependent and might not be supported on all platforms. Apart from phase having `LastUpdtaedTime` and `LastTransitionTime` for the phase in status would help in identifying the changes in the phase of CSPI and determine whether it is stale or not. The phase is the current state of the CSPI. With the addition of Conditions to the status we can represent the latest available observations of a CSPIs current state. The conditions for cspi will be represented by the following structure. 
```go // CSPIConditionType describes the state of a CSPI at a certain point. type CStorPoolInstanceCondition struct { // Type of CSPI condition. Type CSPIConditionType `json:\"type\" protobuf:\"bytes,1,opt,name=type,casttype=DeploymentConditionType\"` // Status of the condition, one of True, False, Unknown. Status corev1.ConditionStatus `json:\"status\" protobuf:\"bytes,2,opt,name=status,casttype=k8s.io/api/core/v1.ConditionStatus\"` // The last time this condition's reason (or) message was updated. LastUpdateTime metav1.Time `json:\"lastUpdateTime,omitempty\" protobuf:\"bytes,6,opt,name=lastUpdateTime\"` // Last time the condition transitioned from one status to another. LastTransitionTime metav1.Time `json:\"lastTransitionTime,omitempty\" protobuf:\"bytes,7,opt,name=lastTransitionTime\"` // The reason for the condition's last transition. Reason string `json:\"reason,omitempty\" protobuf:\"bytes,4,opt,name=reason\"` // A human readable message indicating details about the transition. Message string `json:\"message,omitempty\" protobuf:\"bytes,5,opt,name=message\"` } ``` When the conditions like expansion or disk replacement is under progress the message and reason fields will get populated with corresponding details and once the condition has reached completion the fields will have some default values as placeholders until the condition is triggered again. The proposed conditions for CSPI are : PodAvailable : The PodAvailable condition represents whether the CSPI pool pod is running or not. Whenever the PodAvailable is set to False then the CSPI phase should be set to Unavail to tackle the stale phase on the CSPI when the pool pod is not in running state. The owner of this condition will be the CSPC operator as when the pool pod is lost the cspi-mgmt will not be able to update the conditions. ```yaml Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" status: \"True\" type: PodAvailable Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" message: 'pool pod not in running state' reason: MissingPoolDeployment status: \"False\" type: PodAvailable ``` PoolExpansion : The PoolExpansion condition gets appended when someone triggers pool expansion and represents the status of expansion. If multiple vdev were added for expansion then condition.status will be set as true further information will be available on events of corresponding" }, { "data": "```yaml Conditions: lastTransitionTime: \"2020-04-10T03:56:57Z\" lastUpdateTime: \"2020-04-10T03:56:57Z\" message: Pool expansion was successful by adding blockdevices/raid groups reason: PoolExpansionSuccessful status: \"False\" type: PoolExpansion Conditions: lastTransitionTime: \"2020-04-10T03:44:42Z\" lastUpdateTime: \"2020-04-10T03:44:42Z\" message: 'Pool expansion is in progress because of blockdevice/raid group addition error: failed to initialize libuzfs client' reason: PoolExpansionInProgress status: \"True\" type: PoolExpansion ``` DiskReplacement : The DiskReplacement condition gets appended when someone triggers disk replacement on that pool and represents the status of replacement. If multiple disks were replacing then condition message will show that the following are block devices that were under replacement. Further information will be available on corresponding CSPI events. 
```yaml Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" reason: BlockDeviceReplacementSucceess status: \"False\" type: DiskReplacement Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" message: error msg from zfs command' reason: BlockDeviceReplacementInprogress status: \"True\" type: DiskReplacement ``` DiskUnavailable : The DiskUnavailable condition gets appended when someone when one or more disk gets into an unavailable state. If multiple disks were unavailable then same DiskUnavailable will set to true. The condition message will have information about the names disks were unavailable. Further information will be available on events of corresponding CSPI. ```yaml Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" message: disk gone bad reason: DiskFailed status: \"True\" type: DiskUnavailable Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" status: \"False\" type: DiskUnavailable ``` PoolLost : The PoolLost condition gets appended when the pool import fails because of some reason. ```yaml Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" message: unable to import pool reason: ImportFailed status: \"True\" type: PoolLost Conditions: lastUpdateTime: \"2020-04-10T03:56:57Z\" lastTransitionTime: \"2020-04-10T03:44:42Z\" status: \"False\" type: PoolLost ``` The current CSPC does not have any status to represent the state of the CSPC whether all the CSPI got provisioned or not, how many CSPI are healthy or some other state. The CSPC status should be informative enough to tell the current state of the provisioned instances whether they are in Healthy/Other phase and whether all instances are provisioned or not. It should not be having the details of the instances as the CSPI status already have the corresponding status and repeating the status is not required. The below example shows how the status should look like: ```sh NAME HEALTHYINSTANCES PROVISIONEDINSTANCES DESIREDINSTANCES AGE sparse-pool-1 1 2 3 142M NAME HEALTHYINSTANCES PROVISIONEDINSTANCES DESIREDINSTANCES AGE sparse-pool-1 3 3 3 142M ``` DESIREDINSTANCES gives the number of CSPI that needs to be provisioned i.e. the number of poolSpec mentioned in the CSPC yaml. PROVISIONEDINSTANCES is the count of CSPI which have been provisioned and the CSPI can be any state. HEALTHYINSTANCE is the count of CSPI which has a pod in running state and CSPI is in ONLINE state. Apart from phase having `LastUpdtaedTime` and `LastTransitionTime` for the phase in status would help in identifying the changes in the phase of CSPC and determine whether it is stale or not. Any additional information needed for the user can be pushed as events to the CSPC object." } ]
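Once the proposed conditions land, they could be inspected from the CLI roughly as follows; the pool name and namespace are placeholders and the field paths follow the structure defined above.

```bash
# Summary columns (HEALTHYVOLUMES, PROVISIONEDVOLUMES, STATUS, ...).
kubectl -n openebs get cspi

# Per-condition type/status/reason for a single pool instance.
kubectl -n openebs get cspi sparse-pool-1-7xq5 \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

# Detailed, per-blockdevice information is expected to surface as events.
kubectl -n openebs describe cspi sparse-pool-1-7xq5
```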
{ "category": "Runtime", "file_name": "cstor-status.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "English | This uninstall guide is intended for Spiderpool running on Kubernetes. If you have questions, feel free to ping us on . Read the full uninstall guide to understand all the necessary steps before performing them. Understand the running applications and the impact that uninstalling Spiderpool may have on other related components (such as middleware). Please make sure you fully understand the risks before starting the uninstallation steps. Query the Spiderpool release installed in the cluster through `helm ls` ```bash helm ls -A | grep -i spiderpool ``` Uninstall Spiderpool via `helm uninstall` ```bash helm uninstall <spiderpool-name> --namespace <spiderpool-namespace> ``` Replace `<spiderpool-name>` with the name of the Spiderpool release you want to uninstall and `<spiderpool-namespace>` with the namespace of the Spiderpool. The ability to automatically clean up Spiderpool resources was introduced after v0.10.0. It is enabled through the `spiderpoolController.cleanup.enabled` configuration item, which defaults to `true`. You can verify that the Spiderpool-related resources were automatically cleaned up as follows. ```bash kubectl get spidersubnets.spiderpool.spidernet.io -o name | wc -l kubectl get spiderips.spiderpool.spidernet.io -o name | wc -l kubectl get spiderippools.spiderpool.spidernet.io -o name | wc -l kubectl get spiderreservedips.spiderpool.spidernet.io -o name | wc -l kubectl get spiderendpoints.spiderpool.spidernet.io -o name | wc -l kubectl get spidercoordinators.spiderpool.spidernet.io -o name | wc -l ``` In versions lower than v0.10.0, some CR resources cannot be completely cleaned up via `helm uninstall`. You can download the cleaning script below to perform the necessary cleanup and avoid any unexpected errors during future deployments of Spiderpool. ```bash wget https://raw.githubusercontent.com/spidernet-io/spiderpool/main/tools/scripts/cleanCRD.sh chmod +x cleanCRD.sh && ./cleanCRD.sh ```" } ]
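For older installs, a rough sketch of the manual check and cleanup is shown below; it only restates the CRD names listed above and is not a substitute for the official `cleanCRD.sh` script.

```bash
# Confirm nothing Spiderpool-related is left after `helm uninstall`
# (each count should be 0), then remove the CRDs if desired.
for crd in spidersubnets spiderips spiderippools spiderreservedips \
           spiderendpoints spidercoordinators; do
  echo -n "$crd.spiderpool.spidernet.io: "
  kubectl get "$crd.spiderpool.spidernet.io" -A --no-headers 2>/dev/null | wc -l
done

# If a resource is stuck on deletion because of a finalizer, clear it explicitly, e.g.:
kubectl patch spiderippools.spiderpool.spidernet.io <pool-name> \
  --type merge -p '{"metadata":{"finalizers":[]}}'
```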
{ "category": "Runtime", "file_name": "uninstall.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for the specified shell Generate the autocompletion script for cilium-operator-alibabacloud for the specified shell. See each sub-command's help for details on how to use the generated script. ``` -h, --help help for completion ``` - Run cilium-operator-alibabacloud - Generate the autocompletion script for bash - Generate the autocompletion script for fish - Generate the autocompletion script for powershell - Generate the autocompletion script for zsh" } ]
{ "category": "Runtime", "file_name": "cilium-operator-alibabacloud_completion.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "To make sure Longhorn system stable enough, we can apply the node specific CPU resource reservation mechanism to avoid the engine/replica engine crash due to the CPU resource exhaustion. https://github.com/longhorn/longhorn/issues/2207 Reserve CPU resource separately for engine manager pods and replica manager pods. The reserved CPU count is node specific: The more allocable CPU resource a node has, the more CPUs be will reserved for the instance managers in general. Allow reserving CPU resource for a specific node. This setting will override the global setting. Guarantee that the CPU resource is always enough, or the CPU resource reservation is always reasonable based on the volume numbers of a node. Notify/Warn users CPU resource exhaustion on a node: https://github.com/longhorn/longhorn/issues/1930 Add new fields `node.Spec.EngineManagerCPURequest` and `node.Spec.ReplicaManagerCPURequest`. This allows reserving a different amount of CPU resource for a specific node. Add two settings `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU`: It indicates how many percentages of CPU on a node will be reserved for one engine/replica manager pod. The old settings `Guaranteed Engine CPU` will be deprecated: This setting will be unset and readonly in the new version. For the old Longhorn system upgrade, Longhorn will automatically set the node fields based on the old setting then clean up the old setting, so that users don't need to do anything manually as well as not affect existing instance manager pods. Before the enhancement, users rely on the setting `Guaranteed Engine CPU` to reserve the same amount of CPU resource for all engine managers and all replica managers on all nodes. There is no way to reserve more CPUs for the instance managers on node having more allocable CPUs. After the enhancement, users can: Modify the global settings `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` to reserve how many percentage of CPUs for engine manager pods and replica manager pods, respectively. Set a different CPU value for the engine/replica manager pods on some particular nodes by `Node Edit`. Add a new field `EngineManagerCPURequest` and `ReplicaManagerCPURequest` for node objects. Add 2 new settings `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU`. These 2 setting values are integers ranging from 0 to 40. The sum of the 2 setting values should be smaller than 40 (%) as well. In Node Controller, The requested CPU resource when creating an instance manager pod: Ignore the deprecated setting `Guaranteed Engine CPU`. If the newly introduced node field `node.Spec.EngineManagerCPURequest`/`node.Spec.ReplicaManagerCPURequest` is not empty, the engine/replica manager pod requested CPU is determined by the field value. Notice that the field value is a milli value. Else using the formula based on setting `Guaranteed Engine Manager CPU`/`Guaranteed Replica Manager CPU`: `The Reserved CPUs = The value of field \"kubenode.status.allocatable.cpu\" The setting values" }, { "data": "In Setting Controller The current requested CPU of an instance manager pod should keep same as `node.Spec.EngineManagerCPURequest`/`node.Spec.ReplicaManagerCPURequest`, or the value calculated by the above formula. Otherwise, the pod will be killed then Node Controller will recreate it later. 
In upgrade Longhorn should update `node.Spec.EngineManagerCPURequest` and `node.Spec.ReplicaManagerCPURequest` based on setting `Guaranteed Engine CPU` then clean up `Guaranteed Engine CPU`. The fields 0 means Longhorn will use the setting values directly. The setting value 0 means removing the CPU requests for instance manager pods. Add 2 new arguments `Guaranteed Engine Manager CPU(Milli)` and `Guaranteed Replica Manager CPU(Milli)` in the node update page. Hide the deprecated setting `Guaranteed Engine CPU` which is type `Deprecated`. Type `Deprecated` is a newly introduced setting type. Update the existing test case `testsettingguaranteedenginecpu`: Validate the settings `Guaranteed Engine Manager CPU` controls the reserved CPUs of engine manager pods on each node. Validate the settings `Guaranteed Replica Manager CPU` controls the reserved CPUs of replica manager pods on each node. Validate that fields `node.Spec.EngineManagerCPURequest`/`node.Spec.ReplicaManagerCPURequest` can override the settings `Guaranteed Engine Manager CPU`/`Guaranteed Replica Manager CPU`. Deploy a cluster that each node has different CPUs. Launch Longhorn v1.1.0. Deploy some workloads using Longhorn volumes. Upgrade to the latest Longhorn version. Validate: all workloads work fine and no instance manager pod crash during the upgrade. The fields `node.Spec.EngineManagerCPURequest` and `node.Spec.ReplicaManagerCPURequest` of each node are the same as the setting `Guaranteed Engine CPU` value in the old version * 1000. The old setting `Guaranteed Engine CPU` is deprecated with an empty value. Modify new settings `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU`. Validate all workloads work fine and no instance manager pod restart. Scale down all workloads and wait for the volume detachment. Set `node.Spec.EngineManagerCPURequest` and `node.Spec.ReplicaManagerCPURequest` to 0 for some node. Verify the new settings will be applied to those node and the related instance manager pods will be recreated with the CPU requests matching the new settings. Scale up all workloads and verify the data as well as the volume r/w. Do cleanup. Prepare 3 sets of longhorn-manager and longhorn-instance-manager images. Deploy Longhorn with the 1st set of images. Set `Guaranteed Engine Manager CPU` and `Guaranteed Replica Manager CPU` to 15 and 24, respectively. Then wait for the instance manager recreation. Create and attach a volume to a node (node1). Upgrade the Longhorn system with the 2nd set of images. Verify the CPU requests in the pods of both instance managers match the settings. Create and attach one more volume to node1. Upgrade the Longhorn system with the 3rd set of images. Verify the pods of the 3rd instance manager cannot be launched on node1 since there is no available CPU for the allocation. Detach the volume in the 1st instance manager pod. Verify the related instance manager pods will be cleaned up and the new instance manager pod can be launched on node1. N/A" } ]
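To make the percentage-based reservation concrete, here is a small worked example together with a sketch of the per-node override; the `nodes.longhorn.io` resource name, the `longhorn-system` namespace, and the lower-camel-case field names are assumptions for illustration only.

```bash
# Worked example: reserved CPUs = allocatable CPUs * setting% / 100.
# A node with 16 allocatable CPUs and "Guaranteed Engine Manager CPU" = 12
# reserves 16 * 12 / 100 = 1.92 CPUs (a 1920m request) for the engine manager pod.
echo "scale=2; 16 * 12 / 100" | bc

# Sketch: override the global settings for one node with explicit milli values.
kubectl -n longhorn-system patch nodes.longhorn.io node-1 --type merge \
  -p '{"spec":{"engineManagerCPURequest":1000,"replicaManagerCPURequest":1500}}'
```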
{ "category": "Runtime", "file_name": "20210125-enhanced-cpu-reservation.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "(profiles)= Profiles store a set of configuration options. They can contain instance options, devices and device options. You can apply any number of profiles to an instance. They are applied in the order they are specified, so the last profile to specify a specific key takes precedence. However, instance-specific configuration always overrides the configuration coming from the profiles. ```{note} Profiles can be applied to containers and virtual machines. Therefore, they might contain options and devices that are valid for either type. When applying a profile that contains configuration that is not suitable for the instance type, this configuration is ignored and does not result in an error. ``` If you don't specify any profiles when launching a new instance, the `default` profile is applied automatically. This profile defines a network interface and a root disk. The `default` profile cannot be renamed or removed. Enter the following command to display a list of all available profiles: incus profile list Enter the following command to display the contents of a profile: incus profile show <profile_name> Enter the following command to create an empty profile: incus profile create <profile_name> (profiles-edit)= You can either set specific configuration options for a profile or edit the full profile in YAML format. To set an instance option for a profile, use the command. Specify the profile name and the key and value of the instance option: incus profile set <profilename> <optionkey>=<optionvalue> <optionkey>=<option_value> ... To add and configure an instance device for your profile, use the command. Specify the profile name, a device name, the device type and maybe device options (depending on the {ref}`device type <devices>`): incus profile device add <profilename> <devicename> <devicetype> <deviceoptionkey>=<deviceoptionvalue> <deviceoptionkey>=<deviceoption_value> ... To configure instance device options for a device that you have added to the profile earlier, use the command: incus profile device set <profilename> <devicename> <deviceoptionkey>=<deviceoptionvalue> <deviceoptionkey>=<deviceoptionvalue> ... Instead of setting each configuration option separately, you can provide all options at once in YAML format. Check the contents of an existing profile or instance configuration for the required markup. For example, the `default` profile might look like this: config: {} description: Default Incus profile devices: eth0: name: eth0 network: incusbr0 type: nic root: path: / pool: default type: disk name: default used_by: Instance options are provided as an array under `config`. Instance devices and instance device options are provided under `devices`. To edit a profile using your standard terminal editor, enter the following command: incus profile edit <profile_name> Alternatively, you can create a YAML file (for example, `profile.yaml`) with the configuration and write the configuration to the profile with the following command: incus profile edit <profile_name> < profile.yaml Enter the following command to apply a profile to an instance: incus profile add <instancename> <profilename> ```{tip} Check the configuration after adding the profile: You will see that your profile is now listed under `profiles`. However, the configuration options from the profile are not shown under `config` (unless you add the `--expanded` flag). The reason for this behavior is that these options are taken from the profile and not the configuration of the instance. 
This means that if you edit a profile, the changes are automatically applied to all instances that use the profile. ``` You can also specify profiles when launching an instance by adding the `--profile` flag: incus launch <image> <instance_name> --profile <profile> --profile <profile> ... Enter the following command to remove a profile from an instance: incus profile remove <instancename> <profilename>" } ]
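As a minimal end-to-end sketch, the commands below create a profile that only sets resource limits and attach it to an instance; the profile name, the instance name `c1`, and the specific limits are illustrative.

```bash
# Create a small profile, fill it in via stdin, and apply it to an instance.
incus profile create limited
cat <<'EOF' | incus profile edit limited
config:
  limits.cpu: "2"
  limits.memory: 2GiB
description: Small resource limits
devices: {}
EOF
incus profile add c1 limited

# The limits now show up in the instance's expanded configuration.
incus config show c1 --expanded | grep limits
```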
{ "category": "Runtime", "file_name": "profiles.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 4 sidebar_label: \"Scheduler\" The Scheduler automatically schedules a Pod to the correct node, that is, the node associated with the HwameiStor volume the Pod uses. With the scheduler, the Pod does not need a NodeAffinity or NodeSelector field to select the node. The scheduler works for both LVM and Disk volumes. The Scheduler should be deployed in HA mode in the cluster, which is a best practice for production." } ]
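A minimal sketch of what this looks like in practice is shown below: the Pod references only a PVC, with no node selection fields, and the scheduler places it on the node holding the volume. The StorageClass name is an assumption for illustration.

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: hwameistor-storage-lvm-hdd   # assumed StorageClass name
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
EOF
```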
{ "category": "Runtime", "file_name": "scheduler.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide requires Kata Containers available on your system, install-able by following . Kubernetes CRI (Container Runtime Interface) implementations allow using any OCI-compatible runtime with Kubernetes, such as the Kata Containers runtime. Kata Containers support both the and CRI implementations. After choosing one CRI implementation, you must make the appropriate configuration to ensure it integrates with Kata Containers. Kata Containers 1.5 introduced the `shimv2` for containerd 1.2.0, reducing the components required to spawn pods and containers, and this is the preferred way to run Kata Containers with Kubernetes (). An equivalent shim implementation for CRI-O is planned. For CRI-O installation instructions, refer to the page. The following sections show how to set up the CRI-O snippet configuration file (default path: `/etc/crio/crio.conf`) for Kata. Unless otherwise stated, all the following settings are specific to the `crio.runtime` table: ```toml [crio.runtime] ``` A comprehensive documentation of the configuration file can be found . Note: After any change to this file, the CRI-O daemon have to be restarted with: ```` $ sudo systemctl restart crio ```` The is the preferred way of specifying the container runtime configuration to run a Pod's containers. To use this feature, Kata must added as a runtime handler. This can be done by dropping a `50-kata` snippet file into `/etc/crio/crio.conf.d`, with the content shown below: ```toml [crio.runtime.runtimes.kata] runtime_path = \"/usr/bin/containerd-shim-kata-v2\" runtime_type = \"vm\" runtime_root = \"/run/vc\" privilegedwithouthost_devices = true ``` To customize containerd to select Kata Containers runtime, follow our \"Configure containerd to use Kata Containers\" internal documentation . Depending on what your needs are and what you expect to do with Kubernetes, please refer to the following to install it correctly. Kubernetes talks with CRI implementations through a `container-runtime-endpoint`, also called CRI socket. This socket path is different depending on which CRI implementation you chose, and the Kubelet service has to be updated accordingly. `/etc/systemd/system/kubelet.service.d/0-crio.conf` ``` [Service] Environment=\"KUBELETEXTRAARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///var/run/crio/crio.sock\" ``` `/etc/systemd/system/kubelet.service.d/0-cri-containerd.conf` ``` [Service] Environment=\"KUBELETEXTRAARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock\" ``` For more information about containerd see the \"Configure Kubelet to use containerd\" documentation . After you update your Kubelet service based on the CRI implementation you are using, reload and restart Kubelet. Then, start your cluster: ```bash $ sudo systemctl daemon-reload $ sudo systemctl restart kubelet $ sudo kubeadm init --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --pod-network-cidr=10.244.0.0/16 $ cat <<EOF | tee kubeadm-config.yaml apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration nodeRegistration: criSocket: \"/run/containerd/containerd.sock\" kind: KubeletConfiguration apiVersion: kubelet.config.k8s.io/v1beta1 cgroupDriver: cgroupfs podCIDR: \"10.244.0.0/16\" EOF $ sudo kubeadm init --ignore-preflight-errors=all --config kubeadm-config.yaml $ export KUBECONFIG=/etc/kubernetes/admin.conf ``` By default, the cluster will not schedule pods in the control-plane node. 
To enable control-plane node scheduling: ```bash $ sudo -E kubectl taint nodes --all node-role.kubernetes.io/control-plane- ``` Users can use a RuntimeClass to specify a different runtime for Pods. ```bash $ cat > runtime.yaml <<EOF apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: kata handler: kata EOF $ sudo -E kubectl apply -f runtime.yaml ``` If a pod has the `runtimeClassName` set to `kata`, the CRI plugin runs the pod with the Kata Containers runtime. Create a pod configuration that uses the Kata Containers runtime ```bash $ cat << EOF | tee nginx-kata.yaml apiVersion: v1 kind: Pod metadata: name: nginx-kata spec: runtimeClassName: kata containers: name: nginx image: nginx EOF ``` Create the pod ```bash $ sudo -E kubectl apply -f nginx-kata.yaml ``` Check that the pod is running ```bash $ sudo -E kubectl get pods ``` Check that the hypervisor is running ```bash $ ps aux | grep qemu ``` Delete the pod ```bash $ sudo -E kubectl delete -f nginx-kata.yaml ```" } ]
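While the pod is still running, an optional sanity check (a sketch) is to compare kernel versions: inside a Kata pod you normally see the guest kernel rather than the host kernel, which confirms the workload runs in a VM.

```bash
$ sudo -E kubectl exec nginx-kata -- uname -r   # guest kernel inside the Kata VM
$ uname -r                                      # host kernel, for comparison
```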
{ "category": "Runtime", "file_name": "run-kata-with-k8s.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "title: Ceph Cluster Helm Chart {{ template \"generatedDocsWarning\" . }} Creates Rook resources to configure a cluster using the package manager. This chart is a simple packaging of templates that will optionally create Rook resources such as: CephCluster, CephFilesystem, and CephObjectStore CRs Storage classes to expose Ceph RBD volumes, CephFS volumes, and RGW buckets Ingress for external access to the dashboard Toolbox Kubernetes 1.22+ Helm 3.x Install the The `helm install` command deploys rook on the Kubernetes cluster in the default configuration. The section lists the parameters that can be configured during installation. It is recommended that the rook operator be installed into the `rook-ceph` namespace. The clusters can be installed into the same namespace as the operator or a separate namespace. Rook currently publishes builds of this chart to the `release` and `master` channels. Before installing, review the values.yaml to confirm if the default settings need to be updated. If the operator was installed in a namespace other than `rook-ceph`, the namespace must be set in the `operatorNamespace` variable. Set the desired settings in the `cephClusterSpec`. The are only an example and not likely to apply to your cluster. The `monitoring` section should be removed from the `cephClusterSpec`, as it is specified separately in the helm settings. The default values for `cephBlockPools`, `cephFileSystems`, and `CephObjectStores` will create one of each, and their corresponding storage classes. All Ceph components now have default values for the pod resources. The resources may need to be adjusted in production clusters depending on the load. The resources can also be disabled if Ceph should not be limited (e.g. test clusters). The release channel is the most recent release of Rook that is considered stable for the community. The example install assumes you have first installed the and created your customized values.yaml. ```console helm repo add rook-release https://charts.rook.io/release helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \\ --set operatorNamespace=rook-ceph rook-release/rook-ceph-cluster -f values.yaml ``` !!! Note --namespace specifies the cephcluster namespace, which may be different from the rook operator namespace. The following table lists the configurable parameters of the rook-operator chart and their default values. {{ template \"chart.valuesTable\" . }} The `CephCluster` CRD takes its spec from `cephClusterSpec.*`. This is not an exhaustive list of parameters. For the full list, see the topic. The cluster spec example is for a converged cluster where all the Ceph daemons are running locally, as in the host-based example (cluster.yaml). For a different configuration such as a PVC-based cluster (cluster-on-pvc.yaml), external cluster (cluster-external.yaml), or stretch cluster (cluster-stretched.yaml), replace this entire `cephClusterSpec` with the specs from those examples. The `cephBlockPools` array in the values file will define a list of CephBlockPool as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephBlockPool | `ceph-blockpool` | | `spec` | The CephBlockPool spec, see the documentation. | `{}` | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephBlockPool | `true` | | `storageClass.isDefault` | Whether the storage class will be the default storage class for PVCs. See for details. 
| `true` | | `storageClass.name` | The name of the storage class | `ceph-block` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | |" }, { "data": "| The default to apply to PVCs created with this storage class. | `Delete` | | `storageClass.allowVolumeExpansion` | Whether is allowed by default. | `true` | | `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` | | `storageClass.allowedTopologies` | Specifies the for storageClass | `[]` | The `cephFileSystems` array in the values file will define a list of CephFileSystem as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephFileSystem | `ceph-filesystem` | | `spec` | The CephFileSystem spec, see the documentation. | see values.yaml | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephFileSystem | `true` | | `storageClass.name` | The name of the storage class | `ceph-filesystem` | | `storageClass.pool` | The name of , without the filesystem name prefix | `data0` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | | `storageClass.reclaimPolicy` | The default to apply to PVCs created with this storage class. | `Delete` | | `storageClass.mountOptions` | Specifies the mount options for storageClass | `[]` | The `cephObjectStores` array in the values file will define a list of CephObjectStore as described in the table below. | Parameter | Description | Default | | | -- | - | | `name` | The name of the CephObjectStore | `ceph-objectstore` | | `spec` | The CephObjectStore spec, see the documentation. | see values.yaml | | `storageClass.enabled` | Whether a storage class is deployed alongside the CephObjectStore | `true` | | `storageClass.name` | The name of the storage class | `ceph-bucket` | | `storageClass.parameters` | See documentation or the helm values.yaml for suitable values | see values.yaml | | `storageClass.reclaimPolicy` | The default to apply to PVCs created with this storage class. | `Delete` | | `ingress.enabled` | Enable an ingress for the object store | `false` | | `ingress.annotations` | Ingress annotations | `{}` | | `ingress.host.name` | Ingress hostname | `\"\"` | | `ingress.host.path` | Ingress path prefix | `/` | | `ingress.tls` | Ingress tls | `/` | | `ingress.ingressClassName` | Ingress tls | `\"\"` | If you have an existing CephCluster CR that was created without the helm chart and you want the helm chart to start managing the cluster: Extract the `spec` section of your existing CephCluster CR and copy to the `cephClusterSpec` section in `values.yaml`. Add the following annotations and label to your existing CephCluster CR: ```yaml annotations: meta.helm.sh/release-name: rook-ceph-cluster meta.helm.sh/release-namespace: rook-ceph labels: app.kubernetes.io/managed-by: Helm ``` Run the `helm install` command in the to create the chart. In the future when updates to the cluster are needed, ensure the values.yaml always contains the desired CephCluster spec. To deploy from a local build from your development environment: ```console cd deploy/charts/rook-ceph-cluster helm install --create-namespace --namespace rook-ceph rook-ceph-cluster -f values.yaml . 
``` To see the currently installed Rook chart: ```console helm ls --namespace rook-ceph ``` To uninstall/delete the `rook-ceph-cluster` chart: ```console helm delete --namespace rook-ceph rook-ceph-cluster ``` The command removes all the Kubernetes components associated with the chart and deletes the release. Removing the cluster chart does not remove the Rook operator. In addition, all data on hosts in the Rook data directory (`/var/lib/rook` by default) and on OSD raw devices is kept. To reuse disks, you will have to wipe them before recreating the cluster. See the for more information." } ]
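Putting the table values together, a minimal custom `values.yaml` might look like the sketch below; the pool and storage class names simply mirror the defaults documented above and should be adjusted to your environment.

```bash
cat > values-override.yaml <<'EOF'
operatorNamespace: rook-ceph
cephBlockPools:
  - name: ceph-blockpool
    spec:
      failureDomain: host
      replicated:
        size: 3
    storageClass:
      enabled: true
      name: ceph-block
      isDefault: true
      reclaimPolicy: Delete
      allowVolumeExpansion: true
EOF

helm upgrade --install --create-namespace --namespace rook-ceph rook-ceph-cluster \
  rook-release/rook-ceph-cluster -f values-override.yaml
```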
{ "category": "Runtime", "file_name": "ceph-cluster-chart.gotmpl.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "containerd uses the issues and milestones to define its roadmap. `ROADMAP.md` files are common in open source projects, but we find they quickly become out of date. We opt for an issues and milestone approach that our maintainers and community can keep up-to-date as work is added and completed. Issues tagged with the `roadmap` label are high level roadmap items. They are tasks and/or features that the containerd community wants completed. Smaller issues and pull requests can reference back to the main roadmap issue that is tagged to help detail progress towards the overall goal. Milestones define when an issue, pull request, and/or roadmap item is to be completed. Issues are the what, milestones are the when. Development is complex therefore roadmap items can move between milestones depending on the remaining development and testing required to release a change. To find the roadmap items currently planned for containerd you can filter on the `roadmap` label. After searching for roadmap items you can view what milestone they are scheduled to be completed in along with the progress." } ]
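For example, with the GitHub CLI installed you can pull the current roadmap items and their milestones directly (a sketch; the label name comes from the text above):

```bash
# List issues tagged with the roadmap label, including their milestones.
gh issue list --repo containerd/containerd --label roadmap --state all \
  --json number,title,milestone
```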
{ "category": "Runtime", "file_name": "ROADMAP.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "title: \"Ark version 0.7.0 and later: issue with deleting namespaces and backups \" layout: docs Version 0.7.0 introduced the ability to delete backups. However, you may encounter an issue if you try to delete the `heptio-ark` namespace. The namespace can get stuck in a terminating state, and you cannot delete your backups. To fix: If you don't have it, . Run: ```bash bash <(kubectl -n heptio-ark get backup -o json | jq -c -r $'.items[] | \"kubectl -n heptio-ark patch backup/\" + .metadata.name + \" -p \\'\" + (({metadata: {finalizers: ( (.metadata.finalizers // []) - [\"gc.ark.heptio.com\"]), resourceVersion: .metadata.resourceVersion}}) | tostring) + \"\\' --type=merge\"') ``` This command retrieves a list of backups, then generates and runs another list of commands that look like: ``` kubectl -n heptio-ark patch backup/my-backup -p '{\"metadata\":{\"finalizers\":[],\"resourceVersion\":\"461343\"}}' --type=merge kubectl -n heptio-ark patch backup/some-other-backup -p '{\"metadata\":{\"finalizers\":[],\"resourceVersion\":\"461718\"}}' --type=merge ``` If you encounter errors that tell you patching backups is not allowed, the Ark CustomResourceDefinitions (CRDs) might have been deleted. To fix, recreate the CRDs using `examples/common/00-prereqs.yaml`, then follow the steps above. In Ark version 0.7.1, the default configuration runs the Ark server in a different namespace from the namespace for backups, schedules, restores, and the Ark config. We strongly recommend that you keep this configuration. This approach can help prevent issues with deletes. The Ark team added the ability to delete backups by adding a finalizer to each backup. When you request the deletion of an object that has at least one finalizer, Kubernetes sets the object's deletion timestamp, which indicates that the object is marked for deletion. However, it does not immediately delete the object. Instead, the object is deleted only when it no longer has any finalizers. This means that something -- in this case, Ark -- must process the backup and then remove the Ark finalizer from it. Ark versions earlier than v0.7.1 place the Ark server pod in the same namespace as backups, restores, schedules, and the Ark config. If you try to delete the namespace, with `kubectl delete namespace/heptio-ark`, the Ark server pod might be deleted before the backups, because the order of deletions is arbitrary. If this happens, the remaining bacukps are stuck in a deleting state, because the Ark server pod no longer exists to remove their finalizers." } ]
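To see whether any backups are still stuck before or after running the patch loop, a small check (a sketch, again requiring `jq`) is:

```bash
# List Ark backups that are marked for deletion but still carry finalizers.
kubectl -n heptio-ark get backup -o json | jq -r \
  '.items[] | select(.metadata.deletionTimestamp != null) | "\(.metadata.name)\t\(.metadata.finalizers // [])"'
```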
{ "category": "Runtime", "file_name": "debugging-deletes.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<div class=\"jumbotron jumbotron-fluid\"> <div class=\"container\"> <div class=\"row\"> <div class=\"col-md-3\"></div> <div class=\"col-md-6\"> <h1 style=\"color:white;\">The Container Security Platform</h1> <p>Improve your container security, deliver security-imperative apps, increase security productivity, and enforce compliance.</p> <p style=\"margin-top: 20px;\"> <a class=\"btn\" href=\"/docs/user_guide/install/\"> Get started&nbsp; <i class=\"fas fa-arrow-alt-circle-right ml-2\"></i> </a> <a class=\"btn\" href=\"/docs/\"> What is gVisor?&nbsp; <i class=\"fas fa-arrow-alt-circle-right ml-2\"></i> </a> </p> </div> <div class=\"col-md-3\"></div> </div> </div> </div> <!-- gVisor Use Cases --> <section id=\"use-cases\"> <div class=\"container\"> <div class=\"row\"> <div class=\"col-md-6 pull-right gallery-popup\"> <img src=\"/assets/images/gvisor-high-level-arch.png\" alt=\"gVisor high-level architecture\" title=\"gVisor high-level architecture\" class=\"img-responsive\" /> </div> <div class=\"col-md-6 pull-left\"> <div class=\"divide-xl\"></div> <h2><span><b>gVisor</b></span> is the <span><b>missing security layer</b></span> for running containers efficiently and securely. </h2> <p class=\"info-text\">gVisor is an open-source Linux-compatible sandbox that runs anywhere existing container tooling does. It enables cloud-native container security and portability. gVisor leverages years of experience isolating production workloads at Google. </p> <div class=\"divide-xl\"></div> </div> </div> <!-- end row --> </div> <!-- end container --> <div class=\"container\" style=\"margin-top:20px\"> <div class=\"row\"> <div class=\"col-md-4 pull-left\"> <img src=\"/assets/images/gvisor-run-untrusted.png\" alt=\"gVisor can run untrusted code\" title=\"gVisor can run untrusted code\" class=\"img-responsive\" /> </div> <div class=\"col-md-8 pull-right\"> <div class=\"divide-xl\"></div> <h2>Run Untrusted Code</h2> <p class=\"info-text\">Isolate Linux hosts from containers so you can <strong>safely run user-uploaded, LLM-generated, or third-party code</strong>. Add defense-in-depth measures to your stack, bringing additional security to your infrastructure. </p> <div class=\"divide-xl\"></div> </div> </div> <!-- end row --> </div> <!-- end container --> <div class=\"container\" style=\"margin-top:20px\"> <div class=\"row\"> <div class=\"col-md-4 pull-right\"> <img src=\"/assets/images/gvisor-secure-by-default.png\" alt=\"gVisor secure by default\" title=\"gVisor secure by default\" class=\"img-responsive\" /> </div> <div class=\"col-md-8 pull-left\"> <div class=\"divide-xl\"></div> <h2>Protect Workloads & Infrastructure</h2> <p class=\"info-text\">Fortify hosts and containers against <strong>escapes and privilege escalation CVEs</strong>, enabling strong isolation for security-critical workloads as well as multi-tenant safety. </p> <div class=\"divide-xl\"></div> </div> </div> <!-- end row --> </div> <!-- end container --> <div class=\"container\" style=\"margin-top:20px\"> <div class=\"row\"> <div class=\"col-md-4 pull-left\"> <img src=\"/assets/images/gvisor-reduce-risk.png\" alt=\"gVisor reduces risk\" title=\"gVisor reduces risk\" class=\"img-responsive\" /> </div> <div class=\"col-md-8 pull-right\"> <div class=\"divide-xl\"></div> <h2>Reduce Risk</h2> <p class=\"info-text\">Deliver runtime visibility that integrates with popular <strong>threat detection tools</strong> to quickly identify threats, generate alerts, and enforce policies. 
</p> <div class=\"divide-xl\"></div> </div> </div> <!-- end row --> </div> <!-- end container --> </section> <!-- end use case section --> <!-- gVisor Solutions --> <section id=\"solutions\"> <div class=\"info-section-gray\"> <div class=\"container-fluid\" style=\"margin-top:50px;background-color:#171433\"> <div class=\"row\"> <h1 align=\"center\" style=\"color:white;font-size:38px\"> The way containers should run </h1> <div class=\"container\" style=\"margin-top:20px\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-solution\"> <div class=\"panel-body\"> <div align=\"center\"><span><i class=\"fas fa-shield-alt fa-4x\"></i></span></div> <h2 align=\"center\"><span>Improve your container security</span></h2> <p class=\"info-text\">Give your K8s, SaaS, or Serverless infrastructure additional layers of protection when running end-user code, untrusted code, LLM-generated code, or third-party code. Enable <strong>strong isolation</strong> for sharing resources and delivering <strong>multi-tenant environments</strong>. </p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-solution\"> <div class=\"panel-body\"> <div align=\"center\"><span><b><i class=\"fas fa-cogs fa-4x\"></i></b></span></div> <h2 align=\"center\"><span>Deliver security-imperative apps</span></h2> <p class=\"info-text\">gVisor adds defense-in-depth measures to your containers, allowing you to <strong>safeguard security-sensitive workloads</strong> like financial transactions, healthcare services, personal identifiable information, and other <strong>security-imperative applications</strong>. </p> </div> </div> </div> <div class=\"col-md-1\"></div> </div> <!-- end row container --> </div><!-- /row --> <div class=\"row\"> <div class=\"container\" style=\"margin-bottom:40px\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-solution\"> <div class=\"panel-body\"> <div align=\"center\"><span><b><i class=\"fas fa-rocket fa-4x\"></i></b></span></div> <h2 align=\"center\"><span>Increase security productivity</span></h2> <p class=\"info-text\">Isolate your K8s, SaaS, Serverless, DevSecOps lifecycle or CI/CD pipeline. gVisor helps you achieve a secure-by-default posture. Spend <strong>less time staying on top of security disclosures</strong>, and <strong>more time building what" }, { "data": "</p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-solution\"> <div class=\"panel-body\"> <div align=\"center\"><span><b><i class=\"fas fa-check fa-4x\"></i></b></span></div> <h2 align=\"center\"><span>Enforce compliance</span></h2> <p class=\"info-text\">gVisor safeguards against many cloud-native attacks by <strong>reducing the attack surface</strong> exposed to your containers. Shield services like APIs, configs, infrastructure as code, DevOps tooling, and supply chains, lowering the risk present in a typical cloud-native stack. 
</p> </div> </div> </div> <div class=\"col-md-1\"></div> </div> <!-- end row container --> </div><!-- /row --> </div><!-- /container --> </div> </section> <!-- gVisor Features --> <section id=\"features\"> <div class=\"info-section-gray\"> <div class=\"container\" style=\"margin-top:30px\"> <!-- Helmet universe image --> <div align=\"center\"> <img src=\"/assets/images/gvisor-helmet-universe.png\" alt=\"gVisor features\" title=\"gVisor features\" class=\"img-responsive\" > </div> <h1 align=\"center\" style=\"margin-top:3px\">gVisor Features</h1> <!-- Start features list --> <div class=\"row\"> <div class=\"container\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2> <a href=\"docs/architecture_guide/security/#principles-defense-in-depth\" class=\"feature-link\"> Defense in Depth </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px\"> <strong>gVisor implements the Linux API</strong>: by intercepting all sandboxed application system calls to the kernel, it protects the host from the application. In addition, <strong>gVisor also sandboxes itself from the host</strong> using Linux's isolation capabilities. Through these layers of defense, gVisor achieves true defense-in-depth while still providing <strong>VM-like performance</strong> and <strong>container-like resource efficiency</strong>. </p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2> <a href=\"docs/architecture_guide/security/\" class=\"feature-link\"> Secure by Default </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor runs with the <strong>least amount of privileges</strong> and the strictest possible system call filter needed to function. gVisor implements the Linux kernel and its network stack using Go, a memory-safe and type-safe language. </p> </div> </div> </div> <div class=\"col-md-1\"></div> </div> <!-- end row container --> </div><!-- /row --> <div class=\"row\" style=\"margin-top:0px\"> <div class=\"container\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2> <a href=\"docs/architecture_guide/platforms/\" class=\"feature-link\"> Runs Anywhere </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor <strong>runs anywhere Linux does</strong>. It works on x86 and ARM, on VMs or bare-metal, and does not require virtualization support. gVisor works well on all popular cloud providers. </p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2 style=\"color:#272261\"> <a href=\"docs/user_guide/compatibility/\" class=\"feature-link\"> Cloud Ready </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor <strong>works with Docker, Kubernetes, and containerd</strong>. Many popular applications and images are deployed in production environments on gVisor. 
</p> </div> </div> </div> <div class=\"col-md-1\"></div> </div> <!-- end row container --> </div><!-- /row --> <div class=\"row\" style=\"margin-top:0px\"> <div class=\"container\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2 style=\"color:#272261\"> <a href=\"docs/architecture_guide/performance/\" class=\"feature-link\"> Fast Startups and Execution </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor containers start up in milliseconds and have minimal resource overhead. They act like, feel like, and <em>actually are</em> containers, not VMs. Their resource consumption can scale up and down at runtime, enabling <strong>container-native resource efficiency</strong>. </p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2 style=\"color:#272261\"> <a href=\"docs/userguide/checkpointrestore/\" class=\"feature-link\"> Checkpoint and Restore </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor can <strong>checkpoint and restore containers</strong>. Use it to cache warmed-up services, resume workloads on other machines, snapshot execution, save state for forensics, or branch interactive REPL sessions. </p> </div> </div> </div> <div class=\"col-md-1\"></div> </div> <!-- end row container --> </div><!-- /row --> <div class=\"row\" style=\"margin-top:0px\"> <div class=\"container\"> <div class=\"col-md-1\"></div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2 style=\"color:#272261\"> <a href=\"/docs/user_guide/runtimemonitor/\" class=\"feature-link\"> Runtime Monitoring </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">Observe runtime behavior of your applications by streaming application actions (trace points) to an external <strong>threat detection engine</strong> like <a href=\"https://falco.org\" style=\"color:#272261\">Falco</a> and generate alerts. </p> </div> </div> </div> <div class=\"col-md-5\"> <div class=\"panel panel-default\" style=\"border:none;box-shadow:none;\"> <div class=\"panel-body\"> <h2 style=\"color:#272261\"> <a href=\"docs/user_guide/gpu/\" class=\"feature-link\"> GPU &amp; CUDA Support </a> </h2> <p class=\"info-text\" style=\"margin-bottom:0px;\">gVisor applications can <strong>use CUDA on Nvidia GPUs</strong>, bringing isolation to AI/ML workloads. </p> </div> </div> </div> <div" } ]
{ "category": "Runtime", "file_name": "index.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "A shared file system is a collection of resources and services that work together to serve a files for multiple users across multiple clients. Rook will automate the configuration of the Ceph resources and services that are necessary to start and maintain a highly available, durable, and performant shared file system. A Rook storage cluster must be configured and running in Kubernetes. In this example, it is assumed the cluster is in the `rook` namespace. When the storage admin is ready to create a shared file system, he will specify his desired configuration settings in a yaml file such as the following `filesystem.yaml`. This example is a simple configuration with metadata that is replicated across different hosts, and the data is erasure coded across multiple devices in the cluster. One active MDS instance is started, with one more MDS instance started in standby mode. ```yaml apiVersion: ceph.rook.io/v1 kind: Filesystem metadata: name: myfs namespace: rook-ceph spec: metadataPool: replicated: size: 3 dataPools: erasureCoded: dataChunks: 2 codingChunks: 1 metadataServer: activeCount: 1 activeStandby: true ``` Now create the file system. ```bash kubectl create -f filesystem.yaml ``` At this point the Rook operator recognizes that a new file system needs to be configured. The operator will create all of the necessary resources. The metadata pool is created (`myfs-meta`) The data pools are created (only one data pool for the example above: `myfs-data0`) The Ceph file system is created with the name `myfs` If multiple data pools were created, they would be added to the file system The file system is configured for the desired active count of MDS (`max_mds`=3) A Kubernetes deployment is created to start the MDS pods with the settings for the file system. Twice the number of instances are started as requested for the active count, with half of them in standby. After the MDS pods start, the file system is ready to be mounted. The file system settings are exposed to Rook as a Custom Resource Definition" }, { "data": "The CRD is the Kubernetes-native means by which the Rook operator can watch for new resources. The operator stays in a control loop to watch for a new file system, changes to an existing file system, or requests to delete a file system. The pools are the backing data store for the file system and are created with specific names to be private to a file system. Pools can be configured with all of the settings that can be specified in the . The underlying schema for pools defined by a pool CRD is the same as the schema under the `metadataPool` element and the `dataPools` elements of the file system CRD. ```yaml metadataPool: replicated: size: 3 dataPools: replicated: size: 3 erasureCoded: dataChunks: 2 codingChunks: 1 ``` Multiple data pools can be configured for the file system. Assigning users or files to a pool is left as an exercise for the reader with the . The metadata server settings correspond to the MDS service. `activeCount`: The number of active MDS instances. As load increases, CephFS will automatically partition the file system across the MDS instances. Rook will create double the number of MDS instances as requested by the active count. The extra instances will be in standby mode for failover. `activeStandby`: If true, the extra MDS instances will be in active standby mode and will keep a warm cache of the file system metadata for faster failover. The instances will be assigned by CephFS in failover pairs. 
If false, the extra MDS instances will all be in passive standby mode and will not maintain a warm cache of the metadata. `placement`: The MDS pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, `podAntiAffinity`, and `topologySpreadConstraints`, similar to the placement defined for daemons configured by the . ```yaml metadataServer: activeCount: 1 activeStandby: true placement: ``` In Ceph Luminous, running multiple file systems is still considered an experimental feature. While Rook seamlessly enables this scenario, be aware of the issues in the with snapshots and security implications. For a description of the underlying Ceph data model, see the ." } ]
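A quick way to confirm the operator did its work is sketched below, assuming the `rook-ceph` namespace from the example and a deployed toolbox named `rook-ceph-tools`:

```bash
# MDS deployments created for the filesystem (one active + one standby here).
kubectl -n rook-ceph get deploy -l app=rook-ceph-mds

# The filesystem and its backing pools as seen by Ceph itself.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs ls
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls | grep myfs
```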
{ "category": "Runtime", "file_name": "filesystem.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "layout: global title: Azure Data Lake Storage Gen1 This guide describes how to configure Alluxio with {:target=\"_blank\"} as the under storage system. Azure Data Lake Storage is an enterprise-wide hyper-scale repository for big data analytic workloads. Azure Data Lake enables you to capture data of any size, type, and ingestion speed in one single place for operational and exploratory analytics. It is designed to store and analyze large amounts of structured, semi-structured, and unstructured data. For more information about Azure Data Lake Storage Gen1, please read its {:target=\"_blank\"}. Note: Azure Data Lake Storage Gen1 will be retired on Feb 29, 2024. Be sure to migrate to Azure Data Lake Storage Gen2 prior to that date. See how {:target=\"_blank\"}. If you haven't already, please see before you get started. In preparation for using Azure Data Lake Storage Gen1 with Alluxio, [create a new Data Lake storage in your Azure account](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-get-started-portal){:target=\"_blank\"} or use an existing Data Lake storage. <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<AZURE_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<AZURE_ACCOUNT>`</td> <td markdown=\"span\">Your Azure storage account</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CLIENT_ID>`</td> <td markdown=\"span\">See {:target=\"_blank\"} for instructions on how to retrieve the application (client) ID and authentication key (also called the client secret) for your application</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<AUTHENTICATION_KEY>`</td> <td markdown=\"span\">See {:target=\"_blank\"} for instructions on how to retrieve the application (client) ID and authentication key (also called the client secret) for your application</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<TENANT_ID>`</td> <td markdown=\"span\">See {:target=\"_blank\"} for instructions on how to retrieve the tenant ID</td> </tr> </table> You also need to set up {:target=\"_blank\"} for your storage account. To use Azure Data Lake Storage Gen1 as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify the underfs address by modifying `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=adl://<AZUREACCOUNT>.azuredatalakestore.net/<AZUREDIRECTORY>/ ``` Specify the application ID, authentication key and tenant ID for the Azure AD application used for the Azure account of the root mount point by adding the following properties in `conf/alluxio-site.properties`: ```properties fs.adl.account.<AZUREACCOUNT>.oauth2.client.id=<CLIENTID> fs.adl.account.<AZUREACCOUNT>.oauth2.credential=<AUTHENTICATIONKEY> fs.adl.account.<AZUREACCOUNT>.oauth2.refresh.url=https://login.microsoftonline.com/<TENANTID>/oauth2/token ``` After these changes, Alluxio should be configured to work with Azure Data Lake storage as its under storage system, and you can run Alluxio locally with it. Once you have configured Alluxio to Azure Data Lake Storage Gen1, try to see that everything works." } ]
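A minimal smoke test after editing `conf/alluxio-site.properties` is sketched below; launcher command names vary between Alluxio releases, so treat these as illustrative rather than exact.

```bash
./bin/alluxio format          # initialize Alluxio metadata
./bin/alluxio-start.sh local  # start a local master and worker
./bin/alluxio fs ls /         # should list the contents of the mounted ADLS directory
```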
{ "category": "Runtime", "file_name": "Azure-Data-Lake.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "This guide shows you how to set up TLS for the LINSTOR API. The API, served by the LINSTOR Controller, is used by clients such as the CSI Driver and the Operator itself to control the LINSTOR Cluster. To complete this guide, you should be familiar with: editing `LinstorCluster` resources. using either or `openssl` to create TLS certificates. This method requires a working deployment in your cluster. For an alternative way to provision keys and certificates, see the below. When using TLS, the LINSTOR API uses client certificates for authentication. It is good practice to have a separate CA just for these certificates. ```yaml apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: ca-bootstrapper namespace: piraeus-datastore spec: selfSigned: { } apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: linstor-api-ca namespace: piraeus-datastore spec: commonName: linstor-api-ca secretName: linstor-api-ca duration: 87600h # 10 years isCA: true usages: signing key encipherment cert sign issuerRef: name: ca-bootstrapper kind: Issuer apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: linstor-api-ca namespace: piraeus-datastore spec: ca: secretName: linstor-api-ca ``` Then, configure this new issuer to let the Operator provision the needed certificates: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: apiTLS: certManager: name: linstor-api-ca kind: Issuer ``` This completes the necessary steps for secure the LINSTOR API with TLS using cert-manager. Skip to the to verify TLS is working. If you completed the above, skip directly to the below. This method requires the `openssl` program on the command line. For an alternative way to provision keys and certificates, see the above. First, create a new Certificate Authority using a new key and a self-signed certificate, valid for 10 years: ```bash openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 -keyout ca.key -out ca.crt -subj \"/CN=linstor-api-ca\" ``` Then, create two new keys, one for the LINSTOR API server, one for all LINSTOR API clients: ```bash openssl genrsa -out api-server.key 4096 openssl genrsa -out api-client.key 4096 ``` Next, we will create a certificate for the server. 
The clients might use different shortened service names, so we need to specify multiple subject names: ```bash cat /etc/ssl/openssl.cnf > api-csr.cnf cat >> api-csr.cnf <<EOF [ v3_req ] subjectAltName = @alt_names [ alt_names ] DNS.0 = linstor-controller.piraeus-datastore.svc.cluster.local" }, { "data": "= linstor-controller.piraeus-datastore.svc DNS.2 = linstor-controller EOF openssl req -new -sha256 -key api-server.key -subj \"/CN=linstor-controller\" -config api-csr.cnf -extensions v3_req -out api-server.csr openssl x509 -req -in api-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -config api-csr.cnf -extensions v3_req -out api-server.crt -days 3650 -sha256 ``` For the client certificate, simply setting one subject name is enough: ```bash openssl req -new -sha256 -key api-client.key -subj \"/CN=linstor-client\" -out api-client.csr openssl x509 -req -in api-client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out api-client.crt -days 3650 -sha256 ``` Now, we create Kubernetes secrets from the created keys and certificates: ```bash kubectl create secret generic linstor-api-tls -n piraeus-datastore --type=kubernetes.io/tls --from-file=ca.crt=ca.crt --from-file=tls.crt=api-server.crt --from-file=tls.key=api-server.key kubectl create secret generic linstor-client-tls -n piraeus-datastore --type=kubernetes.io/tls --from-file=ca.crt=ca.crt --from-file=tls.crt=api-client.crt --from-file=tls.key=api-client.key ``` Finally, configure the Operator resources to reference the newly created secrets. We configure the same client secret for all components for simplicity. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: apiTLS: apiSecretName: linstor-api-tls clientSecretName: linstor-client-tls csiControllerSecretName: linstor-client-tls csiNodeSecretName: linstor-client-tls ``` You can verify that the secure API is running by manually connecting to the HTTPS endpoint using `curl` ```text $ kubectl exec -n piraeus-datastore deploy/linstor-controller -- curl --key /etc/linstor/client/tls.key --cert /etc/linstor/client/tls.crt --cacert /etc/linstor/client/ca.crt https://linstor-controller.piraeus-datastore.svc:3371/v1/controller/version {\"version\":\"1.20.2\",\"githash\":\"58a983a5c2f49eb8d22c89b277272e6c4299457a\",\"buildtime\":\"2022-12-14T14:21:28+00:00\",\"restapiversion\":\"1.16.0\"}% ``` If the command is successful, the API is using HTTPS and clients are able to connect with their certificates. If you see an error, make sure that the client certificates are trusted by the API secret, and vice versa. The following script provides a quick way to verify that one TLS certificate is trusted by another: ```bash function k8ssecrettrusted_by() { kubectl get secret -n piraeus-datastore -ogo-template='{{ index .data \"tls.crt\" | base64decode }}' \"$1\" > $1.tls.crt kubectl get secret -n piraeus-datastore -ogo-template='{{ index .data \"ca.crt\" | base64decode }}' \"$2\" > $2.ca.crt openssl verify -CAfile $2.ca.crt $1.tls.crt } k8ssecrettrusted_by linstor-client-tls linstor-api-tls ``` Another issue might be the API endpoint using a Certificate not using the expected service name. A typical error message for this issue would be: ```text curl: (60) SSL: no alternative certificate subject name matches target host name 'linstor-controller.piraeus-datastore.svc' ``` In this case, make sure you have specified the right subject names when provisioning the certificates. All available options are documented in the reference for ." } ]
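If the subject-name error above appears, it can also help to inspect the certificate stored in the API secret directly; this sketch reuses the same go-template trick as the trust check:

```bash
# Print the subject and SANs of the server certificate in the API secret.
kubectl get secret -n piraeus-datastore linstor-api-tls \
  -o go-template='{{ index .data "tls.crt" | base64decode }}' \
  | openssl x509 -noout -subject -ext subjectAltName
```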
{ "category": "Runtime", "file_name": "api-tls.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "Starting from , the import path will be: \"github.com/golang-jwt/jwt/v4\" The `/v4` version will be backwards compatible with existing `v3.x.y` tags in this repo, as well as `github.com/dgrijalva/jwt-go`. For most users this should be a drop-in replacement. If you have trouble migrating, please open an issue. You can replace all occurrences of `github.com/dgrijalva/jwt-go` or `github.com/golang-jwt/jwt` with `github.com/golang-jwt/jwt/v4`, either manually or by using tools such as `sed` or `gofmt`. Then you would typically run: ``` go get github.com/golang-jwt/jwt/v4 go mod tidy ``` The original migration guide for older releases can be found at https://github.com/dgrijalva/jwt-go/blob/master/MIGRATION_GUIDE.md." } ]
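One way to do the bulk rewrite mentioned above is sketched here (GNU `sed`; review the resulting diff afterwards):

```bash
# Rewrite the old import path across all Go files, then refresh go.mod/go.sum.
grep -rl 'github.com/dgrijalva/jwt-go' --include='*.go' . \
  | xargs -r sed -i 's#github.com/dgrijalva/jwt-go#github.com/golang-jwt/jwt/v4#g'
go get github.com/golang-jwt/jwt/v4
go mod tidy
```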
{ "category": "Runtime", "file_name": "MIGRATION_GUIDE.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Migrating to the new Piraeus Operator deployment requires temporarily removing the existing deployment. After this step no new volumes can be created and no existing volume can be attached or detached, until the new deployment is rolled out. This is the third step when migrating Piraeus Operator from version 1 (v1) to version 2 (v2). . To prevent modification of the existing cluster, scale down the existing Piraeus Operator deployment. ``` $ helm upgrade piraeus-op ./charts/piraeus --set operator.replicas=0 $ kubectl rollout status -w deploy/piraeus-op-operator deployment \"piraeus-op-operator\" successfully rolled out ``` The Operator sets Finalizers on the resource it controls. This prevents the deletion of these resources when the Operator is not running. Remove the Finalizers by applying a patch: ``` $ kubectl patch linstorsatellitesets piraeus-op-ns --type merge --patch '{\"metadata\": {\"finalizers\": []}}' linstorsatelliteset.piraeus.linbit.com/piraeus-op-ns patched $ kubectl patch linstorcontrollers piraeus-op-cs --type merge --patch '{\"metadata\": {\"finalizers\": []}}' linstorcontroller.piraeus.linbit.com/piraeus-op-cs patched ``` Having removed the Finalizers, you can delete the Piraeus Resources. This will stop the LINSTOR Cluster, and no new volumes can be created, and existing volumes will not attach or detach. Volumes already attached to a Pod will continue to replicate. ``` $ kubectl delete linstorcsidrivers/piraeus-op $ kubectl delete linstorsatellitesets/piraeus-op-ns $ kubectl delete linstorcontrollers/piraeus-op-cs ``` As a last step, you can completely remove the helm deployment, deleting additional resources such as service accounts and RBAC resources. In addition, also clean up the Custom Resource Definitions: ``` $ helm uninstall piraeus-op $ kubectl delete crds linstorcsidrivers.piraeus.linbit.com linstorsatellitesets.piraeus.linbit.com linstorcontrollers.piraeus.linbit.com ``` If you have deployed additional components from Piraeus, such as the HA Controller or LINSTOR Affinity Controller, you will need to remove them, too. After completing the migration, you can install them again, if needed. ``` $ helm uninstall piraeus-ha-controller $ helm uninstall linstor-affinity-controller ```" } ]
{ "category": "Runtime", "file_name": "3-remove-operator-v1.md", "project_name": "Piraeus Datastore", "subcategory": "Cloud Native Storage" }
[ { "data": "Before starting a container, the reads the `default_vcpus` option from the to determine the number of virtual CPUs (vCPUs) needed to start the virtual machine. By default, `default_vcpus` is equal to 1 for fast boot time and a small memory footprint per virtual machine. Be aware that increasing this value negatively impacts the virtual machine's boot time and memory footprint. In general, we recommend that you do not edit this variable, unless you know what are you doing. If your container needs more than one vCPU, use to assign more resources. Kubernetes ```yaml apiVersion: v1 kind: Pod metadata: name: cpu-demo namespace: sandbox spec: containers: name: cpu0 image: vish/stress resources: limits: cpu: \"3\" args: -cpus \"5\" ``` ```sh $ sudo -E kubectl create -f ~/cpu-demo.yaml ``` A Kubernetes pod is a group of one or more containers, with shared storage and network, and a specification for how to run the containers ]. In Kata Containers this group of containers, which is called a sandbox, runs inside the same virtual machine. If you do not specify a CPU constraint, the runtime does not add more vCPUs and the container is not placed inside a CPU cgroup. Instead, the container uses the number of vCPUs specified by `default_vcpus` and shares these resources with other containers in the same situation (without a CPU constraint). When you create a container with a CPU constraint, the runtime adds the number of vCPUs required by the container. Similarly, when the container terminates, the runtime removes these resources. A container without a CPU constraint uses the default number of vCPUs specified in the configuration file. In the case of Kubernetes pods, containers without a CPU constraint use and share between them the default number of vCPUs. For example, if `default_vcpus` is equal to 1 and you have 2 containers without CPU constraints with each container trying to consume 100% of vCPU, the resources divide in two parts, 50% of vCPU for each container because your virtual machine does not have enough resources to satisfy containers needs. If you want to give access to a greater or lesser portion of vCPUs to a specific container, use . Kubernetes ```yaml apiVersion: v1 kind: Pod metadata: name: cpu-demo namespace: sandbox spec: containers: name: cpu0 image: vish/stress resources: requests: cpu: \"0.7\" args: -cpus \"3\" ``` ```sh $ sudo -E kubectl create -f ~/cpu-demo.yaml ``` Before running containers without CPU constraint, consider that your containers are not running alone. Since your containers run inside a virtual machine other processes use the vCPUs as well (e.g. `systemd` and the Kata Containers ). In general, we recommend setting `default_vcpus` equal to 1 to allow non-container processes to run on this vCPU and to specify a CPU constraint for each" }, { "data": "The runtime calculates the number of vCPUs required by a container with CPU constraints using the following formula: `vCPUs = ceiling( quota / period )`, where `quota` specifies the number of microseconds per CPU Period that the container is guaranteed CPU access and `period` specifies the CPU CFS scheduler period of time in microseconds. The result determines the number of vCPU to hot plug into the virtual machine. Once the vCPUs have been added, the places the container inside a CPU cgroup. This placement allows the container to use only its assigned resources. 
If you already know the number of vCPUs needed for each container and pod, or just want to run them with the same number of vCPUs, you can specify that number using the `default_vcpus` option in the configuration file, each virtual machine starts with that number of vCPUs. One limitation of this approach is that these vCPUs cannot be removed later and you might be wasting resources. For example, if you set `default_vcpus` to 8 and run only one container with a CPU constraint of 1 vCPUs, you might be wasting 7 vCPUs since the virtual machine starts with 8 vCPUs and 1 vCPUs is added and assigned to the container. Non-container processes might be able to use 8 vCPUs but they use a maximum 1 vCPU, hence 7 vCPUs might not be used. In some cases, the hardware and/or software architecture being utilized does not support hotplug. For example, Firecracker VMM does not support CPU or memory hotplug. Similarly, the current Linux Kernel for aarch64 does not support CPU or memory hotplug. To appropriately size the virtual machine for the workload within the container or pod, we provide a `staticsandboxresource_mgmt` flag within the Kata Containers configuration. When this is set, the runtime will: Size the VM based on the workload requirements as well as the `default_vcpus` option specified in the configuration. Not resize the virtual machine after it has been launched. VM size determination varies depending on the type of container being run, and may not always be available. If workload sizing information is not available, the virtual machine will be started with the `default_vcpus`. In the case of a pod, the initial sandbox container (pause container) typically doesn't contain any resource information in its runtime `spec`. It is possible that the upper layer runtime (i.e. containerd or CRI-O) may pass sandbox sizing annotations within the pause container's `spec`. If these are provided, we will use this to appropriately size the VM. In particular, we'll calculate the number of CPUs required for the workload and augment this by `default_vcpus` configuration option, and use this for the virtual machine size. In the case of a single container (i.e., not a pod), if the container specifies resource requirements, the container's `spec` will provide the sizing information directly. If these are set, we will calculate the number of CPUs required for the workload and augment this by `default_vcpus` configuration option, and use this for the virtual machine size." } ]
{ "category": "Runtime", "file_name": "vcpu-handling-runtime-go.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "When users try to activate a restoring/DR volume with some replicas failed for some reasons, the volume will be stuck in attaching state and users can not do anything except deleting the volume. However the volume still can be rebuilt back to be normal as long as there is a healthy replica. To improve user experience, Longhorn should forcibly activate a restoring/DR volume if there is a healthy replica and users allow the creation of a degraded volume. https://github.com/longhorn/longhorn/issues/1512 Allow users to activate a restoring/DR volume as long as there is a healthy replica and the volume works well. `None` Forcibly activate a restoring/DR volume if there is a healthy replica and users enable the global setting `allow-volume-creation-with-degraded-availability`. Users can activate a restoring/DR volume by the CLI `kubectl` or Longhorn UI and the volume could work well. Set up two Kubernetes clusters. These will be called cluster A and cluster B. Install Longhorn on both clusters, and set the same backup target on both clusters. In the cluster A, make sure the original volume X has a backup created or has recurring backups scheduled. In backup page of cluster B, choose the backup volume X, then create disaster recovery volume Y. User set `volume.spec.Standby` to `false` by editing the volume CR or the manifest creating the volume to activate the volume. UI has click `Activate Disaster Recovery Volume` button in `Volume` or `Volume Details` pages to activate the volume. `None` Check if `volume.Spec.Standby` is set to `false` Get the global setting `allow-volume-creation-with-degraded-availability` Activate the DR volume if `allow-volume-creation-with-degraded-availability` is set to `true` and there are one or more ready replicas. Create a DR volume Set the global setting `concurrent-replica-rebuild-per-node-limit` to be 0 Failed some replicas Check if there is at least one healthy replica Call the API `activate` The volume could be activated Attach the volume to a node and check if data is correct `None` `None`" } ]
{ "category": "Runtime", "file_name": "20230601-forcibly-activate-a-restoring-dr-volume.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "This page lists all active members of the steering committee, as well as maintainers and reviewers for this repository. Each repository in the will list their repository maintainers and reviewers in their own `OWNERS.md` file. Please see for governance guidelines and responsibilities for the steering committee, maintainers, and reviewers. See for automatic PR assignment. This will be built up when time comes. Tobias Brunner <[email protected]> () Simon Beck <[email protected]> () Nicolas Bigler <[email protected]> () ukasz Widera <[email protected]> () Gabriel Saratura <[email protected]> () We also document the list of maintainers in the . We currently do not have nominated reviewers, we'll build them up when time comes. As of today, we don't have any emeritus maintainers yet." } ]
{ "category": "Runtime", "file_name": "OWNERS.md", "project_name": "K8up", "subcategory": "Cloud Native Storage" }
[ { "data": "is an in-memory immutable data manager that provides out-of-the-box high-level abstraction and zero-copy in-memory sharing for distributed data in big data tasks, such as graph analytics, numerical computing, and machine learning. Vineyard provides: Efficient in-memory data management and zero-copy sharing across different systems. Out-of-the-box high-level data abstraction for distributed objects (e.g., tensors, tables, graphs) and efficient polyglot support (currently including C++, Python, and Java). Built-in streaming support for data accessing and across system pipelining. An extensible driver framework and a set of efficient built-in drivers for eliminating the boilerplate part in computation engines, e.g., I/O, serialization, and checkpointing. Alignment with CNCF: Vineyard builds on Kubernetes for deploying and scaling and the objects are observable in Kubernetes as CRDs. Vineyard makes efficient zero-copy sharing possible for data-intensive workflows on cloud-native infrastructure by a data-aware Kubernetes scheduler plugin. Vineyard adopts a immutable object design, which aligns with the immutable infrastructure of the cloud-native environment. Vineyard was accepted as a CNCF sandbox project on Apr 28th, 2021. Include a link to your projects devstats page. We will be looking for signs of consistent or increasing contribution activity. Please feel free to add commentary to add colour to the numbers and graphs we will see on devstats. Stargazers and Forks Commits per week Contributors and Companies The vineyard community has grown since the project entered the CNCF sandbox. Number of contributors: 11 -> 26 Github stars: 300+ -> 600+ Github forks: 20+ -> 80+ Contributing organizations: 1 -> 12 How many maintainers do you have, and which organisations are they from? (Feel free to link to an existing MAINTAINERS file if appropriate.) We currently have 7 maintainers and 2 committers and have . Initial maintainers | Name | GitHub ID | Affiliation | Email | | | | | | | Tao He | | Alibaba | | | Xiaojian Luo | | Alibaba | | | Wenyuan Yu | | Alibaba | | | Weibin Zeng | | Alibaba | | | Siyuan Zhang | | Alibaba | | | Diwen Zhu | | Alibaba | | New maintainers in this year | Name | GitHub ID | Affiliation | Email | | | | | | | Ke Meng | | Alibaba | | New Committers in this year | Name | GitHub ID | Affiliation | Email | | | | | | | Lihong Lin | | PKU | | | Pei Li | | CMU | | What do you know about adoption, and how has this changed since your last review / since you joined Sandbox? If you can list companies that are end-users of your project, please do so. (Feel free to link to an existing ADOPTERS file if" }, { "data": "We know several cases where vineyard has been adopted in both testing and production environments. GraphScope: production stage GraphScope is an open-source graph processing platform. Vineyard is used in graphscope to provide distributed shared in-memory storage for different graph processing engines. weilaisudu: transiting from testing to production stage weilaisudu is the company behind the project , a distributed scientific computing engine that provides numpy and pandas like API. Vineyard is used as the shared-memory storage for actors that do computation on chunks. ESRF: testing stage ESRF is a joint research facility situated in France, one of the biggest x-ray science facilities in Europe. VIneyard is used in the BLISS software to serve as the shared storage between sensors and data processing jobs. 
PingAn: testing stage PingAn is a large-scale fin-tech company in China. Vineyard is used in a research platform to support efficient dataset sharing and management among data science researchers. We have also integrated with the apache-airflow project, which is a workflow orchestration engine and has been widely adopted. We have published airflow-vineyard-provider on Astronomer Registry and received much feedback from end-users, but we haven't tracked the actual adoption yet. How has the project performed against its goals since the last review? (We won't penalize you if your goals changed for good reasons.) Vineyard has successfully archived the goal of bringing value to big data analytical workflows on Kubernetes. We have shown the gain in an internal project which involves both ETL, graph computation, and machine learning jobs. Our goal hasn't changed since becoming CNCF sandbox project and we are still aiming at supporting a more efficient big data analytical workflow on the cloud-native infrastructure. Specifically, we'll keep moving towards following goals in the next year: Providing efficient cross-engine data sharing for data-intensive workflows in Kubernetes Integrating with projects in the cloud-native community for orchestration and scheduling and integrating with more big data computing engines to improve the end-to-end efficiency. Building a new cloud-native paradigm for big data applications working together. By integrating Vineyard, Kubernetes can help orchestrating data and workloads together for better alignment and efficiency. We still need to do more to engage end-users to show the value-added of the vineyard project. How can the CNCF help you achieve your upcoming goals? Vineyard has incredibly benefited from CNCF since accepted as a sandbox project. We believe the end-users in the CNCF community are critical for Vineyard to become successful. We have submitted proposals for the KubeCon and CNCF Conferences in the past year but got rejected. We hope we could have more opportunities to introduce our project to border end-users in the CNCF community to increase adoption. Do you think that your project meets the ? We think our project vineyard still needs further exploration to get border adoption in the production environment and we are looking forward to meeting the incubation criteria in near future." } ]
{ "category": "Runtime", "file_name": "2022-vineyard-annual.md", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Thank you for contributing to Zenko! Please enter applicable information below. --> What does this PR do, and why do we need it? Which issue does this PR fix? fixes #<ISSUE> Special notes for your reviewers:" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Feature request about: Suggest an idea for this project title: '' labels: enhancement assignees: '' Is your feature request related to a problem? Please describe. A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] Describe the solution you'd like A clear and concise description of what you want to happen. Describe alternatives you've considered A clear and concise description of any alternative solutions or features you've considered. Additional context Add any other context or screenshots about the feature request here." } ]
{ "category": "Runtime", "file_name": "feature_request.md", "project_name": "DatenLord", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Inspect the hive ``` cilium-operator-aws hive [flags] ``` ``` --bgp-v2-api-enabled Enables BGPv2 APIs in Cilium --ces-dynamic-rate-limit-nodes strings List of nodes used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-burst strings List of qps burst used for the dynamic rate limit steps --ces-dynamic-rate-limit-qps-limit strings List of qps limits used for the dynamic rate limit steps --ces-enable-dynamic-rate-limit Flag to enable dynamic rate limit specified in separate fields instead of the static one --ces-max-ciliumendpoints-per-ces int Maximum number of CiliumEndpoints allowed in a CES (default 100) --ces-slice-mode string Slicing mode defines how CiliumEndpoints are grouped into CES: either batched by their Identity (\"cesSliceModeIdentity\") or batched on a \"First Come, First Served\" basis (\"cesSliceModeFCFS\") (default \"cesSliceModeIdentity\") --ces-write-qps-burst int CES work queue burst rate. Ignored when ces-enable-dynamic-rate-limit is set (default 20) --ces-write-qps-limit float CES work queue rate limit. Ignored when ces-enable-dynamic-rate-limit is set (default 10) --cluster-id uint32 Unique identifier of the cluster --cluster-name string Name of the cluster (default \"default\") --clustermesh-concurrent-service-endpoint-syncs int The number of remote cluster service syncing operations that will be done concurrently. Larger number = faster endpoint slice updating, but more CPU (and network) load. (default 5) --clustermesh-config string Path to the ClusterMesh configuration directory --clustermesh-enable-endpoint-sync Whether or not the endpoint slice cluster mesh synchronization is enabled. --clustermesh-enable-mcs-api Whether or not the MCS API support is enabled. --clustermesh-endpoint-updates-batch-period duration The length of endpoint slice updates batching period for remote cluster services. Processing of pod changes will be delayed by this duration to join them with potential upcoming updates and reduce the overall number of endpoints updates. Larger number = higher endpoint programming latency, but lower number of endpoints revision generated. (default 500ms) --clustermesh-endpoints-per-slice int The maximum number of endpoints that will be added to a remote cluster's EndpointSlice . More endpoints per slice will result in less endpoint slices, but larger resources. (default 100) --controller-group-metrics strings List of controller group names for which to to enable metrics. Accepts 'all' and 'none'. The set of controller group names available is not guaranteed to be stable between Cilium versions. --enable-cilium-operator-server-access strings List of cilium operator APIs which are administratively enabled. Supports ''. (default []) --enable-gateway-api-app-protocol Enables Backend Protocol selection (GEP-1911) for Gateway API via appProtocol --enable-gateway-api-proxy-protocol Enable proxy protocol for all GatewayAPI listeners. Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-gateway-api-secrets-sync Enables fan-in TLS secrets sync from multiple namespaces to singular namespace (specified by gateway-api-secrets-namespace flag) (default true) --enable-ingress-controller Enables cilium ingress controller. This must be enabled along with enable-envoy-config in cilium agent. --enable-ingress-proxy-protocol Enable proxy protocol for all Ingress listeners. 
Note that only Proxy protocol traffic will be accepted once this is enabled. --enable-ingress-secrets-sync Enables fan-in TLS secrets from multiple namespaces to singular namespace (specified by ingress-secrets-namespace flag) (default true) --enable-k8s Enable the k8s clientset (default true) --enable-k8s-api-discovery Enable discovery of Kubernetes API groups and resources with the discovery API --enable-k8s-endpoint-slice Enables k8s EndpointSlice feature in Cilium if the k8s cluster supports it (default true) --enable-node-ipam Enable Node IPAM --enable-node-port Enable NodePort type services by Cilium --enforce-ingress-https Enforces https for host having matching TLS host in Ingress. Incoming traffic to http listener will return 308 http error code with respective location in header. (default true) --gateway-api-hostnetwork-enabled Exposes Gateway listeners on the host" }, { "data": "--gateway-api-hostnetwork-nodelabelselector string Label selector that matches the nodes where the gateway listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --gateway-api-secrets-namespace string Namespace having tls secrets used by CEC for Gateway API (default \"cilium-secrets\") --gateway-api-xff-num-trusted-hops uint32 The number of additional GatewayAPI proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --gops-port uint16 Port for gops server to listen on (default 9891) -h, --help help for hive --identity-gc-interval duration GC interval for security identities (default 15m0s) --identity-gc-rate-interval duration Interval used for rate limiting the GC of security identities (default 1m0s) --identity-gc-rate-limit int Maximum number of security identities that will be deleted within the identity-gc-rate-interval (default 2500) --identity-heartbeat-timeout duration Timeout after which identity expires on lack of heartbeat (default 30m0s) --ingress-default-lb-mode string Default loadbalancer mode for Ingress. Applicable values: dedicated, shared (default \"dedicated\") --ingress-default-request-timeout duration Default request timeout for Ingress. --ingress-default-secret-name string Default secret name for Ingress. --ingress-default-secret-namespace string Default secret namespace for Ingress. --ingress-default-xff-num-trusted-hops uint32 The number of additional ingress proxy hops from the right side of the HTTP header to trust when determining the origin client's IP address. --ingress-hostnetwork-enabled Exposes ingress listeners on the host network. --ingress-hostnetwork-nodelabelselector string Label selector that matches the nodes where the ingress listeners should be exposed. It's a list of comma-separated key-value label pairs. e.g. 'kubernetes.io/os=linux,kubernetes.io/hostname=kind-worker' --ingress-hostnetwork-shared-listener-port uint32 Port on the host network that gets used for the shared listener (HTTP, HTTPS & TLS passthrough) --ingress-lb-annotation-prefixes strings Annotations and labels which are needed to propagate from Ingress to the Load Balancer. (default [lbipam.cilium.io,service.beta.kubernetes.io,service.kubernetes.io,cloud.google.com]) --ingress-secrets-namespace string Namespace having tls secrets used by Ingress and CEC. (default \"cilium-secrets\") --ingress-shared-lb-service-name string Name of shared LB service name for Ingress. 
(default \"cilium-ingress\") --k8s-api-server string Kubernetes API server URL --k8s-client-burst int Burst value allowed for the K8s client --k8s-client-qps float32 Queries per second limit for the K8s client --k8s-heartbeat-timeout duration Configures the timeout for api-server heartbeat, set to 0 to disable (default 30s) --k8s-kubeconfig-path string Absolute path of the kubernetes kubeconfig file --k8s-service-proxy-name string Value of K8s service-proxy-name label for which Cilium handles the services (empty = all services without service.kubernetes.io/service-proxy-name label) --kube-proxy-replacement string Enable only selected features (will panic if any selected feature cannot be enabled) (\"false\"), or enable all features (will panic if any feature cannot be enabled) (\"true\") (default \"false\") --loadbalancer-l7-algorithm string Default LB algorithm for services that do not specify related annotation (default \"round_robin\") --loadbalancer-l7-ports strings List of service ports that will be automatically redirected to backend. --max-connected-clusters uint32 Maximum number of clusters to be connected in a clustermesh. Increasing this value will reduce the maximum number of identities available. Valid configurations are [255, 511]. (default 255) --mesh-auth-mutual-enabled The flag to enable mutual authentication for the SPIRE server (beta). --mesh-auth-spiffe-trust-domain string The trust domain for the SPIFFE identity. (default \"spiffe.cilium\") --mesh-auth-spire-agent-socket string The path for the SPIRE admin agent Unix socket. (default \"/run/spire/sockets/agent/agent.sock\") --mesh-auth-spire-server-address string SPIRE server endpoint. (default \"spire-server.spire.svc:8081\") --mesh-auth-spire-server-connection-timeout duration SPIRE server connection timeout. (default 10s) --operator-api-serve-addr string Address to serve API requests (default \"localhost:9234\") --operator-pprof Enable serving pprof debugging API --operator-pprof-address string Address that pprof listens on (default \"localhost\") --operator-pprof-port uint16 Port that pprof listens on (default 6061) --operator-prometheus-serve-addr string Address to serve Prometheus metrics (default \":9963\") --skip-crd-creation When true, Kubernetes Custom Resource Definitions will not be created ``` - Run cilium-operator-aws - Output the dependencies graph in graphviz dot format" } ]
{ "category": "Runtime", "file_name": "cilium-operator-aws_hive.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "This enhancement will refactor the restore implementation and enable rebuild for restore/DR volumes. https://github.com/longhorn/longhorn/issues/1279 The goal of this enhancement is to simplify the restore flow so that it can work for rebuilding replicas of restore/DR volumes without breaking the live upgrade feature. This enhancement won't guarantee that the restore/DR volume activation won't be blocked by replica rebuilding. When there are replicas crashing among restore/DR volumes, new rebuilding replicas will be created as usual. But instead of following the normal replica rebuilding workflow (syncing data/files from other running replicas), the rebuilding replicas of restore/DR volumes will directly restore data from backup. The normal rebuilding (file syncing) workflow implicitly considers that all existing snapshots won't change and newer data during the rebuilding will be written into volume head. But for restore/DR volumes, new data writing is directly handled by replica (sync agent server) and it will be written to underlying snapshots rather than volume heads. As a result, the normal rebuilding logic doesn't fit restore/DR volumes. In order to skip the file syncing and snapshotting and directly do restore, the rebuilding related API should be updated, which will lead to API version bumps. As long as there is a replica having not restored the newest/latest backup, longhorn manager will directly call restore command. Then rebuilding replicas will be able to start the restore even if all other replicas are up-to-date. Previously, in order to maintain the consistency of DR volume replicas, Longhorn manager will guarantee that all replicas have restored the same backup before starting the next backup restore. But considering the case that the newly rebuilt replicas are empty whereas the existing replicas have restored some backups, This restriction makes replica rebuilding become impossible in some cases. Hence we need to break the restriction. This restriction break degrades the consistency of DR volume replicas. But it's acceptable as long as all replicas can finish the latest backup restore and the DR volume can be activated in the end. This modification means engines and replicas should be intelligent enough to decide if they need to do restore and which kind of restore they need to launch. Actually replica processes have all information about the restore status and they can decide if they need incremental restore or full restore by themself. Specifying the last backup in the restore command is redundant. Longhorn manager only needs to tell the replicas what is the latest backup they should restore. Longhorn manager still need to know what is the last restored backup of all replicas, since it relies on it to determine if the restore/DR volume is available/can be activated. Longhorn should wait for rebuild complete and check restore status before auto detachment. Otherwise, the restore volume will be automatically detached when the rebuild is in progress then the rebuild is meaningless in this case. Before, the restore volume keeps state `Degraded` if there is replica crashing. And the volume will finally become `Faulted` if all replicas are crashed one by one during restoring. After, the restore volume will start replica rebuilding automatically then be back to state `Healthy` if there is replica crashing. The volume is available as long as all replicas are not crashed at the same time. And volume will finish activation/auto-detachment after the rebuild is done. 
Users create a restore volume and wait for restore complete. When the restore is in progress, some replicas somehow get" }, { "data": "Then the volume rebuilds new replicas immediately, and it will become `Healthy` once the new replicas start rebuilding. The volume will be detached automatically once the restore and the rebuild complete. Users create a DR volume. Some replicas get crashed. Then the DR volume automatically rebuilds new replicas and restores the latest backup for the rebuilt replicas. Users try to activate the DR volume. The DR volume will wait for the rebuild of all replicas and successful restoration of the latest backup before detachment. Add a new flag `--restore` for command `add-replica`, which indicates skipping file syncing and snapshotting. Deprecate the arg `lastRestoreBackup` and the flag `--incrementally` for command `backup restore`. Add a new command `verify-rebuild-replica`, which can mark the rebuilding replicas as available (mode `RW`) for restore/DR volumes after the initial restore is done. Create a separate message/struct for `ReplicaCreate` request then add the two new fields `Mode` and `SnapshotRequired` to the request. Modify command `add-replica` related APIs: Use a new flag `--restore` in command `add-replica` to indicate that file syncing and snapshotting should be skipped for restore/DR volumes. The current controller gRPC call `ReplicaCreate` used in the command will directly create a snapshot before the rebuilding. But considering the (snapshot) consistency of of restore/DR volumes, snapshots creation/deletion is fully controlled by the restore command (and the expansion command during the restore). Hence, the snapshotting here needs to be skipped by updating the gRPC call `ReplicaCreate`. Add command `verify-rebuild-replica`: It just calls the existing controller gRPC function `ReplicaVerifyRebuild`. It's mainly used to mark the rebuilding replica of restore/DR volumes as mode `RW` with some verifications and a replica reload. Modify command `backup restore`: Deprecate/Ignore the arg `lastRestoreBackup` in the restore command and the following sync agent gRPC function. Instead, the sync agent server will directly do a full restore or a incremental restore based on its current restore status. Deprecate/Ignore the flag `--incrementally` for command `backup restore`. By checking the disk list of all existing replicas, the command function knows if it needs to generate a new snapshot name. The caller of the gRPC call `BackupRestore` only needs to tell the name of the final snapshot file that stores restored data. For new restore volume, there is no existing snapshot among all replicas hence we will generate a random snapshot name. For replicas of DR volumes or rebuilding replicas of restore volumes, the caller will find the replica containing the most snapshots then use the latest snapshot of the replica in the following restore. As for the delta file used in the incremental restore, it will be generated by the sync agent server rather than by the caller. Since the caller has no idea about the last restored backup now and the delta file naming format is `volume-delta-<last restored backup name>.img`. To avoid disk/snapshot chain inconsistency between rebuilt replicas and old replicas of a DR volume, snapshot purge is required if there are more than 1 snapshots in one replica. And the (incremental) restore will be blocked before the snapshot purge complete. 
Make the sync agent gRPC call `BackupRestore` more intelligent: The function will check the restore status first. If there is no restore record in the sync agent server or the last restored backup is invalid, a full restore will be applied. This means we can remove the gRPC call `BackupRestoreIncrementally`. Remove the expansion before the restore call. The expansion of DR volumes should be guaranteed by longhorn" }, { "data": "Coalesce the incremental restore related functions to normal restore functions if possible. Allow replica replenishment for restore/DR volumes. Add the new flag `--restore` when using command `add-replica` to rebuild replicas of restore/DR volumes. Modify the pre-restore check and restore status sync logic: Previously, the restore command will be invoked only if there is no restoring replica. Right now the command will be called as long as there is a replica having not restored the latest backup. Do not apply the consensual check as the prerequisite of the restore command invocation. The consensual check will be used for `engine.Status.LastRestoredBackup` update only. Invoke `verify-rebuild-replica` when there is a complete restore for a rebuilding replica (mode `WO`). Modify the way to invoke restore command: Retain the old implementation for compatibility. For the engine using the new engine image, call restore command directly as long as the pre-restore check gets passed. Need to ignore some errors. e.g.: replicas are restoring, the requested backup restore is the same as the last backup restore, or replicas need to complete the snapshot purge before the restore. Mark the rebuilding replicas as mode `ERR` and disable the replica replenishment during the expansion. Modify the prerequisites of restore volume auto detachment or DR volume activation: Wait for the rebuild complete and the volume becoming `Healthy`. Check and wait for the snapshot purge. This prerequisite check works only for new restore/DR volumes. Create a restore volume with 2 replicas. Run command `backup restore` for the DR volume. Delete one replica of the restore volume. Initialize a new replica, and add the replica to the restore volume. Run command `backup restore`. Verify the restored data is correct, and all replicas work fine. Create a DR volume with 2 replicas. Run command `backup restore` for the DR volume. Wait for restore complete. Expand the DR volume and wait for the expansion complete. Delete one replica of the DR volume. Initialize a new replica, and add the replica to the DR volume. Run command `backup restore`. The old replica should start snapshot purge and the restore is actually not launched. Wait for the snapshot purge complete. Re-run command `backup restore`. Then wait for the restore complete. Check if the restored data is correct, and all replicas work fine. And verify all replicas contain only 1 snapshot. Launch a pod with Longhorn volume. Write data to the volume and take a backup. Create a restore volume from the backup and wait for the restore start. Crash one random replicas. Then check if the replicas will be rebuilt and the restore volume can be `Healthy` after the rebuilding. Wait for the restore complete and auto detachment. Launch a pod for the restored volume. Verify all replicas work fine with the correct data. Launch a pod with Longhorn volume. Write data to the volume and take the 1st backup. Wait for the 1st backup creation complete then write more data to the volume (which is the data of the 2nd backup). 
Create a DR volume from the 1st backup and wait for the restore start. Crash one random replica. Take the 2nd backup for the original volume. Then trigger DR volume last backup update immediately (by calling backup list API) after the 2nd backup creation complete. Check if the replicas will be rebuilt and the restore volume can be `Healthy` after the rebuilding. Wait for the restore complete then activate the volume. Launch a pod for the activated DR" }, { "data": "Verify all replicas work fine with the correct data. Launch a pod with Longhorn volume. Write data to the volume and take the 1st backup. Create a DR volume from the 1st backup. Shutdown the pod and wait for the original volume detached. Expand the original volume and wait for the expansion complete. Re-launch a pod for the original volume. Write data to the original volume and take the 2nd backup. (Make sure the total data size is larger than the original volume size so that there is date written to the expanded part.) Wait for the 2nd backup creation complete. Trigger DR volume and crash one random replica of the DR volume. Check if the replicas will be rebuilt, and the restore volume can be `Healthy` after the rebuilding. Wait for the expansion, restore, and rebuild complete. Verify the DR volume size and snapshots count after the restore. Write data to the original volume and take the 3rd backup. Wait for the 3rd backup creation complete then trigger the incremental restore for the DR volume. Activate the DR volume and wait for the DR volume activated. Launch a pod for the activated DR volume. Verify the restored data of the activated DR volume. Write more data to the activated DR volume. Then verify all replicas are still running. Crash one random replica of the activated DR volume. Wait for the rebuild complete then verify the activated volume still works fine. Launch Longhorn v1.0.1. Launch a pod with Longhorn volume. Write data to the volume and take the 1st backup. Create 2 DR volumes from the 1st backup. Shutdown the pod and wait for the original volume detached. Expand the original volume and wait for the expansion complete. Write data to the original volume and take the 2nd backup. (Make sure the total data size is larger than the original volume size so that there is date written to the expanded part.) Trigger incremental restore for the DR volumes by listing the backup volumes, and wait for restore complete. Upgrade Longhorn to the latest version. Crash one random replica for the 1st DR volume . Verify the 1st DR volume won't rebuild replicas and keep state `Degraded`. Write data to the original volume and take the 3rd backup. Trigger incremental restore for the DR volumes, and wait for restore complete. Do live upgrade for the 1st DR volume. This live upgrade call should fail and nothing gets changed. Activate the 1st DR volume. Launch a pod for the 1st activated volume, and verify the restored data is correct. Do live upgrade for the original volume and the 2nd DR volumes. Crash one random replica for the 2nd DR volume. Wait for the restore & rebuild complete. Delete one replica for the 2nd DR volume, then activate the DR volume before the rebuild complete. Verify the DR volume will be auto detached after the rebuild complete. Launch a pod for the 2nd activated volume, and verify the restored data is correct. Crash one replica for the 2nd activated volume. Wait for the rebuild complete, then verify the volume still works fine by reading/writing more data. Live upgrade is supported. 
It's possible that the restore/DR volume rebuilding somehow gets stuck, or users have no time to wait for the restore/DR volume rebuilding done. We need to provide a way that users can use the volume as soon as possible. This enhancement is tracked in https://github.com/longhorn/longhorn/issues/1512." } ]
{ "category": "Runtime", "file_name": "20200721-refactor-restore-for-rebuild-enabling.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "In an about networking security, we described how and why gVisor implements its own userspace network stack in the Sentry (gVisor kernel). In summary, weve implemented our networking stack aka Netstack in Go to minimize exposure to unsafe code and avoid using an unsafe Foreign Function Interface. With Netstack, gVisor can do all packet processing internally and only has to enable a few host I/O syscalls for near-complete networking capabilities. This keeps gVisors exposure to host vulnerabilities as narrow as possible. Although writing Netstack in Go was important for runtime safety, up until now it had an undeniable performance cost. iperf benchmarks showed Netstack was spending between 20-30% of its processing time allocating memory and pausing for garbage collection, a slowdown that limited gVisors ability to efficiently sandbox networking workloads. In this blog we will show how we crafted a cure for Netstacks allocation addiction, reducing them by 99%, while also increasing gVisor networking throughput by 30+%. {:width=\"100%\"} Go guarantees a basic level of memory safety through the use of a garbage collector (GC), which is described in great detail by the Go team . The Go runtime automatically tracks and frees objects allocated from the heap, relieving the programmer of the often painful and error-prone process of manual memory management. Unfortunately, tracking and freeing memory during runtime comes at a performance cost. Running the GC adds scheduling overhead, consumes valuable CPU time, and occasionally pauses the entire programs progress to track down garbage. Gos GC is highly optimized, tunable, and sufficient for a majority of workloads. Most of the other parts of gVisor happily use Go's GC with no complaints. However, under high network stress, Netstack needed to aggressively allocate buffers used for processing TCP/IP data and metadata. These buffers often had short lifespans, and once the processing was done they were left to be cleaned up by the GC. This meant Netstack was producing tons of garbage that needed to be tracked and freed by GC workers. Luckily, we weren't the only ones with this problem. This pattern of small, frequently allocated and discarded objects was common enough that the Go team introduced in Go1.3. `sync.Pool` is designed to relieve pressure off the Go GC by maintaining a thread-safe cache of previously allocated objects. `sync.Pool` can retrieve an object from the cache if it exists or allocate a new one according to a user specified allocation function. Once the user is finished with an object they can safely return it to the cache to be reused again. While `sync.Pool` was exactly what we needed to reduce allocations, incorporating it into Netstack wasnt going to be as easy as just replacing all our `make()`s with `pool.Get()`s. Netstack uses a few different types of buffers under the" }, { "data": "Some of these are specific to protocols, like for TCP, and others are more widely shared, like , which is used for IP, ICMP, UDP, etc. Although each of these buffer types are slightly different, they generally share a few common traits that made it difficult to use `sync.Pool` out of the box: The buffers were originally built with the assumption that a garbage collector would clean them up automatically there was little (if any) effort put into tracking object lifetimes. This meant that we had no way to know when it was safe to return buffers to a pool. 
Buffers have dynamic sizes that are determined during creation, usually depending on the size of the packet holding them. A `sync.Pool` out of the box can only accommodate buffers of a single size. One common solution to this is to fill a pool with , but even a pooled `bytes.Buffer` could incur allocations if it were too small and had to be grown to the requested size. Netstack splits, merges, and clones buffers at various points during processing (for example, breaking a large segment into smaller MTU-sized packets). Modifying a buffers size during runtime could mean lots of reallocating from the pool in a one-size-fits-all setup. This would limit the theoretical effectiveness of a pooled solution. We needed an efficient, low-level buffer abstraction that had answers for the Netstack specific challenges and could be shared by the various intermediate buffer types. By sharing a common buffer abstraction, we could maximize the benefits of pooling and avoid introducing additional allocations while minimally changing any intermediate buffer processing logic. Our solution was . Bufferv2 is a non-contiguous, reference counted, pooled, copy-on-write, buffer-like data structure. Internally, a bufferv2 `Buffer` is a linked list of `View`s. Each `View` has start/end indices and holds a pointer to a `Chunk`. A `Chunk` is a reference-counted structure thats allocated from a pool and holds data in a byte slice. There are several `Chunk` pools, each of which allocates chunks with different sized byte slices. These sizes start at 64 and double until 64k. {:width=\"100%\"} The design of bufferv2 has a few key advantages over simpler object pooling: Zero-cost copies and copy-on-write*: Cloning a Buffer only increments the reference count of the underlying chunks instead of reallocating from the pool. Since buffers are much more frequently read than modified, this saves allocations. In the cases where a buffer is modified, only the chunk thats changed has to be cloned, not the whole buffer. Fast buffer transformations*: Truncating and merging buffers or appending and prepending Views to Buffers are fast operations. Thanks to the non-contiguous memory structure these operations are usually as quick as adding a node to a linked list or changing the indices in a" }, { "data": "Tiered pools*: When growing a Buffer or appending data, the new chunks come from different pools of previously allocated chunks. Using multiple pools means we are flexible enough to efficiently accommodate packets of all sizes with minimal overhead. Unlike a one-size-fits-all solution, we don't have to waste lots of space with a chunk size that is too big or loop forever allocating small chunks. Shifting Netstack to bufferv2 came with some costs. To start, rewriting all buffers to use bufferv2 was a sizable effort that took many months to fully roll out. Any place in Netstack that allocated or used a byte slice needed to be rewritten. Reference counting had to be introduced so all the aforementioned intermediate buffer types (`PacketBuffer`, `segment`, etc) could accurately track buffer lifetimes, and tests had to be modified to ensure reference counting correctness. In addition to the upfront cost, the shift to bufferv2 also increased the engineering complexity of future Netstack changes. Netstack contributors must adhere to new rules to maintain memory safety and maximize the benefits of pooling. These rules are strict there needs to be strong justification to break them. 
They are as follows: Never allocate a byte slice; always use `NewView()` instead. Use a `View` for simple data operations (e.g writing some data of a fixed size) and a `Buffer` for more complex I/O operations (e.g appending data of variable size, merging data, writing from an `io.Reader`). If you need access to the contents of a `View` as a byte slice, use `View.AsSlice()`. If you need access to the contents of a `Buffer` as a byte slice, consider refactoring, as this will cause an allocation. Never write or modify the slices returned by `View.AsSlice()`; they are still owned by the view. Release bufferv2 objects as close to where they're created as possible. This is usually most easily done with defer. Document function ownership of bufferv2 object parameters. If there is no documentation, it is assumed that the function does not take ownership of its parameters. If a function takes ownership of its bufferv2 parameters, the bufferv2 objects must be cloned before passing them as arguments. All new Netstack tests must enable the leak checker and run a final leak check after the test is complete. Bufferv2 is enabled by default as of , and will be rolling out to soon, so no action is required to see a performance boost. Network-bound workloads, such as web servers or databases like Redis, are the most likely to see benefits. All the code implementing bufferv2 is public , and contributions are welcome! If youd like to run the iperf benchmark for yourself, you can run: ``` make run-benchmark BENCHMARKSTARGETS=//test/benchmarks/network:iperftest \\ RUNTIME=your-runtime-here BENCHMARKS_OPTIONS=-test.benchtime=60s ``` in the base gVisor directory. If you experience any issues, please feel free to let us know at ." } ]
{ "category": "Runtime", "file_name": "2022-10-24-buffer-pooling.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "`sdc-factoryreset [-h | --help]` This command resets a machine to its originally installed state. Specifically, it reboots the machine, imports all ZFS pools, and destroys them individually. It does this by setting a ZFS user property on the system pool. If this command is invoked unintentionally, an administrator can prevent the system from resetting itself by booting in rescue mode (noimport=true as a GRUB boot option) and clearing the smartdc:factoryreset property from the var dataset. If the system is booting without the noimport=true GRUB option, the only way to stop the pending factory reset is to power cycle the machine, and boot again into rescue mode. The service which does the actual factory reset starts well before an administrator would be able to login to the box, even if that administrator has console access. `-h` Show the help and usage mesage sdc-factoryreset Copyright (c) 2014, Joyent, Inc." } ]
{ "category": "Runtime", "file_name": "sdc-factoryreset.1.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "Firecracker offers support to update attached block devices after the microVM has been started. This is provided via PATCH /drives API which notifies Firecracker that the underlying block file has been changed on the host. It should be called when the path to the block device is changed or if the file size has been modified. It is important to note that external changes to the block device file do not automatically trigger a notification in Firecracker so the explicit PATCH API call is mandatory. The implementation of the PATCH /drives API does not modify the host backing file. It only updates the emulation layer block device properties, path and length and then triggers a virtio device reconfiguration that is handled by the guest driver which will update the size of the raw block device. With that being said, a sequence which performs resizing/altering of the block underlying host file followed by a PATCH /drives API call is not an atomic operation as the guest can also modify the block file via emulation during the sequence, if the raw block device is mounted or accessible. This feature was designed to work with a cooperative guest in order to effectively simulate hot plug/unplug functionality for block devices. The following guarantees need to be provided: guest did not mount the device guest does not read or write from the raw block device `/dev/vdX` during the update sequence Example sequence that configures a microVM with a placeholder drive and then updates it with the real one: ```bash touch ${rodrivepath} curl --unix-socket ${socket} -i \\ -X PUT \"http://localhost/drives/scratch\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"scratch\\\", \\\"pathonhost\\\": \\\"${rodrivepath}\\\", \\\"isrootdevice\\\": false, \\\"isreadonly\\\": true \\\"rate_limiter\\\": { \\\"bandwidth\\\": { \\\"size\\\": 100000, \\\"onetimeburst\\\": 4096, \\\"refill_time\\\": 150 }, \\\"ops\\\": { \\\"size\\\": 10, \\\"refill_time\\\": 250 } } }\" touch ${updatedrodrive_path} truncate --size ${newsize}M ${updatedrodrivepath}" }, { "data": "${updatedrodrive_path} curl --unix-socket ${socket} -i \\ -X PATCH \"http://localhost/drives/scratch\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"scratch\\\", \\\"pathonhost\\\": \\\"${updatedrodrive_path}\\\" }\" ``` We do not recommend using this feature outside of its supported use case scope. If the required guarantees are not provided, data integrity and potential other issues may arise depending on the actual use case. There are two major aspects that need be considered here: If the guest has the opportunity to perform I/O against the block device during the update sequence it can either read data while it is changed or can overwrite data already written by a host process. For example a truncate operation can be undone if the guest issues a write for the last sector of the raw block device, or the guest application can become inconsistent or/and can create inconsistency in the block device itself. If the atomicity of the operation is guaranteed by using methods to make the microVM quiescence during the update sequence (for example pausing the microVM) the guest itself or block device can still become incosistent from in flight I/O requests in the guest that will be executed after it is resumed. 
Unlike with Virtio block device, with vhost-user block devices, Firecracker does not interact with the underlying block file directly (the vhost-user backend does). It means that changes to the file are not automatically seen by Firecracker. There is a mechanism in the for the backend to notify the frontend about changes in the device config via `VHOSTUSERBACKENDCONFIGCHANGE_MSG` message. This requires an extra UDS socket connection between the frontend and backend used for backend-originated messages. This mechanism is not supported by Firecracker. Instead, Firecracker makes use of the `PATCH /drives` API request to get notified about such changes. Such an API request only includes the required property (`drive_id`), because optional properties are not relevant to vhost-user. Example of a `PATCH` request for a vhost-user drive: ```bash curl --unix-socket ${socket} -i \\ -X PATCH \"http://localhost/drives/scratch\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"scratch\\\" }\" ``` A `PATCH` request to a vhost-user drive will make Firecracker retrieve the new device config from the backend and send a config change notification to the guest." } ]
{ "category": "Runtime", "file_name": "patch-block.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Over time, new versions with improvements to the Rook software will be released and Rook clusters that have already been deployed should be upgraded to the newly released version. Being able to keep the deployed software current is an important part of managing the deployment and ensuring its health. In the theme of Rook's orchestration and management capabilities making the life of storage admins easier, this upgrade process should be both automatic and reliable. This document will describe a proposed design for the upgrading of Rook software as well as pose questions to the community for feedback so that we can deliver on the goal of automatic and reliable upgrades. In order for software upgrade support in Rook to be considered successful, the goals listed below should be met. Note that these goals are for a long term vision and are not all necessarily deliverable within the v0.6 release time frame. Automatic:* When a new version of Rook is released and the admin has chosen to start the upgrade, a live cluster should be able to update all its components to the new version without further user intervention. No downtime: During an upgrade window, there should be zero* downtime of cluster functionality. The upgrade process should be carried out in a rolling fashion so that not all components are being updated simultaneously. The cluster should be maintained in a healthy state the entire time. Migrations:* Breaking changes, as well as schema and data format changes should be handled through an automated migration processes. Rollback:* In the event that the upgrade is not successful, the Rook software should be rolled back to the previous version and cluster health should be restored. Until automated upgrade support is available in Rook, we have authored a user guide that walks you through the steps to upgrade the software in a Rook cluster. Consideration is also provided in the guide for how to verify the cluster remains healthy during and after the upgrade process. Please refer to the to learn more about the current Rook upgrade process. The responsibility for performing and orchestrating an upgrade will be handled by an upgrade controller that runs as part of the Rook operator, in the same pod and process (similar to how the Rook volume provisioner is run). This controller will be responsible for carrying out the sequence of steps for updating each individual Rook component. Additionally, the controller will monitor cluster and component health during the upgrade process, taking corrective steps to restore health, up to and including a full rollback to the old version. In order for the upgrade controller to begin an upgrade process, the following conditions must be met: The cluster should be in a healthy state in accordance with our defined . The upgrade controller should not begin an upgrade if the cluster is currently unhealthy. Metadata for pods must be persistent. If config files and other metadata only resides on an ephemeral empty dir for the pods (i.e., `dataDirHostPath` is not set), then the upgrade controller will not perform an upgrade. This section describes in a broad sense the general sequence of steps for upgrading a Rook cluster after a new Rook software version is released, e.g. `v0.6.1`. Note that this sequence is modeled after the , including the cluster health checks described in the" }, { "data": "The Rook system namespace contains the single control plane for all Rook clusters in the environment. 
This system namespace should be upgraded first before any individual clusters are upgraded. Operator: The operator pod itself is upgraded first since it is the host of the upgrade controller. If there is any new upgrade logic or any migration needed, the new version of the upgrade controller would know how to perform it, so it needs to be updated first. This will be a manual operation by the admin, ensuring that they are ready for their cluster to begin the upgrade process: ```bash kubectl set image deployment/rook-operator rook-operator=rook/rook:v0.6.1 ``` This command will update the image field of the operator's pod template, which will then begin the process of the deployment that manages the operator pod to terminate the pod and start a new one running the new version in its place. Agents: The Rook agents will also be running in the Rook system namespace since they perform operations for all Rook clusters in the environment. When the operator pod comes up on a newer version than the agents, it will use the Kubernetes API to update the image field of the agent's pod template. After this update, it will then terminate each agent pod in a rolling fashion so that their managing daemon set will replace them with a new pod on the new version. Once the operator and all agent pods are running and healthy on the new version, the administrator is free to begin the upgrade process for each of their Rook clusters. The Rook operator, at startup after being upgraded, iterates over each Cluster CRD instance and proceeds to verify desired state. If the Rook system namespace upgrade described above has not yet occurred, then the operator will delay upgrading a cluster until the system upgrade is completed. The operator should never allow a cluster's version to be newer than its own version. The upgrade controller begins a reconciliation to bring the cluster's actual version value in agreement with the desired version, which is the container version of the operator pod. As each step in this sequence begins/ends, the status field of the cluster CRD will be updated to indicate the progress (current step) of the upgrade process. This will help the upgrade controller resume the upgrade if it were to be interrupted. Also, each step should be idempotent so that if the step has already been carried out, there will be no unintended side effects if the step is resumed or run again. Mons: The monitor pods will be upgraded in a rolling fashion. For each monitor, the following actions will be performed by the upgrade controller: The `image` field of the pod template spec will be updated to the new version number. Then the pod will be terminated, allowing the replica set that is managing it to bring up a new pod on the new version to replace it. The controller will verify that the new pod is on the new version, in the `Running` state, and that the monitor returns to `in quorum` and has a Ceph status of `OK`. The cluster health will be verified as a whole before moving to the next" }, { "data": "Ceph Managers: The Ceph manager pod will be upgraded next by updating the `image` field on the pod template spec. The deployment that is managing the pod will then terminate it and start a new pod running the new version. The upgrade controller will verify that the new pod is on the new version, in the `Running` state and that the manager instance shows as `Active` in the Ceph status output. OSDs: The OSD pods will be upgraded in a rolling fashion after the monitors. 
For each OSD, the following actions will take place: The `image` field of the pod template spec will be updated to the new version number. The lifecycle management of OSDs can be done either as a whole by a single daemon set or individually by a replica set per OSD. In either case, each individual OSD pod will be terminated so that its managing controller will respawn a new pod on the new version in its place. The controller will verify that each OSD is running the new version and that they return to the `UP` and `IN` statuses. Placement group health will also be verified to ensure all PGs return to the `active+clean` status before moving on. If the user has installed optional components, such as object storage (RGW) or shared file system (MDS), they will also be upgraded to the new version. They are both managed by deployments, so the upgrade controller will update the `image` field in their pod template specs which then causes the deployment to terminate old pods and start up new pods on the new versions to replace them. Cluster health and object/file functionality will be verified before the upgrade controller moves on to the next instances. As mentioned previously, the manual health verification steps found in the will be used by the upgrade controller, in an automated fashion, to ensure the cluster is healthy before proceeding with the upgrade process. This approach of upgrading one component, verifying health and stability, then upgrading the next component can be viewed as a form of . Here is a quick summary of the standard health checks the upgrade controller should perform: All pods are in the `Running` state and have few, if any, restarts No pods enter a crash loop backoff Overall status: The overall cluster status is `OK` and there are no warning or error status messages displayed. Monitors: All of the monitors are `in quorum` and have individual status of `OK`. OSDs: All OSDs are `UP` and `IN`. MGRs: All Ceph managers are in the `Active` state. Placement groups: All PGs are in the `active+clean` state. To further supplement the upgrade controller's ability to determine health, as well as facilitate the built-in Kubernetes upgrade capabilities, the Rook pods should implement when possible. For pods that implement these probes, the upgrade controller can check them as another data point in determining if things are healthy before proceeding with the upgrade. If the upgrade controller observes the cluster to be in an unhealthy state (that does not recover) during the upgrade process, it will need to roll back components in the cluster to the previous stable version. This is possible due to the rolling/canary approach of the upgrade" }, { "data": "To roll a component back to the previous version, the controller will simply set the `image` field of the pod template spec to the previous version then terminate each pod to allow their managing controller to start a new pod on the old version to replace it. The hope is that cluster health and stability will be restored once it has been rolled back to the previous version, but it is possible that simply rolling back the version may not solve all cases of cluster instability that begin during an upgrade process. We will need more hands on experience with cluster upgrades in order to improve both upgrade reliability and rollback effectiveness. We should consider implementing status commands that will help the user monitor and verify the upgrade progress and status. 
Some examples for potential new commands would be: `rook versions`: This command would return the version of all Rook components in the cluster, so they can see at a glance which components have finished upgrading. This is similar to the . `rook status --upgrade`: This command would return a summary, retrieved from the upgrade controller, of the most recent completed steps and status of the upgrade that it is currently working on. When a breaking change or a data format change occurs, the upgrade controller will have the ability to automatically perform the necessary migration steps during the upgrade process. While migrations are possible, they are certainly not desirable since they require extra upgrade logic to be written and tested, as well as providing new potential paths for failure. Going forward, it will be important for the Rook project to increase our discipline regarding the introduction of breaking changes. We should be very careful about adding any new code that requires a migration during the update process. Kubernetes has some with the `kubectl rolling-update` command. Rook can potentially take advantage of this support for our replication controllers that have multiple stateless pods deployed, such as RGW. This support is likely not a good fit for some of the more critical and sensitive components in the cluster, such as monitors, that require careful observation to ensure health is maintained and quorum is reestablished. If the upgrade controller uses the built-in rolling update support for certain stateless components, it should still verify all cluster health checks before proceeding with the next set of components. The upgrade process should be carefully orchestrated in a controlled manner to ensure reliability and success. Therefore, there should be some locking or synchronization that can ensure that while an upgrade is in progress, other changes to the cluster cannot be made. For example, if the upgrade controller is currently rolling out a new version, it should not be possible to modify the cluster CRD with other changes, such as removing a node from the cluster. This could be done by the operator stopping its watches on all CRDs or it could choose to simply return immediately from CRD events while the upgrade is in progress. There are also some mechanisms within Ceph that can help the upgrade proceed in a controlled manner. For example, the `noout` flag can be set in the Ceph cluster, indicating that while OSDs will be taken down to upgrade them, they should not be marked out of the cluster, which would trigger unnecessary recovery operations. The recommends setting the `noout` flag for the duration of the upgrade. Details of the `noout` flag can be found in the" }, { "data": "For small clusters, the process of upgrading one pod at a time should be sufficient. However, for much later clusters (100+ nodes), this would result in an unacceptably long upgrade window duration. The upgrade controller should be able to batch some of its efforts to upgrade multiple pods at once in order to finish an upgrade in a more timely manner. This batching should not be done across component types (e.g. upgrading mons and OSDs at the same time), those boundaries where the health of the entire cluster is verified should still exist. This batching should also not be done for monitors as there are typically only a handful of monitors servicing the entire cluster and it is not recommended to have multiple monitors down at the same time. 
But the upgrade controller should be able to slowly increase its component update batch size as it proceeds through some other component types, such as OSDs, MDS and RGW. For example, in true canary deployment fashion, a single OSD could be upgraded to the new version and OSD/cluster health will be verified. Then two OSDs could be updated at once and verification occurs again, followed by four OSDs, etc., up to a reasonable upper bound. We do not want too many pods going down at one time, which could potentially impact cluster health and functionality, so a sane upper bound will be important. If an upgrade does not succeed, especially if the rollback effort also fails, we want to have some artifacts that are accessible by the storage administrator to troubleshoot the issue or to reach out to the Rook community for help. Because the upgrade process involves terminating pods and starting new ones, we need some strategies for investigating what happened to pods that may no longer be alive. Listed below are a few techniques for accessing debugging artifacts from pods that are no longer running: `kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}` allows you to retrieve logs from a previous instance of a pod (e.g. a pod that crashed but is not yet terminated) `kubectl get pods --show-all=true` will list all pods, including older versioned pods that were terminated in order to replace them with pods running the newer version. The Rook operator logs (which host the upgrade controller output) should be thorough and verbose about the following: The sequence of actions it took during the upgrade The replication controllers (e.g. daemon set, replica set, deployment) that it modified and the pod names that it terminated All health check status and output it encountered We have demonstrated that Rook is upgradable with the manual process outlined in the upgrade user guide. Fully automated upgrade support has been described within this design proposal, but will likely need to be implemented in an iterative process, with lessons learned along the way from pre-production field experience. The next step will be to implement the happy path where the upgrade controller automatically updates all Rook components in the sequence described above and stops immediately if any health checks fail and the cluster does not return to a healthy functional state. Handling failure cases with rollback as well as handling migrations and breaking changes will likely be implemented in future milestones, along with reliability and stability improvements from field and testing experience. What other steps can be taken to restore cluster health before resorting to rollback? What do we do if rollback doesn't succeed? What meaningful liveness/readiness probes can our pods implement?" } ]
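As a starting point for the liveness/readiness question above, even a coarse probe gives the upgrade controller (and the built-in Kubernetes rolling update machinery) an additional health signal per pod. The snippet below is only an illustrative sketch against a hypothetical monitor pod template, not Rook's actual manifests; the port and timings would need to be validated per daemon type.

```yaml
# Hypothetical probe section for a Ceph monitor pod template (sketch only).
containers:
  - name: mon
    # ... image, command, etc. ...
    readinessProbe:
      tcpSocket:
        port: 6789        # default Ceph monitor port
      initialDelaySeconds: 10
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 6789
      initialDelaySeconds: 30
      periodSeconds: 20
      failureThreshold: 3
```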
{ "category": "Runtime", "file_name": "upgrade.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Thank you for contributing to Rook! --> <!-- STEPS TO FOLLOW: Add a description of the changes (frequently the same as the commit description) Enter the issue number next to \"Resolves #\" below (if there is no tracking issue resolved, remove that section) Review our Contributing documentation at https://rook.io/docs/rook/latest/Contributing/development-flow/ Follow the steps in the checklist below, starting with the Commit Message Formatting. --> <!-- Uncomment this section with the issue number if an issue is being resolved Issue resolved by this Pull Request: Resolves # > Checklist: . updated with breaking and/or notable changes for the next minor release. [ ] Documentation has been updated, if necessary. [ ] Unit tests have been added, if necessary. [ ] Integration tests have been added, if necessary." } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "```python import curvefs cbd1 = curvefs.CBDClient() cbd2 = curvefs.CBDClient() ``` ```python import curvefs cbd = curvefs.CBDClient() cbd.Init(\"/etc/curve/client.conf\") ``` ```python import curvefs cbd = curvefs.CBDClient() cbd.Init(\"/etc/curve/client.conf\") # subsequent examples omit the initialization process user = curvefs.UserInfo_t() user.owner = \"curve\" user.password = \"\" # when the password is empty, it can be omitted cbd.Create(\"/curve\", user, 1010241024*1024) typedef struct UserInfo { char owner[256]; # username char password[256]; # password } UserInfo_t; ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" finfo = curvefs.FileInfo_t() cbd.StatFile(\"/curve\", user, finfo) print finfo.filetype print finfo.length print finfo.ctime typedef struct FileInfo { uint64_t id; uint64_t parentid; int filetype; # volume type uint64_t length; # volume size uint64_t ctime; # volume create time char filename[256]; # volume name char owner[256]; # volume owner int fileStatus; # volume status } FileInfo_t; ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Extend(\"/curve\", user, 2010241024*1024) finfo = curvefs.FileInfo_t() cbd.StatFile(\"/curve\", user, finfo) print finfo.length ``` ```python user = curvefs.UserInfo_t() user.owner = \"user1\" fd = cbd.Open(\"/tmp1\", user) cbd.Close(fd) ``` ```python user = curvefs.UserInfo_t() user.owner = \"user1\" fd = cbd.Open(\"/tmp1\", user) cbd.Write(fd, \"aaaaaaaa\"*512, 0, 4096) cbd.Write(fd, \"bbbbbbbb\"*512, 4096, 4096) cbd.Read(fd\"\", 0, 4096) cbd.Close(fd) ``` Note: The current python api does not support asynchronous reading and writing ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Unlink(\"/curve\", user) ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Recover(\"/curve\", user, 0) ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Rename(user, \"/curve\", \"/curve-new\") ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Mkdir(\"/curvedir\", user) ``` ```python user = curvefs.UserInfo_t() user.owner = \"curve\" cbd.Rmdir(\"/curvedir\", user) ``` ```python files = cbd.Listdir(\"/test\", user) for f in files: print f ``` ```python clusterId = cbd.GetClusterId() print clusterId ``` ```python cbd.UnInit() ``` | Code | Message | description | | :--: | :-- | | | 0 | OK | success | | -1 | EXISTS | file or directory exists | | -2 | FAILED | fail | | -3 | DISABLEDIO | disable io | | -4 | AUTHFAIL | authentication failed | | -5 | DELETING | deleting | | -6 | NOTEXIST | file not exist | | -7 | UNDER_SNAPSHOT | under snapshot | | -8 | NOT_UNDERSNAPSHOT | not under snapshot | | -9 | DELETE_ERROR | delete error | | -10 | NOT_ALLOCATE | segment not allocated | | -11 | NOT_SUPPORT | operation not supported | | -12 | NOT_EMPTY | directory not empty | | -13 | NOSHRINKBIGGER_FILE | no shrinkage | | -14 | SESSION_NOTEXISTS | session not exist | | -15 | FILE_OCCUPIED | file occupied | | -16 | PARAM_ERROR | parameter error | | -17 | INTERNAL_ERROR | internal error | | -18 | CRC_ERROR | CRC error | | -19 | INVALID_REQUEST | parameter invalid | | -20 | DISK_FAIL | disk fail | | -21 | NO_SPACE | no space | | -22 | NOT_ALIGNED | io not aligned | | -23 | BAD_FD | file is being closed, fd unavailable | | -24 | LENGTHNOTSUPPORT | file length is not supported | | -25 | SESSIONNOTEXIST | session not exist, duplicate with -14 | | -26 | STATUSNOTMATCH | status error | | -27 | DELETEBEINGCLONED | delete the file being cloned | | -28 | 
CLIENTNOTSUPPORT_SNAPSHOT | this version of the client does not support snapshot | | -29 | SNAPSHOT_FROZEN | snapshot is disabled | | -100 | UNKNOWN | unknown error |" } ]
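The codes in the table above are plain integers returned by the client calls. A small illustrative sketch (assuming the calls return these codes directly, which is how the table is normally consumed) that distinguishes a missing volume from other failures:

```python
import curvefs

cbd = curvefs.CBDClient()
cbd.Init("/etc/curve/client.conf")

user = curvefs.UserInfo_t()
user.owner = "curve"

finfo = curvefs.FileInfo_t()
ret = cbd.StatFile("/curve", user, finfo)
if ret == 0:            # OK
    print finfo.length
elif ret == -6:         # NOTEXIST: the volume has not been created yet
    cbd.Create("/curve", user, 10 * 1024 * 1024 * 1024)
else:
    print "StatFile failed with code", ret

cbd.UnInit()
```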
{ "category": "Runtime", "file_name": "curve-client-python-api_en.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "This document provides a list of labels and their purpose. These labels can be attached to issues and PRs. The aim behind having such list of labels is, we can sort and distinguish issues. This list will help new community members to dive-in OpenEBS if they wish to contribute to project or investigate the problem. Use case: I am a beginner, I want to contribute to OpenEBS project and I am not sure from where to begin? Answer: Start looking into the issues tagged with labels such as `help/small` , `size/XS`, `kind/unit-test` etc. Solve such issues and raise PR. Labels prefixed with kind indicate the type of issue and PR. Suppose, you want to report a bug then you can tag your issue with `kind/bug` label. | Label | Description | ||| | kind/api | Issue/PR related to API | | kind/bug | If theres an error, failure in workflow, fault etc.| | kind/feature | New / existing / required feature or enhancement | | kind/improve | Needs improvement | | kind/e2e-test | Need/Add end to end test | | kind/unit-test | Need/Add Unit test| | kind/security | Security-related concerns | | kind/testing | Needs/Adds tests | Labels prefixed with help implies that issue needs some help from community/users. Further, you can adjust the level of help required, by specifying small, medium, large etc. | Label | Description | ||| | help/small | Need small help on docs (beginner level)| | help/medium | Need help to fix the code ( Intermediate level) | | help/large | Need help on design changes and code review (Expert level) | Labels prefixed with area indicates that which area issue or PR belong. Here, area implies component of the OpenEBS project. It will be easier to sort an issue based on the area of user's" }, { "data": "| Label | Description | ||| | area/maya | Maya Specific | | area/k8s | Kubernetes specific | | area/ci | Continuous integration specific | | area/high-availability | High availability | | area/storage | Storage specific | | area/machine-learning | Machine learning specific | Labels prefixed with size defines the size of change done in particular PR. | Label | Description | ||| | size/L | Size of change is large/7 commits probably | | size/M | Size of change is medium/5 commits probably | | size/S | Size of change is small/3 commits probably | | size/XL | Size of change is extra large/10 commits probably | | size/XS | Size of change is extra small/1 commit probably | | size/XXL | Size of change is very large/15 commits probably | Please do not club major changes into single PR. It'll be difficult for a reviewer to review a big PR. In that case, reviewer may discard your PR and will ask you create multiple PRs. Labels with prefix priority defines the severity of the issue. Severity level of an issue can be given as a suffix. | Label | Description | ||| | priority/0 | Urgent: Must be fixed in the next build. | | priority/1 | High: Must be fixed in any of the upcoming builds but should be included in the release. | | priority/2 | Medium: Maybe fixed after the release / in the next release. | | priority/3 | Low: May or may not be fixed at all. | Labels with prefix repo points the repository where given issue needs to be fixed. If users have no idea where to file the issue then they can file a issue in `openebs/openebs` later OpenEBS contributors/authors/owners will tag issue with the appropriate repository using repo label. 
| Label | Description | ||| | repo/maya | Related to openebs/maya repository | | repo/mayaserver | Related to openebs/mayaserver repository | | repo/jiva | Related to openebs/jiva repository | | repo/k8s-provisioner| Related to openebs/k8s-provisioner repository| As the name suggests, this contains general category labels and they are self-explanatory. | Label | Description | ||| | architecting | If something needs design changes/brainstorming | | documentation | Related to documentation | | work-in-progress | Means work in progress | | website | Issues related to website | | release-note | Need release note| | ready-for-review | PR is ready for review |" } ]
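To put the label taxonomy above to work when looking for something to pick up, the labels can be combined in a GitHub issue search. The queries below are only examples; the repository and label names would be adjusted as needed.

```text
# In the GitHub issue search box of the relevant OpenEBS repository:
is:issue is:open label:help/small label:kind/unit-test

# Or, with the GitHub CLI (if installed):
gh issue list --repo openebs/openebs --label "help/small" --state open
```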
{ "category": "Runtime", "file_name": "labels-of-issues.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Here is an example of IPAM configuration. ```json { \"cniVersion\":\"0.3.1\", \"name\":\"macvlan-pod-network\", \"plugins\":[ { \"name\":\"macvlan-pod-network\", \"type\":\"macvlan\", \"master\":\"ens256\", \"mode\":\"bridge\", \"mtu\":1500, \"ipam\":{ \"type\":\"spiderpool\", \"logfilepath\":\"/var/log/spidernet/spiderpool.log\", \"logfilemax_size\":\"100\", \"logfilemax_age\":\"30\", \"logfilemax_count\":7, \"log_level\":\"INFO\", \"defaultipv4ippool\": [\"default-ipv4-pool1\",\"default-ipv4-pool2\"], \"defaultipv6ippool\": [\"default-ipv6-pool1\",\"default-ipv6-pool2\"] } } ] } ``` `logfilepath` (string, optional): Path to log file of IPAM plugin, default to `\"/var/log/spidernet/spiderpool.log\"`. `logfilemax_size` (string, optional): Max size of each rotated file, default to `\"100\"`(unit MByte). `logfilemax_age` (string, optional): Max age of each rotated file, default to `\"30\"`(unit Day). `logfilemax_count` (string, optional): Max number of rotated file, default to `\"7\"`. `log_level` (string, optional): Log level, default to `\"INFO\"`. It could be `\"INFO\"`, `\"DEBUG\"`, `\"WARN\"`, `\"ERROR\"`. `defaultipv4ippool` (string array, optional): Default IPAM IPv4 Pool to use. `defaultipv6ippool` (string array, optional): Default IPAM IPv6 Pool to use." } ]
{ "category": "Runtime", "file_name": "plugin-ipam.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "This document introduces EPM fundamental and design. EPM is a module optimized enclave boot time. It is designed as two parts, one is OCI bundles cache management, the other one is enclave cache pool management. This documentation will focus on enclave cache pool management currently, briefly call it as as below. The EPM is used to cache enclaves which can store various enclave runtime, e.g. , (under plan), (under development). Current implementation is based on skeleton. Inclavare-containers provides skeleton as an enclave runtime reference. EPM includes EPM service and EPM client sorted by function. EPM service is used to store the enclave info and as gRPC server implementing EPM interface. There are two communicating channels between EPM service and EPM client. Channel one is used by gRPC as function calling. Channel two is used by transferring enclave file descriptor between both processes. EPM service will listen on epm.sock in unix socket. It will receive the gRPC calling from EPM client. The EPM client can collect the enclaves info by procfs and transfer skeleton enclave info to enclave pool cache in EPM service. Skeleton enclave info includes enclave base address, enclave descriptor file, enclave memory layout, permissions. In Enclave runtime pal API will be used to manage enclave runtime lifecycle. Pal API V3 provides EPM reference implementation. EPM service is a process running on host machine. It implements cache pool interface. EPM service has capability of storing the enclave info, the storing is split into two parts: The enclave file descriptor just can be transferred with the same process who is opening the enclave. This is why to use two stages of storing. pre-storing In pre-storing state, the enclave info is not valid to be used, it just saves the enclave fd to `var EnclavePoolPreStore map[string]*v1alpha1.Enclave` in EPM service by unix domin socket with `sendFd`/`recvFd`. final-storing Once the enclave is released by `paldestroy` in container, the enclave info except enclave fd will be transferred by gRPC interface to `var EnclavePoolStore map[int]*v1alpha1.Enclave` in EPM service. The `paldestroy` will call gRPC interface `SaveFinalCache` to sync with the EPM service, until now the enclave info will be available for other containers. When other container is started, `pal_init` will try to query and get an enclave from enclave pool by gRPC interface. If the enclave pool is not empty, the enclave info will be responded to EPM client. After mmap with parameter enclave file descriptor and enclave map address, the enclave will be valid to container. EPM is started as a process on host. Its based on gRPC inter-process communicating. EPM service is started by command line: sudo epm. It will create and listen on /var/run/epm/epm.sock as following: ```go enclmanager := enclavepool.NewEnclaveCacheManager(cfg.Root) // register process cache pool manager to the manager server" }, { "data": "// start the grpc server with the server options s := grpc.NewServer(serverOpts...) 
// registry and start the cache pool manager server v1alpha1.RegisterEnclavePoolManagerServer(s, &server) // listen and serve lis, err := net.Listen(\"unix\", cfg.GRPC.Address) s.Serve(lis) ``` Enclave proto definition: ```protobuf message Enclave { int64 fd = 1; // Enclave file descriptor in current process int64 nr = 2; // Enclave memory segment numbers repeated Enclavelayout layout = 3; // Enclave memory layout } message Enclavelayout { uint64 addr = 1; // Current enclave segment starting address uint64 size = 2; // Current enclave segment size EnclavePerms prot = 3; // Current enclave segments permission } message EnclavePerms { bool read = 1; bool write = 2; bool execute = 3; bool share = 4; bool private = 5; } ``` Cache and Cache Pool: Cache represents the metadata of a cache managed by enclave pool. ```protobuf message Cache { string type = 1; // Type represents the type of enclave pool string subType = 2; // SubType represents the subtype of enclave pool which represents a more find-grained pool string ID = 3; // ID represents the id of the cache and the id is unique in the same type of enclave pool string savePath = 4; // SavePath represents the absolute path to store the cache Cache parent = 5; // Parent represents the parent cache of the current cache, if do not have a parent the value is nil int64 size = 6; // Size represents the size in bytes of the cache int64 created = 7; // Created represents the creation time of the cache which is the number of seconds elapsed since January 1, 1970 UTC google.protobuf.Any options = 8; // Options is an optional field which can extend any type of data structure } ``` EnclavePoolManager EnclavePoolManager represents an enclave pool. ```protobuf service EnclavePoolManager { // GetCache represents get the specified cache from pool rpc GetCache(GetCacheRequest) returns (GetCacheResponse) {} // PickCache represents pick a suitable cache from pool rpc PickCache(PickCacheRequest) returns (PickCacheResponse){} // SaveCache represents save the data to a cache directory and record the cache metadata rpc SaveCache(SaveCacheRequest) returns (SaveCacheResponse) {} // SaveFinalCache represents save the enclave info which can be used instantly rpc SaveFinalCache(SaveCacheRequest) returns (SaveCacheResponse) {} // ListCache represents list part of or all of the cache metadata rpc ListCache(ListCacheRequest) returns (ListCacheResponse) {} // DeleteCache represents delete the specified cached data and remove the corresponding cache metadata rpc DeleteCache(DeleteCacheRequest) returns (DeleteCacheResponse) {} // LoadCache represents load the specified cache data to work directory rpc LoadCache(LoadCacheRequest) returns (LoadCacheResponse) {} } ``` Enclave info will be stored into the map variant temporarily after `pal_init`. ```go var EnclavePoolPreStore map[string]*v1alpha1.Enclave ``` Enclave info will be stored into the map variant finally after `pal_destroy`. ```go var EnclavePoolStore" }, { "data": "``` Each enclave manager will implement the following interface, currently we just have one type of enclave runtime, we implement it in enclave-cache.go EnclavePool interface EnclavePool represents one type of enclave pool, each kind of enclave pool need implement the EnclavePool interface. 
```go type EnclavePool interface { // GetCache gets the cache by ID GetCache(ID string) (*v1alpha1.Cache, error) // SaveCache saves the data to a cache directory and record the cache metadata SaveCache(sourcePath string, cache *v1alpha1.Cache) error // SaveFinalCache save the final enclave cache info SaveFinalCache(ID string) error // ListCache lists part of or all of the cache metadata ListCache(lastCacheID string, limit int32) ([]*v1alpha1.Cache, error) // DeleteCache deletes the specified cached data and remove the corresponding cache metadata DeleteCache(ID string) error // LoadCache loads the specified cache data to work directory LoadCache(ID, targetPath string) error // GetPoolType gets the pool type of current pool GetPoolType() string } ``` DefaultEnclavePool DefaultEnclavePool is the default implementation of EnclavePool ```go type DefaultEnclavePool struct { Root string Type string Enclaveinfo map[int]*v1alpha1.Enclave CacheMetadata *cache_metadata.Metadata } ``` We just need implement two interfaces: ```go func (d EnclaveCacheManager) GetCache(ID string) (v1alpha1.Cache, error) ``` This function will marshal enclave info in cache pool, and then `sendFd` enclave file descriptor to the receiving process init-runelet in container. By `sendFd`, the enclave file descriptor will be dup to init-runelet process in container which will create and map the enclave by enclave info. ```go func (d EnclaveCacheManager) SaveCache(sourcePath string, cache v1alpha1.Cache) error ``` SaveCache represents enclave info will be saved into EnclavePoolStore, the control flow is as below: Analyze /proc/self/maps, get device /dev/sgx/enclaves mapping info(address, size, permission, flag) Query /proc/self/fd, get device /dev/sgx/enclaves fd GRPC those info above to EPM service, but enclave fd is sent by `sendFd` in unix socket. Calling GetCache in `pal_init`, The GetCache function will go into EPM service by gRPC. cacheResp will be responded with cache info from EPM service to EPM client. Enclave fd will be returned by `sendFd`/`recvFd`. Enclave info will be composed of cacheResp and enclave fd. defines a common interface to interact between rune and enclave runtime.Enclave Runtime PAL API Specification currently supports PAL API V1 V2 V3. Only V3 can support EPM. Lets take skeleton as an example. int `palinit(palattrv3t *attr)` definition in rune/libenclave/internal/runtime/pal/skeleton/liberpal-skeleton.h. The `pal_init` takes responsibility for creating, loading, measuring and initializing enclave. ```go typedef struct { palattrv1t attrv1; int fd; uint64_t addr; } palattrv3_t; ``` At early stage of `palinit`, the 'epm.GetEnclave()' will try to query enclave from EPM service. If the value returned is not empty, it will filled `palattrv3t` structure. it means that enclave is retrieved from the enclave cache pool. `Pal_init` will be done instantly. We have a performance testing based on skeleton as below: From `palinit` to `paldestroy`, an enclave runtime's lifecycle." } ]
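The proto definitions above are enough to sketch how an EPM client (for example, code running on behalf of `pal_init`) would query the enclave pool over the `epm.sock` gRPC endpoint. This is only an illustration: the import path and the `GetCacheRequest` fields are assumptions (the request message fields are not shown in this document), and the enclave fd itself still travels separately over the unix socket via `sendFd`/`recvFd` as described above.

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	// Assumed import path for the package generated from the proto above.
	v1alpha1 "github.com/alibaba/inclavare-containers/epm/pkg/epm-api/v1alpha1"
)

func main() {
	// Dial the EPM service over its unix socket.
	conn, err := grpc.Dial("unix:///var/run/epm/epm.sock", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial epm.sock: %v", err)
	}
	defer conn.Close()

	client := v1alpha1.NewEnclavePoolManagerClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	// Ask the enclave pool for a cached enclave; the field name is illustrative.
	resp, err := client.GetCache(ctx, &v1alpha1.GetCacheRequest{ID: "enclave-cache-id"})
	if err != nil {
		log.Fatalf("GetCache: %v", err)
	}
	log.Printf("got cache metadata: %+v", resp)
}
```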
{ "category": "Runtime", "file_name": "design.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "This design includes the changes to the VolumeSnapshotter api design as required by the feature. The VolumeSnapshotter v2 interface will have two new methods. If there are any additional VolumeSnapshotter API changes that are needed in the same Velero release cycle as this change, those can be added here as well. This API change is needed to facilitate long-running plugin actions that may not be complete when the Execute() method returns. The existing snapshotID returned by CreateSnapshot will be used as the operation ID. This will allow long-running plugin actions to continue in the background while Velero moves on to the next plugin, the next item, etc. Allow for VolumeSnapshotter CreateSnapshot() to initiate a long-running operation and report on operation status. Allowing velero control over when the long-running operation begins. As per the design, a new VolumeSnapshotterv2 plugin `.proto` file will be created to define the GRPC interface. v2 go files will also be created in `plugin/clientmgmt/volumesnapshotter` and `plugin/framework/volumesnapshotter`, and a new PluginKind will be created. The velero Backup process will be modified to reference v2 plugins instead of v1 plugins. An adapter will be created so that any existing VolumeSnapshotter v1 plugin can be executed as a v2 plugin when executing a backup. The v2 VolumeSnapshotter.proto will be like the current v1 version with the following changes: The VolumeSnapshotter service gets two new rpc methods: ``` service VolumeSnapshotter { rpc Init(VolumeSnapshotterInitRequest) returns (Empty); rpc CreateVolumeFromSnapshot(CreateVolumeRequest) returns (CreateVolumeResponse); rpc GetVolumeInfo(GetVolumeInfoRequest) returns (GetVolumeInfoResponse); rpc CreateSnapshot(CreateSnapshotRequest) returns (CreateSnapshotResponse); rpc DeleteSnapshot(DeleteSnapshotRequest) returns (Empty); rpc GetVolumeID(GetVolumeIDRequest) returns (GetVolumeIDResponse); rpc SetVolumeID(SetVolumeIDRequest) returns (SetVolumeIDResponse); rpc Progress(VolumeSnapshotterProgressRequest) returns (VolumeSnapshotterProgressResponse); rpc Cancel(VolumeSnapshotterCancelRequest) returns (google.protobuf.Empty); } ``` To support these new rpc methods, we define new request/response message types: ``` message VolumeSnapshotterProgressRequest { string plugin = 1; string snapshotID = 2; } message VolumeSnapshotterProgressResponse { generated.OperationProgress progress = 1; } message VolumeSnapshotterCancelRequest { string plugin = 1; string operationID = 2; } ``` One new shared message type will be needed, as defined in the v2 BackupItemAction design: ``` message OperationProgress { bool completed = 1; string err = 2; int64 completed = 3; int64 total = 4; string operationUnits = 5; string description = 6; google.protobuf.Timestamp started = 7; google.protobuf.Timestamp updated = 8; } ``` A new PluginKind, `VolumeSnapshotterV2`, will be created, and the backup process will be modified to use this plugin kind. See for more details on implementation plans, including v1 adapters, etc. The included v1 adapter will allow any existing VolumeSnapshotter plugin to work as expected, with no-op Progress() and Cancel() methods. This will be implemented during the Velero 1.11 development cycle." } ]
{ "category": "Runtime", "file_name": "vsv2-design.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "This list captures the set of organizations that are using K8up within their environments. If you are an adopter of K8up and not yet on this list, we encourage you to add your organization here as well! The goal for this list is to be the complete and authoritative source for the entire community of K8up adopters, and give inspiration to others that are earlier in their K8up journey. Contributing to this list is a small effort that has a big impact to the project's growth, maturity, and momentum. Thank you to all adopters and contributors of the K8up project! To add your organization to this list, to directly update this list, or directly in GitHub. This list is sorted in the order that organizations were added to it. | Organization | Contact | Description of Use | | -- | - | - | | | | K8up was born at VSHN because at that time there was no other mature enough backup operator around. Today, K8up is integral part of the service offering and protects precious data every day. | | | | We use K8up as an integral part of our Disaster Recovery procedures. | | | | We've adopted K8up within . This keeps the data of all Lagoon customers safe and restorable. | | | | We use K8up as Backup Operator in our Kubernetes Management product.|" } ]
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "K8up", "subcategory": "Cloud Native Storage" }
[ { "data": "This tutorial will walk you through the creation of server running in a using It assumes you've already downloaded and configured `CRI-O`. If not, see . It also assumes you've set up CNI, and are using the default plugins as described . If you are using a different configuration, results may vary. This section will walk you through installing the following components: crictl - The CRI client for testing. See for details on how to install crictl. ```shell sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version ``` ```text Version: 0.1.0 RuntimeName: cri-o RuntimeVersion: 1.20.0-dev RuntimeApiVersion: v1alpha1 ``` to avoid setting `--runtime-endpoint` when calling crictl, you can run `export CONTAINERRUNTIMEENDPOINT=unix:///var/run/crio/crio.sock` or `cp crictl.yaml /etc/crictl.yaml` from this repo Now that the `CRI-O` components have been installed and configured you are ready to create a Pod. This section will walk you through launching a Redis server in a Pod. Once the Redis server is running we'll use telnet to verify it's working, then we'll stop the Redis server and clean up the Pod. First we need to setup a Pod sandbox using a Pod configuration, which can be found in the `CRI-O` source tree: ```shell cd $GOPATH/src/github.com/cri-o/cri-o ``` In case the file `/etc/containers/policy.json` does not exist on your filesystem, make sure that Skopeo has been installed correctly. You can use a policy template provided in the CRI-O source tree, but it is insecure and it is not to be used on production machines: ```shell sudo mkdir /etc/containers/ sudo cp test/policy.json /etc/containers ``` Next create the Pod and capture the Pod ID for later use: ```shell PODID=$(sudo crictl runp test/testdata/sandboxconfig.json) ``` Use the `crictl` command to get the status of the Pod: ```shell sudo crictl inspectp --output table $POD_ID ``` Output: ```text ID: 3cf919ba84af36642e6cdb55e157a62407dec99d3cd319f46dd8883163048330 Name: podsandbox1 UID: redhat-test-crio Namespace: redhat.test.crio Attempt: 1 Status: SANDBOX_READY Created: 2020-11-12 12:53:41.345961219 +0100 CET IP Addresses: 10.85.0.7 Labels: group -> test io.kubernetes.container.name -> POD Annotations: owner -> hmeng security.alpha.kubernetes.io/seccomp/pod -> unconfined Info: # Redacted ``` Use the `crictl` command to pull the Redis image. ```shell sudo crictl pull quay.io/crio/fedora-crio-ci:latest ``` Create a Redis container from a container configuration and attach it to the Pod created earlier, while capturing the container ID: ```shell CONTAINERID=$(sudo crictl create $PODID test/testdata/containerredis.json test/testdata/sandboxconfig.json) ``` The `crictl create` command will take a few seconds to return because the Redis container needs to be pulled. 
Start the Redis container: ```shell sudo crictl start $CONTAINER_ID ``` Get the status for the Redis container: ```shell sudo crictl inspect $CONTAINER_ID ``` Output: ```text ID: f70e2a71239c6724a897da98ffafdfa4ad850944098680b82d381d757f4bcbe1 Name: podsandbox1-redis State: CONTAINER_RUNNING Created: 32 seconds ago Started: 16 seconds ago Labels: tier -> backend Annotations: pod -> podsandbox1 Info: # Redacted ``` Fetch the Pod IP (can also be obtained via the `inspectp` output above): <!-- markdownlint-disable MD013 --> ```shell POD_IP=$(sudo crictl inspectp --output go-template --template '{{.status.network.ip}}' $POD_ID) ``` <!-- markdownlint-enable MD013 --> Verify the Redis server is responding to `MONITOR` commands: ```shell echo MONITOR | ncat $POD_IP 6379 ``` Output: ```text +OK ``` The Redis logs are logged to the stderr of the crio service, which can be viewed using `journalctl`: ```shell sudo journalctl -u crio --no-pager ``` ```shell sudo crictl stop $CONTAINER_ID sudo crictl rm $CONTAINER_ID ``` Verify the container is gone via: ```shell sudo crictl ps ``` ```shell sudo crictl stopp $POD_ID sudo crictl rmp $POD_ID ``` Verify the pod is gone via: ```shell sudo crictl pods ```" } ]
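For repeated runs of this walkthrough, the teardown steps above can be wrapped in a small helper. This is just a convenience sketch built from the same `crictl` commands; it assumes `$CONTAINER_ID` and `$POD_ID` are still set from the earlier steps.

```shell
#!/usr/bin/env bash
# Best-effort cleanup of the tutorial's container and pod sandbox.
set -u

if [ -n "${CONTAINER_ID:-}" ]; then
  sudo crictl stop "$CONTAINER_ID" || true
  sudo crictl rm "$CONTAINER_ID" || true
fi

if [ -n "${POD_ID:-}" ]; then
  sudo crictl stopp "$POD_ID" || true
  sudo crictl rmp "$POD_ID" || true
fi

# Confirm nothing is left behind.
sudo crictl ps
sudo crictl pods
```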
{ "category": "Runtime", "file_name": "crictl.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "(instances-manage)= When listing the existing instances, you can see their type, status, and location (if applicable). You can filter the instances and display only the ones that you are interested in. ````{tabs} ```{group-tab} CLI Enter the following command to list all instances: incus list You can filter the instances that are displayed, for example, by type, status or the cluster member where the instance is located: incus list type=container incus list status=running incus list location=server1 You can also filter by name. To list several instances, use a regular expression for the name. For example: incus list ubuntu.* Enter to see all filter options. ``` ```{group-tab} API Query the `/1.0/instances` endpoint to list all instances. You can use {ref}`rest-api-recursion` to display more information about the instances: incus query /1.0/instances?recursion=2 You can {ref}`filter <rest-api-filtering>` the instances that are displayed, by name, type, status or the cluster member where the instance is located: incus query /1.0/instances?filter=name+eq+ubuntu incus query /1.0/instances?filter=type+eq+container incus query /1.0/instances?filter=status+eq+running incus query /1.0/instances?filter=location+eq+server1 To list several instances, use a regular expression for the name. For example: incus query /1.0/instances?filter=name+eq+ubuntu.* See for more information. ``` ```` ````{tabs} ```{group-tab} CLI Enter the following command to show detailed information about an instance: incus info <instance_name> Add `--show-log` to the command to show the latest log lines for the instance: incus info <instance_name> --show-log ``` ```{group-tab} API Query the following endpoint to show detailed information about an instance: incus query /1.0/instances/<instance_name> See for more information. ``` ```` ````{tabs} ```{group-tab} CLI Enter the following command to start an instance: incus start <instance_name> You will get an error if the instance does not exist or if it is running already. To immediately attach to the console when starting, pass the `--console` flag. For example: incus start <instance_name> --console See {ref}`instances-console` for more information. ``` ```{group-tab} API To start an instance, send a PUT request to change the instance state: incus query --request PUT /1.0/instances/<instance_name>/state --data '{\"action\":\"start\"}' <!-- Include start monitor status --> The return value of this query contains an operation ID, which you can use to query the status of the operation: incus query /1.0/operations/<operation_ID> Use the following query to monitor the state of the instance: incus query" }, { "data": "See and for more information. <!-- Include end monitor status --> ``` ```` (instances-manage-stop)= `````{tabs} ````{group-tab} CLI Enter the following command to stop an instance: incus stop <instance_name> You will get an error if the instance does not exist or if it is not running. ```` ````{group-tab} API To stop an instance, send a PUT request to change the instance state: incus query --request PUT /1.0/instances/<instance_name>/state --data '{\"action\":\"stop\"}' % Include content from above ```{include} ./instances_manage.md :start-after: <!-- Include start monitor status --> :end-before: <!-- Include end monitor status --> ``` ```` ````` If you don't need an instance anymore, you can remove it. The instance must be stopped before you can delete it. 
`````{tabs} ```{group-tab} CLI Enter the following command to delete an instance: incus delete <instance_name> ``` ```{group-tab} API To delete an instance, send a DELETE request to the instance: incus query --request DELETE /1.0/instances/<instance_name> See for more information. ``` ````` ```{caution} This command permanently deletes the instance and all its snapshots. ``` There are different ways to prevent accidental deletion of instances: To protect a specific instance from being deleted, set {config:option}`instance-security:security.protection.delete` to `true` for the instance. See {ref}`instances-configure` for instructions. In the CLI client, you can create an alias to be prompted for approval every time you use the command: incus alias add delete \"delete -i\" If you want to wipe and re-initialize the root disk of your instance but keep the instance configuration, you can rebuild the instance. Rebuilding is only possible for instances that do not have any snapshots. Stop your instance before rebuilding it. ````{tabs} ```{group-tab} CLI Enter the following command to rebuild the instance with a different image: incus rebuild <imagename> <instancename> Enter the following command to rebuild the instance with an empty root disk: incus rebuild <instance_name> --empty For more information about the `rebuild` command, see . ``` ```{group-tab} API To rebuild the instance with a different image, send a POST request to the instance's `rebuild` endpoint. For example: incus query --request POST /1.0/instances/<instancename>/rebuild --data '{\"source\": {\"alias\":\"<imagealias>\",\"server\":\"<server_URL>\", protocol:\"simplestreams\"}}' To rebuild the instance with an empty root disk, specify the source type as `none`: incus query --request POST /1.0/instances/<instance_name>/rebuild --data '{\"source\": {\"type\":\"none\"}}' See for more information. ``` ````" } ]
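Putting a few of these commands together, a small helper that stops an instance if it is still running and then deletes it might look like the sketch below. It is illustrative only; it assumes the instance is not protected by `security.protection.delete` and that losing the instance and its snapshots is acceptable.

```bash
#!/usr/bin/env bash
# Usage: ./remove-instance.sh <instance_name>
set -eu
name="$1"

# Stop the instance if it is running; ignore the error if it is already stopped.
incus stop "$name" 2>/dev/null || true

# Delete it (this fails if security.protection.delete=true, which is the point of that setting).
incus delete "$name"
```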
{ "category": "Runtime", "file_name": "instances_manage.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "These guidelines are new and may change. This note will be removed when consensus is reached. Not all existing code will comply with this style guide, but new code should. Further, it is a goal to eventually update all existing code to be in compliance. All code, unless it substantially increases the line count or complexity, should use early exits from loops and functions where possible. All Go code should comply with the and guides, as well as the additional guidelines described below. Mutexes should be named mu or xxxMu. Mutexes as a general rule should not be exported. Instead, export methods which use the mutexes to avoid leaky abstractions. Mutexes should be sibling fields to the fields that they protect. Mutexes should not be declared as global variables, instead use a struct (anonymous ok, but naming conventions still apply). Mutexes should be ordered before the fields that they protect. Mutexes should have a comment on their declaration explaining any ordering requirements (or pointing to where this information can be found), if applicable. There is no need for a comment explaining which fields are protected. Each field or variable protected by a mutex should state as such in a comment on the field or variable declaration. Functions with special entry conditions (e.g., a lock must be held) should state these conditions in a `Preconditions:` comment block. One condition per line; multiple conditions are specified with a bullet (`*`). Functions with notable exit conditions (e.g., a `Done` function must eventually be called by the caller) can similarly have a `Postconditions:` block. Unused returns should be explicitly ignored with underscores. If there is a function which is commonly used without using its return(s), a wrapper function should be declared which explicitly ignores the returns. That said, in many cases, it may make sense for the wrapper to check the returns. Built-in types should use their associated verbs (e.g. %d for integral types), but other types should use a %v variant, even if they implement fmt.Stringer. The built-in `error` type should use %w when formatted with `fmt.Errorf`, but only then. Comments should be wrapped at 80 columns with a 2 space tab size. Code does not need to be wrapped, but if wrapping would make it more readable, it should be wrapped with each subcomponent of the thing being wrapped on its own line. For example, if a struct is split between lines, each field should be on its own line. ```go _ = exec.Cmd{ Path: \"/foo/bar\", Args: []string{\"-baz\"}, } ``` C++ code should conform to the and the guidelines described for tests." } ]
{ "category": "Runtime", "file_name": "style.md", "project_name": "gVisor", "subcategory": "Container Runtime" }
[ { "data": "title: Prometheus Monitoring Each Rook Ceph cluster has some built in metrics collectors/exporters for monitoring with . If you do not have Prometheus running, follow the steps below to enable monitoring of Rook. If your cluster already contains a Prometheus instance, it will automatically discover Rook's scrape endpoint using the standard `prometheus.io/scrape` and `prometheus.io/port` annotations. !!! attention This assumes that the Prometheus instances is searching all your Kubernetes namespaces for Pods with these annotations. If prometheus is already installed in a cluster, it may not be configured to watch for third-party service monitors such as for Rook. Normally you should be able to add the prometheus annotations `prometheus.io/scrape=true` and `prometheus.io/port={port}` and prometheus would automatically configure the scrape points and start gathering metrics. If prometheus isn't configured to do this, see the . First the Prometheus operator needs to be started in the cluster so it can watch for our requests to start monitoring Rook and respond by deploying the correct Prometheus pods and configuration. A full explanation can be found in the , but the quick instructions can be found here: ```console kubectl create -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.71.1/bundle.yaml ``` !!! note If the Prometheus Operator is already present in your cluster, the command provided above may fail. For a detailed explanation of the issue and a workaround, please refer to . This will start the Prometheus operator, but before moving on, wait until the operator is in the `Running` state: ```console kubectl get pod ``` Once the Prometheus operator is in the `Running` state, proceed to the next section to create a Prometheus instance. With the Prometheus operator running, we can create service monitors that will watch the Rook cluster. There are two sources for metrics collection: Prometheus manager module: It is responsible for exposing all metrics other than ceph daemons performance counters. Ceph exporter: It is responsible for exposing only ceph daemons performance counters as prometheus metrics. From the root of your locally cloned Rook repo, go the monitoring directory: ```console $ git clone --single-branch --branch master https://github.com/rook/rook.git cd rook/deploy/examples/monitoring ``` Create the service monitor as well as the Prometheus server pod and service: ```console kubectl create -f service-monitor.yaml kubectl create -f exporter-service-monitor.yaml kubectl create -f prometheus.yaml kubectl create -f prometheus-service.yaml ``` Ensure that the Prometheus server pod gets created and advances to the `Running` state before moving on: ```console kubectl -n rook-ceph get pod prometheus-rook-prometheus-0 ``` Configure the Prometheus endpoint so the dashboard can retrieve metrics from Prometheus with two settings: `prometheusEndpoint`: The url of the Prometheus instance `prometheusEndpointSSLVerify`: Whether SSL should be verified if the Prometheus server is using https The following command can be used to get the Prometheus url: ```console echo \"http://$(kubectl -n rook-ceph -o" }, { "data": "get pod prometheus-rook-prometheus-0):30900\" ``` Following is an example to configure the Prometheus endpoint in the CephCluster CR. ```YAML spec: dashboard: prometheusEndpoint: http://192.168.61.204:30900 prometheusEndpointSSLVerify: true ``` !!! note It is not recommended to consume storage from the Ceph cluster for Prometheus. 
If the Ceph cluster fails, Prometheus would become unresponsive and thus not alert you of the failure. Once the Prometheus server is running, you can open a web browser and go to the URL that is output from this command: ```console echo \"http://$(kubectl -n rook-ceph -o jsonpath={.status.hostIP} get pod prometheus-rook-prometheus-0):30900\" ``` You should now see the Prometheus monitoring website. Click on `Graph` in the top navigation bar. In the dropdown that says `insert metric at cursor`, select any metric you would like to see, for example `cephclustertotalusedbytes` Click on the `Execute` button. Below the `Execute` button, ensure the `Graph` tab is selected and you should now see a graph of your chosen metric over time. You can find Prometheus Consoles for and from Ceph here: . A guide to how you can write your own Prometheus consoles can be found on the official Prometheus site here: . To enable the Ceph Prometheus alerts via the helm charts, set the following properties in values.yaml: rook-ceph chart: `monitoring.enabled: true` rook-ceph-cluster chart: `monitoring.enabled: true` `monitoring.createPrometheusRules: true` Alternatively, to enable the Ceph Prometheus alerts with example manifests follow these steps: Create the RBAC and prometheus rules: ```console kubectl create -f deploy/examples/monitoring/rbac.yaml kubectl create -f deploy/examples/monitoring/localrules.yaml ``` Make following changes to your CephCluster object (e.g., `cluster.yaml`). ```YAML apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph [...] spec: [...] monitoring: enabled: true [...] ``` Deploy or update the CephCluster object. ```console kubectl apply -f cluster.yaml ``` !!! note This expects the Prometheus Operator and a Prometheus instance to be pre-installed by the admin. The Prometheus alerts can be customized with a post-processor using tools such as . For example, first extract the helm chart: ```console helm template -f values.yaml rook-release/rook-ceph-cluster > cluster-chart.yaml ``` Now create the desired customization configuration files. This simple example will show how to update the severity of a rule, add a label to a rule, and change the `for` time value. Create a file named kustomization.yaml: ```yaml patches: path: modifications.yaml target: group: monitoring.coreos.com kind: PrometheusRule name: prometheus-ceph-rules version: v1 resources: cluster-chart.yaml ``` Create a file named modifications.yaml ```yaml op: add path: /spec/groups/0/rules/0/labels value: my-label: foo severity: none op: add path: /spec/groups/0/rules/0/for value: 15m ``` Finally, run kustomize to update the desired prometheus rules: ```console kustomize build . > updated-chart.yaml kubectl create -f updated-chart.yaml ``` The dashboards have been created by . For feedback on the dashboards please reach out to him on the . !!! note The dashboards are only compatible with Grafana 7.2.0 or" }, { "data": "Also note that the dashboards are updated from time to time, to fix issues and improve them. The following Grafana dashboards are available: When updating Rook, there may be updates to RBAC for monitoring. It is easy to apply the changes with each update or upgrade. This should be done at the same time you update Rook common resources like `common.yaml`. ```console kubectl apply -f deploy/examples/monitoring/rbac.yaml ``` !!! 
hint This is updated automatically if you are upgrading via the helm chart To clean up all the artifacts created by the monitoring walk-through, copy/paste the entire block below (note that errors about resources \"not found\" can be ignored): ```console kubectl delete -f service-monitor.yaml kubectl delete -f prometheus.yaml kubectl delete -f prometheus-service.yaml kubectl delete -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.71.1/bundle.yaml ``` Then the rest of the instructions in the can be followed to finish cleaning up. Tectonic strongly discourages the `tectonic-system` Prometheus instance to be used outside their intentions, so you need to create a new yourself. After this you only need to create the service monitor as stated above. To integrate CSI liveness into ceph monitoring we will need to deploy a service and service monitor. ```console kubectl create -f csi-metrics-service-monitor.yaml ``` This will create the service monitor to have prometheus monitor CSI !!! note Please note that the liveness sidecar is disabled by default. To enable it set `CSIENABLELIVENESS` to `true` in the Rook operator settings (operator.yaml). RBD per-image IO statistics collection is disabled by default. This can be enabled by setting `enableRBDStats: true` in the CephBlockPool spec. Prometheus does not need to be restarted after enabling it. If Prometheus needs to select specific resources, we can do so by injecting labels into these objects and using it as label selector. ```YAML apiVersion: ceph.rook.io/v1 kind: CephCluster metadata: name: rook-ceph namespace: rook-ceph [...] spec: [...] labels: monitoring: prometheus: k8s [...] ``` Using metrics exported from the Prometheus service, the horizontal pod scaling can use the custom metrics other than CPU and memory consumption. It can be done with help of Prometheus Scaler provided by the . See the for details. Following is an example to autoscale RGW: ```YAML apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: rgw-scale namespace: rook-ceph spec: scaleTargetRef: kind: Deployment name: rook-ceph-rgw-my-store-a # deployment for the autoscaling minReplicaCount: 1 maxReplicaCount: 5 triggers: type: prometheus metadata: serverAddress: http://rook-prometheus.rook-ceph.svc:9090 metricName: collectingcephrgw_put query: | sum(rate(cephrgwput[2m])) # prometheus query used for autoscaling threshold: \"90\" ``` !!! warning During reconciliation of a `CephObjectStore`, the Rook Operator will reset the replica count for RGW which was set by horizontal pod scaler. The horizontal pod autoscaler will change the again once it re-evaluates the rule. This can result in a performance hiccup of several seconds after a reconciliation. This is briefly discussed (here)[https://github.com/rook/rook/issues/10001]" } ]
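The Rook walkthrough above mentions that an annotation-driven Prometheus setup discovers the scrape endpoint through the standard `prometheus.io/scrape` and `prometheus.io/port` annotations, but never shows what that looks like. The following is only a hand-written sketch of such an annotated Service; the service name, selector, and port are assumptions (the Ceph mgr Prometheus module commonly listens on 9283), so verify them against your cluster and prefer the bundled `service-monitor.yaml` when the Prometheus Operator is in use.

```yaml
# Hypothetical example: a Service annotated so an annotation-based Prometheus
# configuration scrapes the Ceph mgr metrics endpoint.
# Name, selector, and port are assumptions; check your own cluster.
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr            # assumed metrics service name
  namespace: rook-ceph
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9283"   # assumed Ceph mgr Prometheus module port
spec:
  selector:
    app: rook-ceph-mgr           # assumed label on the mgr pods
  ports:
    - name: http-metrics
      port: 9283
      targetPort: 9283
```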
{ "category": "Runtime", "file_name": "ceph-monitoring.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "This document contains and defines the rules that have to be followed by any contributor to the Zenko CTST test suite. CTST pulls from the repo to provide APIs for AWS Standards, usable in the 'Steps' of the test framework. Beyond usual Cucumber.js practice of using worlds, steps and features, this test suite has some specifics to consider: The tests must be idempotent: they must not conflict with each other, here by using random bucket/object names. It should be possible to rerun the tests after the end of the CTST test suite: tests should not be dependant on environment." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Zenko", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Thanks for sending a pull request! Please be aware that we're following the Kubernetes guidelines of contributing to this project. This means that we have to use this mandatory template for all of our pull requests. Please also make sure you've read and understood our contributing guidelines (https://github.com/cri-o/cri-o/blob/main/CONTRIBUTING.md) as well as ensuring that all your commits are signed with `git commit -s`. Here are some additional tips for you: If this is your first time, please read our contributor guidelines: https://git.k8s.io/community/contributors/guide#your-first-contribution and developer guide https://git.k8s.io/community/contributors/devel/development.md#development-guide Please label this pull request according to what type of issue you are addressing, especially if this is a release targeted pull request. For reference on required PR/issue labels, read here: https://git.k8s.io/community/contributors/devel/sig-release/release.md#issuepr-kind-label If you want faster PR reviews, read how: https://git.k8s.io/community/contributors/guide/pull-requests.md#best-practices-for-faster-reviews If the PR is unfinished, see how to mark it: https://git.k8s.io/community/contributors/guide/pull-requests.md#marking-unfinished-pull-requests --> <!-- Uncomment only one `/kind <>` line, hit enter to put that in a new line, and remove leading whitespace from that line: --> <!-- /kind api-change /kind bug /kind ci /kind cleanup /kind dependency-change /kind deprecation /kind design /kind documentation /kind failing-test /kind feature /kind flake /kind other --> <!-- Automatically closes linked issue when PR is merged. Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`. --> <!-- Fixes # or None --> <!-- If no, just write `None` in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string \"action required\". For more information on release notes see: https://git.k8s.io/community/contributors/guide/release-notes.md --> ```release-note ```" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "This guide is mainly about how to use the image to quickly build the development environment of iSulad. Reduce the cost of environmental preparation. Take the docker image of openEuler-21.03 as an example. First download the image from the official website: `wget https://repo.openeuler.org/openEuler-21.03/dockerimg/x8664/openEuler-docker.x86_64.tar.xz` Prepare to build the base image with the provided by the docs ```bash $ mkdir -p ./build-home $ cp docs/dockerfiles/isuladbuildin_openeuler.Dockerfile ./build-home $ pushd build-home $ docker build -t isuladbuild:v1 -f isuladbuildinopeneuler.Dockerfile . $ popd ``` ```bash $ docker run -itd -v /root/tmp/:/var/lib/isulad -v /sys/fs/cgroup/:/sys/fs/cgroup -v /lib/modules:/lib/modules --tmpfs /tmp:exec,mode=777 --tmpfs /run:exe c,mode=777 --privileged isulad_build:v1 sh ``` Note A host directory must be mounted to the container for isulad's working directory; Requires privileged permission; Must be linked to the modules directory; ```bash git clone https://gitee.com/src-openeuler/lxc.git pushd lxc rm -rf lxc-4.0.3 ./apply-patches || exit 1 pushd lxc-4.0.3 ./autogen.sh && ./configure || exit 1 make -j $(nproc) || exit 1 make install popd popd ``` ```bash ldconfig git clone https://gitee.com/openeuler/lcr.git pushd lcr rm -rf build mkdir build pushd build cmake -DDEBUG=ON -DCMAKESKIPRPATH=TRUE ../ || exit 1 make -j $(nproc) || exit 1 make install popd popd ``` ```bash ldconfig git clone https://gitee.com/openeuler/clibcni.git pushd clibcni rm -rf build mkdir build pushd build cmake -DDEBUG=ON ../ || exit 1 make -j $(nproc) || exit 1 make install popd popd ``` ```bash mkdir -p ~/.cargo touch ~/.cargo/config echo \"[source.crates-io]\" >> ~/.cargo/config echo \"[source.local-registry]\" >> ~/.cargo/config echo \"directory = \\\"vendor\\\"\" >> ~/.cargo/config ldconfig rm -rf lib-shim-v2 git clone https://gitee.com/src-openeuler/lib-shim-v2.git pushd lib-shim-v2 tar -zxf lib-shim-v2-0.0.1.tar.gz pushd lib-shim-v2-0.0.1 make lib || exit 1 make install popd popd ``` ```bash ldconfig rm -rf iSulad git clone https://gitee.com/openeuler/iSulad.git pushd iSulad rm -rf build mkdir build pushd build cmake -DDEBUG=ON -DENABLEUT=ON -DENABLESHIM_V2=ON ../ || exit 1 make -j $(nproc) || exit 1 make install ctest -V || exit 1 popd popd ```" } ]
{ "category": "Runtime", "file_name": "build_guide_with_docker_image.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "This document describes security aspects of Sysbox system containers. See also the documentation for more on security. System container processes are confined to the directory hierarchy associated with the container's root filesystem, plus any configured mounts (e.g., Docker volumes or bind-mounts). This is known as the container's root filesystem jail (aka \"rootfs jail\"). System containers deployed with Sysbox always use all Linux namespaces. That is, when you deploy a system container with Sysbox, (e.g., `docker run --runtime=sysbox-runc -it alpine:latest`), Sysbox will always use all Linux namespaces to create the container. This is done for enhanced container isolation. It's one area where Sysbox deviates from the OCI specification, which leaves it to the higher layer (e.g., Docker + containerd) to choose the namespaces that should be enabled for the container. The table below shows a comparison on namespace usage between system containers and regular Docker containers. | Namespace | Docker + Sysbox | Docker + OCI runc | | | | -- | | mount | Yes | Yes | | pid | Yes | Yes | | uts | Yes | Yes | | net | Yes | Yes | | ipc | Yes | Yes | | cgroup | Yes | No | | user | Yes | No by default; Yes when Docker engine is configured with userns-remap mode | By virtue of using the Linux user namespace, system containers get: Stronger container isolation (e.g., root in the container maps to an unprivileged user on the host). The root user inside the container has full privileges (i.e., all capabilities) within the container. Refer to the kernel's manual page for more info. The Linux cgroup namespace helps isolation by hiding host paths in cgroup information exposed inside the system container via `/proc`. Refer to the kernel's manual page for more info. The Linux user namespace works by mapping user-IDs and group-IDs between the container and the host. Sysbox performs the mapping as follows: If the (e.g., Docker or CRI-O) tells Sysbox to run the container with the user-namespace enabled, Sysbox honors the user-ID mappings provided by the container manager. Otherwise, Sysbox automatically enables the user-namespace in the container and allocates user-ID mappings for it. In either case, Sysbox uses the kernel's ID-mapped mounts feature or the shiftfs kernel module (depending of which is available) to ensure host files mounted into the container show up with the proper user-ID and group-ID inside the container's user-namespace. The Sysbox Community Edition (Sysbox-CE) uses a common user-ID mapping for all system containers. In other words, the root user in all containers is mapped to the same user-ID on the host. While this provides strong container-to-host isolation (i.e., root in the container is not root in the host), container-to-container is not as strong as it could be. 
The Sysbox Enterprise Edition (Sysbox-EE) improves on this by providing exclusive user-ID mappings to each" }, { "data": "In order to provide strong cross-container isolation, Sysbox-EE allocates exclusive userns ID mappings to each container, By way of example: if we launch two containers with Sysbox-EE, notice the ID mappings assigned to each: $ docker run --runtime=sysbox-runc --name=syscont1 --rm -d alpine tail -f /dev/null 16c1abcc48259a47ef749e2d292ceef6a9f7d6ab815a6a5d12f06efc3c09d0ce $ docker run --runtime=sysbox-runc --name=syscont2 --rm -d alpine tail -f /dev/null 573843fceac623a93278aafd4d8142bf631bc1b214b1bcfcd183b1be77a00b69 $ docker exec syscont1 cat /proc/self/uid_map 0 165536 65536 $ docker exec syscont2 cat /proc/self/uid_map 0 231072 65536 Each system container gets an exclusive range of 64K user IDs. For syscont1, user IDs \\[0, 65536] are mapped to host user IDs \\[165536, 231071]. And for syscont2 user IDs \\[0, 65536] are mapped to host user IDs \\[231072, 65536]. The same applies to the group IDs. The reason 64K user-IDs are given to each system container is to allow the container to have IDs ranging from the `root` (ID 0) all the way up to user `nobody` (ID 65534). Exclusive ID mappings ensure that if a container process somehow escapes the container's root filesystem jail, it will find itself without any permissions to access any other files in the host or in other containers. The exclusive host user IDs chosen by Sysbox-EE are obtained from the `/etc/subuid` and `/etc/subgid` files: $ more /etc/subuid cesar:100000:65536 sysbox:165536:268435456 $ more /etc/subgid cesar:100000:65536 sysbox:165536:268435456 These files are automatically configured by Sysbox during installation (or more specifically when the `sysbox-mgr` component is started during installation) By default, Sysbox reserves a range of 268435456 user IDs (enough to accommodate 4K system containers, each with 64K user IDs). If more than 4K containers are running at the same time, Sysbox will by default re-use user-ID mappings from the range specified in `/etc/subuid`. The same applies to group-ID mappings. In this scenario multiple system containers may share the same user-ID mapping, reducing container-to-container isolation a bit. For extra security, it's possible to configure Sysbox to not re-use mappings and instead fail to launch new system containers until host user IDs become available (i.e., when other system containers are stopped). The size of the reserved ID range, as well as the policy in case the range is exhausted, is configurable via the sysbox-mgr command line. If you wish to change this, See `sudo sysbox-mgr --help` and use the . Sysbox performs partial virtualization of procfs inside the system container. This is a key differentiating feature of Sysbox. The goal for this virtualization is to expose procfs as read-write yet ensure the system container processes cannot modify system-wide settings on the host via procfs. The virtualization is \"partial\" because many resources under procfs are already \"namespaced\" (i.e., isolated) by the Linux kernel. However, many others are not, and it is for those that Sysbox performs the virtualization. The virtualization of procfs inside the container not only occurs on \"/proc\", but also in any other procfs mountpoints inside the system container. By extension, this means that inner containers also see a virtualized procfs, ensuring they too are well isolated. 
Procfs virtualization is independent among system containers, meaning that each system container gets its own virtualized procfs: any changes it does to it are not seen in other system containers. Sysbox takes care of exposing and tracking the procfs contexts within each system container. Sysbox performs partial virtualization of sysfs inside the system container, for the same reasons as with procfs (see prior" }, { "data": "In addition, the `/sys/fs/cgroup` sub-directory is mounted read-write to allow system container processes to assign cgroup resources within the system container. System container processes can only use cgroups to assign a subset of the resources assigned to the system container itself. Processes inside the system container can't modify cgroup resources assigned to the system container itself. By default, a system container's init process configured with user-ID 0 (root) always starts with all capabilities enabled. ```console $ docker run --runtime=sysbox-runc -it alpine:latest / # grep -i cap /proc/self/status CapInh: 0000003fffffffff CapPrm: 0000003fffffffff CapEff: 0000003fffffffff CapBnd: 0000003fffffffff CapAmb: 0000003fffffffff ``` Note that the system container's Linux user-namespace ensures that these capabilities are only applicable to resources assigned to the system container itself. The process has no capabilities on resources not assigned to the system container. A system container's init process configured with a non-root user-ID starts with no capabilities. For example, when deploying system containers with Docker: ```console $ docker run --runtime=sysbox-runc --user 1000 -it alpine:latest / $ grep -i cap /proc/self/status CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: 0000003fffffffff CapAmb: 0000000000000000 ``` This mimics the way capabilities are assigned to processes on a physical host or VM. Note that starting with Sysbox v0.5.0, it's possible to modify this behavior to have Sysbox honor the capabilities passed to it by the higher level container manager via the OCI spec. See the for more on this. System containers deployed with Sysbox allow a minimum set of 300+ syscalls, using Linux seccomp. Significant syscalls blocked within system containers are the same as those listed , except that system container allow these system calls too: mount umount2 add_key request_key keyctl pivot_root gethostname sethostname setns unshare It's currently not possible to reduce the set of syscalls allowed within a system container (i.e., the Docker `--security-opt seccomp=<profile>` option is not supported). Sysbox performs selective system call interception on a few \"control-path\" system calls, such as `mount` and `umount2`. This is another key feature of Sysbox, and it's done in order to perform proper and virtualization. Sysbox does this very selectively in order to ensure performance of processes inside the system container is not impacted. The following devices are always present in the system container: /dev/null /dev/zero /dev/full /dev/random /dev/urandom /dev/tty Additional devices may be added by the container engine. For example, when deploying system containers with Docker, you typically see the following devices in addition to the ones listed above: /dev/console /dev/pts /dev/mqueue /dev/shm Sysbox does not currently support exposing host devices inside system containers (e.g., via the `docker run --device` option). We are working on adding support for this. 
System container resource consumption can be limited via cgroups. Sysbox supports both cgroups v1 and v2, and in both cases when managed by systemd or not. Cgroups can be used to balance resource consumption as well as to prevent denial-of-service attacks in which a buggy or compromised system container consumes all available resources in the system. For example, when using Docker to deploy system containers, the `docker run --cpu`, `--memory`, `--blkio*`, etc., settings can be used for this purpose. Filesystem mounts that make up the" }, { "data": "mounts setup at container creation time) are considered special, meaning that Sysbox places restrictions on the operations that may be done on them from within the container. This ensures processes inside the container can't modify those mounts in a way that would weaken or break container isolation, even though these processes may be running as root with full capabilities inside the container and thus have access to the `mount` and `umount` syscalls. We call these \"immutable mounts\". For example, assume we launch a system container with a read-only mount of a host Docker volume called `myvol`: ```console $ docker run --runtime=sysbox-runc -it --rm --hostname=syscont -v myvol:/mnt/myvol:ro ubuntu root@syscont:/# ``` Inside the container, you'll see that the root filesystem is made up of several mounts setup implicitly by Sysbox, as well as the `myvol` mount: ```console root@syscont:/# findmnt TARGET SOURCE FSTYPE OPTIONS / . shiftfs rw,relatime |-/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime | |-/sys/firmware tmpfs tmpfs ro,relatime,uid=165536,gid=165536 | |-/sys/fs/cgroup tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,mode=755,uid=165536,gid=165536 | | |-/sys/fs/cgroup/systemd systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd | | |-/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory | | |-/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset | | |-/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio | | |-/sys/fs/cgroup/netcls,netprio cgroup cgroup rw,nosuid,nodev,noexec,relatime,netcls,netprio | | |-/sys/fs/cgroup/perfevent cgroup cgroup rw,nosuid,nodev,noexec,relatime,perfevent | | |-/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb | | |-/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct | | |-/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer | | |-/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices | | |-/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids | | `-/sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma | |-/sys/kernel/config tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=1024k,uid=165536,gid=165536 | |-/sys/kernel/debug tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=1024k,uid=165536,gid=165536 | |-/sys/kernel/tracing tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=1024k,uid=165536,gid=165536 | `-/sys/module/nfconntrack/parameters/hashsize sysboxfs[/sys/module/nfconntrack/parameters/hashsize] fuse rw,nosuid,nodev,relatime,userid=0,groupid=0,defaultpermissions,allowother |-/proc proc proc rw,nosuid,nodev,noexec,relatime | |-/proc/bus proc[/bus] proc ro,relatime | |-/proc/fs proc[/fs] proc ro,relatime | |-/proc/irq proc[/irq] proc ro,relatime | |-/proc/sysrq-trigger proc[/sysrq-trigger] proc ro,relatime | |-/proc/asound tmpfs tmpfs ro,relatime,uid=165536,gid=165536 | |-/proc/acpi tmpfs tmpfs 
ro,relatime,uid=165536,gid=165536 | |-/proc/keys udev[/null] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/proc/timerlist udev[/null] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nrinodes=492795,mode=755 | |-/proc/scheddebug udev[/null] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nrinodes=492795,mode=755 | |-/proc/scsi tmpfs tmpfs ro,relatime,uid=165536,gid=165536 | |-/proc/swaps sysboxfs[/proc/swaps] fuse rw,nosuid,nodev,relatime,userid=0,groupid=0,defaultpermissions,allowother | |-/proc/sys sysboxfs[/proc/sys] fuse rw,nosuid,nodev,relatime,userid=0,groupid=0,defaultpermissions,allowother | `-/proc/uptime sysboxfs[/proc/uptime] fuse rw,nosuid,nodev,relatime,userid=0,groupid=0,defaultpermissions,allowother |-/dev tmpfs tmpfs rw,nosuid,size=65536k,mode=755,uid=165536,gid=165536 | |-/dev/console devpts[/0] devpts rw,nosuid,noexec,relatime,gid=165541,mode=620,ptmxmode=666 | |-/dev/mqueue mqueue mqueue rw,nosuid,nodev,noexec,relatime | |-/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=165541,mode=620,ptmxmode=666 | |-/dev/shm shm tmpfs rw,nosuid,nodev,noexec,relatime,size=65536k,uid=165536,gid=165536 | |-/dev/kmsg udev[/null] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/dev/null udev[/null] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/dev/random udev[/random] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/dev/full udev[/full] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/dev/tty udev[/tty] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | |-/dev/zero udev[/zero] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 | `-/dev/urandom udev[/urandom] devtmpfs rw,nosuid,noexec,relatime,size=1971180k,nr_inodes=492795,mode=755 |-/mnt/myvol /var/lib/docker/volumes/myvol/_data shiftfs ro,relatime |-/etc/resolv.conf /var/lib/docker/containers/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0[/resolv.conf] shiftfs rw,relatime |-/etc/hostname /var/lib/docker/containers/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0[/hostname] shiftfs rw,relatime |-/etc/hosts /var/lib/docker/containers/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0[/hosts] shiftfs rw,relatime |-/var/lib/docker /dev/sda2[/var/lib/sysbox/docker/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0] ext4 rw,relatime |-/var/lib/kubelet /dev/sda2[/var/lib/sysbox/kubelet/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0] ext4 rw,relatime |-/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs /dev/sda2[/var/lib/sysbox/containerd/080fb5dbe347a947accf7ba27a545ce11d937f02f02ec0059535cfc065d04ea0] ext4 rw,relatime |-/usr/src/linux-headers-5.4.0-65 /usr/src/linux-headers-5.4.0-65 shiftfs ro,relatime |-/usr/src/linux-headers-5.4.0-65-generic /usr/src/linux-headers-5.4.0-65-generic shiftfs ro,relatime `-/usr/lib/modules/5.4.0-65-generic /lib/modules/5.4.0-65-generic shiftfs ro,relatime ``` All of these mounts are considered special / immutable because they are setup during container creation time (i.e., before any process inside the container starts executing). In order to ensure proper isolation between the container and the host, Sysbox places restrictions on what mount, remount, and unmount operations the processes inside the container can do with these immutable mounts. 
The default restrictions are: Remounts: A read-only immutable mount can't be modified (i.e., remounted) as a read-write mount. This ensures read-only mounts setup at container creation time remain as such. This behavior can be changed by setting the sysbox-fs config option `allow-immutable-remounts=true`. Note that the opposite does not apply: a read-write immutable mount can be modified to a read-only mount, since this creates a stronger restriction on the" }, { "data": "Other attributes of immutable mounts can't be changed via a remount (e.g., nosuid, noexec, relatime, etc.) Unmounts: The filesystem root \"/\" can't be unmounted. The immutable mounts at /proc and /sys (as well as any submounts underneath them) can't be unmounted. Other immutable mounts can be unmounted. Doing so exposes the contents of the container's immutable image below it. While it may surprise you that Sysbox allows these unmounts, this is necessary because system container images often have a process manager inside, and some process managers (in particular systemd) unmount all mountpoints inside the container during container stop. If Sysbox where to restrict these unmounts, the process manager will report errors during container stop Allowing unmounts of immutable mounts is typically not a security concern, because the unmount normally exposes the underlying contents of the system container's image, and this image will likely not have sensitive data that is masked by the mounts. Having said this, this behavior can be changed by setting the sysbox-fs config option `allow-immutable-unmounts=false`. When this option is set, Sysbox does restrict unmounts on all immutable mounts. See the for an example. Restricted mount operations typically fail with \"EPERM\". For example, continuing with the prior example, let's try to remount as read-write the `myvol` mount: ```console root@syscont:/# mount -o remount,rw,bind /mnt/myvol mount: /mnt/myvol: permission denied. ``` The same behavior occurs with the immutable read-only mount of the host's linux headers, setup implicitly by Sysbox into the container: ```console root@syscont# mount -o remount,rw,bind /root/linux-headers-5.4.0-6 mount: /root/linux-headers-5.4.0-6: permission denied. ``` In the example above, even though the root user had full capabilities inside the system container, it got EPERM when it tried to remount the immutable mounts because Sysbox detected the operation and blocked it. Note that these restrictions apply whether the mount occurs inside the Linux mount namespace for the system container, or any other mount namespace created inside the container (e.g., via `unshare -m`). Bind-mounts from immutable mounts are also restricted, meaning that it's OK to create a bind-mount from an immutable mount to another mountpoint inside the system container, but the new mountpoint will have similar restrictions as the corresponding immutable mount. The restrictions for bind mounts whose source is an immutable mount are: Remount: A bind mount whose source is a read-only immutable mount can't be modified (i.e., remounted) as a read-write mount. This ensures read-only mounts setup at container creation time remain as such. This behavior can be changed by setting the sysbox-fs config option `allow-immutable-remounts=true`. Unmount: No restrictions. 
Continuing with the prior example, inside the system container let's create a bind mount from the immutable read-only mount `/usr/src/linux-headers-5.4.0-65` to `/root/headers`, and attempt a remount on the latter: ```console root@syscont:/# mkdir /root/headers root@syscont:/# mount --bind /usr/src/linux-headers-5.4.0-65 /root/headers root@syscont:/# mount -o remount,bind,rw /root/headers mount: /root/headers: permission denied. ``` As you can see, the operation failed with EPERM (as expected) because the original mount `/usr/src/linux-headers-5.4.0-65` is a read-only immutable mount. Note that the new bind-mount can be unmounted without problem. Sysbox places no restriction on this since this simply removes the bind mount without having any effect on the immutable mount." }, { "data": "root@syscont:/# umount /root/headers ``` Except for the restrictions listed above, other mounts, remounts, and unmounts work fine inside the system container (just as they would on a physical host or VM). For example, one can create a new tmpfs mount inside, remount it read-only, and unmount it without problem. The new mount can even be stacked on top of an immutable mount, in which case it simply \"hides\" the immutable mount underneath (but does not change or remove it, meaning that container isolation remains untouched). When launching containers inside a system container (e.g., by installing Docker inside the system container), the inner container manager (e.g., Docker) will setup several mounts for the inner containers. This works perfectly fine, as those mounts typically fall under the \"other mounts\" category described in the prior section. However, if any of those mounts is associated with an immutable mount of the outer system container, then the immutable mount restrictions above apply. For example, let's launch a system container that comes with Docker inside. We will mount as \"read-only\" a host volume `myvol` into the system container's `/mnt/myvol` directory: ```console $ docker run --runtime=sysbox-runc -it --rm --hostname=syscont -v myvol:/mnt/myvol:ro nestybox/alpine-docker ``` Inside the system container, let's start Docker: ```console / # dockerd > /var/log/dockerd.log 2>&1 & ``` Now let's launch an inner privileged container, and mount the system container's `/mnt/myvol` mount (which is immutable since it was setup at system container creation time) into the inner container: ```console / # docker run -it --rm --privileged --hostname=inner -v /mnt/myvol:/mnt/myvol ubuntu ``` Inside the inner privileged container, we can use the mount command. Let's try to remount `/mnt/myvol` to read-write: ```console root@inner:/# mount -o remount,rw,bind /mnt/myvol mount: /mnt/myvol: permission denied. ``` As expected, this failed because the inner container's `/mnt/myvol` is technically a bind-mount of `/mnt/myvol` in the system container, and the latter is an immutable read-only mount of the system container, so it can't be remounted read-write. For the reasons described above, initial mount immutability is a key security feature of Sysbox. It enables system container processes to be properly isolated from the host, while still giving these processes full access to perform other types of mounts, remounts, and unmounts inside the container. 
As a point of comparison, other container runtimes either restrict mounts completely (by blocking the `mount` and `umount` system calls via the Linux seccomp mechanism) which prevents several system-level programs from working properly inside the container, or alternatively allow all mount, remount, and unmount operations inside the container (e.g., as in Docker privileged containers), creating a security weakness that can be easily used to break container isolation. In contrast, Sysbox offers a more nuanced approach, in which the `mount` and `umount` system calls are allowed inside the container, but are restricted when applied to the container's initial mounts. Modern versions of the Linux kernel (>= 3.5) support a per-process attribute called `nonewprivs`. It's a security feature meant to ensure a child process can't gain more privileges than its parent process. Once this attribute is set on a process it can't be unset, and it's inherited by all child and further descendant processes. The details are explained and" }, { "data": "The `nonewprivs` attribute may be set on the init process of a container, for example, using the `docker run --security-opt no-new-privileges` flag (see the doc for details). In a system container, the container's init process is normally owned by the system container's root user and granted full privileges within the system container's user namespace (as described ). In this case setting the `nonewprivs` attribute on the system container's init process has no effect as far as limiting the privileges it may get (since it already has all privileges within the system container). However, it does have effect on child processes with lesser privileges (those won't be allowed to elevate their privileges, preventing them from executing setuid programs for example). Sysbox does not yet support AppArmor profiles to apply mandatory access control (MAC) to containers. If the container manager (e.g., Docker) instructs Sysbox to apply an AppArmor profile on a container, Sysbox currently ignores this. The rationale behind this is that typical AppArmor profiles from container managers such as Docker are too restrictive for system containers, and don't give you much benefit given that Sysbox gets equivalent protection by enabling the Linux user namespaces in its containers. For example, Docker's is too restrictive for system containers. Having said this, in the near future we plan to add some support for AppArmor in order to follow the defense-in-depth security principle. Sysbox does not yet support running on systems with SELinux enabled. Sysbox does not have support for other Linux LSMs at this time. The Linux kernel has a mechanism to kill processes when the system is running low on memory. The decision on which process to kill is done based on an out-of-memory (OOM) score assigned to all processes. The score is in the range \\[-1000:1000], where higher values means higher probability of the process being killed when the host reaches an out-of-memory scenario. It's possible for users with sufficient privileges to adjust the OOM score of a given process, via the `/proc/[pid]/oomscoreadj` file. A system container's init process OOM score adjustment can be configured to start at a given value. 
For example, using Docker's `--oom-score-adj` option: ```console $ docker run --runtime=sysbox-runc --oom-score-adj=-100 -it alpine:latest ``` In addition, Sysbox ensures that system container processes are allowed to modify their out-of-memory (OOM) score adjustment to any value in the range \\[-999:1000], via `/proc/[pid]/oomscoreadj`. This is necessary in order to allow system software that requires such adjustment range (e.g., Kubernetes) to operate correctly within the system container. From a host security perspective however, allowing system container processes to adjust their OOM score downwards is risky, since it means that such processes are unlikely to be killed when the host is running low on memory. To mitigate this risk, a user can always put an upper bound on the memory allocated to each system container. Though this does not prevent a system container process from reducing its OOM score, placing such an upper bound reduces the chances of the system running out of memory and prevents memory exhaustion attacks by malicious processes inside the system container. Placing an upper bound on the memory allocated to the system container can be done by using Docker's `--memory` option: ```console $ docker run --runtime=sysbox-runc --memory=100m -it alpine:latest ```" } ]
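The Sysbox notes above bound a system container's memory with `docker run --memory=100m` to limit the impact of processes that lower their OOM score. As a purely illustrative sketch (not taken from the Sysbox docs), the same cap can be expressed with Kubernetes resource limits when pods are run under Sysbox; it assumes a RuntimeClass named `sysbox-runc` has already been registered for the Sysbox runtime, which may differ in your installation.

```yaml
# Illustrative sketch only: a pod running under Sysbox with an upper bound on
# memory, mirroring `docker run --memory=100m`. The RuntimeClass name is an
# assumption about the local Sysbox installation.
apiVersion: v1
kind: Pod
metadata:
  name: syscont-limited
spec:
  runtimeClassName: sysbox-runc      # assumed RuntimeClass for Sysbox
  containers:
    - name: syscont
      image: alpine:latest
      command: ["tail", "-f", "/dev/null"]
      resources:
        limits:
          memory: "100Mi"            # cgroup memory cap for the whole container
```

As with the Docker example, the limit does not stop a process from lowering its own OOM score adjustment, but it caps how much memory the container can consume in the first place.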
{ "category": "Runtime", "file_name": "security.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage security identities ``` -h, --help help for identity ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - CLI - Retrieve information about an identity - List identities" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_identity.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "See the to learn how to level up through the project. Please see the file for the full list of contributors to the project | Maintainer | Emplolyer | ||| | | Daocloud | | | Daocloud | | | Daocloud | | | Daocloud | | | Daocloud | | | Daocloud | | | Computer Power team |" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List CIDR filters ``` cilium-dbg prefilter list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage XDP CIDR filters" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_prefilter_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark describe restores\" layout: docs Describe restores Describe restores ``` ark describe restores [NAME1] [NAME2] [NAME...] [flags] ``` ``` -h, --help help for restores -l, --selector string only show items matching this label selector --volume-details display details of restic volume restores ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Describe ark resources" } ]
{ "category": "Runtime", "file_name": "ark_describe_restores.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Use `Issue #<issue number>` or `Issue longhorn/longhorn#<issue number>` or `Issue (paste link of issue)`. DON'T use `Fixes #<issue number>` or `Fixes (paste link of issue)`, as it will automatically close the linked issue when the PR is merged. --> Issue #" } ]
{ "category": "Runtime", "file_name": "PULL_REQUEST_TEMPLATE.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Current known vulnerabilities are listed in the section for the repo. You can report a new vulnerability using tool. Alternatively you can report it via kanisterio google group \"Contact owners and managers\" button: https://groups.google.com/g/kanisterio/about The maintainers will help diagnose the severity of the issue and determine how to address the issue. Issues deemed to be non-critical will be filed as GitHub issues. Critical issues will receive immediate attention and be fixed as quickly as possible. The maintainers will then coordinate a release date with you. When serious security problems in Kanister are discovered and corrected, the maintainers issue a security advisory, describing the problem and containing a pointer to the fix. These will be announced on the Kanister's mailing list and websites and be visible in . Security issues are fixed as soon as possible, and the fixes are propagated to the stable branches as fast as possible. However, when a vulnerability is found during a code audit, or when several other issues are likely to be spotted and fixed in the near future, the maintainers may delay the release of a Security Advisory, so that one unique, comprehensive Security Advisory covering several vulnerabilities can be issued. Communication with vendors and other distributions shipping the same code may also cause these delays." } ]
{ "category": "Runtime", "file_name": "SECURITY.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Velero Generic Data Path (VGDP): VGDP is the collective modules that is introduced in . Velero uses these modules to finish data transfer for various purposes (i.e., PodVolume backup/restore, Volume Snapshot Data Movement). VGDP modules include uploaders and the backup repository. Exposer: Exposer is a module that is introduced in . Velero uses this module to expose the volume snapshots to Velero node-agent pods or node-agent associated pods so as to complete the data movement from the snapshots. Velero node-agent is a daemonset hosting controllers and VGDP modules to complete the concrete work of backups/restores, i.e., PodVolume backup/restore, Volume Snapshot Data Movement backup/restore. Specifically, node-agent runs DataUpload controllers to watch DataUpload CRs for Volume Snapshot Data Movement backups, so there is one controller instance in each node. One controller instance takes a DataUpload CR and then launches a VGDP instance, which initializes a uploader instance and the backup repository connection, to finish the data transfer. The VGDP instance runs inside a node-agent pod or in a pod associated to the node-agent pod in the same node. Varying from the data size, data complexity, resource availability, VGDP may take a long time and remarkable resources (CPU, memory, network bandwidth, etc.). Technically, VGDP instances are able to run in any node that allows pod schedule. On the other hand, users may want to constrain the nodes where VGDP instances run for various reasons, below are some examples: Prevent VGDP instances from running in specific nodes because users have more critical workloads in the nodes Constrain VGDP instances to run in specific nodes because these nodes have more resources than others Constrain VGDP instances to run in specific nodes because the storage allows volume/snapshot provisions in these nodes only Therefore, in order to improve the compatibility, it is worthy to configure the affinity of VGDP to nodes, especially for backups for which VGDP instances run frequently and centrally. Define the behaviors of node affinity of VGDP instances in node-agent for volume snapshot data movement backups Create a mechanism for users to specify the node affinity of VGDP instances for volume snapshot data movement backups It is also beneficial to support VGDP instances affinity for PodVolume backup/restore, however, it is not possible since VGDP instances for PodVolume backup/restore should always run in the node where the source/target pods are created. It is also beneficial to support VGDP instances affinity for data movement restores, however, it is not possible in some cases. For example, when the `volumeBindingMode` in the storageclass is `WaitForFirstConsumer`, the restore volume must be mounted in the node where the target pod is scheduled, so the VGDP instance must run in the same node. On the other hand, considering the fact that restores may not frequently and centrally run, we will not support data movement restores. As elaberated in the , the Exposer may take different ways to expose snapshots, i.e., through backup pods (this is the only way supported at" }, { "data": "The implementation section below only considers this approach currently, if a new expose method is introduced in future, the definition of the affinity configurations and behaviors should still work, but we may need a new implementation. We will use the ```node-agent-config``` configMap to host the node affinity configurations. 
This configMap is not created by Velero, users should create it manually on demand. The configMap should be in the same namespace where Velero is installed. If multiple Velero instances are installed in different namespaces, there should be one configMap in each namespace which applies to node-agent in that namespace only. Node-agent server checks these configurations at startup time and use it to initiate the related VGDP modules. Therefore, users could edit this configMap any time, but in order to make the changes effective, node-agent server needs to be restarted. Inside ```node-agent-config``` configMap we will add one new kind of configuration as the data in the configMap, the name is ```loadAffinity```. Users may want to set different LoadAffinity configurations according to different conditions (i.e., for different storages represented by StorageClass, CSI driver, etc.), so we define ```loadAffinity``` as an array. This is for extensibility consideration, at present, we don't implement multiple configurations support, so if there are multiple configurations, we always take the first one in the array. The data structure for ```node-agent-config``` is as below: ```go type Configs struct { // LoadConcurrency is the config for load concurrency per node. LoadConcurrency *LoadConcurrency `json:\"loadConcurrency,omitempty\"` // LoadAffinity is the config for data path load affinity. LoadAffinity []*LoadAffinity `json:\"loadAffinity,omitempty\"` } type LoadAffinity struct { // NodeSelector specifies the label selector to match nodes NodeSelector metav1.LabelSelector `json:\"nodeSelector\"` } ``` Affinity configuration means allowing VGDP instances running in the nodes specified. There are two ways to define it: It could be defined by `MatchLabels` of `metav1.LabelSelector`. The labels defined in `MatchLabels` means a `LabelSelectorOpIn` operation by default, so in the current context, they will be treated as affinity rules. It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpIn` or `LabelSelectorOpExists`. Anti-affinity configuration means preventing VGDP instances running in the nodes specified. Below is the way to define it: It could be defined by `MatchExpressions` of `metav1.LabelSelector`. The labels are defined in `Key` and `Values` of `MatchExpressions` and the `Operator` should be defined as `LabelSelectorOpNotIn` or `LabelSelectorOpDoesNotExist`. 
A sample of the ```node-agent-config``` configMap is as below: ```json { \"loadAffinity\": [ { \"nodeSelector\": { \"matchLabels\": { \"beta.kubernetes.io/instance-type\": \"Standard_B4ms\" }, \"matchExpressions\": [ { \"key\": \"kubernetes.io/hostname\", \"values\": [ \"node-1\", \"node-2\", \"node-3\" ], \"operator\": \"In\" }, { \"key\": \"xxx/critial-workload\", \"operator\": \"DoesNotExist\" } ] } } ] } ``` This sample showcases two affinity configurations: matchLabels: VGDP instances will run only in nodes with label key `beta.kubernetes.io/instance-type` and value `Standard_B4ms` matchExpressions: VGDP instances will run in node `node1`, `node2` and `node3` (selected by" }, { "data": "label) This sample showcases one anti-affinity configuration: matchExpressions: VGDP instances will not run in nodes with label key `xxx/critial-workload` To create the configMap, users need to save something like the above sample to a json file and then run below command: ``` kubectl create cm node-agent-config -n velero --from-file=<json file name> ``` As mentioned in the , the exposer decides where to launch the VGDP instances. At present, for volume snapshot data movement backups, the exposer creates backupPods and the VGDP instances will be initiated in the nodes where backupPods are scheduled. So the loadAffinity will be translated (from `metav1.LabelSelector` to `corev1.Affinity`) and set to the backupPods. It is possible that node-agent pods, as a daemonset, don't run in every worker nodes, users could fulfil this by specify `nodeSelector` or `nodeAffinity` to the node-agent daemonset spec. On the other hand, at present, VGDP instances must be assigned to nodes where node-agent pods are running. Therefore, if there is any node selection for node-agent pods, users must consider this into this load affinity configuration, so as to guarantee that VGDP instances are always assigned to nodes where node-agent pods are available. This is done by users, we don't inherit any node selection configuration from node-agent daemonset as we think daemonset scheduler works differently from plain pod scheduler, simply inheriting all the configurations may cause unexpected result of backupPod schedule. Otherwise, if a backupPod are scheduled to a node where node-agent pod is absent, the corresponding DataUpload CR will stay in `Accepted` phase until the prepare timeout (by default 30min). At present, as part of the expose operations, the exposer creates a volume, represented by backupPVC, from the snapshot. The backupPVC uses the same storageClass with the source volume. If the `volumeBindingMode` in the storageClass is `Immediate`, the volume is immediately allocated from the underlying storage without waiting for the backupPod. On the other hand, the loadAffinity is set to the backupPod's affinity. If the backupPod is scheduled to a node where the snapshot volume is not accessible, e.g., because of storage topologies, the backupPod won't get into Running state, concequently, the data movement won't complete. Once this problem happens, the backupPod stays in `Pending` phase, and the corresponding DataUpload CR stays in `Accepted` phase until the prepare timeout (by default 30min). 
There is a common solution for the both problems: We have an existing logic to periodically enqueue the dataupload CRs which are in the `Accepted` phase for timeout and cancel checks We add a new logic to this existing logic to check if the corresponding backupPods are in unrecoverable status The above problems could be covered by this check, because in both cases the backupPods are in abnormal and unrecoverable status If a backupPod is unrecoverable, the dataupload controller cancels the dataupload and deletes the backupPod Specifically, when the above problems happen, the status of a backupPod is like below: ``` status: conditions: lastProbeTime: null message: '0/2 nodes are available: 1 node(s) didn''t match Pod''s node affinity/selector, 1 node(s) had volume node affinity conflict. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling..' reason: Unschedulable status: \"False\" type: PodScheduled phase: Pending ```" } ]
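To make the translation step described above concrete: the exposer converts the `loadAffinity` `metav1.LabelSelector` into a `corev1.Affinity` on the backupPod. For the sample configuration shown earlier, the resulting pod spec fragment would look roughly like the hand-written approximation below (not output captured from Velero); in standard Kubernetes node affinity, `matchLabels` entries become `In` expressions, and all expressions inside one `nodeSelectorTerm` are ANDed.

```yaml
# Approximate shape of the nodeAffinity the exposer would place on a backupPod
# for the sample loadAffinity above. Hand-written illustration, not Velero output.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: beta.kubernetes.io/instance-type   # from matchLabels
              operator: In
              values:
                - Standard_B4ms
            - key: kubernetes.io/hostname             # affinity rule
              operator: In
              values:
                - node-1
                - node-2
                - node-3
            - key: xxx/critial-workload               # anti-affinity rule
              operator: DoesNotExist
```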
{ "category": "Runtime", "file_name": "node-agent-affinity.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Cobra will follow a steady release cadence. Non breaking changes will be released as minor versions quarterly. Patch bug releases are at the discretion of the maintainers. Users can expect security patch fixes to be released within relatively short order of a CVE becoming known. For more information on security patch fixes see the CVE section below. Releases will follow . Users tracking the Master branch should expect unpredictable breaking changes as the project continues to move forward. For stability, it is highly recommended to use a release. We will maintain two major releases in a moving window. The N-1 release will only receive bug fixes and security updates and will be dropped once N+1 is released. Deprecation of Go versions or dependent packages will only occur in major releases. To reduce the change of this taking users by surprise, any large deprecation will be preceded by an announcement in the and an Issue on Github. Maintainers will make every effort to release security patches in the case of a medium to high severity CVE directly impacting the library. The speed in which these patches reach a release is up to the discretion of the maintainers. A low severity CVE may be a lower priority than a high severity one. Cobra maintainers will use GitHub issues and the as the primary means of communication with the community. This is to foster open communication with all users and contributors. Breaking changes are generally allowed in the master branch, as this is the branch used to develop the next release of Cobra. There may be times, however, when master is closed for breaking changes. This is likely to happen as we near the release of a new version. Breaking changes are not allowed in release branches, as these represent minor versions that have already been released. These version have consumers who expect the APIs, behaviors, etc, to remain stable during the lifetime of the patch stream for the minor release. Examples of breaking changes include: Removing or renaming exported constant, variable, type, or function. Updating the version of critical libraries such as `spf13/pflag`, `spf13/viper` etc... Some version updates may be acceptable for picking up bug fixes, but maintainers must exercise caution when reviewing. There may, at times, need to be exceptions where breaking changes are allowed in release branches. These are at the discretion of the project's maintainers, and must be carefully considered before merging. Maintainers will ensure the Cobra test suite utilizes the current supported versions of Golang. Changes to this document and the contents therein are at the discretion of the maintainers. None of the contents of this document are legally binding in any way to the maintainers or the users." } ]
{ "category": "Runtime", "file_name": "CONDUCT.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-operator-aws completion zsh) To load completions for every new session, execute once: cilium-operator-aws completion zsh > \"${fpath[1]}/_cilium-operator-aws\" cilium-operator-aws completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-operator-aws You will need to start a new shell for this setup to take effect. ``` cilium-operator-aws completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell" } ]
{ "category": "Runtime", "file_name": "cilium-operator-aws_completion_zsh.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Carina support expanding PVC online, so user can resize carina pvc as needed. ```shell $ kubectl get pvc -n carina NAMESPACE NAME STATUS VOLUME Capacity STORAGECLASS AGE carina carina-pvc Bound pvc-80ede42a-90c3-4488-b3ca-85dbb8cd6c22 7G carina-sc 20d ``` Expanding it online. ```shell $ kubectl patch pvc/carina-pvc \\ --namespace \"carina\" \\ --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\": \"15Gi\"}}}}' ``` Check if expanding works. ```shell $ kubectl exec -it web-server -n carina bash $ df -h Filesystem Size Used Avail Use% Mounted on overlay 199G 17G 183G 9% / tmpfs 64M 0 64M 0% /dev /dev/vda2 199G 17G 183G 9% /conf /dev/carina-vg-hdd/volume.... 15G 0 64M 0% /www/nginx/work tmpfs 3.9G 0 3.9G 0% /tmp/k8s-webhook-server/serving-certs ``` Note, if using cache tiering PVC, then user need to restart the pod to make the expanding work." } ]
{ "category": "Runtime", "file_name": "pvc-expand.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "For all Firecracker versions prior to v1.0.0, the emulated block device uses a synchronous IO engine for executing the device requests, based on blocking system calls. Firecracker 1.0.0 adds support for an asynchronous block device IO engine. \\[!WARNING\\] Support is currently in developer preview. See for more info. The `Async` engine leverages for executing requests in an async manner, therefore getting overall higher throughput by taking better advantage of the block device hardware, which typically supports queue depths greater than 1. The block IO engine is configured via the PUT /drives API call (pre-boot only), with the `io_engine` field taking two possible values: `Sync` (default) `Async` (in ) The `Sync` variant is the default, in order to provide backwards compatibility with older Firecracker versions. Note is another option for block IO that requires an external backend process. ```bash curl --unix-socket ${socket} -i \\ -X PUT \"http://localhost/drives/rootfs\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"rootfs\\\", \\\"pathonhost\\\": \\\"${drive_path}\\\", \\\"isrootdevice\\\": true, \\\"isreadonly\\\": false, \\\"io_engine\\\": \\\"Sync\\\" }\" ``` Firecracker requires a minimum host kernel version of 5.10.51 for the `Async` IO engine. This requirement is based on the availability of the `io_uring` subsystem, as well as a couple of features and bugfixes that were added in newer kernel versions. If a block device is configured with the `Async` io_engine on a host kernel older than 5.10.51, the API call will return a 400 Bad Request, with a suggestive error message. The performance is strictly tied to the host kernel version. The gathered data may not be relevant for modified/newer kernels than 5.10. When using the `Async` variant, there is added latency on device creation (up to ~110 ms), caused by the extra io_uring system calls performed by Firecracker. This translates to higher latencies on either of these operations: API call duration for block device config Boot time for VMs started via JSON config files Snapshot restore time For use-cases where the lowest latency on the aforementioned operations is desired, it is recommended to use the `Sync` IO engine. The `Async` engine performance potential is showcased when the block device backing files are placed on a physical disk that supports efficient parallel execution of requests, like an NVME drive. It's also recommended to evenly distribute the backing files across the available drives of a host, to limit contention in high-density" }, { "data": "The performance measurements we've done were made on NVME drives, and we've discovered that: For read workloads which operate on data that is not present in the host page cache, the performance improvement for `Async` is about 1.5x-3x in overall efficiency (IOPS per CPU load) and up to 30x in total IOPS. For write workloads, the `Async` engine brings an improvement of about 20-45% in total IOPS but performs worse than the `Sync` engine in total efficiency (IOPS per CPU load). This means that while Firecracker will achieve better performance, it will be at the cost of consuming more CPU for the kernel workers. In this case, the VMM cpu load is also reduced, which should translate into performance increase in hybrid workloads (block+net+vsock). Whether or not using the `Async` engine is a good idea performance-wise depends on the workloads and the amount of spare CPU available on a host. 
According to our NVME experiments, io_uring will always bring performance improvements (granted that there are enough available CPU resources). It is recommended that users perform some tests with examples of expected workloads and measure the efficiency as (IOPS/CPU load). View the for information about developer preview terminology. The `Async` io_engine is not yet suitable for production use. It will be made available for production once Firecracker has support for a host kernel that implements mitigation mechanisms for the following threats: The number of io_uring kernel workers assigned to one Firecracker block device is upper-bounded by: ``` (1 + NUMA_COUNT * min(size_of_ring, 4 * NUMBER_OF_CPUS)) ``` This formula is derived from the 5.10 Linux kernel code, while `size_of_ring` is hardcoded to `128` in Firecracker. Depending on the number of microVMs that can concurrently live on a host and the number of block devices configured for each microVM, the kernel PID limit may be reached, resulting in failure to create any new process. Kernels starting with 5.15 expose a configuration option for customising this upper bound. Once possible, we plan on exposing this in the Firecracker drive configuration interface. The io_uring kernel workers are spawned in the root cgroup of the system. They don't inherit the Firecracker cgroup, cannot be moved out of the root cgroup, and their names don't contain any information about the microVM's PID. This makes it impossible to attribute a worker to a specific Firecracker VM and limit the CPU and memory consumption of said workers via cgroups. Starting with kernel 5.12 (currently unsupported), the Firecracker cgroup is inherited by the io_uring workers. We plan on marking the Async engine as production ready once an LTS Linux kernel including mitigations for the aforementioned threats is released and support for it is added in Firecracker. Read more about Firecracker's ." } ]
{ "category": "Runtime", "file_name": "block-io-engine.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "title: CephFilesystemMirror CRD This guide assumes you have created a Rook cluster as explained in the main Rook allows creation and updating the fs-mirror daemon through the custom resource definitions (CRDs). CephFS will support asynchronous replication of snapshots to a remote (different Ceph cluster) CephFS file system via cephfs-mirror tool. Snapshots are synchronized by mirroring snapshot data followed by creating a snapshot with the same name (for a given directory on the remote file system) as the snapshot being synchronized. For more information about user management and capabilities see the . To get you started, here is a simple example of a CRD to deploy an cephfs-mirror daemon. ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystemMirror metadata: name: my-fs-mirror namespace: rook-ceph spec: {} ``` If any setting is unspecified, a suitable default will be used automatically. `name`: The name that will be used for the Ceph cephfs-mirror daemon. `namespace`: The Kubernetes namespace that will be created for the Rook cluster. The services, pods, and other resources created by the operator will be added to this namespace. `placement`: The cephfs-mirror pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the . `annotations`: Key value pair list of annotations to add. `labels`: Key value pair list of labels to add. `resources`: The resource requirements for the cephfs-mirror pods. `priorityClassName`: The priority class to set on the cephfs-mirror pods. In order to configure mirroring peers, please refer to the ." } ]
{ "category": "Runtime", "file_name": "ceph-fs-mirror-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "| Case ID | Title | Priority | Smoke | Status | Other | | - | | -- | -- | | -- | | V00001 | Switch podCIDRType to `auto`, see if it could auto fetch the type | p3 | | done | | | V00002 | Switch podCIDRType to `auto` but no cni files in /etc/cni/net.d, Viewing should be consistent with `none` | p3 | | done | | | V00003 | Switch podCIDRType to `calico`, see if it could auto fetch the cidr from calico ippools | p3 | | done | | | V00004 | Switch podCIDRType to `cilium`, see if it works in ipam-mode: [cluster-pool,kubernetes,multi-pool] | p3 | | done | | | V00005 | Switch podCIDRType to `none`, expect the cidr of status to be empty | p3 | | done | | | V00006 | status.phase is not-ready, expect the cidr of status to be empty | p3 | | done | | | V00007 | spidercoordinator has the lowest priority | p3 | | done | | | V00008 | status.phase is not-ready, pods will fail to run | p3 | | done | | | V00009 | it can get the clusterCIDR from kubeadmConfig or kube-controller-manager pod | p3 | | done | | | V00010 | It can get service cidr from k8s serviceCIDR resoures | p3 | | done | | | V00011 | status should be NotReady if neither kubeadm-config configMap nor kube-controller-manager pod can be found | p3 | | done | |" } ]
{ "category": "Runtime", "file_name": "spidercoodinator.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Network services controller is responsible for reading the services and endpoints information from Kubernetes API server and configure IPVS on each cluster node accordingly. Please for design details and pros and cons compared to iptables based Kube-proxy Demo of Kube-router's IPVS based Kubernetes network service proxy ](https://asciinema.org/a/120312) Features: round robin load balancing client IP based session persistence source IP is preserved if service controller is used in conjuction with network routes controller (kube-router with --run-router flag) option to explicitly masquerade (SNAT) with --masquerade-all flag Network policy controller is responsible for reading the namespace, network policy and pods information from Kubernetes API server and configure iptables accordingly to provide ingress filter to the pods. Kube-router supports the networking.k8s.io/NetworkPolicy API or network policy V1/GA and also network policy beta semantics. Please for design details of Network Policy controller Demo of Kube-router's iptables based implementaton of network policies ](https://asciinema.org/a/120735) Network routes controller is responsible for reading pod CIDR allocated by controller manager to the node, and advertises the routes to the rest of the nodes in the cluster (BGP peers). Use of BGP is transparent to user for basic pod-to-pod networking. ](https://asciinema.org/a/120885) However BGP can be leveraged to other use cases like advertising the cluster ip, routable pod ip etc. Only in such use-cases understanding of BGP and configuration is required. Please see below demo how kube-router advertises cluster IP and pod cidrs to external BGP router ](https://asciinema.org/a/121635)" } ]
{ "category": "Runtime", "file_name": "see-it-in-action.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"Concepts\" layout: docs * * Heptio Ark provides customizable degrees of recovery for all Kubernetes API objects (Pods, Deployments, Jobs, Custom Resource Definitions, etc.), as well as for persistent volumes. This recovery can be cluster-wide, or fine-tuned according to object type, namespace, or labels. Ark is ideal for the disaster recovery use case, as well as for snapshotting your application state, prior to performing system operations on your cluster (e.g. upgrades). This section gives a quick overview of the Ark operation types. The backup operation (1) uploads a tarball of copied Kubernetes resources into cloud object storage and (2) uses the cloud provider API to make disk snapshots of persistent volumes, if specified. are cleared for PVs but kept for all other object types. You can optionally specify hooks that should be executed during the backup. For example, you may need to tell a database to flush its in-memory buffers to disk prior to taking a snapshot. You can find more information about hooks . Some things to be aware of: Cluster backups are not strictly atomic.* If API objects are being created or edited at the time of backup, they may or not be included in the backup. In practice, backups happen very quickly and so the odds of capturing inconsistent information are low, but still possible. A backup usually takes no more than a few seconds.* The snapshotting process for persistent volumes is asynchronous, so the runtime of the `ark backup` command isn't dependent on disk size. These ad-hoc backups are saved with the `<BACKUP NAME>` specified during creation. The schedule operation allows you to back up your data at recurring intervals. The first backup is performed when the schedule is first created, and subsequent backups happen at the schedule's specified interval. These intervals are specified by a Cron expression. A Schedule acts as a wrapper for Backups; when triggered, it creates them behind the scenes. Scheduled backups are saved with the name `<SCHEDULE NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. The restore operation allows you to restore all of the objects and persistent volumes from a previously created Backup. Heptio Ark supports multiple namespace remapping--for example, in a single restore, objects in namespace \"abc\" can be recreated under namespace \"def\", and the ones in \"123\" under \"456\". Kubernetes API objects that have been restored can be identified with a label that looks like `ark-restore=<BACKUP NAME>-<TIMESTAMP>`, where `<TIMESTAMP>` is formatted as YYYYMMDDhhmmss. You can also run the Ark server in restore-only mode, which disables backup, schedule, and garbage collection functionality during disaster recovery. For information about the individual API types Ark uses, please see the . When first creating a backup, you can specify a TTL. If Ark sees that an existing Backup resource has expired, it removes both: The Backup resource itself The actual backup file from cloud object storage Heptio Ark treats object storage as the source of truth. It continuously checks to see that the correct Backup resources are always present. If there is a properly formatted backup file in the storage bucket, but no corresponding Backup resources in the Kubernetes API, Ark synchronizes the information from object storage to Kubernetes. This allows restore functionality to work in a cluster migration scenario, where the original Backup objects do not exist in the new cluster. See the for details." } ]
{ "category": "Runtime", "file_name": "concepts.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English | Kind is a tool for running local Kubernetes clusters using Docker container \"nodes\". Spiderpool provides a script to install the Kind cluster, you can use it to deploy a cluster that meets your needs, and test and experience Spiderpool. Get the Spiderpool stable version code to the local host and enter the root directory of the Spiderpool project. ```bash ~# LATESTRELEASEVERISON=$(curl -s https://api.github.com/repos/spidernet-io/spiderpool/releases | grep '\"tag_name\":' | grep -v rc | grep -Eo \"([0-9]+\\.[0-9]+\\.[0-9])\" | sort -r | head -n 1) ~# curl -Lo /tmp/$LATESTRELEASEVERISON.tar.gz https://github.com/spidernet-io/spiderpool/archive/refs/tags/v$LATESTRELEASEVERISON.tar.gz ~# tar -xvf /tmp/$LATESTRELEASEVERISON.tar.gz -C /tmp/ ~# cd /tmp/spiderpool-$LATESTRELEASEVERISON ``` Execute `make dev-doctor` to check whether the development tools on the local host meet the conditions for deploying Kind cluster and Spiderpool. Building a Spiderpool environment requires Kubectl, Kind, Docker, Helm, and yq tools. If they are missing on your machine, run `test/scripts/install-tools.sh` to install them. If you are mainland user who is not available to access ghcr.io, Additional parameter `-e E2ECHINAIMAGE_REGISTRY=true` can be specified during installation to help you pull images faster. === \"Spiderpool with Macvlan\" In this scenario, POD cloud be assigned multiple macvlan interfaces, and communicate through Pod IP, clusterIP, nodePort, etc. Please refer to for more details. The following command will create a single-CNI cluster with Macvlanand it implements the ClusterIP by kube-proxy. ```bash ~# make setupsingleCnimacvlan ``` === \"Dual CNIs with Spiderpool and Calico\" In this scenario, you can experience the effect of Pod having dual CNI NICs. Please refer to for more details. The following command will create a multi-CNI cluster with Calico and Spiderpool. In this environment, Calico serves as the default CNI and Pod could get a secondary underlay interface from Spiderpool. Calico works based on iptables datapath and implements service resolution based on kube-proxy. ```bash ~# make setupdualCnicalico ``` === \"Dual CNIs with Spiderpool and Cilium\" In this scenario, you can experience the effect of Pod having dual CNI NICs. Please refer to for more details. The following command will create a multi-CNI cluster with Cilium and Spiderpool, In this environment, Cilium serves as the default CNI and Pod could get a secondary underlay interface from Spiderpool. Cilium's eBPF acceleration is enabled, kube-proxy is disabled, and service resolution is implemented based on eBPF. > Confirm whether the operating system Kernel version number is >= 4.9.17. If the kernel is too low, the installation will fail. Kernel 5.10+ is recommended. ```bash ~# make setupdualCnicilium ``` Execute the following command in the root directory of the Spiderpool project to configure KUBECONFIG for the Kind cluster for kubectl. 
```bash ~# export KUBECONFIG=$(pwd)/test/.cluster/spider/.kube/config ``` It should be possible to observe the following: ```bash ~# kubectl get nodes NAME STATUS ROLES AGE VERSION spider-control-plane Ready control-plane 2m29s v1.26.2 spider-worker Ready <none> 2m58s" }, { "data": "~# kubectl get po -n kube-system | grep spiderpool NAME READY STATUS RESTARTS AGE spiderpool-agent-4dr97 1/1 Running 0 3m spiderpool-agent-4fkm4 1/1 Running 0 3m spiderpool-controller-7864477fc7-c5dk4 1/1 Running 0 3m spiderpool-controller-7864477fc7-wpgjn 1/1 Running 0 3m spiderpool-init 0/1 Completed 0 3m ``` The Quick Install Kind Cluster script provided by Spiderpool will automatically create an application for you to verify that your Kind cluster is working properly and the following is the running state of the application: ```bash ~# kubectl get po -l app=test-pod -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-pod-856f9689d-876nm 1/1 Running 0 5m34s 172.18.40.63 spider-worker <none> <none> ``` Through the above checks, everything is normal in the Kind cluster. In this chapter, we will introduce how to use Spiderpool in different environments. Spiderpool introduces the CR to automate the management of Multus NetworkAttachmentDefinition CR and extend the capabilities of Multus CNI configurations. === \"Spiderpool with Macvlan\" Get the Spidermultusconfig CR and IPPool CR of the cluster ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -A NAMESPACE NAME AGE kube-system macvlan-vlan0 1h kube-system macvlan-vlan100 1h kube-system macvlan-vlan200 1h ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT default-v4-ippool 4 172.18.0.0/16 5 253 true default-v6-ippool 6 fc00:f853:ccd:e793::/64 5 253 true ... ``` Create an application. The following command will create a single NIC Deployment application: `v1.multus-cni.io/default-network`Specify Spidermultusconfig CR: `kube-system/macvlan-vlan0` through it, and use this configuration to create a default NIC (eth0) configured by Macvlan for the application. `ipam.spidernet.io/ippool`Used to specify Spiderpool's IP pool. Spiderpool will automatically select an IP in the pool to bind to the application's default NIC. ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: labels: app: test-app annotations: ipam.spidernet.io/ippool: |- { \"ipv4\": [\"default-v4-ippool\"], \"ipv6\": [\"default-v6-ippool\"] } v1.multus-cni.io/default-network: kube-system/macvlan-vlan0 spec: containers: name: test-app image: alpine imagePullPolicy: IfNotPresent command: \"/bin/sh\" args: \"-c\" \"sleep infinity\" EOF ``` Verify that the application was created successfully. ```shell ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-7fdbb59666-4k5m7 1/1 Running 0 9s 172.18.40.223 spider-worker <none> <none> ~# kubectl exec -ti test-app-7fdbb59666-4k5m7 -- ip a ... 
3: eth0@if339: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP link/ether 0a:96:54:6f:76:b4 brd ff:ff:ff:ff:ff:ff inet 172.18.40.223/16 brd 172.18.255.255 scope global eth0 validlft forever preferredlft forever 4: veth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP link/ether 4a:8b:09:d9:4c:0a brd ff:ff:ff:ff:ff:ff ``` === \"Dual CNIs with Spiderpool and Calico\" Get the Spidermultusconfig CR and IPPool CR of the cluster ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -A NAMESPACE NAME AGE kube-system calico 3m11s kube-system macvlan-vlan0 2m20s kube-system macvlan-vlan100 2m19s kube-system macvlan-vlan200 2m19s ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT default-v4-ippool 4 172.18.0.0/16 1 253 true default-v6-ippool 6 fc00:f853:ccd:e793::/64 1 253 true ... ``` Create an application. The following command will create a Deployment application with two NICs. The default NIC(eth0) is configured by the cluster default CNI Calico. `k8s.v1.cni.cncf.io/networks`Use this annotation to create an additional NIC (net1) configured by Macvlan for the" }, { "data": "`ipam.spidernet.io/ippools`Used to specify Spiderpool's IPPool. Spiderpool will automatically select an IP in the pool to bind to the application's net1 NIC ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: labels: app: test-app annotations: ipam.spidernet.io/ippools: |- [{ \"interface\": \"net1\", \"ipv4\": [\"default-v4-ippool\"], \"ipv6\": [\"default-v6-ippool\"] }] k8s.v1.cni.cncf.io/networks: kube-system/macvlan-vlan0 spec: containers: name: test-app image: alpine imagePullPolicy: IfNotPresent command: \"/bin/sh\" args: \"-c\" \"sleep infinity\" EOF ``` Verify that the application was created successfully. ```shell ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-86dd478b-bv6rm 1/1 Running 0 12s 10.243.104.211 spider-worker <none> <none> ~# kubectl exec -ti test-app-7fdbb59666-4k5m7 -- ip a ... 4: eth0@if148: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP qlen 1000 link/ether 1a:1e:e1:f3:f9:4b brd ff:ff:ff:ff:ff:ff inet 10.243.104.211/32 scope global eth0 5: net1@if347: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP link/ether 56:b4:3d:a6:d2:d1 brd ff:ff:ff:ff:ff:ff inet 172.18.40.154/16 brd 172.18.255.255 scope global net1 ``` === \"Dual CNIs with Spiderpool and Cilium\" Get the Spidermultusconfig CR and IPPool CR of the cluster ```bash ~# kubectl get spidermultusconfigs.spiderpool.spidernet.io -A NAMESPACE NAME AGE kube-system cilium 5m32s kube-system macvlan-vlan0 5m12s kube-system macvlan-vlan100 5m17s kube-system macvlan-vlan200 5m18s ~# kubectl get spiderippool NAME VERSION SUBNET ALLOCATED-IP-COUNT TOTAL-IP-COUNT DEFAULT default-v4-ippool 4 172.18.0.0/16 1 253 true default-v6-ippool 6 fc00:f853:ccd:e793::/64 1 253 true ... ``` Create an application. The following command will create a Deployment application with two NICs. The default NIC(eth0) is configured by the cluster default CNI Cilium. `k8s.v1.cni.cncf.io/networks`Use this annotation to create an additional NIC (net1) configured by Macvlan for the application. `ipam.spidernet.io/ippools`Used to specify Spiderpool's IPPool. 
Spiderpool will automatically select an IP in the pool to bind to the application's net1 NIC ```shell cat <<EOF | kubectl create -f - apiVersion: apps/v1 kind: Deployment metadata: name: test-app spec: replicas: 1 selector: matchLabels: app: test-app template: metadata: labels: app: test-app annotations: ipam.spidernet.io/ippools: |- [{ \"interface\": \"net1\", \"ipv4\": [\"default-v4-ippool\"], \"ipv6\": [\"default-v6-ippool\"] }] k8s.v1.cni.cncf.io/networks: kube-system/macvlan-vlan0 spec: containers: name: test-app image: alpine imagePullPolicy: IfNotPresent command: \"/bin/sh\" args: \"-c\" \"sleep infinity\" EOF ``` Verify that the application was created successfully. ```shell ~# kubectl get po -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES test-app-86dd478b-ml8d9 1/1 Running 0 58s 10.244.102.212 spider-worker <none> <none> ~# kubectl exec -ti test-app-7fdbb59666-4k5m7 -- ip a ... 4: eth0@if148: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1480 qdisc noqueue state UP qlen 1000 link/ether 26:f1:88:f9:7d:d7 brd ff:ff:ff:ff:ff:ff inet 10.244.102.212/32 scope global eth0 5: net1@if347: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP link/ether ca:71:99:ec:ec:28 brd ff:ff:ff:ff:ff:ff inet 172.18.40.228/16 brd 172.18.255.255 scope global net1 ``` Now you can test and experience Spiderpool's based on Kind. Uninstall a Kind cluster Execute `make clean` to uninstall the Kind cluster. Delete test's images ```bash ~# docker rmi -f $(docker images | grep spiderpool | awk '{print $3}') ~# docker rmi -f $(docker images | grep multus | awk '{print $3}') ```" } ]
{ "category": "Runtime", "file_name": "get-started-kind.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: Velero is an Open Source Tool to Back up and Migrate Kubernetes Clusters slug: Velero-is-an-Open-Source-Tool-to-Back-up-and-Migrate-Kubernetes-Clusters # Velero.io word list : ignore excerpt: Velero is an open source tool to safely back up, recover, and migrate Kubernetes clusters and persistent volumes. It works both on premises and in a public cloud. author_name: Velero Team categories: ['kubernetes'] tags: ['Velero Team'] Velero is an open source tool to safely back up, recover, and migrate Kubernetes clusters and persistent volumes. It works both on premises and in a public cloud. Velero consists of a server process running as a deployment in your Kubernetes cluster and a command-line interface (CLI) with which DevOps teams and platform operators configure scheduled backups, trigger ad-hoc backups, perform restores, and more. Unlike other tools which directly access the Kubernetes etcd database to perform backups and restores, Velero uses the Kubernetes API to capture the state of cluster resources and to restore them when necessary. This API-driven approach has a number of key benefits: Backups can capture subsets of the clusters resources, filtering by namespace, resource type, and/or label selector, providing a high degree of flexibility around whats backed up and restored. Users of managed Kubernetes offerings often do not have access to the underlying etcd database, so direct backups/restores of it are not possible. Resources exposed through aggregated API servers can easily be backed up and restored even if theyre stored in a separate etcd database. Additionally, Velero enables you to backup and restore your applications persistent data alongside their configurations, using either your storage platforms native snapshot capability or an integrated file-level backup tool called . Since Velero was initially released in August 2017, weve had nearly 70 contributors to the project, with a ton of support from the community. We also recently reached 2000 stars on GitHub. We are excited to keep building our great community and project. Twitter () Slack ( on Kubernetes) Google Group () We are continuing to work towards Velero 1.0 and would love your help working on the items in our roadmap. If youre interested in contributing, we have a number of GitHub issues labeled as and , including items related to Prometheus metrics, the CLI UX, improved documentation, and more. We are more than happy to work with new and existing contributors alike. Previously posted at: <https://blogs.vmware.com/cloudnative/2019/02/28/velero-v0-11-delivers-an-open-source-tool-to-back-up-and-migrate-kubernetes-clusters/> <!-- Velero.io word list : ignore -->" } ]
{ "category": "Runtime", "file_name": "2019-04-09-Velero-is-an-Open-Source-Tool-to-Back-up-and-Migrate-Kubernetes-Clusters.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Storage Quota sidebar_position: 4 JuiceFS supports both total file system quota and subdirectory quota, both of which can be used to limit the available capacity and the number of available inodes. Both file system quota and directory quota are hard limits. When the total file system quota is exhausted, subsequent writes will return `ENOSPC` (No space left) error; and when the directory quota is exhausted, subsequent writes will return `EDQUOT` (Disk quota exceeded) error. :::tip The storage quota settings are stored in the metadata engine for all mount points to read, and the client of each mount point will also cache its own used capacity and inodes and synchronize them with the metadata engine once per second. Meanwhile the client will read the latest usage value from the metadata engine every 10 seconds to synchronize the usage information among each mount point, but this information synchronization mechanism cannot guarantee that the usage data is counted accurately. ::: For Linux, the default capacity of a JuiceFS type file system is identified as `1.0P` by using the `df` command. ```shell $ df -Th | grep juicefs JuiceFS:ujfs fuse.juicefs 1.0P 682M 1.0P 1% /mnt ``` :::note The capacity of underlying object storage is usually unlimited, i.e., JuiceFS storage is unlimited. Therefore, the displayed capacity is just an estimate rather than the actual storage limit. ::: The `config` command that comes with the client allows you to view the details of a file system. ```shell $ juicefs config $METAURL { \"Name\": \"ujfs\", \"UUID\": \"1aa6d290-279b-432f-b9b5-9d7fd597dec2\", \"Storage\": \"minio\", \"Bucket\": \"127.0.0.1:9000/jfs1\", \"AccessKey\": \"herald\", \"SecretKey\": \"removed\", \"BlockSize\": 4096, \"Compression\": \"none\", \"Shards\": 0, \"Partitions\": 0, \"Capacity\": 0, \"Inodes\": 0, \"TrashDays\": 0 } ``` The capacity limit (in GiB) can be set with `--capacity` when creating a file system, e.g. to create a file system with an available capacity of 100 GiB: ```shell juicefs format --storage minio \\ --bucket 127.0.0.1:9000/jfs1 \\ ... \\ --capacity 100 \\ $METAURL myjfs ``` You can also set a capacity limit for a created file system with the `config` command: ```shell $ juicefs config $METAURL --capacity 100 2022/01/27 12:31:39.506322 juicefs[16259] <INFO>: Meta address: postgres://[email protected]:5432/jfs1 2022/01/27 12:31:39.521232 juicefs[16259] <WARNING>: The latency to database is too high: 14.771783ms capacity: 0 GiB -> 100 GiB ``` For file systems that have been set with storage quota, the identification capacity becomes the quota capacity: ```shell $ df -Th | grep juicefs JuiceFS:ujfs fuse.juicefs 100G 682M 100G 1% /mnt ``` On Linux systems, each file (a folder is also a type of file) has an inode regardless of size, so limiting the number of inodes is equivalent to limiting the number of files. The quota can be set with `--inodes` when creating the file system, e.g. ```shell juicefs format --storage minio \\ --bucket 127.0.0.1:9000/jfs1 \\ ... \\ --inodes 100 \\ $METAURL myjfs ``` The file system created by the above command allows only 100 files to be stored. However, there is no limit to the size of individual files. For example, it will still work if a single file is equivalent or even larger than 1 TB as long as the total number of files does not exceed 100. 
You can also set a capacity quota for a created file system by using the `config` command: ```shell $ juicefs config $METAURL --inodes 100 2022/01/27 12:35:37.311465 juicefs[16407] <INFO>: Meta address: postgres://[email protected]:5432/jfs1 2022/01/27 12:35:37.322991 juicefs[16407] <WARNING>: The latency to database is too high:" }, { "data": "inodes: 0 -> 100 ``` You can combine `--capacity` and `--inodes` to set the capacity quota of a file system with more flexibility. For example, to create a file system that the total capacity limits to 100 TiB with only 100000 files to be stored: ```shell juicefs format --storage minio \\ --bucket 127.0.0.1:9000/jfs1 \\ ... \\ --capacity 102400 \\ --inodes 100000 \\ $METAURL myjfs ``` Similarly, for the file systems that have been created, you can follow the settings below separately. ```shell juicefs config $METAURL --capacity 102400 ``` ```shell juicefs config $METAURL --inodes 100000 ``` :::tip The client reads the latest storage quota settings from the metadata engine every 60 seconds to update the local settings, and this frequency may cause other mount points to take up to 60 seconds to update the quota setting. ::: JuiceFS began to support directory-level storage quota since v1.1, and you can use the `juicefs quota` subcommand for directory quota management and query. :::tip The usage statistic relies on the mount process, please do not use this feature until all writable mount processes are upgraded to v1.1.0. ::: You can use `juicefs quota set $METAURL --path $DIR --capacity $N` to set the directory capacity limit in GiB. For example, to set a capacity quota of 1GiB for the directory `/test`: ```shell $ juicefs quota set $METAURL --path /test --capacity 1 +-++++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-++++--+-+-+ | /test | 1.0 GiB | 1.6 MiB | 0% | unlimited | 314 | | +-++++--+-+-+ ``` After the setting is successful, you can see a table describing the current quota setting directory, quota size, current usage and other information. :::tip The use of the `quota` subcommand does not require a local mount point, and it is expected that the input directory path is a path relative to the JuiceFS root directory rather than a local mount path. It may take a long time to set a quota for a large directory, because the current usage of the directory needs to be calculated. ::: If you need to query the quota and current usage of a certain directory, you can use the `juicefs quota get $METAURL --path $DIR` command: ```shell $ juicefs quota get $METAURL --path /test +-++++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-++++--+-+-+ | /test | 1.0 GiB | 1.6 MiB | 0% | unlimited | 314 | | +-++++--+-+-+ ``` You can also use the `juicefs quota ls $METAURL` command to list all directory quotas. You can use `juicefs quota set $METAURL --path $DIR --inodes $N` to set the directory inode quota, the unit is one. For example, to set a quota of 400 inodes for the directory `/test`: ```shell $ juicefs quota set $METAURL --path /test --inodes 400 +-++++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-++++--+-+-+ | /test | 1.0 GiB | 1.6 MiB | 0% | 400 | 314 | 78% | +-++++--+-+-+ ``` You can combine `--capacity` and `--inodes` to set the capacity limit of the directory more flexibly. 
For example, to set a quota of 10GiB and 1000 inodes for the `/test` directory: ```shell $ juicefs quota set $METAURL --path /test --capacity 10 --inodes 1000 +-+--+++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-+--+++--+-+-+ | /test | 10 GiB |" }, { "data": "MiB | 0% | 1,000 | 314 | 31% | +-+--+++--+-+-+ ``` In addition, you can also not limit the capacity of the directory and the number of inodes (set to `0` means unlimited), and only use the `quota` command to count the current usage of the directory: ```shell $ juicefs quota set $METAURL --path /test --capacity 0 --inodes 0 +-+--+++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-+--+++--+-+-+ | /test | unlimited | 1.6 MiB | | unlimited | 314 | | +-+--+++--+-+-+ ``` JuiceFS allows nested quota to be set on multiple levels of directories, client performs recursive lookup to ensure quota settings take effect on every level of directory. This means even if the parent directory is allocated a smaller quota, you can still set a larger quota on the child directory. JuiceFS supports mounting arbitrary subdirectories using . If the directory quota is set for the mounted subdirectory, you can use the `df` command that comes with the system to view the directory quota and current usage. For example, the file system quota is 1PiB and 10M inodes, while the quota for the `/test` directory is 1GiB and 400 inodes. The output of the `df` command when mounted using the root directory is: ```shell $ df -h Filesystem Size Used Avail Use% Mounted on ... JuiceFS:myjfs 1.0P 1.6M 1.0P 1% /mnt/jfs $ df -i -h Filesystem Inodes IUsed IFree IUse% Mounted on ... JuiceFS:myjfs 11M 315 10M 1% /mnt/jfs ``` When mounted using the `/test` subdirectory, the output of the `df` command is: ```shell $ df -h Filesystem Size Used Avail Use% Mounted on ... JuiceFS:myjfs 1.0G 1.6M 1023M 1% /mnt/jfs $ df -i -h Filesystem Inodes IUsed IFree IUse% Mounted on ... JuiceFS:myjfs 400 314 86 79% /mnt/jfs ``` :::note When there is no quota set for the mounted subdirectory, JuiceFS will query up to find the nearest directory quota and return it to `df`. If directory quotas are set for multiple levels of parent directories, JuiceFS will return the minimum available capacity and number of inodes after calculation. ::: Since directory usage updates are laggy and asynchronous, loss may occur under unusual circumstances (such as a client exiting unexpectedly). 
We can use the `juicefs quota check $METAURL --path $DIR` command to check or fix it: ```shell $ juicefs quota check $METAURL --path /test 2023/05/23 15:40:12.704576 juicefs[1638846] <INFO>: quota of /test is consistent [base.go:839] +-+--+++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-+--+++--+-+-+ | /test | 10 GiB | 1.6 MiB | 0% | 1,000 | 314 | 31% | +-+--+++--+-+-+ ``` When the directory usage is correct, the current directory quota usage will be output; if it fails, the error log will be output: ```shell $ juicefs quota check $METAURL --path /test 2023/05/23 15:48:17.494604 juicefs[1639997] <WARNING>: /test: quota(314, 4.0 KiB) != summary(314, 1.6 MiB) [base.go:843] 2023/05/23 15:48:17.494644 juicefs[1639997] <FATAL>: quota of /test is inconsistent, please repair it with --repair flag [main.go:31] ``` At this point you can use the `--repair` option to repair directory usage: ```shell $ juicefs quota check $METAURL --path /test --repair 2023/05/23 15:50:08.737086 juicefs[1640281] <WARNING>: /test: quota(314, 4.0 KiB) != summary(314, 1.6 MiB) [base.go:843] 2023/05/23 15:50:08.737123 juicefs[1640281] <INFO>: repairing... [base.go:852] +-+--+++--+-+-+ | Path | Size | Used | Use% | Inodes | IUsed | IUse% | +-+--+++--+-+-+ | /test | 10 GiB | 1.6 MiB | 0% | 1,000 | 314 | 31% | +-+--+++--+-+-+ ```" } ]
{ "category": "Runtime", "file_name": "quota.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "This file describes all the parameters of the configure script and their possible uses. For a quick help of available parameters, run `./configure --help`. This parameter takes a comma-separated list of all the flavors that the build system should assemble. Depending on a default stage1 image setup, this list is by default either empty or set to `coreos,kvm,fly` for, respectively, detailed setup and flavor setup. Note that specifying this parameter does not necessarily mean that rkt will use them in the end. Available flavors are: `coreos` - it takes systemd and bash from a CoreOS Container Linux PXE image; uses systemd-nspawn `kvm` - it takes systemd, bash and other binaries from a Container Linux PXE image; uses lkvm or qemu `src` - it builds systemd, takes bash from the host at build time; uses built systemd-nspawn `host` - it takes systemd and bash from host at runtime; uses systemd-nspawn from the host `fly` - chroot-only approach for single-application minimal isolation containers; native Go implementation The `host` flavor is probably the best suited flavor for distributions that have strict rules about software sources. This parameter takes a version number to become the version of all the built stage1 flavors. Normally, without this parameter, the images have the same version as rkt itself. This parameter may be useful for distributions that often provide patched versions of upstream software without changing major/minor/patch version number, but instead add a numeric suffix. An example usage could be passing `--with-stage1-flavors-version-override=0.12.0-2`, so the new image will have a version `0.12.0-2` instead of `0.12.0`. This parameter also affects the default stage1 image version in flavor setup. The parameters described below affect the handling of rkt's default stage1 image. rkt first tries to find the stage1 image in the store by using the default stage1 image name and version. If this fails, rkt will try to fetch the image into the store from the default stage1 image location. There are two mutually exclusive ways to specify a default stage1 image name and version: flavor setup detailed setup Flavor setup has only one parameter. This kind of setup is rather a convenience wrapper around the detailed setup. It takes a name of the flavor of the stage1 image we build and, based on that, it sets up the default stage1 image name and version. Default stage1 image in this case is often something like coreos.com/rkt/stage1-<name of the flavor>. Default stage1 version is usually just rkt version, unless it is overridden with" }, { "data": "This is the default setup if neither flavor nor detailed setup are used. The default stage1 image flavor is the first flavor on the list in `--with-stage1-flavors`. Detailed setup has two parameters, both must be provided. This kind of setup could be used to make some 3rd party stage1 implementation the default stage1 image used by rkt. This parameter tells what is the name of the default stage1 image. This parameter tells what is the version of the default stage1 image. This parameter tells rkt where to find the default stage1 image if it is not found in the store. For the detailed setup, the default value of this parameter is empty, so if it is not provided, you may be forced to inform rkt about the location of the stage1 image at runtime. For the flavor setup, the default value is also empty, which tells rkt to look for the image in the directory the rkt binary is located, unless it is overridden at runtime. 
Normally, this parameter should be some URL, with a scheme or an absolute path. This parameter tells rkt where the directory which contains all the stage1 images is located. The value should be an absolute path. In this directory, all the built flavors of stage1 images should be installed. The `--stage1-from-dir` rkt flag will look for images in this directory. The default value of this parameter is `<libdir>/rkt/stage1-images`, where `<libdir>` is a distribution-specific place for storing arch-dependent files. There are some additional parameters for some flavors. Usually they do not need to be modified, default values are sane. `src` flavor provides parameters for specifying some `git`-specific details of the systemd repository. This parameter takes a URL to a `systemd` git repository. The default is `https://github.com/systemd/systemd.git`. You may want to change it to point the build system to use some local repository. This parameter specifies the systemd version to be built. Version names are usually in form of `v<number>`, where number is a systemd version. The default is `v999`. This parameter takes either a tag name or a branch name. You can use branch name `master` to test the bleeding edge version of systemd or any working branch, or tag name. Since arbitrary branch names do not imply which systemd version is being built, the actual systemd version is specified using `--with-stage1-systemd-version`. The default is `master`. `coreos` and `kvm` flavors provide parameters related to CoreOS Container Linux PXE" }, { "data": "This parameter is used to point the build system to a local Container Linux PXE image. This can be helpful for some packagers, where downloading anything over the network is a no-no. The parameter takes either relative or absolute paths. The default value is empty, so the image will be downloaded over the network. If this parameter is specified, then also `--with-coreos-local-pxe-image-systemd-version` must be specified too. The build system has no reliable way to deduce automatically what version of systemd the Container Linux PXE image contains, so it needs some help. This parameters tells the build systemd what is the version of systemd in the local PXE image. The value should be like tag name in systemd git repository, that is - `v<number>`, like `v229`. If this parameter is specified, then also `--with-coreos-local-pxe-image-path` must be specified too. There is only one flag for testing - to enable functional testing. Functional tests are disabled by default. There are some requirements to be fulfilled to be able to run them. The tests are runnable only in Linux. The tests must be run as root, so the build system uses sudo to achieve that. Note that using sudo may kill the non-interactivity of the build system, so make sure that if you use it in some CI, then CI user is a sudoer and does not need a password. Also, when trying to run functional tests with the host flavor of the stage1 image, the host must be managed by systemd of at least version v220. If any of the requirements above are not met and the value of the parameter is yes then configure will bail out. This may not be ideal in CI environment, so there is a third possible value of this parameter - \"auto\". \"auto\" will enable functional tests if all the requirements are met. Otherwise, it will disable them without any errors. These flags are related to security. This option to enable is set by default. For logging to work, is required. 
Set this option to `auto` to conditionally enable TPM features based on build support. This option, which allows building rkt with a Go toolchain that has known security issues, is unset by default. Use it with caution. This option enables incremental compilation, which is useful for local development. In contrast to a release build, this option uses `go install` instead of `go build`, which decreases incremental compilation time. Note that this option is not supported in cross-compile builds. For this reason, the incremental build option must not be used for release builds." } ]
{ "category": "Runtime", "file_name": "build-configure.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "access add address adjust aggregate allocate allow alphabetize annotate append apply ask assert assign attempt audit augment avoid begin bind block break bring build bump cache call cancel capitalize cast catch centralize change check choose clarify clean cleanup clear close collect combine comment commit compact compile complete configure conform connect consider consolidate continue convert coordinate copy correct create cut debug declare decouple decrease define deflake delay delete denote depend deploy deprecate describe deserialize determine differentiate disable disallow discuss display distinguish do dockerize document downgrade drop dump duplicate edit elaborate eliminate enable enforce enhance ensure escape exclude exit explain expose extend extract factor fail fill filter find finish fix flush follow force forget format generalize generate get give group handle hardcode heartbeat hide ignore implement import improve include incorporate increase increment init initialize inline insert install integrate introduce invalidate isolate journal keep kill leave let limit link list load lock log login make manage mark mention merge migrate modify move name normalize open optimize order organize output overload override parallelize parse pass perform permit pin placate polish populate port prefer prepare preserve prevent print process propagate prototype provide publish put quote read rearrange rebase recompute recover redefine redirect reduce reenable refactor reference refine reformat regen regenerate register reimplement relax release relocate remove rename reorder reorg reorganize rephrase replace replay report request require rerun reserve reset resolve respect respond restore restructure retrieve retry return reuse revert review revise reword rewrite roll run save search send separate set shade share shift shorten show shutdown simplify skip sleep solve sort space specify spell split start stash state stop store supply support suppress switch sync synchronize tag terminate test throw time track translate trim try tune turn tweak undo unify unignore unmount unwrap update upgrade use validate verify wait wrap write" } ]
{ "category": "Runtime", "file_name": "pr_title_words.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "An attacker could send a JWE containing compressed data that used large amounts of memory and CPU when decompressed by `Decrypt` or `DecryptMulti`. Those functions now return an error if the decompressed data would exceed 250kB or 10x the compressed size (whichever is larger). Thanks to Enze Wang@Alioth and Jianjun Chen@Zhongguancun Lab (@zer0yu and @chenjj) for reporting. This release makes some breaking changes in order to more thoroughly address the vulnerabilities discussed in [Three New Attacks Against JSON Web Tokens][1], \"Sign/encrypt confusion\", \"Billion hash attack\", and \"Polyglot token\". Limit JWT encryption types (exclude password or public key types) (#78) Enforce minimum length for HMAC keys (#85) jwt: match any audience in a list, rather than requiring all audiences (#81) jwt: accept only Compact Serialization (#75) jws: Add expected algorithms for signatures (#74) Require specifying expected algorithms for ParseEncrypted, ParseSigned, ParseDetached, jwt.ParseEncrypted, jwt.ParseSigned, jwt.ParseSignedAndEncrypted (#69, #74) Usually there is a small, known set of appropriate algorithms for a program to use and it's a mistake to allow unexpected algorithms. For instance the \"billion hash attack\" relies in part on programs accepting the PBES2 encryption algorithm and doing the necessary work even if they weren't specifically configured to allow PBES2. Revert \"Strip padding off base64 strings\" (#82) The specs require base64url encoding without padding. Minimum supported Go version is now 1.21 ParseSignedCompact, ParseSignedJSON, ParseEncryptedCompact, ParseEncryptedJSON. These allow parsing a specific serialization, as opposed to ParseSigned and ParseEncrypted, which try to automatically detect which serialization was provided. It's common to require a specific serialization for a specific protocol - for instance JWT requires Compact serialization. Limit decompression output size to prevent a DoS. Backport from v4.0.1. DecryptMulti: handle decompression error (#19) jwe/CompactSerialize: improve performance (#67) Increase the default number of PBKDF2 iterations to 600k (#48) Return the proper algorithm for ECDSA keys (#45) Add Thumbprint support for opaque signers (#38) Security issue: an attacker specifying a large \"p2c\" value can cause JSONWebEncryption.Decrypt and JSONWebEncryption.DecryptMulti to consume large amounts of CPU, causing a DoS. Thanks to Matt Schwager (@mschwager) for the disclosure and to Tom Tervoort for originally publishing the category of attack. https://i.blackhat.com/BH-US-23/Presentations/US-23-Tervoort-Three-New-Attacks-Against-JSON-Web-Tokens.pdf Limit decompression output size to prevent a DoS. Backport from v4.0.1." } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "There are two command-line tools that are built within the Kanister repository. Although all Kanister custom resources can be managed using kubectl, there are situations where this may be cumbersome. A canonical example of this is backup/restore - Manually creating a restore ActionSet requires copying Artifacts from the status of the complete backup ActionSet, which is an error prone process. `kanctl` simplifies this process by allowing the user to create custom Kanister resources - ActionSets and Profiles, override existing ActionSets and validate profiles. `kanctl` has two top level commands: `create` `validate` The usage of these commands, with some examples, has been show below: ``` bash $ kanctl create --help Create a custom kanister resource Usage: kanctl create [command] Available Commands: actionset Create a new ActionSet or override a <parent> ActionSet profile Create a new profile repository-server Create a new kopia repository server Flags: --dry-run if set, resource YAML will be printed but not created -h, --help help for create --skip-validation if set, resource is not validated before creation Global Flags: -n, --namespace string Override namespace obtained from kubectl context Use \"kanctl create [command] --help\" for more information about a command. ``` As seen above, both ActionSets and profiles can be created using `kanctl create` ``` bash $ kanctl create actionset --help Create a new ActionSet or override a <parent> ActionSet Usage: kanctl create actionset [flags] Flags: -a, --action string action for the action set (required if creating a new action set) -b, --blueprint string blueprint for the action set (required if creating a new action set) -c, --config-maps strings config maps for the action set, comma separated ref=namespace/name pairs (eg: --config-maps ref1=namespace1/name1,ref2=namespace2/name2) -d, --deployment strings deployment for the action set, comma separated namespace/name pairs (eg: --deployment namespace1/name1,namespace2/name2) -f, --from string specify name of the action set -h, --help help for actionset -k, --kind string resource kind to apply selector on. 
Used along with the selector specified using --selector/-l (default \"all\") -T, --namespacetargets strings namespaces for the action set, comma separated list of namespaces (eg: --namespacetargets namespace1,namespace2) -O, --objects strings objects for the action set, comma separated list of object references (eg: --objects group/version/resource/namespace1/name1,group/version/resource/namespace2/name2) -o, --options strings specify options for the action set, comma separated key=value pairs (eg: --options key1=value1,key2=value2) -p, --profile string profile for the action set -v, --pvc strings pvc for the action set, comma separated namespace/name pairs (eg: --pvc namespace1/name1,namespace2/name2) -s, --secrets strings secrets for the action set, comma separated ref=namespace/name pairs (eg: --secrets ref1=namespace1/name1,ref2=namespace2/name2) -l, --selector string k8s selector for objects --selector-namespace string namespace to apply selector" }, { "data": "Used along with the selector specified using --selector/-l -t, --statefulset strings statefulset for the action set, comma separated namespace/name pairs (eg: --statefulset namespace1/name1,namespace2/name2) Global Flags: --dry-run if set, resource YAML will be printed but not created -n, --namespace string Override namespace obtained from kubectl context --skip-validation if set, resource is not validated before creation ``` `kanctl create actionset` helps create ActionSets in a couple of different ways. A common backup/restore scenario is demonstrated below. Create a new Backup ActionSet ``` bash $ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \\ --deployment kanister/time-logger \\ --profile s3-profile actionset backup-9gtmp created $ kubectl --namespace kanister describe actionset backup-9gtmp ``` Restore from the backup we just created ``` bash $ kanctl create actionset --action restore --from backup-9gtmp --namespace kanister actionset restore-backup-9gtmp-4p6mc created $ kubectl --namespace kanister describe actionset restore-backup-9gtmp-4p6mc ``` Delete the Backup we created ``` bash $ kanctl create actionset --action delete --from backup-9gtmp --namespace kanister actionset delete-backup-9gtmp-fc857 created $ kubectl --namespace kanister describe actionset delete-backup-9gtmp-fc857 ``` To make the selection of objects (resources on which actions are performed) easier, you can filter on K8s labels using `--selector`. ``` bash $ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \\ --selector app=time-logger \\ --kind deployment \\ --selector-namespace kanister --profile s3-profile actionset backup-8f827 created ``` The `--dry-run` flag will print the YAML of the ActionSet without actually creating it. 
``` bash $ kanctl create actionset --action backup --namespace kanister --blueprint time-log-bp \\ --selector app=time-logger \\ --kind deployment \\ --selector-namespace kanister \\ --profile s3-profile \\ --dry-run apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: creationTimestamp: null generateName: backup- spec: actions: blueprint: time-log-bp configMaps: {} name: backup object: apiVersion: \"\" kind: deployment name: time-logger namespace: kanister options: {} profile: apiVersion: \"\" kind: \"\" name: s3-profile namespace: kanister secrets: {} ``` Profile creation using `kanctl create` ``` bash $ kanctl create profile --help Create a new profile Usage: kanctl create profile [command] Available Commands: s3compliant Create new S3 compliant profile Flags: -h, --help help for profile --skip-SSL-verification if set, SSL verification is disabled for the profile Global Flags: --dry-run if set, resource YAML will be printed but not created -n, --namespace string Override namespace obtained from kubectl context --skip-validation if set, resource is not validated before creation Use \"kanctl create profile [command] --help\" for more information about a" }, { "data": "``` A new S3Compliant profile can be created using the s3compliant subcommand ``` bash $ kanctl create profile s3compliant --help Create new S3 compliant profile Usage: kanctl create profile s3compliant [flags] Flags: -a, --access-key string access key of the s3 compliant bucket -b, --bucket string s3 bucket name -e, --endpoint string endpoint URL of the s3 bucket -h, --help help for s3compliant -p, --prefix string prefix URL of the s3 bucket -r, --region string region of the s3 bucket -s, --secret-key string secret key of the s3 compliant bucket Global Flags: --dry-run if set, resource YAML will be printed but not created -n, --namespace string Override namespace obtained from kubectl context --skip-SSL-verification if set, SSL verification is disabled for the profile --skip-validation if set, resource is not validated before creation ``` ``` bash $ kanctl create profile s3compliant --bucket <bucket> --access-key ${AWSACCESSKEY_ID} \\ --secret-key ${AWSSECRETACCESS_KEY} \\ --region us-west-1 \\ --namespace kanister secret 's3-secret-chst2' created profile 's3-profile-5mmkj' created ``` Kopia Repository Server resource creation using `kanctl create` ``` bash $ kanctl create repository-server --help Create a new RepositoryServer Usage: kanctl create repository-server [flags] Flags: -a, --admin-user-access-secret string name of the secret having admin credentials to connect to connect to kopia repository server -r, --kopia-repository-password-secret string name of the secret containing password for the kopia repository -k, --kopia-repository-user string name of the user for accessing the kopia repository -c, --location-creds-secret string name of the secret containing kopia repository storage credentials -l, --location-secret string name of the secret containing kopia repository storage location details -p, --prefix string prefix to be set in kopia repository -t, --tls-secret string name of the tls secret needed for secure kopia client and kopia repository server communication -u, --user string name of the user to be created for the kopia repository server -s, --user-access-secret string name of the secret having access credentials of the users that can connect to kopia repository server -w, --wait wait for the kopia repository server to be in ready state after creation -h, --help help for repository-server Global 
Flags: --dry-run if set, resource YAML will be printed but not created -n, --namespace string Override namespace obtained from kubectl context --skip-validation if set, resource is not validated before creation --verbose Display verbose output ``` Profile and Blueprint resources can be validated using `kanctl validate <resource>` command. ``` bash $ kanctl validate --help Validate custom Kanister resources Usage: kanctl validate <resource> [flags] Flags: -f, --filename string yaml or json file of the custom resource to validate -v, --functionVersion string kanister function version, e.g., v0.0.0 (defaults to v0.0.0) -h, --help help for validate --name string specify the K8s name of the custom resource to validate --resource-namespace string namespace of the custom resource. Used when validating resource specified using --name. (default \"default\") --schema-validation-only if set, only schema of resource will be validated Global Flags: -n, --namespace string Override namespace obtained from kubectl context ``` You can either validate an existing profile in K8s or a new profile yet to be created. ``` bash $ cat << EOF | kanctl validate profile -f - apiVersion: cr.kanister.io/v1alpha1 kind: Profile metadata: name: s3-profile namespace: kanister location: type: s3Compliant s3Compliant: bucket: XXXX endpoint: XXXX prefix: XXXX region: XXXX credential: type: keyPair keyPair: idField: awsaccesskey_id secretField: awssecretaccess_key secret: apiVersion: v1 kind: Secret name: aws-creds namespace: kanister skipSSLVerify: false EOF Passed the 'Validate Profile schema' check.. Passed the 'Validate bucket region specified in profile' check.. Passed the 'Validate read access to bucket specified in profile' check.. Passed the 'Validate write access to bucket specified in profile' check.. All checks" }, { "data": "``` Blueprint resources can be validated by specifying locally present blueprint manifest using `-f` flag and optionally `-v` flag for kanister function version. ``` bash \\# Download mysql blueprint locally \\$ curl -O <https://raw.githubusercontent.com/kanisterio/kanister/%7Cversion%7C/examples/mysql/mysql-blueprint.yaml> \\# Run blueprint validator \\$ kanctl validate blueprint -f mysql-blueprint.yaml Passed the \\'validation of phase dumpToObjectStore in action backup\\' check.. Passed the \\'validation of phase deleteFromBlobStore in action delete\\' check.. Passed the \\'validation of phase restoreFromBlobStore in action restore\\' check.. ``` `kanctl validate blueprint` currently verifies the Kanister function names and presence of the mandatory arguments to those functions. A common use case for Kanister is to transfer data between Kubernetes and an object store like AWS S3. We\\'ve found it can be cumbersome to pass Profile configuration to tools like the AWS command line from inside Blueprints. `kando` is a tool to simplify object store interactions from within blueprints. It also provides a way to create desired output from a blueprint phase. 
It has the following commands: `location push` `location pull` `location delete` `output` The usage for these commands can be displayed using the `--help` flag: ``` bash $ kando location pull --help Pull from s3-compliant object storage to a file or stdout Usage: kando location pull <target> [flags] Flags: -h, --help help for pull Global Flags: -s, --path string Specify a path suffix (optional) -p, --profile string Pass a Profile as a JSON string (required) ``` ``` bash $ kando location push --help Push a source file or stdin stream to s3-compliant object storage Usage: kando location push <source> [flags] Flags: -h, --help help for push Global Flags: -s, --path string Specify a path suffix (optional) -p, --profile string Pass a Profile as a JSON string (required) ``` ``` bash $ kando location delete --help Delete artifacts from s3-compliant object storage Usage: kando location delete [flags] Flags: -h, --help help for delete Global Flags: -s, --path string Specify a path suffix (optional) -p, --profile string Pass a Profile as a JSON string (required) ``` ``` bash $ kando output --help Create phase output with given key:value Usage: kando output <key> <value> [flags] Flags: -h, --help help for output ``` The following snippet is an example of using kando from inside a Blueprint. ``` bash kando location push \\--profile \\'{{ toJson .Profile }}\\' \\--path \\'/backup/path\\' - kando location delete \\--profile \\'{{ toJson .Profile }}\\' \\--path \\'/backup/path\\' kando output version ``` Installation of the tools requires to be installed ``` bash $ curl https://raw.githubusercontent.com/kanisterio/kanister/master/scripts/get.sh | bash ``` These tools, especially `kando` are meant to be invoked inside containers via Blueprints. Although suggest using the released image when possible, we\\'ve also made it simple to add these tools to your container. The released image, `ghcr.io/kanisterio/kanister-tools`, is hosted by [github container registry](https://github.com/orgs/kanisterio/packages/container/package/kanister-tools). The Dockerfile for this image is in the [kanister github repo](https://github.com/kanisterio/kanister/blob/master/docker/tools/Dockerfile). To add these tools to your own image, you can add the following command to your Dockerfile: ``` console RUN curl https://raw.githubusercontent.com/kanisterio/kanister/master/scripts/get.sh | bash ``` -->" } ]
{ "category": "Runtime", "file_name": "tooling.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
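To make the `kando` usage in the tooling section above more concrete, the sketch below shows the kind of shell commands a Blueprint phase might run. It is only an illustration: the backup path, the directory being archived, and the `backupPath` output key are assumed names, while the `kando` subcommands, flags, and the `{{ toJson .Profile }}` template expression come from the section itself.

```bash
# Hedged sketch of a Blueprint phase body using kando.
# Assumed: the path layout, the /var/lib/data directory, and the output key name.
BACKUP_PATH="backups/time-logger/data.tar.gz"

# Stream data into the object store described by the Profile passed to the ActionSet;
# the trailing "-" tells kando to read the payload from stdin.
tar -czf - /var/lib/data | \
    kando location push --profile '{{ toJson .Profile }}' --path "${BACKUP_PATH}" -

# Publish the location as phase output so a later restore or delete action can use it.
kando output backupPath "${BACKUP_PATH}"

# A delete action would then remove the same artifact:
kando location delete --profile '{{ toJson .Profile }}' --path "${BACKUP_PATH}"
```

Because `kando` resolves the Profile at runtime, the same phase works unchanged against any object store for which a valid Profile (for example, one created with `kanctl create profile s3compliant`) is referenced by the ActionSet.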
[ { "data": "System containers can act as virtual host environments running multiple services. As such, including a process manager inside the container such as Systemd is useful (e.g., to start and stop services in the appropriate sequence, perform zombie process reaping, etc.). Moreover, many applications rely on Systemd in order to function properly (in particular legacy (non-cloud) applications, but also cloud-native software such as Kubernetes). If you want to run these in a container, Systemd must be present in the container. Starting with release v0.1.2, Sysbox has preliminary support for running Systemd inside a system container, meaning that Systemd works but there are still some minor issues that need resolution. With Sysbox, you can run Systemd-in-Docker easily and securely, without the need to create complex Docker run commands or specialized image entrypoints, and without resorting to privileged Docker containers. Simply launch a system container image that has Systemd as its entry point and Sysbox will ensure the system container is set up to run Systemd without problems. You can find examples of system container images that come with Systemd in the repository. The Nestybox image repositories have a number of these images too. The Sysbox Quick Start Guide has a section on how to use them. Of course, the container image will also need to have the systemd service units that you need. These service units are typically added to the image during the image build process. For example, the Dockerfile for the `nestybox/ubuntu-bionic-systemd-docker` image includes Docker's systemd service unit by simply installing Docker in the container. As a result, when you launch that container, Systemd automatically starts Docker. The great majority of systemd services work well inside system containers deployed with Sysbox. However, the following services are known not to work: systemd-journald-audit.socket: this service pulls audit logs from the kernel and enters them into the systemd journal. It fails inside the container because it does not have permission to access the kernel's audit log. Note that this log is currently a system-wide log, so accessing it inside the container may not be appropriate anyway. systemd-udev-trigger.service: this service monitors device events from the kernel's udev subsystem. It fails inside the container because it does not have the required permissions. This service is not needed inside a system container, as devices exposed in the container are set up when the container is started and are immutable (i.e., hot-plug is not supported). systemd-networkd-wait-online.service: this service waits for all network devices to be online. For some yet-to-be-determined reason, this service is failing inside a system container. Note that the service is usually not required, given that the container's network interfaces are virtual and are thus normally up and running when the container starts. To disable systemd services inside a container, the best approach is to modify the Dockerfile for the container and add a line such as: ``` RUN systemctl mask systemd-journald-audit.socket systemd-udev-trigger.service systemd-firstboot.service systemd-networkd-wait-online.service ``` We recommend disabling the unsupported systemd services (see prior section), as well as other services which don't make sense to have in the container for your use case. This results in faster container startup time and possibly better performance. Systemd is great but may be a bit too heavy for your use case. In that case you can use lighter-weight process managers such as Supervisord. You can find examples in the repository. The Nestybox image repositories have a number of system container images that come with Supervisord inside. The Sysbox Quick Start Guide has a section on how to use them." } ]
{ "category": "Runtime", "file_name": "systemd.md", "project_name": "Sysbox", "subcategory": "Container Runtime" }
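As a quick sanity check of the service-masking advice in the section above, the sketch below builds a Systemd-based system container image, runs it with Sysbox, and inspects unit state from inside. The image name and hostname are illustrative assumptions, and it presumes Sysbox is registered with Docker as the `sysbox-runc` runtime; the `systemctl` queries themselves are standard systemd commands.

```bash
# Build an image whose Dockerfile masks the problematic units (see the
# "systemctl mask" line in the section above); the tag is an assumption.
docker build -t my-systemd-syscont .

# Launch it as a system container via the Sysbox runtime.
docker run --runtime=sysbox-runc -it --rm --hostname=syscont my-systemd-syscont

# Inside the container, confirm Systemd came up cleanly:
systemctl is-system-running   # "running" is the goal; "degraded" means some unit failed
systemctl --failed            # should list no units once the known-bad ones are masked
```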
[ { "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our community! The Kubernetes community abides by the CNCF Code of Conduct. Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <!-- If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> - Contributor License Agreement - Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Kubernetes Contributor Guide - Main contributor documentation, or you can just jump directly to the contributing section - Contributor Cheat Sheet - Common resources for existing developers - Mentorship - We have a diverse set of mentorship programs available that are always looking for volunteers! <!-- Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "The `cpu-template-helper` tool is a program designed to assist users with creating and managing their custom CPU templates. The `cpu-template-helper` tool has two sets of commands: template-related commands and fingerprint-related commands. This command dumps guest CPU configuration in the custom CPU template JSON format. ``` cpu-template-helper template dump \\ --output <cpu-config> \\ [--config <firecracker-config>] ``` Users can utilize this as an entry point of a custom CPU template creation to comprehend what CPU configuration are exposed to guests. The guest CPU configuration consists of the following entities: x86_64 CPUID MSRs (Model Specific Registers) aarch64 ARM registers It retrieves the above entities exposed to a guest by applying the same preboot process as Firecacker and capturing them in the state just before booting a guest. More details about the preboot process can be found and . Note Some MSRs and ARM registers are not included in the output, since they are not reasonable to modify with CPU templates. The full list of them can be found in . Note Since the output depends on underlying hardware and software stack (BIOS, CPU, kernel, Firecracker), it is required to dump guest CPU configuration on each combination when creating a custom CPU template targetting them all. This command strips identical entries from multiple guest CPU configuration files generated with the dump command. ``` cpu-template-helper template strip \\ --paths <cpu-config-1> <cpu-config-2> [..<cpu-config-N>] \\ --suffix <suffix> ``` One practical use case of the CPU template feature is to provide a consistent CPU feature set to guests running on multiple CPU models. When creating a custom CPU template for this purpose, it is efficient to focus on the differences in guest CPU configurations across those CPU models. Given that a dumped guest CPU configuration typically amounts to approximately 1,000 lines, this command considerably narrows down the scope to consider. This command verifies that the given custom CPU template is applied correctly. ``` cpu-template-helper template verify \\ --template <cpu-template> [--config <firecracker-config>] ``` Firecracker modifies the guest CPU configuration after the CPU template is applied. Occasionally, due to hardware and/or software limitations, KVM might not set the given configuration. Since Firecracker does not check them at runtime, it is required to ensure that these situations don't happen with their custom CPU templates before deploying it. When a template is specified both through `--template` and in Firecracker configuration file provided via `--config`, the template specified with `--template` takes precedence. Note This command does not ensure that the contents of the template are sensible. Thus, users need to make sure that the template does not have any inconsistent entries and does not crash" }, { "data": "This command not only dumps the guest CPU configuration, but also host information that could affect the validity of custom CPU templates. ``` cpu-template-helper fingerprint dump \\ --output <output-path> \\ [--config <firecracker-config>] ``` Keeping the underlying hardware and software stack updated is essential for maintaining security and leveraging new technologies. On the other hand, since the guest CPU configuration can vary depending on the infrastructure, updating it could lead to a situation where a custom CPU template loses its validity. 
In addition, even if values of the guest CPU configuration don't change, its internal behavior or semantics could still change. For instance, a kernel version update may introduce changes to KVM emulation and a microcode update may alter the behavior of CPU instructions. To ensure awareness of these changes, it is strongly recommended to store the fingerprint file at the time of creating a custom CPU template and to continuously compare it with the current one. This command compares two fingerprint files: one was taken at the time of custom CPU template creation and the other is taken currently. ``` cpu-template-helper fingerprint compare \\ --prev <prev-fingerprint> \\ --curr <curr-fingerprint> \\ --filters <field-1> [..<field-N>] ``` By continously comparing fingerprint files, users can ensure they are aware of any changes that could require revising the custom CPU template. However, it is worth noting that not all of these changes necessarily require a revision, and some changes could be inconsequential to the custom CPU template depending on its use case. To provide users with flexibility in comparing fingerprint files based on situations or use cases, the `--filters` option allows users to select which fields to compare. As examples of when to compare fingerprint files: When bumping the Firecracker version up When bumping the kernel version up When applying a microcode update (or launching a new host (e.g. AWS EC2 metal instance)) This section gives steps of creating and managing a custom CPU template in a sample scenario where the template is designed to provide a consistent set of CPU features to a heterogeneous fleet consisting of multiple CPU models. Run the `cpu-template-helper template dump` command on each CPU model to retrieve guest CPU configuration. Run the `cpu-template-helper template strip` command to remove identical entries across the dumped guest CPU configuration files. Examine the differences of guest CPU configuration in details, determine which CPU features should be presented to guests and draft a custom CPU template. Run the `cpu-template-helper template verify` command to check the created custom CPU template is applied" }, { "data": "Conduct thorough testing of the template as needed to ensure that it does not contain any inconsistent entries and does not lead to guest crashes. Run the `cpu-template-helper fingerprint dump` command on each CPU model at the same time when creating a custom CPU template. Store the dumped fingerprint files together with the custom CPU template. Run the `cpu-template-helper fingerprint dump` command to ensure the template's validity whenever you expect changes to the underlying hardware and software stack. Run the `cpu-template-helper fingerprint compare` command to identify changes of the underlying environment introduced after creating the template. (if changes are detected) Review the identified changes, make necessary revisions to the CPU template, and replace the fingerprint file with the new one. Note It is recommended to review the update process of the underlying stack on your infrastructure. This can help identify points that may require the above validation check. 
| Register name | Index | | | -- | | MSRIA32TSC | 0x00000010 | | MSRARCHPERFMON_PERFCTRn | 0x000000c1 - 0x000000d2 | | MSRARCHPERFMON_EVENTSELn | 0x00000186 - 0x00000197 | | MSRARCHPERFMONFIXEDCTRn | 0x00000309 - 0x0000030b | | MSRCOREPERFFIXEDCTR_CTRL | 0x0000038d | | MSRCOREPERFGLOBALSTATUS | 0x0000038e | | MSRCOREPERFGLOBALCTRL | 0x0000038f | | MSRCOREPERFGLOBALOVF_CTRL | 0x00000390 | | MSRK7EVNTSELn | 0xc0010000 - 0xc0010003 | | MSRK7PERFCTR0 | 0xc0010004 - 0xc0010007 | | MSRF15HPERFCTLn + MSRF15HPERFCTRn | 0xc0010200 - 0xc001020c | | MSRIA32VMX_BASIC | 0x00000480 | | MSRIA32VMXPINBASEDCTLS | 0x00000481 | | MSRIA32VMXPROCBASEDCTLS | 0x00000482 | | MSRIA32VMXEXITCTLS | 0x00000483 | | MSRIA32VMXENTRYCTLS | 0x00000484 | | MSRIA32VMX_MISC | 0x00000485 | | MSRIA32VMXCR0FIXEDn | 0x00000486 - 0x00000487 | | MSRIA32VMXCR4FIXEDn | 0x00000488 - 0x00000489 | | MSRIA32VMXVMCSENUM | 0x0000048a | | MSRIA32VMXPROCBASEDCTLS2 | 0x0000048b | | MSRIA32VMXEPTVPID_CAP | 0x0000048c | | MSRIA32VMXTRUEPINBASED_CTLS | 0x0000048d | | MSRIA32VMXTRUEPROCBASED_CTLS | 0x0000048e | | MSRIA32VMXTRUEEXIT_CTLS | 0x0000048f | | MSRIA32VMXTRUEENTRY_CTLS | 0x00000490 | | MSRIA32VMX_VMFUNC | 0x00000491 | | MSRIA32MCG_STATUS | 0x0000017a | | MSRIA32MCG_CTL | 0x0000017b | | MSRIA32MCGEXTCTL | 0x000004d0 | | HVX64MSRGUESTOS_ID | 0x40000000 | | HVX64MSR_HYPERCALL | 0x40000001 | | HVX64MSRVPINDEX | 0x40000002 | | HVX64MSR_RESET | 0x40000003 | | HVX64MSRVPRUNTIME | 0x40000010 | | HVX64MSRVPASSIST_PAGE | 0x40000073 | | HVX64MSR_SCONTROL | 0x40000080 | | HVX64MSRSTIMER0CONFIG | 0x400000b0 | | HVX64MSRCRASHPn | 0x40000100 - 0x40000104 | | HVX64MSRCRASHCTL | 0x40000105 | | HVX64MSRREENLIGHTENMENTCONTROL | 0x40000106 | | HVX64MSRTSCEMULATION_CONTROL | 0x40000107 | | HVX64MSRTSCEMULATION_STATUS | 0x40000108 | | HVX64MSRSYNDBGCONTROL | 0x400000f1 | | HVX64MSRSYNDBGSTATUS | 0x400000f2 | | HVX64MSRSYNDBGSEND_BUFFER | 0x400000f3 | | HVX64MSRSYNDBGRECV_BUFFER | 0x400000f4 | | HVX64MSRSYNDBGPENDING_BUFFER | 0x400000f5 | | HVX64MSRSYNDBGOPTIONS | 0x400000ff | | HVX64MSRTSCINVARIANT_CONTROL | 0x40000118 | | Register name | ID | | | | | Program Counter | 0x6030000000100040 | | KVMREGARMTIMERCNT | 0x603000000013df1a |" } ]
{ "category": "Runtime", "file_name": "cpu-template-helper.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
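The sample workflow above can be strung together roughly as follows. The file names, the two example host types, and the field names passed to `--filters` are assumptions made for the sketch; the subcommands and flags are the ones documented in this section.

```bash
# 1. On each CPU model in the fleet, dump the guest CPU configuration.
cpu-template-helper template dump --output cpu-config-hostA.json   # run on host type A
cpu-template-helper template dump --output cpu-config-hostB.json   # run on host type B

# 2. Strip entries that are identical across the dumps so only the differences
#    remain to be reviewed while drafting the custom template.
cpu-template-helper template strip --paths cpu-config-hostA.json cpu-config-hostB.json --suffix _stripped

# 3. After drafting custom-template.json by hand, check that it applies cleanly.
cpu-template-helper template verify --template custom-template.json

# 4. Record a fingerprint next to the template, and compare it whenever the host
#    stack (kernel, microcode, Firecracker) is expected to change. The field names
#    given to --filters below are placeholders, not documented values.
cpu-template-helper fingerprint dump --output fingerprint-hostA.json
cpu-template-helper fingerprint compare \
    --prev fingerprint-hostA.json \
    --curr fingerprint-hostA-current.json \
    --filters kernel_version microcode_version
```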
[ { "data": "English | [toc] By default, the pods on the edge node can only access the pods in cloud nodes. For the pods on the edge nodes to communicate with each other directly without going through the cloud, we can define a community. Communities can also be used to organize multiple clusters which need to communicate with each other. Assume there are two clusters, `beijng` and `shanghai`. in the `beijing` cluster, there are there edge nodes of `edge1`, `edge2`, and `edge3` Create the following community to enable the communication between edge pods on the nodes of edge1/2/3 in cluster `beijing` ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-edge-nodes spec: members: beijing.edge1 beijing.edge2 beijing.edge3 ``` Create the following community to enable the communication between `beijing` cluster and `shanghai` cluster ```yaml apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: connectors spec: members: beijing.connector shanghai.connector ``` To facilitate networking management, FabEdge provides a feature called Auto Networking which works under LAN, it uses direct routing to let pods running edge nodes in a LAN to communicate. You need to enable it at installation, check out for how to install fabedge manually, here is only reference values.yaml: ```yaml agent: args: AUTO_NETWORKING: \"true\" # enable auto-networking feature MULTICAST_TOKEN: \"1b1bb567\" # make sure the token is unique, only nodes with the same token can compose a network MULTICAST_ADDRESS: \"239.40.20.81:18080\" # fabedge-agent uses this address to multicast endpoints information ``` PS: Auto networking only works for edge nodes under the same router. When some nodes are in the same LAN and the same community, they will prefer auto networking. It is required to register the endpoint information of each member cluster into the host cluster for cross-cluster communication. Create a cluster resource in the host cluster: ```yaml apiVersion:" }, { "data": "kind: Cluster metadata: name: beijing ``` Get the token ```shell Name: beijing Namespace: Kind: Cluster Spec: Token: eyJhbGciOi--omitted--4PebW68A ``` Deploy FabEdge in the member cluster using the token. ```yaml apiVersion: fabedge.io/v1alpha1 kind: Cluster name: beijing spec: endPoints: id: C=CN, O=fabedge.io, CN=beijing.connector name: beijing.connector nodeSubnets: 10.20.8.12 10.20.8.38 publicAddresses: 10.20.8.12 subnets: 10.233.0.0/18 10.233.70.0/24 10.233.90.0/24 type: Connector token: eyJhbGciOi--omit--4PebW68A ``` In the public cloud, the virtual machine has only private address, which prevents from FabEdge establishing the edge-to-edge tunnels. In this case, the user can apply a public address for the virtual machine and add it to the annotation of the edge node. FabEdge will use this public address to establish the tunnel instead of the private one. ```shell kubectl annotate node edge1 \"fabedge.io/node-public-addresses=60.247.88.194\" ``` GlobalService is used to export a local/standard k8s service (ClusterIP or Headless) for other clusters to access it. And it provides the topology-aware service discovery capability. create a service, e.g. namespace: default, name: web Label it with : `fabedge.io/global-service: true` It can be accessed by the domain name: `web.defaut.svc.global` Normally every fabedge-agent's arguments are the same, but FabEdge allows you configure arguments for a fabedge-agent on a specific node. 
You only need to provide fabedge-agent arguments as annotations on the node, and fabedge-operator will update the fabedge-agent arguments accordingly. For example: ```shell kubectl annotate node edge1 argument.fabedge.io/enable-proxy=false # disable fab-proxy ``` The format of an agent argument in node annotations is \"argument.fabedge.io/argument-name\"; the complete set of fabedge-agent arguments is documented separately. fabedge-operator by default will create a fabedge-agent pod for each edge node, but FabEdge allows you to forbid it on specific nodes. First, you need to change the edge labels; check out the manual installation guide for how to install FabEdge manually, here is only the relevant part of values.yaml: ```yaml cluster: edgeLabels: node-role.kubernetes.io/edge= agent.fabedge.io/enabled=true ``` Assume you have two edge nodes, edge1 and edge2, and you want only edge1 to have a fabedge-agent; execute the commands: ```shell kubectl label node edge1 node-role.kubernetes.io/edge= kubectl label node edge1 agent.fabedge.io/enabled=true ``` Then only edge1 will have a fabedge-agent running on it." } ]
{ "category": "Runtime", "file_name": "user-guide.md", "project_name": "FabEdge", "subcategory": "Cloud Native Network" }
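For the GlobalService feature described in the section above, a concrete command sequence might look like the sketch below. Only the `fabedge.io/global-service` label and the `*.svc.global` domain come from the text; the service name `web`, the `default` namespace, and the busybox test pod are illustrative assumptions, and resolution of the `.global` domain presumes the cluster DNS is set up for FabEdge's global service discovery.

```shell
# Export an existing Service (default/web) as a GlobalService by labelling it.
kubectl label service web -n default fabedge.io/global-service=true

# From a pod in any member cluster, the service should then resolve through the
# global domain mentioned above.
kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- \
    nslookup web.default.svc.global
```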
[ { "data": "All notable changes to this project will be documented in this file. The format is based on , and this project adheres to . : Added ACPI support to Firecracker for x86_64 microVMs. Currently, we pass ACPI tables with information about the available vCPUs, interrupt controllers, VirtIO and legacy x86 devices to the guest. This allows booting kernels without MPTable support. Please see our for more information regarding relevant kernel configurations. : Added support for the Virtual Machine Generation Identifier (VMGenID) device on x86_64 platforms. VMGenID is a virtual device that allows VMMs to notify guests when they are resumed from a snapshot. Linux includes VMGenID support since version 5.18. It uses notifications from the device to reseed its internal CSPRNG. Please refer to and documention for more info on VMGenID. VMGenID state is part of the snapshot format of Firecracker. As a result, Firecracker snapshot version is now 2.0.0. : Changed `--config` parameter of `cpu-template-helper` optional. Users no longer need to prepare kernel, rootfs and Firecracker configuration files to use `cpu-template-helper`. Changed T2CL template to pass through bit 27 and 28 of `MSRIA32ARCH_CAPABILITIES` (`RFDSNO` and `RFDSCLEAR`) since KVM consider they are able to be passed through and T2CL isn't designed for secure snapshot migration between different processors. Changed T2S template to set bit 27 of `MSRIA32ARCHCAPABILITIES` (`RFDSNO`) to 1 since it assumes that the fleet only consists of processors that are not affected by RFDS. : Avoid setting `kvmimmediateexit` to 1 if are already handling an exit, or if the vCPU is stopped. This avoids a spurious KVM exit upon restoring snapshots. : Do not initialize vCPUs in powered-off state upon snapshot restore. No functional change, as vCPU initialization is only relevant for the booted case (where the guest expects CPUs to be powered off). Firecracker's `--start-time-cpu-us` and `--start-time-us` parameters are deprecated and will be removed in v2.0 or later. They are used by the jailer to pass the value that should be subtracted from the (CPU) time, when emitting the `starttimeus` and `starttimecpu_us` metrics. These parameters were never meant to be used by end customers, and we recommend doing any such time adjustments outside Firecracker. Booting with microVM kernels that rely on MPTable on x86_64 is deprecated and support will be removed in v2.0 or later. We suggest to users of Firecracker to use guest kernels with ACPI support. For x86_64 microVMs, ACPI will be the only way Firecracker passes hardware information to the guest once MPTable support is removed. : Added a check in the network TX path that the size of the network frames the guest passes to us is not bigger than the maximum frame the device expects to handle. On the TX path, we copy frames destined to MMDS from guest memory to Firecracker memory. Without the check, a mis-behaving virtio-net driver could cause an increase in the memory footprint of the Firecracker process. Now, if we receive such a frame, we ignore it and increase `Net::txmalformedframes` metric. : Make the first differential snapshot taken after a full snapshot contain only the set of memory pages changed since the full snapshot. Previously, these differential snapshots would contain all memory pages. This will result in potentially much smaller differential snapshots after a full snapshot. : Fix UFFD support not being forward-compatible with new ioctl options introduced in Linux 6.6. 
See also https://github.com/bytecodealliance/userfaultfd-rs/issues/61. : Added support to emit aggregate (minimum/maximum/sum) latency for `VcpuExit::MmioRead`, `VcpuExit::MmioWrite`, `VcpuExit::IoIn` and" }, { "data": "The average for these VM exits is not emitted since it can be deduced from the available emitted metrics. : Added dev-preview support for backing a VM's guest memory by 2M hugetlbfs pages. Please see the for more information : Added block and net device metrics for file/tap access latencies and queue backlog lengths, which can be used to analyse saturation of the Firecracker VMM thread and underlying layers. Queue backlog length metrics are flushed periodically. They can be used to esimtate an average queue length by request by dividing its value by the number of requests served. : Changed microVM snapshot format version strategy. Firecracker snapshot format now has a version that is independent of Firecracker version. The current version of the snapshot format is v1.0.0. From now on, the Firecracker binary will define the snapshot format version it supports and it will only be able to load snapshots with format that is backwards compatible with that version. Users can pass the `--snapshot-version` flag to the Firecracker binary to see its supported snapshot version format. This change renders all previous Firecracker snapshots (up to Firecracker version v1.6.0) incompatible with the current Firecracker version. : Added information about page size to the payload Firecracker sends to the UFFD handler. Each memory region object now contains a `pagesizekib` field. See also the . : Only use memfd to back guest memory if a vhost-user-blk device is configured, otherwise use anonymous private memory. This is because serving page faults of shared memory used by memfd is slower and may impact workloads. : Fixed a bug in the cpu-template-helper that made it panic during conversion of cpu configuration with SVE registers to the cpu template on aarch64 platform. Now cpu-template-helper will print warnings if it encounters SVE registers during the conversion process. This is because cpu templates are limited to only modify registers less than 128 bits. : Fixed a bug in the Firecracker that prevented it to restore snapshots of VMs that had SVE enabled. : Made `PATCH` requests to the `/machine-config` endpoint transactional, meaning Firecracker's configuration will be unchanged if the request returns an error. This fixes a bug where a microVM with incompatible balloon and guest memory size could be booted, due to the check for this condition happening after Firecracker's configuration was updated. : Added a double fork mechanism in the Jailer to avoid setsid() failures occurred while running Jailer as the process group leader. However, this changed the behaviour of Jailer and now the Firecracker process will always have a different PID than the Jailer process. : Added a \"Known Limitations\" section in the Jailer docs to highlight the above change in behaviour introduced in PR#4259. : As a solution to the change in behaviour introduced in PR#4259, provided a mechanism to reliably fetch Firecracker PID. With this change, Firecracker process's PID will always be available in the Jailer's root directory regardless of whether newpidns was set. : Fixed a bug where a client would hang or timeout when querying for an MMDS path whose content is empty, because the 'Content-Length' header field was missing in a response. : Added support for per net device metrics. 
In addition to aggregate metrics `net`, each individual net device will emit metrics under the label `\"net{ifaceid}\"`. E.g. the associated metrics for the endpoint `\"/network-interfaces/eth0\"` will be available under `\"net_eth0\"` in the metrics json object. : Added support for per block device" }, { "data": "In addition to aggregate metrics `block`, each individual block device will emit metrics under the label `\"block{driveid}\"`. E.g. the associated metrics for the endpoint `\"/drives/{driveid}\"` will be available under `\"blockdrive_id\"` in the metrics json object. : Added a new `vm-state` subcommand to `info-vmstate` command in the `snapshot-editor` tool to print MicrovmState of vmstate snapshot file in a readable format. Also made the `vcpu-states` subcommand available on x86_64. : Added source-level instrumentation based tracing. See for more details. , , , , : Added developer preview only (NOT for production use) support for vhost-user block devices. Firecracker implements a vhost-user frontend. Users are free to choose from existing open source backend solutions or their own implementation. Known limitation: snapshotting is not currently supported for microVMs containing vhost-user block devices. See the for details. The device emits metrics under the label `\"vhostuser{device}{driveid}\"`. : The jailer's option `--parent-cgroup` will move the process to that cgroup if no `cgroup` options are provided. Simplified and clarified the removal policy of deprecated API elements to follow semantic versioning 2.0.0. For more information, please refer to . : Refactored error propagation to avoid logging and printing an error on exits with a zero exit code. Now, on successful exit \"Firecracker exited successfully\" is logged. : Removed support for creating Firecracker snapshots targeting older versions of Firecracker. With this change, running 'firecracker --version' will not print the supported snapshot versions. : Allow merging of diff snapshots into base snapshots by directly writing the diff snapshot on top of the base snapshot's memory file. This can be done by setting the `memfilepath` to the path of the pre-existing full snapshot. : `rebase-snap` tool is now deprecated. Users should use `snapshot-editor` for rebasing diff snapshots. : Fixed a bug that ignored the `--show-log-origin` option, preventing it from printing the source code file of the log messages. : Fixed a bug reporting a non-zero exit code on successful shutdown when starting Firecracker with `--no-api`. : Fixed a bug where Firecracker would log \"RunWithApiError error: MicroVMStopped without an error: GenericError\" when exiting after encountering an emulation error. It now correctly prints \"RunWithApiError error: MicroVMStopped with an error: GenericError\". : Fixed a bug introduced in #4047 that limited the `--level` option of logger to Pascal-cased values (e.g. accepting \"Info\", but not \"info\"). It now ignores case again. : Fixed a bug in the asynchronous virtio-block engine that rendered the device non-functional after a PATCH request was issued to Firecracker for updating the path to the host-side backing file of the device. : Fixed a bug where if Firecracker was instructed to take a snapshot of a microvm which itself was restored from a snapshot, specifying `memfilepath` to be the path of the memory file from which the microvm was restored would result in both the microvm and the snapshot being corrupted. 
It now instead performs a \"write-back\" of all memory that was updated since the snapshot was originally loaded. : Added official support for Linux 6.1. See for some security and performance considerations. and : Added `snapshot-editor` tool for modifications of snapshot files. It allows for rebasing of memory snapshot files, printing and removing aarch64 registers from the vmstate and obtaining snapshot version. : Added new fields to the custom CPU templates. (aarch64 only) `vcpu_features` field allows modifications of vCPU features enabled during vCPU initialization. `kvm_capabilities` field allows modifications of KVM capability checks that Firecracker performs during boot. If any of these fields are in use, minimal target snapshot version is restricted to" }, { "data": "Updated deserialization of `bitmap` for custom CPU templates to allow usage of '\\_' as a separator. Changed the strip feature of `cpu-template-helper` tool to operate bitwise. Better logs during validation of CPU ID in snapshot restoration path. Also Firecracker now does not fail if it can't get CPU ID from the host or can't find CPU ID in the snapshot. Changed the serial device to only try to initialize itself if stdin is a terminal or a FIFO pipe. This fixes logged warnings about the serial device failing to initialize if the process is daemonized (in which case stdin is /dev/null instead of a terminal). Changed to show a warning message when launching a microVM with C3 template on a processor prior to Intel Cascade Lake, because the guest kernel does not apply the mitigation against MMIO stale data vulnerability when it is running on a processor that does not enumerate FBSDPNO, PSDPNO and SBDRSSDPNO on IA32ARCHCAPABILITIES MSR. Made Firecracker resize its file descriptor table on process start. It now preallocates the in-kernel fdtable to hold `RLIMIT_NOFILE` many fds (or 2048 if no limit is set). This avoids the kernel reallocating the fdtable during Firecracker operations, resulting in a 30ms to 70ms reduction of snapshot restore times for medium to large microVMs with many devices attached. Changed the dump feature of `cpu-template-helper` tool not to enumerate program counter (PC) on ARM because it is determined by the given kernel image and it is useless in the custom CPU template context. The ability to create snapshots for an older version of Firecracker is now deprecated. As a result, the `version` body field in `PUT` on `/snapshot/create` request in deprecated. Added support for the /dev/userfaultfd device available on linux kernels >= 6.1. This is the default for creating UFFD handlers on these kernel versions. If it is unavailable, Firecracker falls back to the userfaultfd syscall. Deprecated `cpu_template` field in `PUT` and `PATCH` requests on `/machine-config` API, which is used to set a static CPU template. Custom CPU templates added in v1.4.0 are available as an improved iteration of the static CPU templates. For more information about the transition from static CPU templates to custom CPU templates, please refer to . Changed default log level from to . This results in more logs being output by default. Fixed a change in behavior of normalize host brand string that breaks Firecracker on external instances. Fixed the T2A CPU template not to unset the MMX bit (CPUID.80000001h:EDX\\[23\\]) and the FXSR bit (CPUID.80000001h:EDX\\[24\\]). Fixed the T2A CPU template to set the RstrFpErrPtrs bit (CPUID.80000008h:EBX\\[2\\]). 
Fixed a bug where Firecracker would crash during boot if a guest set up a virtio queue that partially overlapped with the MMIO gap. Now Firecracker instead correctly refuses to activate the corresponding virtio device. Fixed the T2CL CPU template to pass through security mitigation bits that are listed by KVM as bits able to be passed through. By making the most use of the available hardware security mitigations on a processor that a guest is running on, the guest might be able to benefit from performance improvements. Fixed the T2S CPU template to set the GDSNO bit of the IA32ARCH_CAPABILITIES MSR to 1 in accordance with an Intel microcode update. To use the template securely, users should apply the latest microcode update on the host. Fixed the spelling of the `nomodule` param passed in the default kernel command line" }, { "data": "This is a breaking change for setups that use the default kernel command line which also depend on being able to load kernel modules at runtime. This may also break setups which use the default kernel command line and which use an init binary that inadvertently depends on the misspelled param (\"nomodules\") being present at the command line, since this param will no longer be passed. Added support for custom CPU templates allowing users to adjust vCPU features exposed to the guest via CPUID, MSRs and ARM registers. Introduced V1N1 static CPU template for ARM to represent Neoverse V1 CPU as Neoverse N1. Added support for the `virtio-rng` entropy device. The device is optional. A single device can be enabled per VM using the `/entropy` endpoint. Added a `cpu-template-helper` tool for assisting with creating and managing custom CPU templates. Set FDPEXCPTNONLY bit (CPUID.7h.0:EBX\\[6\\]) and ZEROFCSFDS bit (CPUID.7h.0:EBX\\[13\\]) in Intel's CPUID normalization process. Fixed feature flags in T2S CPU template on Intel Ice Lake. Fixed CPUID leaf 0xb to be exposed to guests running on AMD host. Fixed a performance regression in the jailer logic for closing open file descriptors. Related to: . A race condition that has been identified between the API thread and the VMM thread due to a misconfiguration of the `apieventfd`. Fixed CPUID leaf 0x1 to disable perfmon and debug feature on x86 host. Fixed passing through cache information from host in CPUID leaf 0x80000006. Fixed the T2S CPU template to set the RRSBA bit of the IA32ARCHCAPABILITIES MSR to 1 in accordance with an Intel microcode update. Fixed the T2CL CPU template to pass through the RSBA and RRSBA bits of the IA32ARCHCAPABILITIES MSR from the host in accordance with an Intel microcode update. Fixed passing through cache information from host in CPUID leaf 0x80000005. Fixed the T2A CPU template to disable SVM (nested virtualization). Fixed the T2A CPU template to set EferLmsleUnsupported bit (CPUID.80000008h:EBX\\[20\\]), which indicates that EFER\\[LMSLE\\] is not supported. Introduced T2CL (Intel) and T2A (AMD) CPU templates to provide instruction set feature parity between Intel and AMD CPUs when using these templates. Added Graviton3 support (c7g instance type). Improved error message when invalid network backend provided. Improved TCP throughput by between 5% and 15% (depending on CPU) by using scatter-gather I/O in the net device's TX path. Upgraded Rust toolchain from 1.64.0 to 1.66.0. Made seccompiler output bit-reproducible. Fixed feature flags in T2 CPU template on Intel Ice Lake. Added a new CPU template called `T2S`. 
This exposes the same CPUID as `T2` to the Guest and also overwrites the `ARCH_CAPABILITIES` MSR to expose a reduced set of capabilities. With regards to hardware vulnerabilities and mitigations, the Guest vCPU will apear to look like a Skylake CPU, making it safe to snapshot uVMs running on a newer host CPU (Cascade Lake) and restore on a host that has a Skylake CPU. Added a new CLI option `--metrics-path PATH`. It accepts a file parameter where metrics will be sent to. Added baselines for m6i.metal and m6a.metal for all long running performance tests. Releases now include debuginfo files. Changed the jailer option `--exec-file` to fail if the filename does not contain the string `firecracker` to prevent from running non-firecracker binaries. Upgraded Rust toolchain from 1.52.1 to 1.64.0. Switched to specifying our dependencies using caret requirements instead of comparison requirements. Updated all dependencies to their respective newest" }, { "data": "Made the `T2` template more robust by explicitly disabling additional CPUID flags that should be off but were missed initially or that were not available in the spec when the template was created. Now MAC address is correctly displayed when queried with GET `/vm/config` if left unspecified in both pre and post snapshot states. Fixed a self-DoS scenario in the virtio-queue code by reporting and terminating execution when the number of available descriptors reported by the driver is higher than the queue size. Fixed the bad handling of kernel cmdline parameters when init arguments were provided in the `boot_args` field of the JSON body of the PUT `/boot-source` request. Fixed a bug on ARM64 hosts where the upper 64bits of the V0-V31 FL/SIMD registers were not saved correctly when taking a snapshot, potentially leading to data loss. This change invalidates all ARM64 snapshots taken with versions of Firecracker \\<= 1.1.3. Improved stability and security when saving CPU MSRs in snapshots. The API `PATCH` methods for `machine-config` can now be used to reset the `cpu_template` to `\"None\"`. Until this change there was no way to reset the `cpu_template` once it was set. Added a `rebase-snap` tool for rebasing a diff snapshot over a base snapshot. Mmds version is persisted across snapshot-restore. Snapshot compatibility is preserved bidirectionally, to and from a Firecracker version that does not support persisting the Mmds version. In such cases, the default V1 option is used. Added `--mmds-size-limit` for limiting the mmds data store size instead of piggy-backing on `--http-api-max-payload-size`. If left unconfigured it defaults to the value of `--http-api-max-payload-size`, to provide backwards compatibility. Added optional `mem_backend` body field in `PUT` requests on `/snapshot/load`. This new parameter is an object that defines the configuration of the backend responsible for handling memory loading during snapshot restore. The `membackend` parameter contains `backendtype` and `backend_path` required fields. `backend_type` is an enum that can take either `File` or `Uffd` as value. Interpretation of `backend_path` field depends on the value of `backend_type`. If `File`, then the user must provide the path to file that contains the guest memory to be loaded. Otherwise, if `backend_type` is `Uffd`, then `backend_path` is the path to a unix domain socket where a custom page fault handler process is listening and expecting a UFFD to be sent by Firecracker. The UFFD is used to handle the guest memory page faults in the separate process. 
Added logging for the snapshot/restore and async block device IO engine features to indicate they are in development preview. The API `PATCH` method for `/machine-config` can be now used to change `trackdirtypages` on aarch64. MmdsV2 is now Generally Available. MmdsV1 is now deprecated and will be removed in Firecracker v2.0.0. Use MmdsV2 instead. Deprecated `memfilepath` body field in `PUT` on `/snapshot/load` request. Fixed inconsistency that allowed the start of a microVM from a JSON file without specifying the `vcpucount` and `memsize_mib` parameters for `machine-config` although they are mandatory when configuring via the API. Now these fields are mandatory when specifying `machine-config` in the JSON file and when using the `PUT` request on `/machine-config`. Fixed inconsistency that allowed a user to specify the `cpu_template` parameter and set `smt` to `True` in `machine-config` when starting from a JSON file on aarch64 even though they are not permitted when using `PUT` or `PATCH` in the API. Now Firecracker will return an error on aarch64 if `smt` is set to `True` or if `cpu_template` is" }, { "data": "Fixed inconsistent behaviour of the `PUT` method for `/machine-config` that would reset the `trackdirtypages` parameter to `false` if it was not specified in the JSON body of the request, but left the `cpu_template` parameter intact if it was not present in the request. Now a `PUT` request for `/machine-config` will reset all optional parameters (`smt`, `cpu_template`, `trackdirtypages`) to their default values if they are not specified in the `PUT` request. Fixed incosistency in the swagger definition with the current state of the `/vm/config` endpoint. Added jailer option `--parent-cgroup <relative_path>` to allow the placement of microvm cgroups in custom cgroup nested hierarchies. The default value is `<exec-file>` which is backwards compatible to the behavior before this change. Added jailer option `--cgroup-version <1|2>` to support running the jailer on systems that have cgroup-v2. Default value is `1` which means that if `--cgroup-version` is not specified, the jailer will try to create cgroups on cgroup-v1 hierarchies only. Added `--http-api-max-payload-size` parameter to configure the maximum payload size for PUT and PATCH requests. Limit MMDS data store size to `--http-api-max-payload-size`. Cleanup all environment variables in Jailer. Added metrics for accesses to deprecated HTTP and command line API endpoints. Added permanent HTTP endpoint for `GET` on `/version` for getting the Firecracker version. Added `--metadata` parameter to enable MMDS content to be supplied from a file allowing the MMDS to be used when using `--no-api` to disable the API server. Checksum file for the release assets. Added support for custom headers to MMDS requests. Accepted headers are: `X-metadata-token`, which accepts a string value that provides a session token for MMDS requests; and `X-metadata-token-ttl-seconds`, which specifies the lifetime of the session token in seconds. Support and validation for host and guest kernel 5.10. A . Added `io_engine` to the pre-boot block device configuration. Possible values: `Sync` (the default option) or `Async` (only available for kernels newer than 5.10.51). The `Async` variant introduces a block device engine that uses io_uring for executing requests asynchronously, which is in developer preview (NOT for production use). See `docs/api_requests/block-io-engine.md`. 
Added `block.ioenginethrottled_events` metric for measuring the number of virtio events throttled because of the IO engine. New optional `version` field to PUT requests towards `/mmds/config` to configure MMDS version. Accepted values are `V1` and `V2` and default is `V1`. MMDS `V2` is developer preview only (NOT for production use) and it does not currently work after snapshot load. Mandatory `network_interfaces` field to PUT requests towards `/mmds/config` which contains a list of network interface IDs capable of forwarding packets to MMDS. Removed the `--node` jailer parameter. Deprecated `vsock_id` body field in `PUT`s on `/vsock`. Removed the deprecated the `--seccomp-level parameter`. `GET` requests to MMDS require a session token to be provided through `X-metadata-token` header when using V2. Allow `PUT` requests to MMDS in order to generate a session token to be used for future `GET` requests when version 2 is used. Remove `allowmmdsrequests` field from the request body that attaches network interfaces. Specifying interfaces that allow forwarding requests to MMDS is done by adding the network interface's ID to the `network_interfaces` field of PUT `/mmds/config` request's body. Renamed `/machine-config` `ht_enabled` to `smt`. `smt` field is now optional on PUT `/machine-config`, defaulting to `false`. Configuring `smt: true` on aarch64 via the API is forbidden. GET `/vm/config` was returning a default config object after restoring from a snapshot. It now correctly returns the config of the original microVM, except for bootconfig and the cputemplate and smt fields of the machine config, which are currently lost. Fixed incorrect propagation of init parameters in kernel commandline. Related to: . Adapt T2 and C3 CPU templates for kernel" }, { "data": "Firecracker was not previously masking some CPU features of the host or emulated by KVM, introduced in more recent kernels: `umip`, `vmx`, `avx512_vnni`. Fix jailer's cgroup implementation to accept properties that contain multiple dots. Added devtool build `--ssh-keys` flag to support fetching from private git repositories. Added option to configure block device flush. Added `--new-pid-ns` flag to the Jailer in order to spawn the Firecracker process in a new PID namespace. Added API metrics for `GET`, `PUT` and `PATCH` requests on `/mmds` endpoint. Added `--describe-snapshot` flag to Firecracker to fetch the data format version of a snapshot state file provided as argument. Added `--no-seccomp` parameter for disabling the default seccomp filters. Added `--seccomp-filter` parameter for supplying user-provided, custom filters. Added the `seccompiler-bin` binary that is used to compile JSON seccomp filters into serialized BPF for Firecracker consumption. Snapshotting support for GICv2 enabled guests. Added `devtool install` to deploy built binaries in `/usr/local/bin` or a given path. Added code logic to send `VIRTIOVSOCKEVENTTRANSPORTRESET` on snapshot creation, when the Vsock device is active. The event will close active connections on the guest. Added `GET` request on `/vm/config` that provides full microVM configuration as a JSON HTTP response. Added `--resource-limit` flag to jailer to limit resources such as: number of file descriptors allowed at a time (with a default value of 2048) and maximum size of files created by the process. Changed Docker images repository from DockerHub to Amazon ECR. Fixed off-by-one error in virtio-block descriptor address validation. 
Changed the `PATCH` request on `/balloon/statistics` to schedule the first statistics update immediately after processing the request. Deprecated the `--seccomp-level parameter`. It will be removed in a future release. Using it logs a runtime warning. Experimental gnu libc builds use empty default seccomp filters, allowing all system calls. Fixed non-compliant check for the RTC device ensuring a fixed 4-sized data buffer. Unnecessary interrupt assertion was removed from the RTC. However, a dummy interrupt is still allocated for snapshot compatibility reasons. Fixed the SIGPIPE signal handler so Firecracker no longer exits. The signal is still recorded in metrics and logs. Fixed ballooning API definitions by renaming all fields which mentioned \"MB\" to use \"MiB\" instead. Snapshot related host files (vm-state, memory, block backing files) are now flushed to their backing mediums as part of the CreateSnapshot operation. Fixed the SSBD mitigation not being enabled on `aarch64` with the provided `prod-host-setup.md`. Fixed the balloon statistics not working after a snapshot restore event. The `utctimestampms` now reports the timestamp in ms from the UTC UNIX Epoch, as the name suggests. It was previously using a monotonic clock with an undefined starting point. Added optional `resume_vm` field to `/snapshot/load` API call. Added support for block rate limiter PATCH. Added devtool test `-c|--cpuset-cpus` flag for cpus confinement when tests run. Added devtool test `-m|--cpuset-mems` flag for memory confinement when tests run. Added the virtio traditional memory ballooning device. Added a mechanism to handle vCPU/VMM errors that result in process termination. Added incremental guest memory snapshot support. Added aarch64 snapshot support. Change the information provided in `DescribeInstance` command to provide microVM state information (Not started/Running/Paused) instead of whether it's started or not. Removed the jailer `--extra-args` parameter. It was a noop, having been replaced by the `--` separator for extra arguments. Changed the output of the `--version` command line parameter to include a list of supported snapshot data format versions for the firecracker binary. Increased the maximum number of virtio devices from 11 to 19. Added a new check that prevents creating" }, { "data": "snapshots when more than 11 devices are attached. If the stdout buffer is full and non-blocking, the serial writes no longer block. Any new bytes will be lost, until the buffer is freed. The device also logs these errors and increments the `uart.error_count` metric for each lost byte. Fixed inconsistency in YAML file InstanceInfo definition Added metric for throttled block device events. Added metrics for counting rate limiter throttling events. Added metric for counting MAC address updates. Added metrics for counting TAP read and write errors. Added metrics for counting RX and TX partial writes. Added metrics that measure the duration of pausing and resuming the microVM, from the VMM perspective. Added metric for measuring the duration of the last full/diff snapshot created, from the VMM perspective. Added metric for measuring the duration of loading a snapshot, from the VMM perspective. Added metrics that measure the duration of pausing and resuming the microVM, from the API (user) perspective. Added metric for measuring the duration of the last full/diff snapshot created, from the API (user) perspective. Added metric for measuring the duration of loading a snapshot, from the API (user) perspective. 
Added `trackdirtypages` field to `machine-config`. If enabled, Firecracker can create incremental guest memory snapshots by saving the dirty guest pages in a sparse file. Added a new API call, `PATCH /vm`, for changing the microVM state (to `Paused` or `Resumed`). Added a new API call, `PUT /snapshot/create`, for creating a full or diff snapshot. Added a new API call, `PUT /snapshot/load`, for loading a snapshot. Added new jailer command line argument `--cgroup` which allow the user to specify the cgroups that are going to be set by the Jailer. Added full support for AMD CPUs (General Availability). More details . Boot time on AMD achieves the desired performance (i.e under 150ms). The logger `level` field is now case-insensitive. Disabled boot timer device after restoring a snapshot. Enabled boot timer device only when specifically requested, by using the `--boot-timer` dedicated cmdline parameter. firecracker and jailer `--version` now gets updated on each devtool build to the output of `git describe --dirty`, if the git repo is available. MicroVM process is only attached to the cgroups defined by using `--cgroups` or the ones defined indirectly by using `--node`. Changed `devtool build` to build jailer binary for `musl` only targets. Building jailer binary for `non-musl` targets have been removed. Added a new API call, `PUT /metrics`, for configuring the metrics system. Added `app_name` field in InstanceInfo struct for storing the application name. New command-line parameters for `firecracker`, named `--log-path`, `--level`, `--show-level` and `--show-log-origin` that can be used for configuring the Logger when starting the process. When using this method for configuration, only `--log-path` is mandatory. Added a for updating the dev container image. Added a new API call, `PUT /mmds/config`, for configuring the `MMDS` with a custom valid link-local IPv4 address. Added experimental JSON response format support for MMDS guest applications requests. Added metrics for the vsock device. Added `devtool strip` command which removes debug symbols from the release binaries. Added the `txmalformedframes` metric for the virtio net device, emitted when a TX frame missing the VNET header is encountered. Added `--version` flag to both Firecracker and Jailer. Return `405 Method Not Allowed` MMDS response for non HTTP `GET` MMDS requests originating from guest. Fixed folder permissions in the jail (#1802). Any number of whitespace characters are accepted after \":\" when parsing HTTP" }, { "data": "Potential panic condition caused by the net device expecting to find a VNET header in every frame. Potential crash scenario caused by \"Content-Length\" HTTP header field accepting negative values. Fixed #1754 - net: traffic blocks when running ingress UDP performance tests with very large buffers. Updated CVE-2019-3016 mitigation information in In case of using an invalid JSON as a 'config-file' for Firecracker, the process will exit with return code 152. Removed the `testrun.sh` wrapper. Removed `metrics_fifo` field from the logger configuration. Renamed `logfifo` field from LoggerConfig to `logpath` and `metrics_fifo` field from MetricsConfig to `metrics_path`. `PATCH /drives/{id}` only allowed post-boot. Use `PUT` for pre-boot updates to existing configurations. `PATCH /network-interfaces/{id}` only allowed post-boot. Use `PUT` for pre-boot updates to existing configurations. 
Changed returned status code from `500 Internal Server Error` to `501 Not Implemented`, for queries on the MMDS endpoint in IMDS format, when the requested resource value type is unsupported. Allowed the MMDS data store to be initialized with all supported JSON types. Retrieval of these values within the guest, besides String, Array, and Dictionary, is only possible in JSON mode. `PATCH` request on `/mmds` before the data store is initialized returns `403 BadRequest`. Segregated MMDS documentation in MMDS design documentation and MMDS user guide documentation. Support for booting with an initial RAM disk image. This image can be specified through the new `initrd_path` field of the `/boot-source` API request. Fixed #1469 - Broken GitHub location for Firecracker release binary. The jailer allows changing the default api socket path by using the extra arguments passed to firecracker. Fixed #1456 - Occasional KVMEXITSHUTDOWN and bad syscall (14) during VM shutdown. Updated the production host setup guide with steps for addressing CVE-2019-18960. The HTTP header parsing is now case insensitive. The `putapirequests` and `patchapirequests` metrics for net devices were un-swapped. Removed redundant `--seccomp-level` jailer parameter since it can be simply forwarded to the Firecracker executable using \"end of command options\" convention. Removed `memory.dirty_pages` metric. Removed `options` field from the logger configuration. Decreased release binary size by ~15%. Changed default API socket path to `/run/firecracker.socket`. This path also applies when running with the jailer. Disabled KVM dirty page tracking by default. Removed redundant RescanBlockDevice action from the /actions API. The functionality is available through the PATCH /drives API. See `docs/api_requests/patch-block.md`. Added support for GICv2. Fixed CVE-2019-18960 - Fixed a logical error in bounds checking performed on vsock virtio descriptors. Fixed #1283 - Can't start a VM in AARCH64 with vcpus number more than 16. Fixed #1088 - The backtrace are printed on `panic`, no longer causing a seccomp fault. Fixed #1375 - Change logger options type from `Value` to `Vec<LogOption>` to prevent potential unwrap on None panics. Fixed #1436 - Raise interrupt for TX queue used descriptors Fixed #1439 - Prevent achieving 100% cpu load when the net device rx is throttled by the ratelimiter Fixed #1437 - Invalid fields in rate limiter related API requests are now failing with a proper error message. Fixed #1316 - correctly determine the size of a virtio device backed by a block device. Fixed #1383 - Log failed api requests. Decreased release binary size by 10%. New command-line parameter for `firecracker`, named `--no-api`, which will disable the API server thread. If set, the user won't be able to send any API requests, neither before, nor after the vm has booted. It must be paired with `--config-file` parameter. Also, when API server is disabled, MMDS is no longer available" }, { "data": "New command-line parameter for `firecracker`, named `--config-file`, which represents the path to a file that contains a JSON which can be used for configuring and starting a microVM without sending any API requests. The jailer adheres to the \"end of command options\" convention, meaning all parameters specified after `--` are forwarded verbatim to Firecracker. Added `KVM_PTP` support to the recommended guest kernel config. Added entry in FAQ.md for Firecracker Guest timekeeping. 
Vsock API call: `PUT /vsocks/{id}` changed to `PUT /vsock` and no longer appear to support multiple vsock devices. Any subsequent calls to this API endpoint will override the previous vsock device configuration. Removed unused 'Halting' and 'Halted' instance states. Vsock host-initiated connections now implement a trivial handshake protocol. See the for details. Related to: , , Fixed serial console on aarch64 (GitHub issue #1147). Upon panic, the terminal is now reset to canonical mode. Explicit error upon failure of vsock device creation. The failure message returned by an API call is flushed in the log FIFOs. Insert virtio devices in the FDT in order of their addresses sorted from low to high. Enforce the maximum length of the network interface name to be 16 chars as specified in the Linux Kernel. Changed the vsock property `id` to `vsock_id` so that the API client can be successfully generated from the swagger definition. New device: virtio-vsock, backed by Unix domain sockets (GitHub issue #650). See `docs/vsock.md`. No error is thrown upon a flush metrics intent if logger has not been configured. Updated the documentation for integration tests. Fixed high CPU usage before guest network interface is brought up (GitHub issue #1049). Fixed an issue that caused the wrong date (month) to appear in the log. Fixed a bug that caused the seccomp filter to reject legit syscalls in some rare cases (GitHub issue #1206). Docs: updated the production host setup guide. Docs: updated the rootfs and kernel creation guide. Removed experimental support for vhost-based vsock devices. New API call: `PATCH /machine-config/`, used to update VM configuration, before the microVM boots. Added an experimental swagger definition that includes the specification for the vsock API call. Added a signal handler for `SIGBUS` and `SIGSEGV` that immediately terminates the process upon intercepting the signal. Added documentation for signal handling utilities. Added \\[alpha\\] aarch64 support. Added metrics for successful read and write operations of MMDS, Net and Block devices. `vcpucount`, `memsizemib` and `htenabled` have been changed to be mandatory for `PUT` requests on `/machine-config/`. Disallow invalid seccomp levels by exiting with error. Incorrect handling of bind mounts within the jailed rootfs. Corrected the guide for `Alpine` guest setup. Added \\[alpha\\] AMD support. New `devtool` command: `prepare_release`. This updates the Firecracker version, crate dependencies and credits in preparation for a new release. New `devtool` command: `tag`. This creates a new git tag for the specified release number, based on the changelog contents. New doc section about building with glibc. Dropped the JSON-formatted `context` command-line parameter from Firecracker in favor of individual classic command-line parameters. When running with `jailer` the location of the API socket has changed to `<jail-root-path>/api.socket` (API socket was moved inside the jail). `PUT` and `PATCH` requests on `/mmds` with data containing any value type other than `String`, `Array`, `Object` will returns status code 400. Improved multiple error messages. Removed all kernel modules from the recommended kernel config. Corrected the seccomp filter when building with glibc. Removed the `seccomp.bad_syscalls` metric. 
Corrected the conditional compilation of the seccomp rule for" }, { "data": "A `madvise` call issued by the `musl` allocator was added to the seccomp allow list to prevent Firecracker from terminating abruptly when allocating memory in certain conditions. New API action: SendCtrlAltDel, used to initiate a graceful shutdown, if the guest has driver support for i8042 and AT Keyboard. See for details. New metric counting the number of egress packets with a spoofed MAC: `net.txspoofedmac_count`. New API call: `PATCH /network-interfaces/`, used to update the rate limiters on a network interface, after the start of a microVM. Added missing `vmm_version` field to the InstanceInfo API swagger definition, and marked several other mandatory fields as such. New default command line for guest kernel: `reboot=k panic=1 pci=off nomodules 8250.nr_uarts=0 i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd`. virtio-blk: VIRTIOBLKT_FLUSH now working as expected. Vsock devices can be attached when starting Firecracker using the jailer. Vsock devices work properly when seccomp filtering is enabled. Documentation for development environment setup on AWS in `dev-machine-setup.md`. Documentation for microVM networking setup in `docs/network-setup.md`. Limit the maximum supported vCPUs to 32. Log the app version when the `Logger` is initialized. Pretty print panic information. Firecracker terminates with exit code 148 when a syscall which is not present in the allow list is intercepted. Fixed build with the `vsock` feature. Documentation for Logger API Requests in `docs/api_requests/logger.md`. Documentation for Actions API Requests in `docs/api_requests/actions.md`. Documentation for MMDS in `docs/mmds.md`. Flush metrics on request via a PUT `/actions` with the `action_type` field set to `FlushMetrics`. Updated the swagger definition of the `Logger` to specify the required fields and provide default values for optional fields. Default `seccomp-level` is `2` (was previously 0). API Resource IDs can only contain alphanumeric characters and underscores. Seccomp filters are now applied to all Firecracker threads. Enforce minimum length of 1 character for the jailer ID. Exit with error code when starting the jailer process fails. Removed `InstanceHalt` from the list of possible actions. The `/logger` API has a new field called `options`. This is an array of strings that specify additional logging configurations. The only supported value is `LogDirtyPages`. When the `LogDirtyPages` option is configured via `PUT /logger`, a new metric called `memory.dirty_pages` is computed as the number of pages dirtied by the guest since the last time the metric was flushed. Log messages on both graceful and forceful termination. Availability of the list of dependencies for each commit inside the code base. Documentation on vsock experimental feature and host setup recommendations. `PUT` requests on `/mmds` always return 204 on success. `PUT` operations on `/network-interfaces` API resources no longer accept the previously required `state` parameter. The jailer starts with `--seccomp-level=2` (was previously 0) by default. Log messages use `anonymous-instance` as instance id if none is specified. Fixed crash upon instance start on hosts without 1GB huge page support. Fixed \"fault_message\" inconsistency between Open API specification and code base. Ensure MMDS compatibility with C5's IMDS implementation. Corrected the swagger specification to ensure `OpenAPI 2.0` compatibility. 
Apache-2.0 license Docs: - - - Experimental vhost-based vsock implementation. Improved MMDS network stack performance. If the logging system is not yet initialized (via `PUT /logger`), log events are now sent to stdout/stderr. Moved the `instanceinfofails` metric under `getapirequests` Improved and added links to more detailed information, now featured in subject-specific docs. Fixed bug in the MMDS network stack, that caused some RST packets to be sent without a destination. Fixed bug in `PATCH /drives`, whereby the ID in the path was not checked against the ID in the body. The Swagger definition was corrected. Each Firecracker process has an associated microVM Metadata Store" }, { "data": "Its contents can be configured using the `/mmds` API resource. The boot source is specified only with the `kernelimagepath` and the optional parameter `boot_args`. All other fields are removed. The `pathonhost` property in the drive specification is now marked as mandatory. PATCH drive only allows patching/changing the `pathonhost` property. All PUT and PATCH requests return the status code 204. CPUID brand string (aka model name) now includes the host CPU frequency. API requests which add guest network interfaces have an additional parameter, `allowmmdsrequests` which defaults to `false`. Stopping the guest (e.g. using the `reboot` command) also terminates the Firecracker process. When the Firecracker process ends for any reason, (other than `kill -9`), metrics are flushed at the very end. On startup `jailer` closes all inherited file descriptors based on `sysconf(SCOPEN_MAX)` except input, output and error. The microVM ID prefixes each Firecracker log line. This ID also appears in the process `cmdline` so it's now possible to `ps | grep <ID>` for it. Seccomp filtering is configured via the `--seccomp-level` jailer parameter. Firecracker logs the starting addresses of host memory areas provided as guest memory slots to KVM. The metric `panic_count` gets incremented to signal that a panic has occurred. Firecracker logs a backtrace when it crashes following a panic. Added basic instrumentation support for measuring boot time. `StartInstance` is a synchronous API request (it used to be an asynchronous request). Ensure that fault messages sent by the API have valid JSON bodies. Use HTTP response code 500 for internal Firecracker errors, and 400 for user errors on InstanceStart. Serialize the machine configuration fields to the correct data types (as specified in the Swagger definition). NUMA node assignment is properly enforced by the jailer. The `isrootdevice` and `isreadonly` properties are now marked as required in the Swagger definition of `Drive` object properties. `GET` requests on the `/actions` API resource are no longer supported. The metrics associated with asynchronous actions have been removed. Remove the `action_id` parameter for `InstanceStart`, both from the URI and the JSON request body. The jailer can now be configured to enter a preexisting network namespace, and to run as a daemon. Enabled PATCH operations on `/drives` resources. The microVM `id` supplied to the jailer may now contain alphanumeric characters and hyphens, up to a maximum length of 64 characters. Replaced the `permissions` property of `/drives` resources with a boolean. Removed the `state` property of `/drives` resources. Rate limiting functionality allows specifying an initial one time burst size. Firecracker can now boot from an arbitrary boot partition by specifying its unique id in the driver's API call. 
Block device rescan is triggered via a PUT `/actions` with the drive ID in the action body's `payload` field and the `action_type` field set to `BlockDeviceRescan`. Removed `noapic` from the default guest kernel command line. The `action_id` parameter is no longer required for synchronous PUT requests to `/actions`. PUT requests are no longer allowed on `/drives` resources after the guest has booted. Fixed guest instance kernel loader to accelerate vCPUs launch and consequently guest kernel boot. Fixed network emulation to improve IO performance. Firecracker uses two different named pipes to record human readable logs and metrics, respectively. Seccomp filtering can be enabled via setting the `USE_SECCOMP` environment variable. It is possible to supply only a partial specification when attaching a rate limiter (i.e. just the bandwidth or ops parameter). Errors related to guest network interfaces are now more" }, { "data": "Fixed a bug that was causing Firecracker to panic whenever a `PUT` request was sent on an existing network interface. The `id` parameter of the `jailer` is required to be an RFC 4122-compliant UUID. Fixed an issue which caused the network RX rate limiter to be more restrictive than intended. API requests which contain unknown fields will generate an error. Fixed an issue related to high CPU utilization caused by improper `KVM PIT` configuration. It is now possible to create more than one network tun/tap interface inside a jailed Firecracker. Added metrics for API requests, VCPU and device actions for the serial console (`UART`), keyboard (`i8042`), block and network devices. Metrics are logged every 60 seconds. A CPU features template for C3 is available, in addition to the one for T2. Seccomp filters restrict Firecracker from calling any other system calls than the minimum set it needs to function properly. The filters are enabled by setting the `USE_SECCOMP` environment variable to 1 before running Firecracker. Firecracker can be started by a new binary called `jailer`. The jailer takes as command line arguments a unique ID, the path to the Firecracker binary, the NUMA node that Firecracker will be assigned to and a `uid` and `gid` for Firecracker to run under. It sets up a `chroot` environment and a `cgroup`, and calls exec to morph into Firecracker. In case of failure, the metrics and the panic location are logged before aborting. Metric values are reset with every flush. `CPUTemplate` is now called `CpuTemplate` in order to work seamlessly with the swagger code generator for Go. `firecracker-beta.yaml` is now called `firecracker.yaml`. Handling was added for several untreated KVM exit scenarios, which could have led to panic. Fixed a bug that caused Firecracker to crash when attempting to disable the `IA32DEBUGINTERFACE MSR` flag in the T2 CPU features. Removed a leftover file generated by the logger unit tests. Removed `firecracker-v1.0.yaml`. The CPU Template can be set with an API call on `PUT /machine-config`. The only available template is T2. Hyperthreading can be enabled/disabled with an API call on `PUT /machine-config`. By default, hyperthreading is disabled. Added boot time performance test (`tests/performance/test_boottime.py`). Added Rate Limiter for VirtIO/net and VirtIO/net devices. The Rate Limiter uses two token buckets to limit rate on bytes/s and ops/s. The rate limiter can be (optionally) configured per drive with a `PUT` on `/drives/{drive_id}` and per network interface with a `PUT` on `/network-interface/{iface_id}`. 
Implemented pre-boot PUT updates for `/boot-source`, `/drives`, `/network-interfaces` and `/vsock`. Added integration tests for `PUT` updates. Moved the API definition (`swagger/firecracker-beta.yaml`) to the `api_server` crate. Removed `\"console=ttyS0\"` and added `\"8250.nr_uarts=0\"` to the default kernel command line to decrease the boot time. Changed the CPU topology to have all logical CPUs on a single socket. Removed the upper bound on CPU count as with musl there is no good way to get the total number of logical processors on a host. Build time tests now print the full output of commands. Disabled the Performance Monitor Unit and the Turbo Boost. Check the expected KVM capabilities before starting the VM. Logs now have timestamps. `testrun.sh` can run on platforms with more than one package manager by setting the package manager via a command line parameter (`-p`). Allow correct set up of multiple network-interfaces with auto-generated MAC. Fixed sporadic bug in VirtIO which was causing lost packages. Don't allow `PUT` requests with empty body on `/machine-config`. Deny `PUT` operations after the microvm boots (exception: the temporarily fix for live resize of block" }, { "data": "Removed examples crate. This used to have a Python example of starting Firecracker. This is replaced by `test_api.py` integration tests. Removed helper scripts for getting coverage and coding style errors. These were replaced by `testcoverage.py` and `teststyle.py` test integration tests. Removed `--vmm-no-api` command line option. Firecracker can only be started via the API. Users can interrogate the Machine Configuration (i.e. vcpu count and memory size) using a `GET` request on `/machine-config`. The logging system can be configured through the API using a `PUT` on `/logger`. Block devices support live resize by calling `PUT` with the same parameters as when the block was created. Release builds have Link Time Optimization (LTO) enabled. Firecracker is built with `musl`, resulting in a statically linked binary. More in-tree integration tests were added as part of the continuous integration system. The vcpu count is enforced to `1` or an even number. The Swagger definition of rate limiters was updated. Syslog-enabled logs were replaced with a host-file backed mechanism. The host topology of the CPU and the caches is not leaked into the microvm anymore. Boot time was improved by advertising the availability of the TSC deadline timer. Fixed an issue which prevented Firecracker from working on 4.14 (or newer) host kernels. Specifying the MAC address for an interface through the API is optional. Removed support for attaching vsock devices. Removed support for building Firecracker with glibc. Users can now interrogate Instance Information (currently just instance state) through the API. Renamed `api/swagger/all.yaml` to `api/swagger/firecracker-v1.0.yaml` which specifies targeted API support for Firecracker v1.0. Renamed `api/swagger/firecracker-v0.1.yaml` to `api/swagger/firecracker-beta.yaml` which specifies the currently supported API. Users can now enforce that an emulated block device is read-only via the API. To specify whether a block device is read-only or read-write, an extra \"permissions\" field was added to the Drive definition in the API. The root filesystem is automatically mounted in the guest OS as `ro`/`rw` according to the specified \"permissions\". It's the responsibility of the user to mount any other read-only block device as such within the guest OS. 
Users can now stop the guest VM using the API. Actions of type `InstanceHalt` are now supported via the API. Added support for `getDeviceID()` in `virtIO-block`. Without this, the guest Linux kernel would complain at boot time that the operation is unsupported. `stdin` control is returned to the Firecracker process when guest VM is inactive. Raw mode `stdin` is forwarded to the guest OS when guest VM is running. Removed `api/swagger/actions.yaml`. Removed `api/swagger/devices.yaml`. Removed `api/swagger/firecracker-mvp.yaml`. Removed `api/swagger/limiters.yaml`. Users can now specify the MAC address of a guest network interface via the `PUT` network interface API request. Previously, the guest MAC address parameter was ignored. Fixed a guest memory allocation issue, which previously led to a potentially significant memory chunk being wasted. Fixed an issue which caused compilation problems, due to a compatibility breaking transitive dependency in the tokio suite of crates. One-process virtual machine manager (one Firecracker per microVM). RESTful API running on a unix socket. The API supported by v0.1 can be found at `api/swagger/firecracker-v0.1.yaml`. Emulated keyboard (`i8042`) and serial console (`UART`). The microVM serial console input and output are connected to those of the Firecracker process (this allows direct console access to the guest OS). The capability of mapping an existing host tun-tap device as a VirtIO/net device into the microVM. The capability of mapping an existing host file as a GirtIO/block device into the microVM. The capability of creating a VirtIO/vsock between the host and the microVM. Default demand fault paging & CPU oversubscription." } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "Name | Type | Description | Notes | - | - | - Amx | Pointer to bool | | [optional] `func NewCpuFeatures() *CpuFeatures` NewCpuFeatures instantiates a new CpuFeatures object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewCpuFeaturesWithDefaults() *CpuFeatures` NewCpuFeaturesWithDefaults instantiates a new CpuFeatures object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *CpuFeatures) GetAmx() bool` GetAmx returns the Amx field if non-nil, zero value otherwise. `func (o CpuFeatures) GetAmxOk() (bool, bool)` GetAmxOk returns a tuple with the Amx field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *CpuFeatures) SetAmx(v bool)` SetAmx sets Amx field to given value. `func (o *CpuFeatures) HasAmx() bool` HasAmx returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "CpuFeatures.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Multus", "subcategory": "Cloud Native Network" }
[ { "data": "title: WeaveDNS (service discovery) Design Notes layout: default The model is that each host has a service that is notified of hostnames and weave addresses for containers on the host. Like IPAM, this service is embedded within the router. It binds to the host bridge to answer DNS queries from local containers; for anything it can't answer, it uses the information in the host's /etc/resolv.conf to query an 'fallback' server. The service is comprised of a DNS server, which answers all DNS queries from containers, and a in-memory database of hostnames and IPs. The database on each node contains a complete copy of the hostnames and IPs for every containers in the cluster. For hostname queries in the local domain (default weave.local), the DNS server will consult the in-memory database. For reverse queries, we first consult the local database, and if not found we query the upstream server. For all other queries, we consult the upstream server. Updates to the in-memory database are broadcast to other DNS servers within the cluster. The in-memory database only contains entries from connected DNS servers; if a DNS server becomes partitioned from the cluster, entries belonging to that server are removed from each node in the cluster. When the partitioned DNS server reconnects, the entries are re-broadcast around the cluster. The DNS server also listens to the Docker event stream, and removes entries for containers when they die. Entries removed in this way are tombstoned, and the tombstone lazily broadcast around the cluster. After a short timeout the tombstones are independently removed from each host. The DNS server accepts HTTP requests on the following URL (patterns) and methods: `PUT /name/<identifier>/<ip-address>` Put a record for an IP, bound to a host-scoped identifier (e.g., a container ID), in the DNS database. The request body must contain a `fqdn=foo.weave.local` key pair. `DELETE /name/<identifier>/<ip-address>` Remove a specific record for an IP and host-scoped identifier. The request body can optionally contain a `fqdn=foo.weave.local` key pair. `DELETE /name/<identifier>` Remove all records for the host-scoped identifier. `GET /name/<fqdn>` List of all IPs (in JSON format) for givne FQDN. The updater component uses the Docker remote API to monitor containers coming and going, and tells the DNS server to update its records via its HTTP interface. It does not need to be attached to the weave network. The updater starts by subscribing to the events, and getting a list of the current containers. Any containers given a domain ending with \".weave\" are considered for inclusion in the name database. When it sees a container start or stop, the updater checks the weave network attachment of the container, and updates the DNS server. How does it check the network attachment from within a container? Will it need to delay slightly so that `attach` has a chance to run? Perhaps it could put containers on a watch list when it's noticed them." } ]
{ "category": "Runtime", "file_name": "weavedns-design.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- toc --> - - - - - <!-- /toc --> You can inspect the `antrea-controller` logs in the `antrea-controller` Pod by running this `kubectl` command: ```bash kubectl logs -n kube-system <antrea-controller Pod name> ``` To check the logs of the `antrea-agent`, `antrea-ovs`, and `antrea-ipsec` containers in an `antrea-agent` Pod, run command: ```bash kubectl logs -n kube-system <antrea-agent Pod name> -c [antrea-agent|antrea-ovs|antrea-ipsec] ``` To check the OVS daemon logs (e.g. if the `antrea-ovs` container logs indicate that one of the OVS daemons generated an error), you can use `kubectl exec`: ```bash kubectl exec -n kube-system <antrea-agent Pod name> -c antrea-ovs -- tail /var/log/openvswitch/<DAEMON>.log ``` The `antrea-controller` Pod and the list of `antrea-agent` Pods, along with the Nodes on which the Pods are scheduled, can be returned by command: ```bash kubectl get pods -n kube-system -l app=antrea -o wide ``` Logs of `antrea-controller`, `antrea-agent`, OVS and strongSwan daemons are also stored in the filesystem of the Node (i.e. the Node on which the `antrea-controller` or `antrea-agent` Pod is scheduled). `antrea-controller` logs are stored in directory: `/var/log/antrea` (on the Node where the `antrea-controller` Pod is scheduled. `antrea-agent` logs are stored in directory: `/var/log/antrea` (on the Node where the `antrea-agent` Pod is scheduled). Logs of the OVS daemons - `ovs-vswitchd`, `ovsdb-server`, `ovs-monitor-ipsec` - are stored in directory: `/var/log/antrea/openvswitch` (on the Node where the `antrea-agent` Pod is scheduled). strongSwan daemon logs are stored in directory: `/var/log/antrea/strongswan` (on the Node where the `antrea-agent` Pod is scheduled). To increase the log level for the `antrea-agent` and the `antrea-controller`, you can edit the `--v=0` arg in the Antrea manifest to a desired level. Alternatively, you can generate an Antrea manifest with increased log level of 4 (maximum debug level) using `generate_manifest.sh`: ```bash hack/generate-manifest.sh --mode dev --verbose-log ``` antrea-controller runs as a Deployment, exposes its API via a Service and registers an APIService to aggregate into the Kubernetes API. To access the antrea-controller API, you need to know its address and have the credentials to access it. There are multiple ways in which you can access the API: Typically, `antctl` handles locating the Kubernetes API server and authentication when it runs in an environment with kubeconfig set up. Same as `kubectl`, `antctl` looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the `--kubeconfig` flag. For example, you can view internal NetworkPolicy objects with this command: ```bash antctl get networkpolicy ``` As the antrea-controller API is aggregated into the Kubernetes API, you can access it through the Kubernetes API using the appropriate URL" }, { "data": "The following command runs `kubectl` in a mode where it acts as a reverse proxy for the Kubernetes API and handles authentication. ```bash kubectl proxy & curl 127.0.0.1:8001/apis/controlplane.antrea.io ``` Antctl supports running a reverse proxy (similar to the kubectl one) which enables access to the entire Antrea Controller API (not just aggregated API Services), but does not secure the TLS connection between the proxy and the Controller. Refer to the for more information. 
If you want to directly access the antrea-controller API, you need to get its address and pass an authentication token when accessing it, like this: ```bash ANTREA_SVC=$(kubectl get service antrea -n kube-system -o jsonpath='{.spec.clusterIP}') TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath=\"{.data.token}\"|base64 --decode) curl --insecure --header \"Authorization: Bearer $TOKEN\" https://$ANTREA_SVC/apis ``` antrea-agent runs as a DaemonSet Pod on each Node and exposes its API via a local endpoint. There are two ways you can access it: To use `antctl` to access the antrea-agent API, you need to exec into the antrea-agent container first. `antctl` is embedded in the image so it can be used directly. For example, you can view the internal NetworkPolicy objects for a specific agent with this command: ```bash kubectl exec -it <antrea-agent Pod name> -n kube-system -c antrea-agent -- bash antctl get networkpolicy ``` Antctl supports running a reverse proxy (similar to the kubectl one) which enables access to the entire Antrea Agent API, but does not secure the TLS connection between the proxy and the Controller. Refer to the [antctl documentation](antctl.md#antctl-proxy) for more information. If you want to directly access the antrea-agent API, you need to log into the Node that the antrea-agent runs on or exec into the antrea-agent container. Then access the local endpoint directly using the Bearer Token stored in the file system: ```bash TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token) curl --insecure --header \"Authorization: Bearer $TOKEN\" https://127.0.0.1:10350/ ``` Note that you can also access the antrea-agent API from outside the Node by using the authentication token of the `antctl` ServiceAccount: ```bash TOKEN=$(kubectl get secret/antctl-service-account-token -n kube-system -o jsonpath=\"{.data.token}\"|base64 --decode) curl --insecure --header \"Authorization: Bearer $TOKEN\" https://<Node IP address>:10350/podinterfaces ``` However, in this case you will be limited to the endpoints that `antctl` is allowed to access, as defined . flow-aggregator runs as a Deployment and exposes its API via a local endpoint. There are two ways you can access it: To use `antctl` to access the flow-aggregator API, you need to exec into the flow-aggregator container first. `antctl` is embedded in the image so it can be used" }, { "data": "For example, you can dump the flow records with this command: ```bash kubectl exec -it <flow-aggregator Pod name> -n flow-aggregator -- bash antctl get flowrecords ``` If you want to directly access the flow-aggregator API, you need to exec into the flow-aggregator container. Then access the local endpoint directly using the Bearer Token stored in the file system: ```bash TOKEN=$(cat /var/run/antrea/apiserver/loopback-client-token) curl --insecure --header \"Authorization: Bearer $TOKEN\" https://127.0.0.1:10348/ ``` OVS daemons (`ovsdb-server` and `ovs-vswitchd`) run inside the `antrea-ovs` container of the `antrea-agent` Pod. You can use `kubectl exec` to execute OVS command line tools (e.g. `ovs-vsctl`, `ovs-ofctl`, `ovs-appctl`) in the container, for example: ```bash kubectl exec -n kube-system <antrea-agent Pod name> -c antrea-ovs -- ovs-vsctl show ``` By default the host directory `/var/run/antrea/openvswitch/` is mounted to `/var/run/openvswitch/` of the `antrea-ovs` container and is used as the parent directory of the OVS UNIX domain sockets and configuration database file. 
Therefore, you may execute some OVS command line tools (inc. `ovs-vsctl` and `ovs-ofctl`) from a Kubernetes Node - assuming they are installed on the Node - by specifying the socket file path explicitly, for example: ```bash ovs-vsctl --db unix:/var/run/antrea/openvswitch/db.sock show ovs-ofctl show unix:/var/run/antrea/openvswitch/br-int.mgmt ``` Commands to check basic OVS and OpenFlow information include: `ovs-vsctl show`: dump OVS bridge and port configuration. Outputs of the command are like: ```bash f06768ee-17ec-4abb-a971-b3b76abc8cda Bridge br-int datapath_type: system Port coredns--e526c8 Interface coredns--e526c8 Port antrea-tun0 Interface antrea-tun0 type: geneve options: {key=flow, remote_ip=flow} Port antrea-gw0 Interface antrea-gw0 type: internal ovs_version: \"2.17.7\" ``` `ovs-ofctl show br-int`: show OpenFlow information of the OVS bridge. `ovs-ofctl dump-flows br-int`: dump OpenFlow entries of the OVS bridge. `ovs-ofctl dump-ports br-int`: dump traffic statistics of the OVS ports. For more information on the usage of the OVS CLI tools, check the . `antctl` provides some useful commands to troubleshoot Antrea Controller and Agent, which can print the runtime information of `antrea-controller` and `antrea-agent`, dump NetworkPolicy objects, dump Pod network interface information on a Node, dump Antrea OVS flows, and perform OVS packet tracing. Refer to the to learn how to use these commands. The easiest way to profile the Antrea components is to use the Go tool. Both the Antrea Agent and the Antrea Controller use the K8s apiserver library to serve their API, and this library enables the pprof HTTP server by default. In order to access it without having to worry about authentication, you can use the antctl proxy function. For example, this is what you would do to look at a 30-second CPU profile for the Antrea Controller: ```bash antctl proxy --controller& go tool pprof http://127.0.0.1:8001/debug/pprof/profile?seconds=30 ``` If you are running into issues when running Antrea and you need help, ask your questions on or [reach out to us on Slack or during the Antrea office hours](../README.md#community)." } ]
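Returning to the profiling example above: the pprof handler registered by the apiserver library also serves the other standard profiles, so heap and goroutine data can be collected through the same proxy. This is a sketch under the same assumptions (proxy listening on `127.0.0.1:8001`); the web UI port `8080` is arbitrary.

```bash
antctl proxy --controller &

# Fetch a heap profile and explore it in pprof's interactive web UI.
go tool pprof -http=127.0.0.1:8080 http://127.0.0.1:8001/debug/pprof/heap

# Dump all goroutine stacks as plain text.
curl 127.0.0.1:8001/debug/pprof/goroutine?debug=2
```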
{ "category": "Runtime", "file_name": "troubleshooting.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "Usually, upgrading rkt should work without doing anything special: pods started with the old version will continue running and new pods will be started with the new version. However, in some cases special care must be taken. If the api-service is running and the new version of rkt does a store version upgrade that requires migration, new invocations of rkt will be blocked. This is so because the api-service is a long running process that holds a lock on the store, and the store migration needs to take an exclusive lock on it. For this reason, it is recommended to stop the api-service and start the latest version when upgrading rkt. This recommendation doesn't apply if the new api-service is listening on a different port and using a different via the `--dir` flag." } ]
{ "category": "Runtime", "file_name": "upgrading.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "This release introduces bug fixes and improvements, with the main focus on stability. Please try it and provide feedback. Thanks for all the contributions! For the definition of stable or latest release, please check . Please ensure your Kubernetes cluster is at least v1.21 before installing v1.5.2. Longhorn supports three installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions . Please read the first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.2 from v1.4.x/v1.5.x, which are only supported source versions. Follow the upgrade instructions here. . N/A Please follow up on about any outstanding issues found after this release. - @c3y1huang @chriscchien - @derekbit - @PhanLe1010 @chriscchien - @PhanLe1010 @nitendra-suse - @james-munson @roger-ryao - @derekbit - @PhanLe1010 @ejweber @roger-ryao - @derekbit @roger-ryao - @ChanYiLin @roger-ryao - @james-munson - @ejweber @roger-ryao - @c3y1huang @roger-ryao - @smallteeths @chriscchien - @mantissahz - @PhanLe1010 @roger-ryao - @smallteeths @chriscchien - @ChanYiLin @roger-ryao - @mantissahz @roger-ryao - @ejweber @chriscchien - @ChanYiLin @roger-ryao - @c3y1huang @nitendra-suse - @ejweber @roger-ryao - @c3y1huang @roger-ryao - @derekbit @nitendra-suse - @derekbit @shuo-wu - @james-munson @chriscchien - @mantissahz @chriscchien - @ChanYiLin @nitendra-suse - @PhanLe1010 @chriscchien - @c3y1huang @nitendra-suse - @yangchiu @derekbit - @PhanLe1010 @chriscchien - @ChanYiLin @chriscchien - @ChanYiLin @roger-ryao - @votdev - @mantissahz @roger-ryao - @mantissahz @chriscchien - @c3y1huang - @mantissahz - @derekbit @chriscchien - @derekbit @chriscchien - @derekbit @roger-ryao - @ejweber - @derekbit @chriscchien - @shuo-wu - @c3y1huang @chriscchien - @derekbit @chriscchien @nitendra-suse - @c3y1huang @roger-ryao - @james-munson @chriscchien - @ejweber @chriscchien - @mantissahz - @PhanLe1010 @roger-ryao - @ejweber @chriscchien - @mantissahz @chriscchien - @mantissahz @chriscchien - @derekbit @roger-ryao @ChanYiLin @PhanLe1010 @c3y1huang @chriscchien @derekbit @ejweber @innobead @james-munson @mantissahz @nitendra-suse @roger-ryao @shuo-wu @smallteeths @votdev @yangchiu" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.5.2.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Velero File System Backup Performance Guide\" layout: docs When using Velero to do file system backup & restore, Restic uploader or Kopia uploader are both supported now. But the resources used and time consumption are a big difference between them. We've done series rounds of tests against Restic uploader and Kopia uploader through Velero, which may give you some guidance. But the test results will vary from different infrastructures, and our tests are limited and couldn't cover a variety of data scenarios, the test results and analysis are for reference only. Minio is used as Velero backend storage, Network File System (NFS) is used to create the persistent volumes (PVs) and Persistent Volume Claims (PVC) based on the storage. The minio and NFS server are deployed independently in different virtual machines (VM), which with 300 MB/s write throughput and 175 MB/s read throughput representatively. The details of environmental information as below: ``` root@velero-host-01:~# kubectl version Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.4\" Server Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.14\" root@velero-host-01:~# docker version Client: Version: 20.10.12 API version: 1.41 Server: Engine: Version: 20.10.12 API version: 1.41 (minimum version 1.12) Go version: go1.16.2 containerd: Version: 1.5.9-0ubuntu1~20.04.4 runc: Version: 1.1.0-0ubuntu1~20.04.1 docker-init: Version: 0.19.0 root@velero-host-01:~# kubectl get nodes |wc -l 6 // one master with 6 work nodes root@velero-host-01:~# smartctl -a /dev/sda smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-126-generic] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Vendor: VMware Product: Virtual disk Revision: 1.0 Logical block size: 512 bytes Rotation Rate: Solid State Device Device type: disk root@velero-host-01:~# free -h total used free shared buff/cache available Mem: 3.8Gi 328Mi 3.1Gi 1.0Mi 469Mi 3.3Gi Swap: 0B 0B 0B root@velero-host-01:~# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c 4 Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz root@velero-host-01:~# cat /proc/version root@velero-host-01:~# cat /proc/version Linux version 5.4.0-126-generic (build@lcy02-amd64-072) (gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)) #142-Ubuntu SMP Fri Aug 26 12:12:57 UTC 2022 root@velero-host-01:~# velero version Client: Version: main ###v1.10 pre-release version Git commit: 9b22ca6100646523876b18a491d881561b4dbcf3-dirty Server: Version: main ###v1.10 pre-release version ``` Below we've done 6 groups of tests, for each single group of test, we used limited resources (1 core CPU 2 GB memory or 4 cores CPU 4 GB memory) to do Velero file system backup under Restic path and Kopia path, and then compare the results. Recorded the metrics of time consumption, maximum CPU usage, maximum memory usage, and minio storage usage for node-agent daemonset, and the metrics of Velero deployment are not included since the differences are not obvious by whether using Restic uploader or Kopia uploader. Compression is either disabled or not unavailable for both uploader. 
|Uploader| Resources|Times |Max CPU|Max Memory|Repo Usage| |--|-|:-:|:|:--:|:--:| | Kopia | 1c2g |24m54s| 65% |1530 MB |80 MB | | Restic | 1c2g |52m31s| 55% |1708 MB |3.3 GB | | Kopia | 4c4g |24m52s| 63% |2216 MB |80 MB | | Restic | 4c4g |52m28s| 54% |2329 MB |3.3 GB | The memory usage is larger than Velero's default memory limit (1GB) for both Kopia and Restic under massive empty" }, { "data": "For both using Kopia uploader and Restic uploader, there is no significant time reduction by increasing resources from 1c2g to 4c4g. Restic uploader is one more time slower than Kopia uploader under the same specification resources. Restic has an irrational repository size (3.3GB) | Uploader | Resources|Times |Max CPU|Max Memory|Repo Usage| |-|-|:-:|:|:--:|:--:| | Kopia | 1c1g |2m34s | 70% |692 MB |108 MB | | Restic| 1c1g |3m9s | 54% |714 MB |275 MB | | Uploader | Resources|Times |Max CPU|Max Memory|Repo Usage| |-|-|:-:|:|:--:|:--:| | Kopia | 1c1g |3m45s | 68% |831 MB |108 MB | | Restic| 1c1g |4m53s | 57% |788 MB |275 MB | |Uploader| Resources|Times |Max CPU|Max Memory|Repo Usage| |--|-|:-:|:|:--:|:--:| | Kopia | 1c1g |5m06s | 71% |861 MB |108 MB | | Restic | 1c1g |6m23s | 56% |810 MB |275 MB | |Uploader| Resources|Times |Max CPU|Max Memory|Repo Usage| |--|-|:-:|:|:--:|:--:| | Kopia | 1c1g |OOM | 74% |N/A |N/A | | Restic | 1c1g |41m47s| 52% |904 MB |3.2 GB | With the increasing number of files, there is no memory abnormal surge, the memory usage for both Kopia uploader and Restic uploader is linear increasing, until exceeds 1GB memory usage in Case 2.4 Kopia uploader OOM happened. Kopia uploader gets increasingly faster along with the increasing number of files. Restic uploader repository size is still much larger than Kopia uploader repository. |Uploader| Resources|Times |Max CPU|Max Memory|Repo Usage| |--|-|:-:|:|:--:|:--:| | Kopia | 1c2g |1m37s | 75% |251 MB |10 GB | | Restic | 1c2g |5m25s | 100% |153 MB |10 GB | | Kopia | 4c4g |1m35s | 75% |248 MB |10 GB | | Restic | 4c4g |3m17s | 171% |126 MB |10 GB | This case involves a relatively large backup size, there is no significant time reduction by increasing resources from 1c2g to 4c4g for Kopia uploader, but for Restic uploader when increasing CPU from 1 core to 4, backup time-consuming was shortened by one-third, which means in this scenario should allocate more CPU resources for Restic uploader. For the large backup size case, Restic uploader's repository size comes to normal |Uploader| Resources|Times |Max CPU|Max Memory|Repo Usage| |--|-|:--:|:|:--:|:--:| | Kopia | 1c2g |2h30m | 100% |714 MB |900 GB | | Restic | 1c2g |Timeout| 100% |416 MB |N/A | | Kopia | 4c4g |1h42m | 138% |786 MB |900 GB | | Restic | 4c4g |2h15m | 351% |606 MB |900 GB | When the target backup data is relatively large, Restic uploader starts to Timeout under 1c2g. So it's better to allocate more memory for Restic uploader when backup large sizes of data. For backup large amounts of data, Kopia uploader is both less time-consuming and less resource usage. With the same specification resources, Kopia uploader is less time-consuming when backup. Performance would be better if choosing Kopia uploader for the scenario in backup large mounts of data or massive small files. It's better to set one reasonable resource configuration instead of the default depending on your scenario. 
With the default resource configuration, Restic uploader can easily time out when backing up large amounts of data, and both Kopia uploader and Restic uploader can easily hit out-of-memory (OOM) conditions when backing up massive numbers of small files." } ]
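Following on from the resource-configuration recommendation above, one quick way to raise the node-agent daemonset's requests and limits to, say, the 4c4g test configuration is sketched below. The `velero` namespace and the `node-agent` daemonset/container names are the common defaults and may differ in your installation.

```bash
kubectl -n velero set resources daemonset/node-agent \
  --containers=node-agent \
  --requests=cpu=1,memory=2Gi \
  --limits=cpu=4,memory=4Gi
```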
{ "category": "Runtime", "file_name": "performance-guidance.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }