content (list, lengths 1-171) | tag (dict)
---|---
[
{
"data": "Our mission is to enable secure, multi-tenant, minimal-overhead execution of container and function workloads. These tenets guide Firecracker's development: Built-In Security: We provide compute security barriers that enable multi-tenant workloads, and cannot be mistakenly disabled by customers. Customer workloads are simultaneously considered sacred (shall not be touched) and malicious (shall be defended against). We continuously invest in defense in depth and maintain mechanisms that ensure security best practices. Light-Weight Virtualization: We prioritize measuring Firecracker's hardware overhead in the dimensions that are important for our customers, and we strive to make this overhead negligible. Minimalist in Features: If it's not clearly required for our mission, we won't build it. We maintain a single implementation per capability, and deprecate obsolete implementations; resolving exceptions is a high priority issue. Compute Oversubscription: All of the hardware compute resources exposed by Firecracker to guests can be securely oversubscribed. All contributions must align with this charter and follow Firecracker's . Firecracker merge contributions into the main branch and create Firecracker releases. Maintainers are also subject to the mission and tenets outlined above. Anyone may submit and review contributions."
}
] |
{
"category": "Runtime",
"file_name": "CHARTER.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "is an in-memory immutable data manager that provides out-of-the-box high-level abstraction and zero-copy in-memory sharing for distributed data in big data tasks, such as graph analytics, numerical computing, and machine learning. Vineyard is design to address the inefficiency of data sharing in big data analytical workflows on Kubernetes. Vineyard provides: Efficient in-memory data management and zero-copy sharing across different systems. Out-of-the-box high-level data abstraction for distributed objects (e.g., tensors, tables, graphs, and distributed datasets) and efficient polyglot support (currently including C++, Python, Go, Rust and Java). Seamless integration with Kubernetes for cluster deployment and management, workloads orchestration, and observability. Out-of-the-box integration with workflow orchestration engines (including , and ), providing end-users with a unified and intrusive experience to leverage Vineyard in their data-intensive workflows to improving performance. Alignment with CNCF: Vineyard builds on Kubernetes for deploying and scaling, and the objects are observable in Kubernetes as CRDs. Vineyard makes efficient zero-copy sharing possible for data-intensive workflows on cloud-native infrastructure by a data-aware Kubernetes scheduler plugin. Vineyard adopts an immutable object design, which aligns with the immutable infrastructure of the cloud-native environment. Vineyard aligns with the CNCF effort on helping migrate batching system workflows to cloud native environments. Include a link to your projects devstats page. We will be looking for signs of consistent or increasing contribution activity. Please feel free to add commentary to add colour to the numbers and graphs we will see on devstats. Stargazers and Forks Commits per week Contributors and Companies The vineyard community has grown since the project entered the CNCF sandbox. Number of contributors: 26 -> 40 Github stars: 600+ -> ~750 Github forks: 80+ -> 110+ Vineyard published 8 release (one release about per 1.5 month) since the last annual review. The major new features and improvements include: Language SDKs in Rust and Go, where the Rust SDK was a collaboration with our external end-user, and enabled users seamlessly and efficiently interoperating their data between Python and Rust. Integration with the workflow engine Kedro, and gained attention in the Kedro community. Vineyard supports the data processing engine, letting users can easily connect traditional data processing pipelines built with the Hadoop ecosystem with emerging big-data and AI applications (e.g., applications in the community). A initial version of CSI driver, which helped Vineyard aligned with the Kubernetes platform and enables users to leverage Vineyard in their Kubeflow pipelines to optimize the data sharing between steps with only minor changes to their existing source code. We have conducted a series of research work around Vineyard and published the paper Vineyard: Optimizing Data Sharing in Data-Intensive Analytics in SIGMOD 2023, a top-tier conference in the data management community. Wenyuan Yu, Tao He, Lei Wang, Ke Meng, Ye Cao, Diwen Zhu, Sanhong Li, Jingren"
},
{
"data": "Vineyard: Optimizing Data Sharing in Data-Intensive Analytics. ACM SIG Conference on Management of Data (SIGMOD), industry, 2023. . How many maintainers do you have, and which organisations are they from? (Feel free to link to an existing MAINTAINERS file if appropriate.) We currently have 10 maintainers and 2 committers and have . Initial maintainers | Name | GitHub ID | Affiliation | Email | | | | | | | Tao He | | Alibaba | | | Xiaojian Luo | | Alibaba | | | Ke Meng | | Alibaba | | | Wenyuan Yu | | Alibaba | | | Weibin Zeng | | Alibaba | | | Siyuan Zhang | | Alibaba | | | Diwen Zhu | | Alibaba | | New maintainers in this year | Name | GitHub ID | Affiliation | Email | | | | | | | Ye Cao | | Alibaba | | | Shumin Yuan | | Alibaba | | | Denghao Li | | PingAn Tech | | New Committers in this year | Name | GitHub ID | Affiliation | Email | | | | | | | Lihong Lin | | PKU | | | Pei Li | | CMU | | What do you know about adoption, and how has this changed since your last review / since you joined Sandbox? If you can list companies that are end-users of your project, please do so. (Feel free to link to an existing ADOPTERS file if appropriate.) We have tracked the following two major adoption since StartDT (Qidianyun): transiting towards production stage StartDT is a startup company in China, providing a cloud-native data platform for big-data analytics and machine learning applications. Vineyard is currently used in their Python-centric data processing pipelines to share distributed dataframe artifacts between steps, and help build a composable and efficient data processing platform to end-users. Vineyard has passed their eager-evaluation, and they are working on building their distributed data processing platform on top of Vineyard. PingAn Tech: production stage PingAn is a large-scale fin-tech company in China. Vineyard is used in their data science platform to support efficient dataset sharing and management among data science researchers. The status of Vineyard in their platform has been transited from testing to production stage, and one of their engineers has become a maintainer of the Vineyard project. Besides these two major companies, since our last annual review, we have also noticed some other questions about using Vineyard in machine learning inference scenarios, but we haven't tracked the actual adoption yet. How has the project performed against its goals since the last review? (We won't penalize you if your goals changed for good"
},
{
"data": "Vineyard has successfully archived the goals about easing the getting started process for end-users from three aspects: Out-of-the-box integration with data processing systems, especially Spark and Hive, the most popular data processing engines in the big data community; Data processing pipeline orchestration: providing non-intrusive interfaces to help users migrate their existing data processing pipelines to Vineyard on Kubernetes and finally benefit from the efficient data sharing; Seamless inter-operability with other systems in the cloud-native environments: we invest a lot of effort in the Vineyard operator to help use deploy vineyard along with their workloads in a non-intrusive, declarative way and has tested the functionality with GraphScope in end-users production environments. Besides, Vineyard has successfully attracted new end-users from the big data community to adopt Vineyard in their own data processing platform, and the feedback from the Kedro community is also positive. What are the current goals of the project? For example, are you working on major new features? Or are you concentrating on adoption or documentation? Our current goals are mainly focused on the attracting more end-user to adopt Vineyard in their scenarios from different domains. Specifically, we are keeping moving towards the following goals in the next year: Optimizing our current Kubeflow integration and find more opportunities to evaluate and deploy Vineyard in production machine learning applications; Publish our integration with the big data processing systems to their end-user community and gather feedback for further improvements; Seeking more opportunities to evaluate Vineyard in the emerging LLM applications, for both data preprocessing, training, and inference serving to see if Vineyard can bring added value to these applications as where the data cost is usually high; Getting engaged with the Batch System WG in CNCF to seek opportunities about further collaboration with other projects in CNCF. How can the CNCF help you achieve your upcoming goals? Vineyard has incredibly benefited from CNCF since accepted as a sandbox project. We believe the end-users in the CNCF community are critical for Vineyard to become successful. With the help of CNCF service desk, we have successfully built a new website for Vineyard, which is more friendly to end-users. We are also working on components like CSI driver and hope that could make the inter-operation with other projects in the CNCF community easier. We will host a Project Kiosk in this KubeCon China and hope to get more feedback from the community, and hope to get more feedback from the community. Furthermore, we hope we could have more opportunities to introduce our project to border end-users in the CNCF community to increase adoption. Do you think that your project meets the ? We think our project vineyard still needs further exploration to get border adoption in the end-user's production deployment and gather more feedback, and we are looking forward to meeting the incubation criteria in the near future."
}
] |
{
"category": "Runtime",
"file_name": "2023-vineyard-annual.md",
"project_name": "Vineyard",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "List of all the awesome people working to make Gin the best Web Framework in Go. Gin Core Team: Bo-Yi Wu (@appleboy), thinkerou (@thinkerou), Javier Provecho (@javierprovecho) Maintainers: Manu Martinez-Almeida (@manucorporat), Javier Provecho (@javierprovecho) People and companies, who have contributed, in alphabetical order. 178inaba <[email protected]> A. F <[email protected]> ABHISHEK SONI <[email protected]> Abhishek Chanda <[email protected]> Abner Chen <[email protected]> AcoNCodes <[email protected]> Adam Dratwinski <[email protected]> Adam Mckaig <[email protected]> Adam Zielinski <[email protected]> Adonis <[email protected]> Alan Wang <[email protected]> Albin Gilles <[email protected]> Aleksandr Didenko <[email protected]> Alessandro (Ale) Segala <[email protected]> Alex <[email protected]> Alexander <[email protected]> Alexander Lokhman <[email protected]> Alexander Melentyev <[email protected]> Alexander Nyquist <[email protected]> Allen Ren <[email protected]> AllinGo <[email protected]> Ammar Bandukwala <[email protected]> An Xiao (Luffy) <[email protected]> Andre Dublin <[email protected]> Andrew Szeto <[email protected]> Andrey Abramov <[email protected]> Andrey Nering <[email protected]> Andrey Smirnov <[email protected]> Andrii Bubis <[email protected]> Andr Bazaglia <[email protected]> Andy Pan <[email protected]> Antoine GIRARD <[email protected]> Anup Kumar Panwar <[email protected]> Aravinth Sundaram <[email protected]> Artem <[email protected]> Ashwani <[email protected]> Aurelien Regat-Barrel <[email protected]> Austin Heap <[email protected]> Barnabus <[email protected]> Bo-Yi Wu <[email protected]> Boris Borshevsky <[email protected]> Boyi Wu <[email protected]> BradyBromley <[email protected]> Brendan Fosberry <[email protected]> Brian Wigginton <[email protected]> Carlos Eduardo <[email protected]> Chad Russell <[email protected]> Charles <[email protected]> Christian Muehlhaeuser <[email protected]> Christian Persson <[email protected]> Christopher Harrington <[email protected]> Damon Zhao <[email protected]> Dan Markham <[email protected]> Dang Nguyen <[email protected]> Daniel Krom <[email protected]> Daniel M. Lambea <[email protected]> Danieliu <[email protected]> David Irvine <[email protected]> David Zhang <[email protected]> Davor Kapsa <[email protected]> DeathKing <[email protected]> Dennis Cho <[email protected]> Dmitry Dorogin <[email protected]> Dmitry Kutakov <[email protected]> Dmitry Sedykh <[email protected]> Don2Quixote <[email protected]> Donn Pebe <[email protected]> Dustin Decker <[email protected]> Eason Lin <[email protected]> Edward Betts <[email protected]> Egor Seredin <[email protected]> Emmanuel Goh <[email protected]> Equim <[email protected]> Eren A. 
Akyol <[email protected]> Eric_Lee <[email protected]> Erik Bender <[email protected]> Ethan Kan <[email protected]> Evgeny Persienko <[email protected]> Faisal Alam <[email protected]> Fareed Dudhia <[email protected]> Filip Figiel <[email protected]> Florian Polster <[email protected]> Frank Bille <[email protected]> Franz Bettag <[email protected]> Ganlv <[email protected]> Gaozhen Ying <[email protected]> George Gabolaev <[email protected]> George Kirilenko <[email protected]> Georges Varouchas <[email protected]> Gordon Tyler <[email protected]> Harindu Perera <[email protected]> Helios <[email protected]> Henry Kwan <[email protected]> Henry Yee <[email protected]> Himanshu Mishra <[email protected]> Hiroyuki Tanaka <[email protected]> Ibraheem Ahmed <[email protected]> Ignacio Galindo <[email protected]> Igor H. Vieira <[email protected]> Ildar1111 <[email protected]> Iskander (Alex) Sharipov <[email protected]> Ismail Gjevori <[email protected]> Ivan Chen <[email protected]> JINNOUCHI Yasushi <[email protected]> James Pettyjohn <[email protected]> Jamie Stackhouse <[email protected]> Jason Lee <[email protected]> Javier Provecho <[email protected]> Javier Provecho <[email protected]> Javier Provecho <[email protected]> Javier Provecho Fernandez <[email protected]> Javier Provecho Fernandez <[email protected]> Jean-Christophe Lebreton <[email protected]> Jeff <[email protected]> Jeremy Loy <[email protected]> Jim Filippou <[email protected]> Jimmy Pettersson <[email protected]> John Bampton <[email protected]> Johnny Dallas <[email protected]> Johnny Dallas <[email protected]> Jonathan (JC) Chen <[email protected]> Josep Jesus Bigorra Algaba <[email protected]> Josh Horowitz <[email protected]> Joshua Loper <[email protected]> Julien Schmidt <[email protected]> Jun Kimura <[email protected]> Justin Beckwith <[email protected]> Justin Israel <[email protected]> Justin Mayhew <[email protected]> Jrme Laforge <[email protected]> Kacper Bk <[email protected]> Kamron Batman <[email protected]> Kane Rogers <[email protected]> Kaushik Neelichetty <[email protected]> Keiji Yoshida <[email protected]> Kel Cecil <[email protected]> Kevin Mulvey <[email protected]> Kevin Zhu <[email protected]> Kirill Motkov <[email protected]> Klemen Sever <[email protected]> Kristoffer A. Iversen <[email protected]> Krzysztof Szafraski <[email protected]> Kumar McMillan <[email protected]> Kyle Mcgill <[email protected]> Lanco <[email protected]> Levi Olson <[email protected]> Lin Kao-Yuan <[email protected]> Linus Unnebck <[email protected]> Lucas Clemente <[email protected]> Ludwig Valda Vasquez <[email protected]> Luis GG <[email protected]> MW Lim <[email protected]> Maksimov Sergey <[email protected]> Manjusaka <[email protected]> Manu MA <[email protected]> Manu MA <[email protected]> Manu Mtz-Almeida <[email protected]> Manu Mtz.-Almeida <[email protected]> Manuel Alonso <[email protected]> Mara Kim <[email protected]> Mario Kostelac <[email protected]> Martin Karlsch <[email protected]> Matt Newberry <[email protected]> Matt Williams <[email protected]> Matthieu MOREL <[email protected]> Max Hilbrunner"
},
{
"data": "Maxime Soul <[email protected]> MetalBreaker <[email protected]> Michael Puncel <[email protected]> MichaelDeSteven <[email protected]> Mike <[email protected]> Mike Stipicevic <[email protected]> Miki Tebeka <[email protected]> Miles <[email protected]> Mirza Ceric <[email protected]> Mykyta Semenistyi <[email protected]> Naoki Takano <[email protected]> Ngalim Siregar <[email protected]> Ni Hao <[email protected]> Nick Gerakines <[email protected]> Nikifor Seryakov <[email protected]> Notealot <[email protected]> Olivier Mengu <[email protected]> Olivier Robardet <[email protected]> Pablo Moncada <[email protected]> Pablo Moncada <[email protected]> Panmax <[email protected]> Peperoncino <[email protected]> Philipp Meinen <[email protected]> Pierre Massat <[email protected]> Qt <[email protected]> Quentin ROYER <[email protected]> README Bot <[email protected]> Rafal Zajac <[email protected]> Rahul Datta Roy <[email protected]> Rajiv Kilaparti <[email protected]> Raphael Gavache <[email protected]> Ray Rodriguez <[email protected]> Regner Blok-Andersen <[email protected]> Remco <[email protected]> Rex Lee() <[email protected]> Richard Lee <[email protected]> Riverside <[email protected]> Robert Wilkinson <[email protected]> Rogier Lommers <[email protected]> Rohan Pai <[email protected]> Romain Beuque <[email protected]> Roman Belyakovsky <[email protected]> Roman Zaynetdinov <[email protected]> Roman Zaynetdinov <[email protected]> Ronald Petty <[email protected]> Ross Wolf <[email protected]> Roy Lou <[email protected]> Rubi <[email protected]> Ryan <[email protected]> Ryan J. Yoder <[email protected]> SRK.Lyu <[email protected]> Sai <[email protected]> Samuel Abreu <[email protected]> Santhosh Kumar <[email protected]> Sasha Melentyev <[email protected]> Sasha Myasoedov <[email protected]> Segev Finer <[email protected]> Sergey Egorov <[email protected]> Sergey Fedchenko <[email protected]> Sergey Gonimar <[email protected]> Sergey Ponomarev <[email protected]> Serica <[email protected]> Shamus Taylor <[email protected]> Shilin Wang <[email protected]> Shuo <[email protected]> Skuli Oskarsson <[email protected]> Snawoot <[email protected]> Sridhar Ratnakumar <[email protected]> Steeve Chailloux <[email protected]> Sudhir Mishra <[email protected]> Suhas Karanth <[email protected]> TaeJun Park <[email protected]> Tatsuya Hoshino <[email protected]> Tevic <[email protected]> Tevin Jeffrey <[email protected]> The Gitter Badger <[email protected]> Thibault Jamet <[email protected]> Thomas Boerger <[email protected]> Thomas Schaffer <[email protected]> Tommy Chu <[email protected]> Tudor Roman <[email protected]> Uwe Dauernheim <[email protected]> Valentine Oragbakosi <[email protected]> Vas N <[email protected]> Vasilyuk Vasiliy <[email protected]> Victor Castell <[email protected]> Vince Yuan <[email protected]> Vyacheslav Dubinin <[email protected]> Waynerv <[email protected]> Weilin Shi <[email protected]> Xudong Cai <[email protected]> Yasuhiro Matsumoto <[email protected]> Yehezkiel Syamsuhadi <[email protected]> Yoshiki Nakagawa <[email protected]> Yoshiyuki Kinjo <[email protected]> Yue Yang <[email protected]> ZYunH <[email protected]> Zach Newburgh <[email protected]> Zasda Yusuf Mikail <[email protected]> ZhangYunHao <[email protected]> ZhiFeng Hu <[email protected]> Zhu Xi <[email protected]> a2tt <[email protected]> ahuigo <[email protected]> ali <[email protected]> aljun <[email protected]> andrea <[email protected]> andriikushch <[email protected]> anoty <[email 
protected]> awkj <[email protected]> axiaoxin <[email protected]> bbiao <[email protected]> bestgopher <[email protected]> betahu <[email protected]> bigwheel <[email protected]> bn4t <[email protected]> bullgare <[email protected]> chainhelen <[email protected]> chenyang929 <[email protected]> chriswhelix <[email protected]> collinmsn <[email protected]> cssivision <[email protected]> danielalves <[email protected]> delphinus <[email protected]> dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> dickeyxxx <[email protected]> edebernis <[email protected]> error10 <[email protected]> esplo <[email protected]> eudore <[email protected]> ffhelicopter <[email protected]> filikos <[email protected]> forging2012 <[email protected]> goqihoo <[email protected]> grapeVine <[email protected]> guonaihong <[email protected]> heige <[email protected]> heige <[email protected]> hellojukay <[email protected]> henrylee2cn <[email protected]> htobenothing <[email protected]> iamhesir <[email protected]> ijaa <[email protected]> ishanray <[email protected]> ishanray <[email protected]> itcloudy <[email protected]> jarodsong6 <[email protected]> jasonrhansen <[email protected]> jincheng9 <[email protected]> joeADSP <[email protected]> junfengye <[email protected]> kaiiak <[email protected]> kebo <[email protected]> keke <[email protected]> kishor kunal raj <[email protected]> kyledinh <[email protected]> lantw44 <[email protected]> likakuli <[email protected]> linfangrong <[email protected]> linzi <[email protected]> llgoer <[email protected]> long-road <[email protected]> mbesancon <[email protected]> mehdy <[email protected]> metal A-wing <[email protected]> micanzhang <[email protected]> minarc <[email protected]> mllu <[email protected]> mopemoepe <[email protected]> msoedov <[email protected]> mstmdev <[email protected]> novaeye <[email protected]> olebedev <[email protected]> phithon <[email protected]> pjgg <[email protected]> qm012 <[email protected]> raymonder jin <[email protected]> rns <[email protected]> root@andrea:~# <[email protected]> sekky0905 <[email protected]> senhtry <[email protected]> shadrus <[email protected]> silasb <[email protected]> solos <[email protected]> songjiayang <[email protected]> sope <[email protected]> srt180 <[email protected]> stackerzzq <[email protected]> sunshineplan <[email protected]> syssam <[email protected]> techjanitor <[email protected]> techjanitor <[email protected]> thinkerou <[email protected]> thinkgo <[email protected]> tsirolnik <[email protected]> tyltr <[email protected]> vinhha96 <[email protected]> voidman <[email protected]> vz <[email protected]> wei <[email protected]> weibaohui <[email protected]> whirosan <[email protected]> willnewrelic <[email protected]> wssccc <[email protected]> wuhuizuo <[email protected]> xyb <[email protected]> y-yagi <[email protected]> yiranzai <[email protected]> youzeliang <[email protected]> yugu <[email protected]> yuyabe <[email protected]> zebozhuang <[email protected]> zero11-0203 <[email protected]> zesani <[email protected]> zhanweidu <[email protected]> zhing <[email protected]> ziheng <[email protected]> zzjin <[email protected]> <[email protected]> <[email protected]> <[email protected]> 233 <[email protected]> <[email protected]> <[email protected]> <[email protected]> Gopher <[email protected]>"
}
] |
{
"category": "Runtime",
"file_name": "AUTHORS.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: List of Environment Variables Alluxio supports defining a few frequently used configuration settings through environment variables. <table class=\"table table-striped\"> <tr><th>Environment Variable</th><th>Description</th></tr> {% for envvar in site.data.table.en.envvars %} <tr> <td markdown=\"span\">`{{env_var.name}}`</td> <td markdown=\"span\">{{env_var.description}}</td> </tr> {% endfor %} </table> The following example will set up: an Alluxio master at `localhost` the root mount point as an HDFS cluster with a namenode also running at `localhost` defines the maximum heap space of the VM to be 30g enable Java remote debugging at port 7001 ```shell $ export ALLUXIOMASTERHOSTNAME=\"localhost\" $ export ALLUXIOMASTERMOUNTTABLEROOT_UFS=\"hdfs://localhost:9000\" $ export ALLUXIOMASTERJAVA_OPTS=\"-Xmx30g\" $ export ALLUXIOMASTERATTACHOPTS=\"-agentlib:jdwp=transport=dtsocket,server=y,suspend=n,address=7001\" ``` Users can either set these variables through the shell or in `conf/alluxio-env.sh`. If this file does not exist yet, it can be copied from the template file under `${ALLUXIO_HOME}/conf`: ```shell $ cp conf/alluxio-env.sh.template conf/alluxio-env.sh ```"
}
] |
{
"category": "Runtime",
"file_name": "Environment-List.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
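The Alluxio entry above notes that the same variables can be persisted in `conf/alluxio-env.sh` instead of being exported in the shell. A minimal sketch of doing so, reusing the example values from that entry (the values themselves are illustrative, not recommendations):

```shell
# Append the example settings to conf/alluxio-env.sh (values are illustrative)
$ cat >> conf/alluxio-env.sh << 'EOF'
ALLUXIO_MASTER_HOSTNAME="localhost"
ALLUXIO_MASTER_MOUNT_TABLE_ROOT_UFS="hdfs://localhost:9000"
ALLUXIO_MASTER_JAVA_OPTS="-Xmx30g"
ALLUXIO_MASTER_ATTACH_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=7001"
EOF
```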
[
{
"data": "Create a new metadata file in the backup repository's backup name sub-directory to store the backup-including PVC and PV information. The information includes the way of backing up the PVC and PV data, snapshot information, and status. The needed snapshot status can also be recorded there, but the Velero-Native snapshot plugin doesn't provide a way to get the snapshot size from the API, so it's possible that not all snapshot size information is available. This new additional metadata file is needed when: Get a summary of the backup's PVC and PV information, including how the data in them is backed up, or whether the data in them is skipped from backup. Find out how the PVC and PV should be restored in the restore process. Retrieve the PV's snapshot information for backup. There is already a to track the skipped PVC in the backup. This design will depend on it and go further to get a summary of PVC and PV information, then persist into a metadata file in the backup repository. In the restore process, the Velero server needs to decide how the PV resource should be restored according to how the PV is backed up. The current logic is to check whether it's backed up by Velero-native snapshot, by file-system backup, or having `DeletionPolicy` set as `Delete`. The checks are made by the backup-generated PVBs or Snapshots. There is no generic way to find this information, and the CSI backup and Snapshot data movement backup are not covered. Another thing that needs noticing is when describing the backup, there is no generic way to find the PV's snapshot information. Create a new metadata file to store backup's PVCs and PVs information and volume data backing up method. The file can be used to let downstream consumers generate a summary. Create a generic way to let the Velero server know how the PV resources are backed up. Create a generic way to let the Velero server find the PV corresponding snapshot information. Unify how to get snapshot size information for all PV backing-up methods, and all other currently not ready PVs' information. Create backup-name-volumes-info.json metadata file in the backup's repository. This file will be encoded to contain all the PVC and PV information included in the backup. The information covers whether the PV or PVC's data is skipped during backup, how its data is backed up, and the backed-up detail information. Please notice that the new metadata file includes all skipped volume information. This is used to address . The `restoreItem` function can decode the backup-name-volumes-info.json file to determine how to handle the PV resource. backup-name-volumes-info.json file is a structure that contains an array of structure `VolumeInfo`. ``` golang type VolumeInfo struct { PVCName string // The PVC's"
},
{
"data": "PVCNamespace string // The PVC's namespace. PVName string // The PV name. BackupMethod string // The way the volume data is backed up. The valid value includes `VeleroNativeSnapshot`, `PodVolumeBackup` and `CSISnapshot`. SnapshotDataMoved bool // Whether the volume's snapshot data is moved to specified storage. Skipped boolean // Whether the Volume is skipped in this backup. SkippedReason string // The reason for the volume is skipped in the backup. StartTimestamp *metav1.Time // Snapshot starts timestamp. OperationID string // The Async Operation's ID. CSISnapshotInfo CSISnapshotInfo SnapshotDataMovementInfo SnapshotDataMovementInfo NativeSnapshotInfo VeleroNativeSnapshotInfo PVBInfo PodVolumeBackupInfo PVInfo PVInfo } // CSISnapshotInfo is used for displaying the CSI snapshot status type CSISnapshotInfo struct { SnapshotHandle string // It's the storage provider's snapshot ID for CSI. Size int64 // The snapshot corresponding volume size. Driver string // The name of the CSI driver. VSCName string // The name of the VolumeSnapshotContent. } // SnapshotDataMovementInfo is used for displaying the snapshot data mover status. type SnapshotDataMovementInfo struct { DataMover string // The data mover used by the backup. The valid values are `velero` and ``(equals to `velero`). UploaderType string // The type of the uploader that uploads the snapshot data. The valid values are `kopia` and `restic`. RetainedSnapshot string // The name or ID of the snapshot associated object(SAO). SAO is used to support local snapshots for the snapshot data mover, e.g. it could be a VolumeSnapshot for CSI snapshot data moign/pvbackupinfo. SnapshotHandle string // It's the filesystem repository's snapshot ID. } // VeleroNativeSnapshotInfo is used for displaying the Velero native snapshot status. type VeleroNativeSnapshotInfo struct { SnapshotHandle string // It's the storage provider's snapshot ID for the Velero-native snapshot. VolumeType string // The cloud provider snapshot volume type. VolumeAZ string // The cloud provider snapshot volume's availability zones. IOPS string // The cloud provider snapshot volume's IOPS. } // PodVolumeBackupInfo is used for displaying the PodVolumeBackup snapshot status. type PodVolumeBackupInfo struct { SnapshotHandle string // It's the file-system uploader's snapshot ID for PodVolumeBackup. Size int64 // The snapshot corresponding volume size. UploaderType string // The type of the uploader that uploads the data. The valid values are `kopia` and `restic`. VolumeName string // The PVC's corresponding volume name used by Pod: https://github.com/kubernetes/kubernetes/blob/e4b74dd12fa8cb63c174091d5536a10b8ec19d34/pkg/apis/core/types.go#L48 PodName string // The Pod name mounting this PVC. PodNamespace string // The Pod namespace. NodeName string // The PVB-taken k8s node's name. } // PVInfo is used to store some PV information modified after creation. // Those information are lost after PV recreation. type PVInfo struct { ReclaimPolicy string // ReclaimPolicy of PV. It could be different from the referenced StorageClass. Labels map[string]string // The PV's labels should be kept after recreation. } ``` The function `persistBackup` has `backup *pkgbackup.Request` in parameters. From it, the `VolumeSnapshots`, `PodVolumeBackups`, `CSISnapshots`, `itemOperationsList`, and `SkippedPVTracker` can be"
},
{
"data": "All of them will be iterated and merged into the `VolumeInfo` array, and then persisted into backup repository in function `persistBackup`. Please notice that the change happened in async operations are not reflected in the new metadata file. The file only covers the volume changes happen in the Velero server process scope. A new methods are added to BackupStore to download the VolumeInfo metadata file. Uploading the metadata file is covered in the exiting `PutBackup` method. ``` golang type BackupStore interface { ... GetVolumeInfos(name string) ([]*VolumeInfo, error) ... } ``` The downstream tools can use this VolumeInfo array to format and display their volume information. This is not in the scope of this feature. The `velero backup describe` can also use this VolumeInfo array structure to display the volume information. The snapshot data mover volume should use this structure at first, then the Velero native snapshot, CSI snapshot, and PodVolumeBackup can also use this structure. The detailed implementation is also not in this feature's scope. In the function `restoreItem`, it will determine whether to restore the PV resource by checking it in the Velero native snapshots list, PodVolumeBackup list, and its DeletionPolicy. This logic is still kept. The logic will be used when the new `VolumeInfo` metadata cannot be found to support backward compatibility. ``` golang if groupResource == kuberesource.PersistentVolumes { switch { case hasSnapshot(name, ctx.volumeSnapshots): ... case hasPodVolumeBackup(obj, ctx): ... case hasDeleteReclaimPolicy(obj.Object): ... default: ... ``` After introducing the VolumeInfo array, the following logic will be added. ``` golang if groupResource == kuberesource.PersistentVolumes { volumeInfo := GetVolumeInfo(pvName) switch volumeInfo.BackupMethod { case VeleroNativeSnapshot: ... case PodVolumeBackup: ... case CSISnapshot: ... default: // Need to check whether the volume is backed up by the SnapshotDataMover. if volumeInfo.SnapshotDataMovement: // Check whether the Velero server should restore the PV depending on the DeletionPolicy setting. if volumeInfo.Skipped: ``` backup-name-volumes-info.json file is deleted during backup deletion. The restore process needs more information about how the PVs are backed up to determine whether this PV should be restored. The released branches also need a similar function, but backporting a new feature into previous releases may not be a good idea, so according to , adding more cases here to support checking PV backed-up by CSI plugin and CSI snapshot data mover: https://github.com/vmware-tanzu/velero/blob/5ff5073cc3f364bafcfbd26755e2a92af68ba180/pkg/restore/restore.go#L1206-L1324. There should be no security impact introduced by this design. After this design is implemented, there should be no impact on the existing . To support older version backup, which doesn't have the VolumeInfo metadata file, the old logic, which is checking the Velero native snapshots list, PodVolumeBackup list, and PVC DeletionPolicy, is still kept, and supporting CSI snapshots and snapshot data mover logic will be added too. This will be implemented in the Velero v1.13 development cycle. There are no open issues identified by now."
}
] |
{
"category": "Runtime",
"file_name": "pv_backup_info.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
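A hypothetical Go sketch of how a downstream consumer of the Velero design above might decode the proposed volumes-info metadata file into the `VolumeInfo` structures. The field names follow the design, but the file name and JSON field tags are assumptions, since the design text does not fix the serialized key names.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Trimmed-down view of the VolumeInfo structure from the design; only a few fields shown.
// JSON tags are assumed, not specified by the design document.
type VolumeInfo struct {
	PVCName           string `json:"pvcName"`
	PVCNamespace      string `json:"pvcNamespace"`
	PVName            string `json:"pvName"`
	BackupMethod      string `json:"backupMethod"`
	SnapshotDataMoved bool   `json:"snapshotDataMoved"`
	Skipped           bool   `json:"skipped"`
}

func main() {
	// Illustrative path: the <backup-name>-volumes-info.json file downloaded from the backup store.
	raw, err := os.ReadFile("my-backup-volumes-info.json")
	if err != nil {
		panic(err)
	}
	var infos []VolumeInfo
	if err := json.Unmarshal(raw, &infos); err != nil {
		panic(err)
	}
	// Print a one-line summary per volume, similar to what `velero backup describe` could show.
	for _, v := range infos {
		fmt.Printf("%s/%s (PV %s): method=%s dataMoved=%v skipped=%v\n",
			v.PVCNamespace, v.PVCName, v.PVName, v.BackupMethod, v.SnapshotDataMoved, v.Skipped)
	}
}
```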
[
{
"data": "This file is automatically generated when running tests. Do not edit manually. Extension | MIME type | Aliases | | - n/a | application/octet-stream | - .xpm | image/x-xpixmap | - .7z | application/x-7z-compressed | - .zip | application/zip | application/x-zip, application/x-zip-compressed .xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | - .docx | application/vnd.openxmlformats-officedocument.wordprocessingml.document | - .pptx | application/vnd.openxmlformats-officedocument.presentationml.presentation | - .epub | application/epub+zip | - .jar | application/jar | - .odt | application/vnd.oasis.opendocument.text | application/x-vnd.oasis.opendocument.text .ott | application/vnd.oasis.opendocument.text-template | application/x-vnd.oasis.opendocument.text-template .ods | application/vnd.oasis.opendocument.spreadsheet | application/x-vnd.oasis.opendocument.spreadsheet .ots | application/vnd.oasis.opendocument.spreadsheet-template | application/x-vnd.oasis.opendocument.spreadsheet-template .odp | application/vnd.oasis.opendocument.presentation | application/x-vnd.oasis.opendocument.presentation .otp | application/vnd.oasis.opendocument.presentation-template | application/x-vnd.oasis.opendocument.presentation-template .odg | application/vnd.oasis.opendocument.graphics | application/x-vnd.oasis.opendocument.graphics .otg | application/vnd.oasis.opendocument.graphics-template | application/x-vnd.oasis.opendocument.graphics-template .odf | application/vnd.oasis.opendocument.formula | application/x-vnd.oasis.opendocument.formula .odc | application/vnd.oasis.opendocument.chart | application/x-vnd.oasis.opendocument.chart .sxc | application/vnd.sun.xml.calc | - .pdf | application/pdf | application/x-pdf .fdf | application/vnd.fdf | - n/a | application/x-ole-storage | - .msi | application/x-ms-installer | application/x-windows-installer, application/x-msi .aaf | application/octet-stream | - .msg | application/vnd.ms-outlook | - .xls | application/vnd.ms-excel | application/msexcel .pub | application/vnd.ms-publisher | - .ppt | application/vnd.ms-powerpoint | application/mspowerpoint .doc | application/msword | application/vnd.ms-word .ps | application/postscript | - .psd | image/vnd.adobe.photoshop | image/x-psd, application/photoshop .p7s | application/pkcs7-signature | - .ogg | application/ogg | application/x-ogg .oga | audio/ogg | - .ogv | video/ogg | - .png | image/png | - .png | image/vnd.mozilla.apng | - .jpg | image/jpeg | - .jxl | image/jxl | - .jp2 | image/jp2 | - .jpf | image/jpx | - .jpm | image/jpm | video/jpm .jxs | image/jxs | - .gif | image/gif | - .webp | image/webp | - .exe | application/vnd.microsoft.portable-executable | - n/a | application/x-elf | - n/a | application/x-object | - n/a | application/x-executable | - .so | application/x-sharedlib | - n/a | application/x-coredump | - .a | application/x-archive | application/x-unix-archive .deb | application/vnd.debian.binary-package | - .tar | application/x-tar | - .xar | application/x-xar | - .bz2 | application/x-bzip2 | - .fits | application/fits | - .tiff | image/tiff | - .bmp | image/bmp | image/x-bmp, image/x-ms-bmp .ico | image/x-icon | - .mp3 | audio/mpeg | audio/x-mpeg, audio/mp3 .flac | audio/flac | - .midi | audio/midi | audio/mid, audio/sp-midi, audio/x-mid, audio/x-midi .ape | audio/ape | - .mpc | audio/musepack | - .amr | audio/amr | audio/amr-nb .wav | audio/wav | audio/x-wav, audio/vnd.wave, audio/wave .aiff | audio/aiff | audio/x-aiff .au | audio/basic | - .mpeg | video/mpeg | - .mov | 
video/quicktime | - .mqv | video/quicktime | - .mp4 | video/mp4 | - .webm | video/webm | audio/webm .3gp | video/3gpp | video/3gp, audio/3gpp .3g2 | video/3gpp2 | video/3g2, audio/3gpp2 .avi | video/x-msvideo | video/avi, video/msvideo .flv | video/x-flv | - .mkv | video/x-matroska | - .asf | video/x-ms-asf | video/asf, video/x-ms-wmv .aac | audio/aac | - .voc | audio/x-unknown | - .mp4 | audio/mp4 | audio/x-m4a, audio/x-mp4a"
},
{
"data": "| audio/x-m4a | - .m3u | application/vnd.apple.mpegurl | audio/mpegurl .m4v | video/x-m4v | - .rmvb | application/vnd.rn-realmedia-vbr | - .gz | application/gzip | application/x-gzip, application/x-gunzip, application/gzipped, application/gzip-compressed, application/x-gzip-compressed, gzip/document .class | application/x-java-applet | - .swf | application/x-shockwave-flash | - .crx | application/x-chrome-extension | - .ttf | font/ttf | font/sfnt, application/x-font-ttf, application/font-sfnt .woff | font/woff | - .woff2 | font/woff2 | - .otf | font/otf | - .ttc | font/collection | - .eot | application/vnd.ms-fontobject | - .wasm | application/wasm | - .shx | application/vnd.shx | - .shp | application/vnd.shp | - .dbf | application/x-dbf | - .dcm | application/dicom | - .rar | application/x-rar-compressed | application/x-rar .djvu | image/vnd.djvu | - .mobi | application/x-mobipocket-ebook | - .lit | application/x-ms-reader | - .bpg | image/bpg | - .sqlite | application/vnd.sqlite3 | application/x-sqlite3 .dwg | image/vnd.dwg | image/x-dwg, application/acad, application/x-acad, application/autocad_dwg, application/dwg, application/x-dwg, application/x-autocad, drawing/dwg .nes | application/vnd.nintendo.snes.rom | - .lnk | application/x-ms-shortcut | - .macho | application/x-mach-binary | - .qcp | audio/qcelp | - .icns | image/x-icns | - .heic | image/heic | - .heic | image/heic-sequence | - .heif | image/heif | - .heif | image/heif-sequence | - .hdr | image/vnd.radiance | - .mrc | application/marc | - .mdb | application/x-msaccess | - .accdb | application/x-msaccess | - .zst | application/zstd | - .cab | application/vnd.ms-cab-compressed | - .rpm | application/x-rpm | - .xz | application/x-xz | - .lz | application/lzip | application/x-lzip .torrent | application/x-bittorrent | - .cpio | application/x-cpio | - n/a | application/tzif | - .xcf | image/x-xcf | - .pat | image/x-gimp-pat | - .gbr | image/x-gimp-gbr | - .glb | model/gltf-binary | - .avif | image/avif | - .cab | application/x-installshield | - .jxr | image/jxr | image/vnd.ms-photo .txt | text/plain | - .html | text/html | - .svg | image/svg+xml | - .xml | text/xml | - .rss | application/rss+xml | text/rss .atom | application/atom+xml | - .x3d | model/x3d+xml | - .kml | application/vnd.google-earth.kml+xml | - .xlf | application/x-xliff+xml | - .dae | model/vnd.collada+xml | - .gml | application/gml+xml | - .gpx | application/gpx+xml | - .tcx | application/vnd.garmin.tcx+xml | - .amf | application/x-amf | - .3mf | application/vnd.ms-package.3dmanufacturing-3dmodel+xml | - .xfdf | application/vnd.adobe.xfdf | - .owl | application/owl+xml | - .php | text/x-php | - .js | application/javascript | application/x-javascript, text/javascript .lua | text/x-lua | - .pl | text/x-perl | - .py | text/x-python | text/x-script.python, application/x-python .json | application/json | - .geojson | application/geo+json | - .har | application/json | - .ndjson | application/x-ndjson | - .rtf | text/rtf | - .srt | application/x-subrip | application/x-srt, text/x-srt .tcl | text/x-tcl | application/x-tcl .csv | text/csv | - .tsv | text/tab-separated-values | - .vcf | text/vcard | - .ics | text/calendar | - .warc | application/warc | - .vtt | text/vtt | -"
}
] |
{
"category": "Runtime",
"file_name": "supported_mimes.md",
"project_name": "Stash by AppsCode",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. --> <!-- Copyright 2019 Joyent, Inc. Copyright 2024 MNX Cloud, Inc. --> Thanks for using Manta and for considering contributing to it! All changes to Manta project repositories go through code review via a GitHub pull request. If you're making a substantial change, you probably want to contact developers first. If you have any trouble with the contribution process, please feel free to contact developers [on the mailing list or IRC](README.md#community). See the for useful information about building and testing the software. Manta repositories use the same [Joyent Engineering Guidelines](https://github.com/TritonDataCenter/eng/blob/master/docs/index.md) as the Triton project. Notably: The #master branch should be first-customer-ship (FCS) quality at all times. Don't push anything until it's tested. All repositories should be \"make check\" clean at all times. All repositories should have tests that run cleanly at all times. Typically each repository has `make check` to lint and check code style. Specific code style can vary by repository. There are two separate issue trackers that are relevant for Manta code: An internal JIRA instance. A JIRA ticket has an ID like `MANTA-380`, where \"MANTA\" is the JIRA project name. A read-only view of many JIRA tickets is made available at <https://smartos.org/bugview/> (e.g. <https://smartos.org/bugview/MANTA-380>). GitHub issues for the relevant repository. Before Manta was open sourced, Joyent engineering used a private JIRA instance. While Joyent continues to use JIRA internally, we also use GitHub issues for tracking -- primarily to allow interaction with those without access to JIRA. All persons and/or organizations contributing to, or intercting with our repositories or communities are required to abide by the ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Triton Object Storage",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Discovering Containers with WeaveDNS menu_order: 5 search_type: Documentation WeaveDNS is a DNS server that answers name queries on a Weave network and provides a simple way for containers to find each other. Just give the containers hostnames and then tell other containers to connect to those names. Unlike Docker 'links', this requires no code changes and it works across hosts. WeaveDNS is deployed as an embedded service within the Weave router. The service is automatically started when the router is launched: ``` host1$ weave launch host1$ eval $(weave env) ``` WeaveDNS related configuration arguments can be passed to `launch`. Application containers use weaveDNS automatically if it is running when they are started. They use it for name resolution, and will register themselves if they either have a hostname in the weaveDNS domain (`weave.local` by default) or are given an explicit container name: ``` host1$ docker run -dti --name=pingme weaveworks/ubuntu host1$ docker run -ti --hostname=ubuntu.weave.local weaveworks/ubuntu root@ubuntu:/# ping pingme ... ``` Moreover, weaveDNS always register all network aliases (--network-alias option to docker run). ``` host1$ docker run --network weave --network-alias pingme --network-alias pingme2 -dti weaveworks/ubuntu host1$ docker run --network weave --hostname=ubuntu.weave.local -ti weaveworks/ubuntu root@ubuntu:/# ping pingme ... root@ubuntu:/# ping pingme2 ... ``` Note If both hostname and container name are specified at the same time, the hostname takes precedence. In this circumstance, if the hostname is not in the weaveDNS domain, the container is not registered, but it will still use weaveDNS for resolution. By default, weaveDNS will listen on port 53 on the address of the Docker bridge. To make it listen on a different address or port use `weave launch --dns-listen-address <address>:<port>` To disable an application container's use of weaveDNS, add the `--without-dns` option to `weave launch`. To disable weaveDNS itself, launch weave with the `--no-dns` option. Note WeaveDNS is not part of the Weave Net Kubernetes add-on. Kubernetes has its own DNS service, integrated with Kubernetes Services, and WeaveDNS does not duplicate that functionality. See Also *"
}
] |
{
"category": "Runtime",
"file_name": "weavedns.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
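Following the WeaveDNS entry above, the names registered for running containers can be inspected from the host. This assumes the standard `weave status dns` subcommand of the same `weave` script used in that entry is available:

```shell
host1$ weave status dns        # lists the names currently registered in weave.local
```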
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show bpf filesystem mount details ``` cilium-dbg bpf fs show [flags] ``` ``` cilium bpf fs show ``` ``` -h, --help help for show -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - BPF filesystem mount"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_fs_show.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
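The `-o` flag listed in the Cilium reference above accepts `json`, which is convenient for scripting; piping through `jq` is an optional, assumed-to-be-installed convenience for pretty-printing:

```shell
# Print the BPF filesystem mount details as JSON (jq is optional)
cilium-dbg bpf fs show -o json | jq .
```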
[
{
"data": "The Tigera team generally support the most recent two minor versions of Project Calico on a rolling basis. Support for older versions is on a case-by-case basis. For example, at the time of writing, Calico v3.26.x and v3.25.x are supported. When v3.27.0 is released, automatic support for v3.25.x is dropped. Please follow responsible disclosure best practices and when submitting security vulnerabilities. Do not create a GitHub issue or pull request because those are immediately public. Instead: Email . Report a private through the GitHub interface. Please include as much information as possible, including the affected version(s) and steps to reproduce."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Project Calico",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Memory usage ============ (Highly unscientific.) Command used to gather static memory usage: ```sh grep ^Vm \"/proc/$(ps fax | grep [m]etrics-bench | awk '{print $1}')/status\" ``` Program used to gather baseline memory usage: ```go package main import \"time\" func main() { time.Sleep(600e9) } ``` Baseline -- ``` VmPeak: 42604 kB VmSize: 42604 kB VmLck: 0 kB VmHWM: 1120 kB VmRSS: 1120 kB VmData: 35460 kB VmStk: 136 kB VmExe: 1020 kB VmLib: 1848 kB VmPTE: 36 kB VmSwap: 0 kB ``` Program used to gather metric memory usage (with other metrics being similar): ```go package main import ( \"fmt\" \"metrics\" \"time\" ) func main() { fmt.Sprintf(\"foo\") metrics.NewRegistry() time.Sleep(600e9) } ``` 1000 counters registered ``` VmPeak: 44016 kB VmSize: 44016 kB VmLck: 0 kB VmHWM: 1928 kB VmRSS: 1928 kB VmData: 36868 kB VmStk: 136 kB VmExe: 1024 kB VmLib: 1848 kB VmPTE: 40 kB VmSwap: 0 kB ``` 1.412 kB virtual, TODO 0.808 kB resident per counter. 100000 counters registered -- ``` VmPeak: 55024 kB VmSize: 55024 kB VmLck: 0 kB VmHWM: 12440 kB VmRSS: 12440 kB VmData: 47876 kB VmStk: 136 kB VmExe: 1024 kB VmLib: 1848 kB VmPTE: 64 kB VmSwap: 0 kB ``` 0.1242 kB virtual, 0.1132 kB resident per counter. 1000 gauges registered ``` VmPeak: 44012 kB VmSize: 44012 kB VmLck: 0 kB VmHWM: 1928 kB VmRSS: 1928 kB VmData: 36868 kB VmStk: 136 kB VmExe: 1020 kB VmLib: 1848 kB VmPTE: 40 kB VmSwap: 0 kB ``` 1.408 kB virtual, 0.808 kB resident per counter. 100000 gauges registered ``` VmPeak: 55020 kB VmSize: 55020 kB VmLck: 0 kB VmHWM: 12432 kB VmRSS: 12432 kB VmData: 47876 kB VmStk: 136 kB VmExe: 1020 kB VmLib: 1848 kB VmPTE: 60 kB VmSwap: 0 kB ``` 0.12416 kB virtual, 0.11312 resident per gauge. 1000 histograms with a uniform sample size of 1028 -- ``` VmPeak: 72272 kB VmSize: 72272 kB VmLck: 0 kB VmHWM: 16204 kB VmRSS: 16204 kB VmData: 65100 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 80 kB VmSwap: 0 kB ``` 29.668 kB virtual, TODO 15.084 resident per"
},
{
"data": "10000 histograms with a uniform sample size of 1028 ``` VmPeak: 256912 kB VmSize: 256912 kB VmLck: 0 kB VmHWM: 146204 kB VmRSS: 146204 kB VmData: 249740 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 448 kB VmSwap: 0 kB ``` 21.4308 kB virtual, 14.5084 kB resident per histogram. 50000 histograms with a uniform sample size of 1028 ``` VmPeak: 908112 kB VmSize: 908112 kB VmLck: 0 kB VmHWM: 645832 kB VmRSS: 645588 kB VmData: 900940 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 1716 kB VmSwap: 1544 kB ``` 17.31016 kB virtual, 12.88936 kB resident per histogram. 1000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015 ``` VmPeak: 62480 kB VmSize: 62480 kB VmLck: 0 kB VmHWM: 11572 kB VmRSS: 11572 kB VmData: 55308 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 64 kB VmSwap: 0 kB ``` 19.876 kB virtual, 10.452 kB resident per histogram. 10000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015 -- ``` VmPeak: 153296 kB VmSize: 153296 kB VmLck: 0 kB VmHWM: 101176 kB VmRSS: 101176 kB VmData: 146124 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 240 kB VmSwap: 0 kB ``` 11.0692 kB virtual, 10.0056 kB resident per histogram. 50000 histograms with an exponentially-decaying sample size of 1028 and alpha of 0.015 -- ``` VmPeak: 557264 kB VmSize: 557264 kB VmLck: 0 kB VmHWM: 501056 kB VmRSS: 501056 kB VmData: 550092 kB VmStk: 136 kB VmExe: 1048 kB VmLib: 1848 kB VmPTE: 1032 kB VmSwap: 0 kB ``` 10.2932 kB virtual, 9.99872 kB resident per histogram. 1000 meters -- ``` VmPeak: 74504 kB VmSize: 74504 kB VmLck: 0 kB VmHWM: 24124 kB VmRSS: 24124 kB VmData: 67340 kB VmStk: 136 kB VmExe: 1040 kB VmLib: 1848 kB VmPTE: 92 kB VmSwap: 0 kB ``` 31.9 kB virtual, 23.004 kB resident per meter. 10000 meters ``` VmPeak: 278920 kB VmSize: 278920 kB VmLck: 0 kB VmHWM: 227300 kB VmRSS: 227300 kB VmData: 271756 kB VmStk: 136 kB VmExe: 1040 kB VmLib: 1848 kB VmPTE: 488 kB VmSwap: 0 kB ``` 23.6316 kB virtual, 22.618 kB resident per meter."
}
] |
{
"category": "Runtime",
"file_name": "memory.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
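A sketch of the kind of program the memory-usage entry above presumably used for the "1000 counters registered" measurement, assuming the bare `metrics` import in that entry resolves to rcrowley/go-metrics:

```go
package main

import (
	"fmt"
	"time"

	metrics "github.com/rcrowley/go-metrics" // assumption: the doc imports this as a local "metrics" package
)

func main() {
	r := metrics.NewRegistry()
	// Register 1000 counters, mirroring the "1000 counters registered" scenario.
	for i := 0; i < 1000; i++ {
		metrics.GetOrRegisterCounter(fmt.Sprintf("counter-%d", i), r)
	}
	// Keep the process alive while /proc/<pid>/status is sampled, matching the 600e9 ns sleep above.
	time.Sleep(600 * time.Second)
}
```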
[
{
"data": "For a basic description about checkpointing and restoring containers with `runc` please see and . In addition to specifying options on the command-line like it is described in the man-pages (see above), it is also possible to influence CRIU's behaviour using CRIU configuration files. For details about CRIU's configuration file support please see . In addition to CRIU's default configuration files `runc` tells CRIU to also evaluate the file `/etc/criu/runc.conf`. Using the annotation `org.criu.config` it is, however, possible to change this additional CRIU configuration file. If the annotation `org.criu.config` is set to an empty string `runc` will not pass any additional configuration file to CRIU. With an empty string it is therefore possible to disable the additional CRIU configuration file. This can be used to make sure that no additional configuration file changes CRIU's behaviour accidentally. If the annotation `org.criu.config` is set to a non-empty string `runc` will pass that string to CRIU to be evaluated as an additional configuration file. If CRIU cannot open this additional configuration file, it will ignore this file and continue. ``` { \"ociVersion\": \"1.0.0\", \"annotations\": { \"org.criu.config\": \"\" }, \"process\": { ``` ``` { \"ociVersion\": \"1.0.0\", \"annotations\": { \"org.criu.config\": \"/etc/special-runc-criu-options\" }, \"process\": { ```"
}
] |
{
"category": "Runtime",
"file_name": "checkpoint-restore.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
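To make the mechanism in the runc entry above concrete: CRIU configuration files contain one long option per line, written without the leading dashes. A hedged sketch of creating the file named in the second annotation example; the specific options (`tcp-established`, `work-dir`) are only illustrative choices, not recommendations from the runc documentation.

```shell
# Illustrative only: create the CRIU configuration file referenced by the org.criu.config annotation
cat > /etc/special-runc-criu-options << 'EOF'
tcp-established
work-dir /tmp/criu-workdir
EOF
```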
[
{
"data": "This guide shows you how to use Transport Layer Security (TLS) for DRBD replication. TLS is a protocol designed to provide security, including privacy, integrity and authentication for network communications. Starting with DRBD 9.2.6, replication traffic in DRBD can be encrypted using TLS, using the Linux kernel . To complete this guide, you should be familiar with: editing `LinstorCluster` and `LinstorSatelliteConfiguration` resources. configuring . Install Piraeus Operator >= 2.3.0. The following command shows the deployed version, in this case v2.3.0: ``` $ kubectl get pods -l app.kubernetes.io/component=piraeus-operator -ojsonpath='{.items[*].spec.containers[?(@.name==\"manager\")].image}{\"\\n\"}' quay.io/piraeusdatastore/piraeus-operator:v2.3.0 ``` Use a host operating system with kernel TLS offload enabled. TLS offload was added in Linux 4.19. The following distributions are known to have TLS offload enabled: RHEL >= 8.2 Ubuntu >= 22.04 Debian >= 12 Have DRBD 9.2.6 or newer loaded. The following script shows the currently loaded DRBD version on all nodes: ``` for SATELLITE in $(kubectl get pods -l app.kubernetes.io/component=linstor-satellite -ojsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}{end}'); do echo \"$SATELLITE: $(kubectl exec $SATELLITE -- head -1 /proc/drbd)\" done ``` This step is covered in . Follow the steps in the guide, but set the `tlsHandshakeDaemon` field in the `LinstorSatelliteConfiguration` resource: ```yaml apiVersion: piraeus.io/v1 kind: LinstorSatelliteConfiguration metadata: name: internal-tls spec: internalTLS: tlsHandshakeDaemon: true ``` This change will cause the Piraeus Operator to deploy an additional container named `ktls-utils` on the Satellite Pod. ``` $ kubectl logs -l app.kubernetes.io/component=linstor-satellite -c ktls-utils tlshd[1]: Built from ktls-utils 0.10 on Oct 4 2023 07:26:06 tlshd[1]: Built from ktls-utils 0.10 on Oct 4 2023 07:26:06 tlshd[1]: Built from ktls-utils 0.10 on Oct 4 2023 07:26:06 ``` To instruct LINSTOR to configure TLS for DRBD, the `DrbdOptions/Net/tls` property needs to be set to `yes`. This can be done directly on the `LinstorCluster` resource, so it automatically applies to all resources, or as part of the parameters in a StorageClass. To apply the property cluster-wide, add the following property entry to your `LinstorCluster` resource: ```yaml apiVersion:"
},
{
"data": "kind: LinstorCluster metadata: name: linstorcluster spec: properties: name: DrbdOptions/Net/tls value: \"yes\" ``` To apply the property only for certain StorageClasses, add the following parameter to the selected StorageClasses: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: example provisioner: linstor.csi.linbit.com parameters: property.linstor.csi.linbit.com/DrbdOptions/Net/tls: \"yes\" ``` DRBD does not support online reconfiguration to use DRBD. To switch existing resources to use TLS, for example by setting the property on the `LinstorCluster` resource, you need to perform the following steps on all nodes: First, you need to temporarily stop all replication and suspend all DRBD volumes using `drbdadm suspend-io all`. The command is executed once on each LINSTOR Satellite Pod. ``` $ kubectl exec node1.example.com -- drbdadm suspend-io all $ kubectl exec node2.example.com -- drbdadm suspend-io all $ kubectl exec node3.example.com -- drbdadm suspend-io all ``` Next, you will need to disconnect all DRBD connections on all nodes. ``` $ kubectl exec node1.example.com -- drbdadm disconnect --force all $ kubectl exec node2.example.com -- drbdadm disconnect --force all $ kubectl exec node3.example.com -- drbdadm disconnect --force all ``` Now, we can safely reconnect DRBD connection paths, which configures the TLS connection parameters. ``` $ kubectl exec node1.example.com -- drbdadm adjust all $ kubectl exec node2.example.com -- drbdadm adjust all $ kubectl exec node3.example.com -- drbdadm adjust all ``` To confirm that DRBD is using TLS for replication, check the following resources. First, confirm that the `ktls-utils` container performed the expected handshakes: ``` $ kubectl logs -l app.kubernetes.io/component=linstor-satellite -c ktls-utils ... Handshake with node2.example.com (10.127.183.183) was successful Handshake with node2.example.com (10.127.183.183) was successful ... Handshake with node3.example.com (10.125.97.42) was successful Handshake with node3.example.com (10.125.97.42) was successful ... ``` [!NOTE] The following messages are expected when running `ktls-utils` in a container and can be safely ignored: ``` File /etc/tlshd.d/tls.key: expected mode 600 add_key: Bad message ``` Next, check the statistics on TLS sessions controlled by the kernel on each node. You should see an equal, nonzero number of `TlsRxSw` and `TlsRxSw`. ``` $ kubectl exec node1.example.com -- cat /proc/net/tls_stat TlsCurrTxSw 4 TlsCurrRxSw 4 TlsCurrTxDevice 0 TlsCurrRxDevice 0 TlsTxSw 4 TlsRxSw 4 TlsTxDevice 0 TlsRxDevice 0 TlsDecryptError 0 TlsRxDeviceResync 0 TlsDecryptRetry 0 TlsRxNoPadViolation 0 ``` [!NOTE] If your network card supports TLS offloading, you might see `TlsTxDevice` and `TlsRxDevice` being nonzero instead."
}
] |
{
"category": "Runtime",
"file_name": "drbd-tls.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
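The suspend → disconnect → adjust sequence from the Piraeus guide above can also be scripted. Below is a minimal Go sketch, assuming `kubectl` is on the PATH and reusing the guide's example satellite pod names (replace them with the pods from your own cluster):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Node names are the ones used in the guide's examples; substitute the
// LINSTOR satellite pod names from your own cluster.
var satellites = []string{"node1.example.com", "node2.example.com", "node3.example.com"}

// run executes one drbdadm sub-command inside every satellite pod, in order.
func run(args ...string) error {
	for _, pod := range satellites {
		cmd := exec.Command("kubectl", append([]string{"exec", pod, "--", "drbdadm"}, args...)...)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s on %s: %v (%s)", args[0], pod, err, out)
		}
	}
	return nil
}

func main() {
	// Same ordering as the guide: suspend I/O, force-disconnect, then adjust
	// so that connections come back with the TLS parameters applied.
	for _, step := range [][]string{
		{"suspend-io", "all"},
		{"disconnect", "--force", "all"},
		{"adjust", "all"},
	} {
		if err := run(step...); err != nil {
			panic(err)
		}
	}
	fmt.Println("DRBD resources adjusted; verify handshakes in the ktls-utils logs")
}
```

The ordering matters: connections must be fully down before `adjust` so that they are re-established with the TLS settings picked up from LINSTOR.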
[
{
"data": "```{toctree} :maxdepth: 1 Main API documentation <rest-api> rest-api-spec Main API extensions <api-extensions> Instance API documentation <dev-incus> Events API documentation <events> Metrics API documentation <reference/provided_metrics> ```"
}
] |
{
"category": "Runtime",
"file_name": "api.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "layout: global title: Running Tensorflow on Alluxio-FUSE This guide describes how to run {:target=\"_blank\"} on top of Alluxio POSIX API. Tensorflow enables developers to quickly and easily get started with deep learning. This tutorial aims to provide some hands-on examples and tips for running Tensorflow on top of Alluxio POSIX API. Setup Java for Java 8 Update 60 or higher (8u60+), 64-bit. Alluxio has been set up and is running. {:target=\"_blank\"} installed. {:target=\"_blank\"} installed. This guide uses numpy 1.19.5*. {:target=\"_blank\"} installed. This guide uses Tensorflow v1.15*. Run the following command to install FUSE on Linux: ```shell $ yum install fuse fuse-devel ``` On macOS, download the {:target=\"_blank\"} instead and follow the installation instructions. In this guide, we use /training-data as Alluxio-Fuse's root directory and /mnt/fuse as the mount point of local directory. Create a folder at the root in Alluxio: ```shell $ ./bin/alluxio fs mkdir /training-data ``` Create a folder `/mnt/fuse`, change its owner to the current user (`$(whoami)`), and change its permissions to allow read and write: ```shell $ sudo mkdir -p /mnt/fuse $ sudo chown $(whoami) /mnt/fuse $ chmod 755 /mnt/fuse ``` Configure `conf/alluxio-site.properties`: ```properties alluxio.fuse.mount.alluxio.path=/training-data alluxio.fuse.mount.point=/mnt/fuse ``` Follow the instructions for to finish setting up Alluxio POSIX API and allow Tensorflow applications to access the data through Alluxio POSIX API. If the training data is already in a remote data storage, you can mount it as a folder under the Alluxio `/training-data` directory. This data will be visible to the applications running on local `/mnt/fuse/`. If the data is not in a remote data storage, you can copy it to Alluxio namespace: ```shell $ wget http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz $ ./bin/alluxio fs mkdir /training-data/imagenet $ ./bin/alluxio fs cp file://inception-2015-12-05.tgz /training-data/imagenet ``` Suppose the ImageNet's data is stored in an S3 bucket `s3://alluxio-tensorflow-imagenet/`, the following three commands will show the exact same data after the two mount processes: ```shell aws s3 ls s3://alluxio-tensorflow-imagenet/ bin/alluxio fs ls /training-data/imagenet/ ls -l /mnt/fuse/imagenet/ ``` Download the image recognition script and run it with the training data. ```shell $ curl -o classifyimage.py -L https://raw.githubusercontent.com/tensorflow/models/v1.11/tutorials/image/imagenet/classifyimage.py $ python classifyimage.py --modeldir /mnt/fuse/imagenet/ ``` This will use the input data in `/mnt/fuse/imagenet/` to recognize images, and if everything works you will see something like this in your command prompt: ``` giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca (score = 0.89107) indri, indris, Indri indri, Indri brevicaudatus (score ="
},
{
"data": "lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens (score = 0.00296) custard apple (score = 0.00147) earthstar (score = 0.00117) ``` Running Tensorflow on top of HDFS, S3, and other under storages could require different configurations, making it difficult to manage and integrate Tensorflow applications with different under storages. Through Alluxio POSIX API, users only need to mount under storages to Alluxio once and mount the parent folder of those under storages that contain training data to the local filesystem. After the initial mounting, the data becomes immediately available through the Alluxio FUSE mount point and can be transparently accessed in Tensorflow applications. If a Tensorflow application has the data location parameter set, we only need to pass the data location inside the FUSE mount point to the Tensorflow application without modifying it. This greatly simplifies the application development, which otherwise would require different integration setups and credential configurations for each under storage. By co-locating Tensorflow applications with an Alluxio Worker, Alluxio caches the remote data locally for future access, providing data locality. Without Alluxio, slow remote storage may result in bottleneck on I/O and leave GPU resources underutilized. When concurrently writing or reading big files, Alluxio POSIX API can provide significantly better performance when running on an Alluxio Worker node. Setting up a Worker node with memory space to host all the training data can allow the Alluxio POSIX API to provide nearly 2X performance improvement. Many Tensorflow applications generate a lot of small intermediate files during their workflow. Those intermediate files are only useful for a short time and do not need to be persisted to under storages. If we directly link Tensorflow with remote storages, all files (regardless of the type - data files, intermediate files, results, etc.) will be written to and persisted in the remote storage. With Alluxio -- a cache layer between the Tensorflow applications and remote storage, users can reduce unneeded remote persistent work and speed up the write/read time. With `alluxio.user.file.writetype.default` set to `MUST_CACHE`, we can write to the top tier (usually it is the memory tier) of Alluxio Worker storage. With `alluxio.user.file.readtype.default` set to `CACHE_PROMOTE`, we can cache the read data in Alluxio for future access. This will accelerate our Tensorflow workflow by writing to and reading from memory. If the remote storages are cloud storages like S3, the advantages will be more obvious."
}
] |
{
"category": "Runtime",
"file_name": "Tensorflow.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
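A tiny Go sketch that illustrates the central point of the guide above: once the Alluxio POSIX API is mounted, training data is just ordinary local files. The `/mnt/fuse/imagenet` path is the guide's example mount point; adjust it if you configured a different `alluxio.fuse.mount.point`:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The guide mounts Alluxio's /training-data at /mnt/fuse, so files copied
	// into /training-data/imagenet show up here as plain local files.
	root := "/mnt/fuse/imagenet"

	entries, err := os.ReadDir(root)
	if err != nil {
		panic(err) // e.g. the FUSE mount is not up yet
	}
	for _, e := range entries {
		info, err := e.Info()
		if err != nil {
			continue
		}
		fmt.Printf("%-60s %d bytes\n", filepath.Join(root, e.Name()), info.Size())
	}
}
```

Any application that can read a local path — Tensorflow included — can consume the same data without Alluxio-specific client code.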
[
{
"data": "| Case ID | Title | Priority | Smoke | Status | Other | | - | -- | -- | -- | | -- | | T00001 | SpiderIPPool feature supports third party controllers. | p3 | | done | | | T00002 | SpiderSubnet feature supports third party controllers. | p3 | | done | | | T00003 | The statefulset of the third-party control is removed, the endpoint will be released. | p3 | | done | | | T00004 | Third-party applications with the same name and type can use the reserved IPPool. | p2 | | done | | | T00005 | Third-party applications with the same name and different types cannot use the reserved IPPool. | p2 | | done | |"
}
] |
{
"category": "Runtime",
"file_name": "third-party-controller.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, caste, color, religion, or sexual identity and orientation. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. Examples of behavior that contributes to a positive environment for our community include: Demonstrating empathy and kindness toward other people Being respectful of differing opinions, viewpoints, and experiences Giving and gracefully accepting constructive feedback Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: The use of sexualized language or imagery, and sexual attention or advances of any kind Trolling, insulting or derogatory comments, and personal or political attacks Public or private harassment Publishing others' private information, such as a physical or email address, without their explicit permission Other conduct which could reasonably be considered inappropriate in a professional setting Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful. Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate. This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline"
},
{
"data": "Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at [email protected]. All complaints will be reviewed and investigated promptly and fairly. All community leaders are obligated to respect the privacy and security of the reporter of any incident. Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct: Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community. Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested. Community Impact: A violation through a single incident or series of actions. Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban. Community Impact: A serious violation of community standards, including sustained inappropriate behavior. Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban. Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals. Consequence: A permanent ban from any sort of public interaction within the community. This Code of Conduct is adapted from the , version 2.1, available at . Community Impact Guidelines were inspired by . For answers to common questions about this code of conduct, see the FAQ at . Translations are available at ."
}
] |
{
"category": "Runtime",
"file_name": "CODE_OF_CONDUCT.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Before starting work on a feature or bug fix, please search GitHub or reach out to us via GitHub, Slack etc. The purpose of this step is make sure no one else is already working on it and we'll ask you to open a GitHub issue if necessary. We will use the GitHub issue to discuss the feature and come to agreement. This is to prevent your time being wasted, as well as ours. If it is a major feature update, we highly recommend you also write a design document to help the community understand your motivation and solution. A good way to find a project properly sized for a first time contributor is to search for open issues with the label or . We're following and . Use `go fmt` to format your code before committing. You can find information in editor support for Go tools in . If you see any code which clearly violates the style guide, please fix it and send a pull request. Every new source file must begin with a license header. Install and use it to set up a pre-commit hook for static analysis. Just run `pre-commit install` in the root of the repo. Before you can contribute to JuiceFS, you will need to sign the . There're a CLA assistant to guide you when you first time submit a pull request. Presence of unit tests Adherence to the coding style Adequate in-line comments Explanatory commit message This is a rough outline of what a contributor's workflow looks like: Create a topic branch from where to base the contribution. This is usually `main`. Make commits of logical units. Make sure commit messages are in the proper format. Push changes in a topic branch to a personal fork of the repository. Submit a pull request to . The PR should link to one issue which either created by you or others. The PR must receive approval from at least one maintainer before it be merged. Happy hacking!"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "At Weaveworks we take security very seriously, and value our close relationship with members of the security community. As an Open Source project, only the latest code has all fixes and security patches. If you require a higher level of commitment, please to discuss. To submit a security bug report please e-mail us at [email protected]."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": ", or K8s, is a popular open source container orchestration engine. In Kubernetes, a set of containers sharing resources such as networking, storage, mount, PID, etc. is called a . A node can have multiple pods, but at a minimum, a node within a Kubernetes cluster only needs to run a container runtime and a container agent (called a ). Kata Containers represents a Kubelet pod as a VM. A Kubernetes cluster runs a control plane where a scheduler (typically running on a dedicated control-plane node) calls into a compute Kubelet. This Kubelet instance is responsible for managing the lifecycle of pods within the nodes and eventually relies on a container runtime to handle execution. The Kubelet architecture decouples lifecycle management from container execution through a dedicated gRPC based . In other words, a Kubelet is a CRI client and expects a CRI implementation to handle the server side of the interface. and are CRI implementations that rely on compatible runtimes for managing container instances. Kata Containers is an officially supported CRI-O and containerd runtime. Refer to the following guides on how to set up Kata Containers with Kubernetes:"
}
] |
{
"category": "Runtime",
"file_name": "kubernetes.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
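The excerpt above describes how the Kubelet hands pods to CRI-O or containerd, which in turn can delegate to Kata. On the Kubernetes side this is commonly wired up through a RuntimeClass whose handler matches the runtime configured in the CRI implementation. A hedged Go sketch using client-go — the handler name `kata` and the kubeconfig path are assumptions for illustration, not something mandated by the excerpt:

```go
package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The handler name ("kata") must match the runtime handler configured in
	// containerd/CRI-O; both names here are illustrative.
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "kata"},
		Handler:    "kata",
	}
	if _, err := cs.NodeV1().RuntimeClasses().Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("RuntimeClass kata created")
}
```

Pods then opt in by setting `runtimeClassName: kata` in their spec, while other pods keep using the node's default runtime.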
[
{
"data": "<!-- Thanks for your contribution! please review https://github.com/v6d-io/v6d/blob/main/CONTRIBUTING.rst before opening a pull request. --> What do these changes do? <!-- Please give a short brief about these changes. --> Related issue number -- <!-- Are there any issues opened that will be resolved by merging this change? --> Fixes #issue number"
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "Vineyard",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"ark plugin add\" layout: docs Add a plugin Add a plugin ``` ark plugin add IMAGE [flags] ``` ``` -h, --help help for add --image-pull-policy the imagePullPolicy for the plugin container. Valid values are Always, IfNotPresent, Never. (default IfNotPresent) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with plugins"
}
] |
{
"category": "Runtime",
"file_name": "ark_plugin_add.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This page lists all active maintainers of this repository. If you were a maintainer and would like to add your name to the Emeritus list, please send us a PR. See for governance guidelines and how to become a maintainer. See for general contribution guidelines. , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC , Google LLC"
}
] |
{
"category": "Runtime",
"file_name": "MAINTAINERS.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "name: Bug report about: Create a report to help us improve title: \"[BUG]\" labels: bug assignees: '' Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior: Go to '...' Click on '....' Scroll down to '....' See error Expected behavior A clear and concise description of what you expected to happen. Screenshots If applicable, add screenshots to help explain your problem. Environment Kubernetes Version/Provider: ... Storage Provider: ... Cluster Size (#nodes): ... Data Size: ... Additional context Add any other context about the problem here."
}
] |
{
"category": "Runtime",
"file_name": "bug_report.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: Ceph Upgrades This guide will walk through the steps to upgrade the version of Ceph in a Rook cluster. Rook and Ceph upgrades are designed to ensure data remains available even while the upgrade is proceeding. Rook will perform the upgrades in a rolling fashion such that application pods are not disrupted. Rook is cautious when performing upgrades. When an upgrade is requested (the Ceph image has been updated in the CR), Rook will go through all the daemons one by one and will individually perform checks on them. It will make sure a particular daemon can be stopped before performing the upgrade. Once the deployment has been updated, it checks if this is ok to continue. After each daemon is updated we wait for things to settle (monitors to be in a quorum, PGs to be clean for OSDs, up for MDSes, etc.), then only when the condition is met we move to the next daemon. We repeat this process until all the daemons have been updated. WARNING*: Upgrading a Rook cluster is not without risk. There may be unexpected issues or obstacles that damage the integrity and health of the storage cluster, including data loss. The Rook cluster's storage may be unavailable for short periods during the upgrade process. Read this document in full before undertaking a Rook cluster upgrade. Rook v1.13 supports the following Ceph versions: Ceph Reef v18.2.0 or newer Ceph Quincy v17.2.0 or newer Support for Ceph Pacific (16.2.x) is removed in Rook v1.13. Upgrade to Quincy or Reef before upgrading to Rook v1.13. !!! important When an update is requested, the operator will check Ceph's status, if it is in `HEALTH_ERR` the operator will refuse to proceed with the upgrade. !!! warning Ceph v17.2.2 has a blocking issue when running with Rook. Use"
},
{
"data": "or newer when possible. Ceph Quincy v17.2.1 has a potentially breaking regression with CephNFS. See the NFS documentation's for more detail. Official Ceph container images can be found on . These images are tagged in a few ways: The most explicit form of tags are full-ceph-version-and-build tags (e.g., `v18.2.2-20240311`). These tags are recommended for production clusters, as there is no possibility for the cluster to be heterogeneous with respect to the version of Ceph running in containers. Ceph major version tags (e.g., `v18`) are useful for development and test clusters so that the latest version of Ceph is always available. Ceph containers other than the official images from the registry above will not be supported. The upgrade will be automated by the Rook operator after the desired Ceph image is changed in the CephCluster CRD (`spec.cephVersion.image`). ```console ROOKCLUSTERNAMESPACE=rook-ceph NEWCEPHIMAGE='quay.io/ceph/ceph:v18.2.2-20240311' kubectl -n $ROOKCLUSTERNAMESPACE patch CephCluster $ROOKCLUSTERNAMESPACE --type=merge -p \"{\\\"spec\\\": {\\\"cephVersion\\\": {\\\"image\\\": \\\"$NEWCEPHIMAGE\\\"}}}\" ``` Since the is not controlled by the Rook operator, users must perform a manual upgrade by modifying the `image` to match the ceph version employed by the new Rook operator release. Employing an outdated Ceph version within the toolbox may result in unexpected behaviour. ```console kubectl -n rook-ceph set image deploy/rook-ceph-tools rook-ceph-tools=quay.io/ceph/ceph:v18.2.2-20240311 ``` As with upgrading Rook, now wait for the upgrade to complete. Status can be determined in a similar way to the Rook upgrade as well. ```console watch --exec kubectl -n $ROOKCLUSTERNAMESPACE get deployments -l rookcluster=$ROOKCLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{\" \\treq/upd/avl: \"}{.spec.replicas}{\"/\"}{.status.updatedReplicas}{\"/\"}{.status.readyReplicas}{\" \\tceph-version=\"}{.metadata.labels.ceph-version}{\"\\n\"}{end}' ``` Confirm the upgrade is completed when the versions are all on the desired Ceph version. ```console kubectl -n $ROOKCLUSTERNAMESPACE get deployment -l rookcluster=$ROOKCLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{\"ceph-version=\"}{.metadata.labels.ceph-version}{\"\\n\"}{end}' | sort | uniq This cluster is not yet finished: ceph-version=v17.2.7-0 ceph-version=v18.2.2-0 This cluster is finished: ceph-version=v18.2.2-0 ``` Verify the Ceph cluster's health using the ."
}
] |
{
"category": "Runtime",
"file_name": "ceph-upgrade.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
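The upgrade above is triggered by patching `spec.cephVersion.image` on the CephCluster resource, which the guide shows with `kubectl patch`. For completeness, here is a hedged Go sketch that performs the same merge patch through client-go's dynamic client; the namespace and cluster name `rook-ceph` follow the guide's example, and the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// CephCluster is a CRD, so it is addressed via a GroupVersionResource.
	gvr := schema.GroupVersionResource{Group: "ceph.rook.io", Version: "v1", Resource: "cephclusters"}
	patch := []byte(`{"spec":{"cephVersion":{"image":"quay.io/ceph/ceph:v18.2.2-20240311"}}}`)

	if _, err := dyn.Resource(gvr).Namespace("rook-ceph").Patch(
		context.TODO(), "rook-ceph", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("CephCluster image updated; the operator will start the rolling upgrade")
}
```

After the patch is applied, the operator performs the rolling update, and the `watch`/`jsonpath` commands from the guide can be used to follow progress until all deployments report the desired `ceph-version` label.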
[
{
"data": "Currently, the which can be used to filter/handle volumes during backup only supports the skip action on matching conditions. Users need more actions to be supported. The `VolumePolicies` feature was introduced in Velero 1.11 as a flexible way to handle volumes. The main agenda of introducing the VolumePolicies feature was to improve the overall user experience when performing backup operations for volume resources, the feature enables users to group volumes according the `conditions` (criteria) specified and also lets you specify the `action` that velero needs to take for these grouped volumes during the backup operation. The limitation being that currently `VolumePolicies` only supports `skip` as an action, We want to extend the `action` functionality to support more usable options like `fs-backup` (File system backup) and `snapshot` (VolumeSnapshots). Extending the VolumePolicies to support more actions like `fs-backup` (File system backup) and `snapshot` (VolumeSnapshots). Improve user experience when backing up Volumes via Velero No changes to existing approaches to opt-in/opt-out annotations for volumes No changes to existing `VolumePolicies` functionalities No additions or implementations to support more granular actions like `snapshot-csi` and `snapshot-datamover`. These actions can be implemented as a future enhancement Use-case 1: A user wants to use `snapshot` (volumesnapshots) backup option for all the csi supported volumes and `fs-backup` for the rest of the volumes. Currently, velero supports this use-case but the user experience is not that great. The user will have to individually annotate the volume mounting pod with the annotation \"backup.velero.io/backup-volumes\" for `fs-backup` This becomes cumbersome at scale. Using `VolumePolicies`, the user can just specify 2 simple `VolumePolicies` like for csi supported volumes as `snapshot` action and rest can be backed up`fs-backup` action: ```yaml version: v1 volumePolicies: conditions: storageClass: gp2 action: type: snapshot conditions: {} action: type: fs-backup ``` Use-case 2: A user wants to use `fs-backup` for nfs volumes pertaining to a particular server In such a scenario the user can just specify a `VolumePolicy` like: ```yaml version: v1 volumePolicies: conditions: nfs: server: 192.168.200.90 action: type: fs-backup ``` When the VolumePolicy action is set as `fs-backup` the backup workflow modifications would be: We call on all the items that are to be backed up Here when we encounter We will have to modify the backup workflow to account for the `fs-backup` VolumePolicy action When the VolumePolicy action is set as `snapshot` the backup workflow modifications would be: Once again, We call on all the items that are to be backed up Here when we encounter And we call the We need to modify the takePVSnapshot function to account for the `snapshot` VolumePolicy action. In case of csi snapshots for PVC objects, these snapshot actions are taken by the velero-plugin-for-csi, we need to modify the function to account for the `snapshot` VolumePolicy action. Note: `Snapshot` action can either be a native snapshot or a csi snapshot, as is the case with the current flow where velero itself makes the decision based on the backup"
},
{
"data": "Update VolumePolicy action type validation to account for `fs-backup` and `snapshot` as valid VolumePolicy actions. Modifications needed for `fs-backup` action: Now based on the specification of volume policy on backup request we will decide whether to go via legacy pod annotations approach or the newer volume policy based fs-backup action approach. If there is a presence of volume policy(fs-backup/snapshot) on the backup request that matches as an action for a volume we use the newer volume policy approach to get the list of the volumes for `fs-backup` action Else continue with the annotation based legacy approach workflow. Modifications needed for `snapshot` action: In the we will check the PV fits the volume policy criteria and see if the associated action is `snapshot` If it is not snapshot then we skip the further workflow and avoid taking the snapshot of the PV Similarly, For csi snapshot of PVC object, we need to do similar changes in . we will check the PVC fits the volume policy criteria and see if the associated action is `snapshot` via csi plugin If it is not snapshot then we skip the csi BIA execute action and avoid taking the snapshot of the PVC by not invoking the csi plugin action for the PVC Note: When we are using the `VolumePolicy` approach for backing up the volumes then the volume policy criteria and action need to be specific and explicit, there is no default behaviour, if a volume matches `fs-backup` action then `fs-backup` method will be used for that volume and similarly if the volume matches the criteria for `snapshot` action then the snapshot workflow will be used for the volume backup. Another thing to note is the workflow proposed in this design uses the legacy opt-in/opt-out approach as a fallback option. For instance, the user specifies a VolumePolicy but for a particular volume included in the backup there are no actions(fs-backup/snapshot) matching in the volume policy for that volume, in such a scenario the legacy approach will be used for backing up the particular volume. The implementation should be included in velero 1.14 We will introduce a `VolumeHelper` interface. It will consist of two methods: `ShouldPerformFSBackupForPodVolume(pod *corev1api.Pod)` `ShouldPerformSnapshot(obj runtime.Unstructured)` ```go type VolumeHelper interface { GetVolumesForFSBackup(pod *corev1api.Pod) ([]string, []string, error) ShouldPerformSnapshot(obj runtime.Unstructured) (bool, error) } ``` The `VolumeHelperImpl` struct will implement the `VolumeHelper` interface and will consist of the functions that we will use through the backup workflow to accommodate volume policies for PVs and PVCs. ```go type VolumeHelperImpl struct { Backup *velerov1api.Backup VolumePolicy *resourcepolicies.Policies BackupExcludePVC bool DefaultVolumesToFsBackup bool SnapshotVolumes *bool Logger logrus.FieldLogger } ``` We will create an instance of the struct the `VolumeHelperImpl` in `item_backupper.go` ```go vh := &volumehelper.VolumeHelperImpl{ Backup: ib.backupRequest.Backup, VolumePolicy: ib.backupRequest.ResPolicies, BackupExcludePVC: !ib.backupRequest.ResourceIncludesExcludes.ShouldInclude(kuberesource.PersistentVolumeClaims.String()), DefaultVolumesToFsBackup: boolptr.IsSetToTrue(ib.backupRequest.Spec.DefaultVolumesToFsBackup), SnapshotVolumes: ib.backupRequest.Spec.SnapshotVolumes, Logger: logger, } ``` Regarding `fs-backup` action to decide whether to use legacy annotation based approach or volume policy based approach: We will use the"
},
{
"data": "function from the `volumehelper` package Functions involved in processing `fs-backup` volume policy action will somewhat look like: ```go func (v VolumeHelperImpl) GetVolumesForFSBackup(pod corev1api.Pod) ([]string, []string, error) { // Check if there is a fs-backup/snapshot volumepolicy specified by the user, if yes then use the volume policy approach to // get the list volumes for fs-backup else go via the legacy annotation based approach var includedVolumes = make([]string, 0) var optedOutVolumes = make([]string, 0) FSBackupOrSnapshot, err := checkIfFsBackupORSnapshotPolicyForPodVolume(pod, v.VolumePolicy) if err != nil { return includedVolumes, optedOutVolumes, err } if v.VolumePolicy != nil && FSBackupOrSnapshot { // Get the list of volumes to back up using pod volume backup for the given pod matching fs-backup volume policy action includedVolumes, optedOutVolumes, err = GetVolumesMatchingFSBackupAction(pod, v.VolumePolicy) if err != nil { return includedVolumes, optedOutVolumes, err } } else { // Get the list of volumes to back up using pod volume backup from the pod's annotations. includedVolumes, optedOutVolumes = pdvolumeutil.GetVolumesByPod(pod, v.DefaultVolumesToFsBackup, v.BackupExcludePVC) } return includedVolumes, optedOutVolumes, err } func checkIfFsBackupORSnapshotPolicyForPodVolume(pod corev1api.Pod, volumePolicies resourcepolicies.Policies) (bool, error) { for volume := range pod.Spec.Volumes { action, err := volumePolicies.GetMatchAction(volume) if err != nil { return false, err } if action.Type == resourcepolicies.FSBackup || action.Type == resourcepolicies.Snapshot { return true, nil } } return false, nil } // GetVolumesMatchingFSBackupAction returns a list of volume names to backup for the provided pod having fs-backup volume policy action func GetVolumesMatchingFSBackupAction(pod corev1api.Pod, volumePolicy resourcepolicies.Policies) ([]string, []string, error) { ActionMatchingVols := []string{} NonActionMatchingVols := []string{} for _, vol := range pod.Spec.Volumes { action, err := volumePolicy.GetMatchAction(vol) if err != nil { return nil, nil, err } // Now if the matched action is `fs-backup` then add that Volume to the fsBackupVolumeList if action != nil && action.Type == resourcepolicies.FSBackup { ActionMatchingVols = append(ActionMatchingVols, vol.Name) } else { NonActionMatchingVols = append(NonActionMatchingVols, vol.Name) } } return ActionMatchingVols, NonActionMatchingVols, nil } ``` The main function from the above `vph.ProcessVolumePolicyFSbackup` will be called when we encounter Pods during the backup workflow: ```go includedVolumes, optedOutVolumes, err := vh.GetVolumesForFSBackup(pod) if err != nil { backupErrs = append(backupErrs, errors.WithStack(err)) } ``` Making sure that `snapshot` action is skipped for PVs that do not fit the volume policy criteria, for this we will use the `vh.ShouldPerformSnapshot` from the `VolumeHelperImpl(vh)` receiver. 
```go func (v *VolumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured) (bool, error) { // check if volume policy exists and also check if the object(pv/pvc) fits a volume policy criteria and see if the associated action is snapshot // if it is not snapshot then skip the code path for snapshotting the PV/PVC if v.VolumePolicy != nil { action, err := v.VolumePolicy.GetMatchAction(obj) if err != nil { return false, err } // Also account for SnapshotVolumes flag on backup CR if action != nil && action.Type == resourcepolicies.Snapshot && boolptr.IsSetToTrue(v.SnapshotVolumes) { return true, nil } } // now if volumepolicy is not specified then just check for snapshotVolumes flag if"
},
{
"data": "{ return true, nil } return false, nil } ``` The above function will be used as follows in `takePVSnapshot` function of the backup workflow: ```go snapshotVolume, err := vh.ShouldPerformSnapshot(obj) if err != nil { return err } if !snapshotVolume { log.Info(fmt.Sprintf(\"skipping volume snapshot for PV %s as it does not fit the volume policy criteria for snapshot action\", pv.Name)) ib.trackSkippedPV(obj, kuberesource.PersistentVolumes, volumeSnapshotApproach, \"does not satisfy the criteria for volume policy based snapshot action\", log) return nil } ``` Making sure that `snapshot` action is skipped for PVCs that do not fit the volume policy criteria, for this we will again use the `vh.ShouldPerformSnapshot` from the `VolumeHelperImpl(vh)` receiver. We will pass the `VolumeHelperImpl(vh)` instance in `executeActions` method so that it is available to use. ```go func (v *VolumeHelperImpl) ShouldPerformSnapshot(obj runtime.Unstructured) (bool, error) { // check if volume policy exists and also check if the object(pv/pvc) fits a volume policy criteria and see if the associated action is snapshot // if it is not snapshot then skip the code path for snapshotting the PV/PVC if v.VolumePolicy != nil { action, err := v.VolumePolicy.GetMatchAction(obj) if err != nil { return false, err } // Also account for SnapshotVolumes flag on backup CR if action != nil && action.Type == resourcepolicies.Snapshot && boolptr.IsSetToTrue(v.SnapshotVolumes) { return true, nil } } // now if volumepolicy is not specified then just check for snapshotVolumes flag if boolptr.IsSetToTrue(v.SnapshotVolumes) { return true, nil } return false, nil } ``` The above function will be used as follows in the `executeActions` function of backup workflow: ```go if groupResource == kuberesource.PersistentVolumeClaims && actionName == csiBIAPluginName { snapshotVolume, err := vh.ShouldPerformSnapshot(obj) if err != nil { return nil, itemFiles, errors.WithStack(err) } if !snapshotVolume { log.Info(fmt.Sprintf(\"skipping csi volume snapshot for PVC %s as it does not fit the volume policy criteria for snapshot action\", namespace+\" /\"+name)) ib.trackSkippedPV(obj, kuberesource.PersistentVolumeClaims, volumeSnapshotApproach, \"does not satisfy the criteria for volume policy based snapshot action\", log) continue } } ``` It makes sense to add more specific actions in the future, once we deprecate the legacy opt-in/opt-out approach to keep things simple. Another point of note is, csi related action can be easier to implement once we decide to merge the csi plugin into main velero code flow. In the future, we envision the following actions that can be implemented: `snapshot-native`: only use volume snapshotter (native cloud provider snapshots), do nothing if not present/not compatible `snapshot-csi`: only use csi-plugin, don't use volume snapshotter(native cloud provider snapshots), don't use datamover even if snapshotMoveData is true `snapshot-datamover`: only use csi with datamover, don't use volume snapshotter (native cloud provider snapshots), use datamover even if snapshotMoveData is false Note: The above actions are just suggestions for future scope, we may not use/implement them as is. We could definitely merge these suggested actions as `Snapshot` actions and use volume policy parameters and criteria to segregate them instead of making the user explicitly supply the action names to such granular levels. Same as the earlier design as this is an extension of the original VolumePolicies design"
}
] |
{
"category": "Runtime",
"file_name": "Extend-VolumePolicies-to-support-more-actions.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
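To make the use-case YAML at the top of the design concrete, here is a small, self-contained Go sketch that parses a policy shaped like use case 1 and resolves an action for a volume. The types and the first-match ordering are illustrative assumptions for this sketch only — the real schema and matching logic live in Velero's `resourcepolicies` package:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Minimal, illustrative types mirroring the shape of the policy YAML shown in
// the use cases above; they are not the real Velero types.
type policyFile struct {
	Version        string         `yaml:"version"`
	VolumePolicies []volumePolicy `yaml:"volumePolicies"`
}

type volumePolicy struct {
	Conditions map[string]interface{} `yaml:"conditions"`
	Action     struct {
		Type string `yaml:"type"`
	} `yaml:"action"`
}

const doc = `
version: v1
volumePolicies:
  - conditions:
      storageClass: gp2
    action:
      type: snapshot
  - conditions: {}
    action:
      type: fs-backup
`

// matchAction returns the action of the first policy whose conditions are a
// subset of the volume's attributes; empty conditions match everything.
// First-match ordering is an assumption based on the catch-all example above.
func matchAction(p policyFile, attrs map[string]interface{}) string {
	for _, pol := range p.VolumePolicies {
		matched := true
		for k, v := range pol.Conditions {
			if attrs[k] != v {
				matched = false
				break
			}
		}
		if matched {
			return pol.Action.Type
		}
	}
	return ""
}

func main() {
	var p policyFile
	if err := yaml.Unmarshal([]byte(doc), &p); err != nil {
		panic(err)
	}
	fmt.Println(matchAction(p, map[string]interface{}{"storageClass": "gp2"}))   // snapshot
	fmt.Println(matchAction(p, map[string]interface{}{"storageClass": "local"})) // fs-backup
}
```

This mirrors the intent of use case 1: CSI-capable `gp2` volumes resolve to `snapshot`, and everything else falls through to the catch-all `fs-backup` entry.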
[
{
"data": "From the viewpoint of Kubernetes kubelet CNI-Genie is treated the same as any other CNI plugin. As a result, no changes to Kubernetes are required. CNI Genie proxies for all of the CNI plugins, each providing a unique container networking solution, that are available on the host. We start Kubelet with \"genie\" as the CNI \"type\". Note that for this to work we must have already placed genie binary under /opt/cni/bin as detailed in This is done by passing /etc/cni/net.d/genie.conf to kubelet ```json { \"name\": \"k8s-pod-network\", \"type\": \"genie\", \"etcd_endpoints\": \"http://10.96.232.136:6666\", \"log_level\": \"debug\", \"policy\": { \"type\": \"k8s\", \"k8sapiroot\": \"https://10.96.0.1:443\", \"k8sauthtoken\": \"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjYWxpY28tY25pLXBsdWdpbi10b2tlbi13Zzh3OSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJjYWxpY28tY25pLXBsdWdpbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImJlZDY2NTE3LTFiZjItMTFlNy04YmU5LWZhMTYzZTRkZWM2NyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpjYWxpY28tY25pLXBsdWdpbiJ9.GEAcibv-urfWRGTSK0gchlCB6mtCxbwnfgxgJYdEKRLDjo7Sjyekg5lWPJoMopzzPu8-Tddd-yPZDJc44NCGRep7ovjjJdlQvjhc0g1XA7NS8W0OMNHUJAzueyn4iuEwDHR7oNSnwMqsfzgCsiIRkc7NkQDtKaBj8GOYTz9126zk37TqXylh7hMKlwDFkv9vCBcPv-nYU22UM67Ux6emAtf1g1Yw9i8EfOkbuqURir66jtcnwh3HLPSYMAEyADxYtYAxG9Ca-HhdXXsvnQxhd4P0h2ctgg0NLTO6WRX47C3GNheLmq0tNttFXya0mHhcElSPQFZftzGw8ZvxTQ\" }, \"kubernetes\": { \"kubeconfig\": \"/etc/cni/net.d/genie-kubeconfig\" } } ``` A detailed illustration of the workflow is given in the following figure: Step 1. a Pod object is submitted to API Server by the user Step 2. Scheduler schedules the pod to one of the slave nodes Step 3. Kubelet of the slave node picks up the pod from API Server and creates corresponding container Step 4. Kubelet passes the following to CNI-Genie a. CNI_COMMAND b. CNI_CONTAINERID c. CNI_NETNS d. CNIARGS (K8SPODNAMESPACE, K8SPOD_NAME) e. CNI_IFNAME (always eth0, please see kubernetes/pkg/kubelet/network/network.go) Step 5. CNI-Genie queries API Server with K8SPODNAMESPACE, K8SPODNAME to get the pod object, from which it parses cni plugin type, e.g., canal, weave Step 6. CNI-Genie queries the cni plugin of choice with parameters from Step 4 to get IP Address(es) for the pod Step 7. CNI-Genie returns the IP Address(es) to Kubelet Step 8. Kubelet updates the Pod object with the IP Address(es) passed from CNI-Genie"
}
] |
{
"category": "Runtime",
"file_name": "HLD.md",
"project_name": "CNI-Genie",
"subcategory": "Cloud Native Network"
}
|
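Step 5 of the workflow above has CNI-Genie fetch the pod from the API server and parse the requested plugin type (e.g. canal, weave) before delegating in Step 6. Below is a minimal Go sketch of just that selection step — the annotation key `cni` and the comma-separated format are assumptions for illustration, and the real implementation reads the pod through the Kubernetes API rather than from a plain map:

```go
package main

import (
	"fmt"
	"strings"
)

// pickPlugins returns the ordered list of CNI plugins a pod asks for, or a
// caller-supplied default when the pod requests nothing. The default used
// here is a parameter of the sketch, not behaviour specified in the excerpt.
func pickPlugins(annotations map[string]string, defaultPlugin string) []string {
	raw, ok := annotations["cni"]
	if !ok || strings.TrimSpace(raw) == "" {
		return []string{defaultPlugin}
	}
	var plugins []string
	for _, p := range strings.Split(raw, ",") {
		if p = strings.TrimSpace(p); p != "" {
			plugins = append(plugins, p)
		}
	}
	return plugins
}

func main() {
	pod := map[string]string{"cni": "canal,weave"}
	fmt.Println(pickPlugins(pod, "calico")) // [canal weave]
	fmt.Println(pickPlugins(nil, "calico")) // [calico]
}
```

The resolved list is what CNI-Genie would then use to call each chosen plugin with the parameters it received from the Kubelet (Steps 6–8).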
[
{
"data": "The document outlines the set of instructions that are required to install Sysbox on Kinvolk's Flatcar Linux distribution. Due to the enterprise-oriented nature of Flatcar's typical deployments, as well as the extra maintenance cost that it entails from Sysbox maintainers, Flatcar support is currently only offered as part of the Sysbox Enterprise Edition (EE). Flatcar is a container-optimized Linux distro, meaning that the OS is designed to run workloads inside containers efficiently and securely. Running Sysbox on Flatcar further increases container security and flexibility, as Sysbox enables containers deployed by Docker or Kubernetes to run with stronger isolation (Linux user-namespace, procfs and sysfs virtualization, initial mount locking, etc.). In addition, Sysbox enables containers to run most workloads that run in virtual machines (including systemd, Docker, and even Kubernetes), thus enabling new powerful use cases for containers beyond microservice deployment. 2905.2.3 2905.2.6 The method of installation depends on whether Sysbox is installed on a Kubernetes node or a Docker host. Sysbox Enterprise (EE) can be easily installed on Kubernetes nodes running Flatcar. The installation method in this case fully matches the one of other distributions supported by Sysbox. Please refer to document for more details. To install Sysbox on a host machine (physical or VM) running Flatcar, simply use this . This config installs the on the Flatcar host. NOTE: Add to the config file any other configurations you need for the machine (e.g., users, ssh keys, etc.) For example, the steps below deploy Sysbox on a Google Compute Engine (GCE) virtual machine: Add the ssh authorized key to the : ```yaml passwd: users: name: core sshauthorizedkeys: \"ssh-rsa AAAAB3NzaC1yc...\" ``` Convert the config to the Ignition format. This is done using the `ct` tool, as described : ```console $ ct --platform=gce < flatcar-config.yaml > config.ign ``` Provision the GCE VM and pass the `config.ign` generated in the prior step as"
},
{
"data": "```console $ gcloud compute instances create flatcar-vm --image-project kinvolk-public --image-family flatcar-stable --zone us-central1-a --machine-type n2-standard-4 --metadata-from-file user-data=config.ign Created [https://www.googleapis.com/compute/v1/projects/predictive-fx-309900/zones/us-central1-a/instances/flatcar-vm]. NAME ZONE MACHINETYPE PREEMPTIBLE INTERNALIP EXTERNAL_IP STATUS flatcar-vm us-central1-a n2-standard-4 10.128.15.196 34.132.170.36 RUNNING ``` When the VM boots, Sysbox will already be installed and running. You can verify this as follows: ```console core@flatcar-vm ~ $ systemctl status sysbox sysbox.service - Sysbox container runtime Loaded: loaded (/etc/systemd/system/sysbox.service; enabled; vendor preset: enabled) Active: active (running) since Mon 2021-07-05 19:35:41 UTC; 3h 10min ago Docs: https://github.com/nestybox/sysbox Main PID: 1064 (sh) Tasks: 2 (limit: 19154) Memory: 704.0K CGroup: /system.slice/sysbox.service 1064 /bin/sh -c /opt/bin/sysbox/sysbox-runc --version && /opt/bin/sysbox/sysbox-mgr --version && /opt/bin/sysbox/sysbox-fs --version && /bin/sleep infinity 1084 /bin/sleep infinity ``` You can now deploy containers with Docker + Sysbox as follows: ```console core@flatcar-vm ~ $ docker run --runtime=sysbox-runc -it --rm <some-image> ``` This will create a container that is strongly secured and is capable of running microservices as well as full OS environments (similar to a VM, but with the efficiency and speed of containers). For example, to deploy a \"VM-like\" container that runs Ubuntu Focal + systemd + Docker inside: ```console core@flatcar-vm ~ $ docker run --runtime=sysbox-runc -it --rm nestybox/ubuntu-focal-systemd-docker ``` Please refer the and for may more usage examples. NOTE: If you exclude the `--runtime=sysbox-runc` flag, Docker will launch containers with it's default runtime (aka runc). You can have regular Docker containers live side-by-side and communicate with Docker + Sysbox containers without problem. The performs the following configurations on a Flatcar host: Places the Sysbox-EE binaries in a directory called `/opt/bin/sysbox`. The binaries include: sysbox-mgr, sysbox-runc, sysbox-fs, the `shiftfs` module, and `fusermount`. Loads the `shiftfs` module into the kernel. This module is present in Ubuntu kernels, but typically not present on other distros. It brings multiple functional benefits as described . Configures some kernel sysctl parameters (the `flatcar-config.yaml` has the details). Installs and starts the Sysbox-EE systemd units. Configures Docker to learn about Sysbox-EE and restarts Docker. The result is that the host is fully configured to run Docker containers with Sysbox."
}
] |
{
"category": "Runtime",
"file_name": "install-flatcar.md",
"project_name": "Sysbox",
"subcategory": "Container Runtime"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-operator-alibabacloud completion zsh) To load completions for every new session, execute once: cilium-operator-alibabacloud completion zsh > \"${fpath[1]}/_cilium-operator-alibabacloud\" cilium-operator-alibabacloud completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-operator-alibabacloud You will need to start a new shell for this setup to take effect. ``` cilium-operator-alibabacloud completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-alibabacloud_completion_zsh.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "gVisor is a multi-layered security sandbox. is gVisor's second layer of defense against container escape attacks. gVisor uses `seccomp-bpf` to filter its own syscalls by the host kernel. This significantly reduces the attack surface to the host that a compromised gVisor process can access. However, this layer comes at a cost: every legitimate system call that gVisor makes must be evaluated against this filter by the host kernel before it is actually executed. This blog post contains more than you ever wanted to know about `seccomp-bpf`, and explores the past few months of work to optimize gVisor's use of it. {:style=\"max-width:100%\"} <span class=\"attribution\">A diagram showing gVisor's two main layers of security: gVisor itself, and `seccomp-bpf`. This blog post touches on the `seccomp-bpf` part. .</span> -- One challenge with gVisor performance improvement ideas is that it is often very difficult to estimate how much they will impact performance without first doing most of the work necessary to actually implement them. Profiling tools help with knowing where to look, but going from there to numbers is difficult. `seccomp-bpf` is one area which is actually much more straightforward to estimate. Because it is a secondary layer of defense that lives outside of gVisor, and it is merely a filter, we can simply yank it out of gVisor and benchmark the performance we get. While running gVisor in this way is strictly less secure and not a mode that gVisor should support, the numbers we get in this manner do provide an upper bound on the maximum potential performance gains we could see from optimizations within gVisor's use of `seccomp-bpf`. To visualize this, we can run a benchmark with the following variants: Unsandboxed*: Unsandboxed performance without gVisor. gVisor*: gVisor from before any of the performance improvements described later in this post. gVisor with empty filter: Same as gVisor*, but with the `seccomp-bpf` filter replaced with one that unconditionally approves every system call. From these three variants, we can break down the gVisor overhead that comes from gVisor itself vs the one that comes from `seccomp-bpf` filtering. The difference between gVisor and unsandboxed represents the total gVisor performance overhead, and the difference between gVisor and gVisor with empty filter represents the performance overhead of gVisor's `seccomp-bpf` filtering rules. Let's run these numbers for the ABSL build benchmark: {:style=\"max-width:100%\"} We can now use these numbers to give a rough breakdown of where the overhead is coming from: {:style=\"max-width:100%\"} The `seccomp-bpf` overhead is small in absolute terms. The numbers suggest that the best that can be shaved off by optimizing `seccomp-bpf` filters is up to 3.4 seconds off from the total ABSL build time, which represents a reduction of total runtime by ~3.6%. However, when looking at this amount relative to gVisor's overhead over unsandboxed time, this means that optimizing the `seccomp-bpf` filters may remove up to ~15% of gVisor overhead, which is significant. *(Not all benchmarks have this behavior; some benchmarks show smaller `seccomp-bpf`-related overhead. The overhead is also highly platform-dependent.)* Of course, this level of performance is what was reached with empty `seccomp-bpf` filtering rules, so we cannot hope to reach this level of performance gains. However, it is still useful as an upper bound. Let's see how much of it we can recoup without compromising security. 
BPF is a virtual machine and eponymous machine language. Its name comes from its original purpose: filtering packets in a kernel network"
},
{
"data": "However, its use has expanded to other domains of the kernel where programmability is desirable. Syscall filtering in the context of `seccomp` is one such area. BPF itself comes in two dialects: \"Classic BPF\" (sometimes stylized as cBPF), and the now-more-well-known . eBPF is a superset of cBPF and is usable extensively throughout the kernel. However, `seccomp` is not one such area. While , the status quo remains that `seccomp` filters may only use cBPF, so this post will focus on cBPF alone. `seccomp-bpf` is a part of the Linux kernel which allows a program to impose syscall filters on itself. A `seccomp-bpf` filter is a cBPF program that is given syscall data as input, and outputs an \"action\" (a 32-bit integer) to do as a result of this system call: allow it, reject it, crash the program, trap execution, etc. The kernel evaluates the cBPF program on every system call the application makes. The \"input\" of this cBPF program is the byte layout of the `seccomp_data` struct, which can be loaded into the registers of the cBPF virtual machine for analysis. Here's what the `seccomp_data` struct looks like in : ```c struct seccomp_data { int nr; // 32 bits u32 arch; // 32 bits u64 instruction_pointer; // 64 bits u64 args[6]; // 64 bits 6 }; // Total 512 bits ``` Here is an example `seccomp-bpf` filter, adapted from the happens to work pretty well with this pseudo-assembly-like language. It is not actually JavaScript. --> ```javascript 00: load32 4 // Load 32 bits at offsetof(struct seccomp_data, arch) (= 4) // of the seccomp_data input struct into register A. 01: jeq 0xc000003e, 0, 11 // If A == AUDITARCHX86_64, jump by 0 instructions [to 02] // else jump by 11 instructions [to 13]. 02: load32 0 // Load 32 bits at offsetof(struct seccomp_data, nr) (= 0) // of the seccomp_data input struct into register A. 03: jeq 15, 10, 0 // If A == NRrtsigreturn, jump by 10 instructions [to 14] // else jump by 0 instructions [to 04]. 04: jeq 231, 9, 0 // If A == NRexitgroup, jump by 9 instructions [to 14] // else jump by 0 instructions [to 05]. 05: jeq 60, 8, 0 // If A == NR_exit, jump by 8 instructions [to 14] // else jump by 0 instructions [to 06]. 06: jeq 0, 7, 0 // Same thing for NR_read. 07: jeq 1, 6, 0 // Same thing for NR_write. 08: jeq 5, 5, 0 // Same thing for NR_fstat. 09: jeq 9, 4, 0 // Same thing for NR_mmap. 10: jeq 14, 3, 0 // Same thing for NRrtsigprocmask. 11: jeq 13, 2, 0 // Same thing for NRrtsigaction. 12: jeq 35, 1, 0 // If A == NR_nanosleep, jump by 1 instruction [to 14] // else jump by 0 instructions [to 13]. 13: return 0 // Return SECCOMPRETKILL_THREAD 14: return 0x7fff0000 // Return SECCOMPRETALLOW ``` This filter effectively allows only the following syscalls: `rt_sigreturn`, `exitgroup`, `exit`, `read`, `write`, `fstat`, `mmap`, `rtsigprocmask`, `rt_sigaction`, and `nanosleep`. All other syscalls result in the calling thread being killed. cBPF is quite limited as a language. The following limitations all factor into the optimizations described in this blog post: The cBPF virtual machine only has 2 32-bit registers, and a tertiary pseudo-register for a 32-bit immediate value. (Note that syscall arguments evaluated in the context of `seccomp` are 64-bit values, so you can already foresee that this leads to complications.) `seccomp-bpf` programs are limited to 4,096"
},
{
"data": "Jump instructions can only go forward (this ensures that programs must halt). Jump instructions may only jump by a fixed (\"immediate\") number of instructions. (You cannot say: \"jump by whatever this register says\".) Jump instructions come in two flavors: \"Unconditional\" jump instructions, which jump by a fixed number of instructions. This number must fit in 16 bits. \"Conditional\" jump instructions, which include a condition expression and two jump targets: The number of instructions to jump by if the condition is true. This number must fit in 8 bits, so this cannot jump by more than 255 instructions. The number of instructions to jump by if the condition is false. This number must fit in 8 bits, so this cannot jump by by more than 255 instructions. Since , when a program uploads a `seccomp-bpf` filter into the kernel, that looks for system call numbers where the BPF program doesn't do any fancy operations nor load any bits from the `instruction_pointer` or `args` fields of the `seccomp_data` input struct, and still returns \"allow\". When this is the case, Linux will cache this information in a per-syscall-number bitfield. Later, when a cacheable syscall number is executed, the BPF program is not evaluated at all; since the kernel knows that the program is deterministic and doesn't depend on the syscall arguments, it can safely allow the syscall without actually running the BPF program. This post uses the term \"cacheable\" to refer to syscalls that match this criteria. gVisor imposes a `seccomp-bpf` filter on itself as part of Sentry start-up. This process works as follows: gVisor gathers bits of configuration that are relevant to the construction of its `seccomp-bpf` filter. This includes which platform is in use, whether certain features that require looser filtering are enabled (e.g. host networking, profiling, GPU proxying, etc.), and certain file descriptors (FDs) which may be checked against syscall arguments that pass in FDs. gVisor generates a sequence of rulesets from this configuration. A ruleset is a mapping from syscall number to a predicate that must be true for this system call, along with an \"action\" (return code) that is taken should this predicate be satisfied. For ease of human understanding, the predicate is often written as a , for which each sub-rule is a that verifies each syscall argument. In other words, `(fA(args[0]) && fB(args[1]) && ...) || (fC(args[0]) && fD(args[1]) && ...) || ...`. This is represented as follows: ```go Or{ // Disjunction rule PerArg{ // Conjunction rule over each syscall argument fA, // Predicate for `seccomp_data.args[0]` fB, // Predicate for `seccomp_data.args[1]` // ... More predicates can go here (up to 6 arguments per syscall) }, PerArg{ // Conjunction rule over each syscall argument fC, // Predicate for `seccomp_data.args[0]` fD, // Predicate for `seccomp_data.args[1]` // ... More predicates can go here (up to 6 arguments per syscall) }, } ``` gVisor performs several optimizations on this data structure. gVisor then renders this list of rulesets into a linear program that looks close to the final machine language, other than jump offsets which are initially represented as symbolic named labels during the rendering process. gVisor then resolves all the labels to their actual instruction index, and computes the actual jump targets of all jump instructions to obtain valid cBPF machine code. gVisor runs further optimizations on this cBPF bytecode. 
Finally, the cBPF bytecode is uploaded into the host kernel and the `seccomp-bpf` filter becomes effective. Optimizing the `seccomp-bpf` filter to be more efficient allows the program to be more compact"
},
{
"data": "it's possible to pack more complex filters in the 4,096 instruction limit), and to run faster. While `seccomp-bpf` evaluation is measured in nanoseconds, the impact of any optimization is magnified here, because host syscalls are an important part of the synchronous \"syscall hot path\" that must execute as part of handling certain performance-sensitive syscall from the sandboxed application. The relationship is not 1-to-1: a single application syscall may result in several host syscalls, especially due to `futex(2)` which the Sentry calls many times to synchronize its own operations. Therefore, shaving a nanosecond here and there results in several shaved nanoseconds in the syscall hot path. The first optimization done for gVisor's `seccomp-bpf` was to turn its linear search over syscall numbers into a . This turns the search for syscall numbers from `O(n)` to `O(log n)` instructions. This is a very common `seccomp-bpf` optimization technique which is replicated in other projects such as and Chromium. To do this, a cBPF program basically loads the 32-bit `nr` (syscall number) field of the `seccomp_data` struct, and does a binary tree traversal of the . When it finds a match, it jumps to a set of instructions that check that syscall's arguments for validity, and then returns allow/reject. But why stop here? Let's go further. The problem with the binary search tree approach is that it treats all syscall numbers equally. This is a problem for three reasons: It does not matter to have good performance for disallowed syscalls, because such syscalls should never happen during normal program execution. It does not matter to have good performance for syscalls which can be cached by the kernel, because the BPF program will only have to run once for these system calls. For the system calls which are allowed but are not cacheable by the kernel, there is a of their relative frequency. To exploit this we should evaluate the most-often used syscalls faster than the least-often used ones. The binary tree structure does not exploit this distribution, and instead treats all syscalls equally. So gVisor splits syscall numbers into four sets: : Non-cacheable llowed, called very frequently. : Non-cacheable allowed, called once in a lue moon. : acheable allowed (whether called frequently or not). : isallowed (which, by definition, is neither cacheable nor expected to ever be called). Then, the cBPF program is structured in the following layout: Linear search over allowed frequently-called non-cacheable syscalls (). These syscalls are ordered in most-frequently-called first (e.g. `futex(2)` is the first one as it is by far the most-frequently-called system call). Binary search over allowed infrequently-called non-cacheable syscalls (). Binary search over allowed cacheable syscalls (). Reject anything else (). This structure takes full advantage of the kernel caching functionality, and of the Pareto distribution of syscalls. <details markdown=\"1\"> <summary markdown=\"1\"> Beyond classifying syscalls to see which binary search tree they should be a part of, gVisor also optimizes the binary search process itself. </summary> Each syscall number is a node in the"
},
{
"data": "When traversing the tree, there are three options at each point: The syscall number is an exact match The syscall number is lower than the node's value The syscall number is higher than the node's value In order to render the BST as cBPF bytecode, gVisor used to render the following (in pseudocode): ```javascript if syscall number == current node value jump @rulesforthis_syscall if syscall number < current node value jump @left_node jump @right_node @rulesforthis_syscall: // Render bytecode for this syscall's filters here... @left_node: // Recursively render the bytecode for the left node value here... @right_node: // Recursively render the bytecode for the right node value here... ``` Keep in mind the here. Because conditional jumps are limited to 255 instructions, the jump to `@left_node` can be further than 255 instructions away (especially for syscalls with complex filtering rules like ). The jump to `@right_node` is almost certainly more than 255 instructions away. This means in actual cBPF bytecode, we would often need to use conditional jumps followed by unconditional jumps in order to jump so far forward. Meanwhile, the jump to `@rulesforthis_syscall` would be a very short hop away, but this locality would only be taken advantage of for a single node of the entire tree for each traversal. Consider this structure instead: ```javascript // Traversal code: if syscall number < current node value jump @left_node if syscall_number > current node value jump @right_node jump @rulesforthis_syscall @left_node: // Recursively render only the traversal code for the left node here @right_node: // Recursively render only the traversal code for the right node here // Filtering code: @rulesforthis_syscall: // Render bytecode for this syscall's filters here // Recursively render only the filtering code for the left node here // Recursively render only the filtering code for the right node here ``` This effectively separates the per-syscall rules from the traversal of the BST. This ensures that the traversal can be done entirely using conditional jumps, and that for any given execution of the cBPF program, there will be at most one unconditional jump to the syscall-specific rules. This structure is further improvable by taking advantage of the fact that syscall numbers are a dense space, and so are syscall filter rules. This means we can often avoid needless comparisons. For example, given the following tree: ``` 22 / \\ 9 24 / / \\ 8 23 50 ``` Notice that the tree contains `22`, `23`, and `24`. This means that if we get to node `23`, we do not need to check for syscall number equality, because we've already established from the traversal that the syscall number must be `23`. </details> gVisor now implements a running a few lossless optimizations. These optimizations are run repeatedly until the bytecode no longer changes. This is because each type of optimization tends to feed on the fruits of the others, as we'll see below. {:style=\"max-width:100%\"} gVisor's `seccomp-bpf` program size is reduced by over a factor of 4 using the optimizations below. <details markdown=\"1\"> <summary markdown=\"1\"> The means that typical BPF bytecode rendering code will usually favor unconditional jumps even when they are not necessary. However, they can be optimized after the fact. </summary> Typical BPF bytecode rendering code for a simple condition is usually rendered as follows: ```javascript jif <condition>, 0, 1 // If <condition> is true, continue, // otherwise skip over 1 instruction. 
jmp @conditionwastrue // Unconditional jump to label @conditionwastrue. jmp @conditionwasfalse // Unconditional jump to label @conditionwasfalse. ``` ... or as follows: ```javascript jif <condition>, 1, 0 // If <condition> is true, jump by 1 instruction, // otherwise continue. jmp @conditionwasfalse // Unconditional jump to label @conditionwasfalse. // Flow through here if the condition was true. ``` ... In other words, the generated code always uses unconditional jumps, and conditional jump offsets are always either 0 or 1 instructions"
},
{
"data": "This is because conditional jumps are limited to 8 bits (255 instructions), and it is not always possible at BPF bytecode rendering time to know ahead of time that the jump targets (`@conditionwastrue`, `@conditionwasfalse`) will resolve to an instruction that is close enough ahead that the offset would fit in 8 bits. The safe thing to do is to always use an unconditional jump. Since unconditional jump targets have 16 bits to play with, and `seccomp-bpf` programs are limited to 4,096 instructions, it is always possible to encode a jump using an unconditional jump instruction. But of course, the jump target often does fit in 8 bits. So gVisor looks over the bytecode for optimization opportunities: Conditional jumps that jump to unconditional jumps are rewritten to their final destination, so long as this fits within the 255-instruction conditional jump limit. Unconditional jumps that jump to other unconditional jumps are rewritten to their final destination. Conditional jumps where both branches jump to the same instruction are replaced by an unconditional jump to that instruction. Unconditional jumps with a zero-instruction jump target are removed. The aim of these optimizations is to clean up after needless indirection that is a byproduct of cBPF bytecode rendering code. Once they all have run, all jumps are as tight as they can be. </details> <details markdown=\"1\"> <summary markdown=\"1\"> Because cBPF is a very restricted language, it is possible to determine with certainty that some instructions can never be reached. </summary> In cBPF, each instruction either: Flows forward (e.g. `load` operations, math operations). Jumps by a fixed (immediate) number of instructions. Stops the execution immediately (`return` instructions). Therefore, gVisor runs a simple program traversal algorithm. It creates a bitfield with one bit per instruction, then traverses the program and all its possible branches. Then, all instructions that were never traversed are removed from the program, and all jump targets are updated to account for these removals. In turn, this makes the program shorter, which makes more jump optimizations possible. </details> <details markdown=\"1\"> <summary markdown=\"1\"> cBPF programs filter system calls by inspecting their arguments. To do these comparisons, this data must first be loaded into the cBPF VM registers. These load operations can be optimized. </summary> cBPF's conditional operations (e.g. \"is equal to\", \"is greater than\", etc.) operate on a single 32-bit register called \"A\". As such, a `seccomp-bpf` program typically consists of many load operations (`load32`) that loads a 32-bit value from a given offset of the `seccomp_data` struct into register A, then performs a comparative operation on it to see if it matches the filter. ```javascript 00: load32 <offset> 01: jif <condition1>, @condition1wastrue, @condition1wasfalse 02: load32 <offset> 03: jif <condition2>, @condition2wastrue, @condition2wasfalse // ... ``` But when a syscall rule is of the form \"this syscall argument must be one of the following values\", we don't need to reload the same value (from the same offset) multiple times. So gVisor looks for redundant loads like this, and removes them. ```javascript 00: load32 <offset> 01: jif <condition1>, @condition1wastrue, @condition1wasfalse 02: jif <condition2>, @condition2wastrue, @condition2wasfalse // ... ``` Note that syscall arguments are 64-bit values, whereas the A register is only 32-bits wide. 
Therefore, asserting that a syscall argument matches a predicate usually involves at least 2 `load32` operations on different offsets, thereby seemingly making this optimization useless for the \"this syscall argument must be one of the following values\" case. We'll get back to that. </details> <details markdown=\"1\"> <summary markdown=\"1\"> A typical syscall filter program consists of many predicates which return either \"allowed\" or \"rejected\"."
},
{
"data": "These are encoded in the bytecode as either `return` instructions, or jumps to `return` instructions. These instructions can show up dozens or hundreds of times in the cBPF bytecode in quick succession, presenting an optimization opportunity. </summary> Since two `return` instructions with the same immediate return code are exactly equivalent to one another, it is possible to rewrite jumps to all `return` instructions that return \"allowed\" to go to a single `return` instruction that returns this code, and similar for \"rejected\", so long as the jump offsets fit within the limits of conditional jumps (255 instructions). In turn, this makes the program shorter, and therefore makes more jump optimizations possible. To implement this optimization, gVisor first replaces all unconditional jump instructions that go to `return` statements with a copy of that `return` statement. This removes needless indirection. ```javascript Original bytecode New bytecode 00: jeq 0, 0, 1 00: jeq 0, 0, 1 01: jmp @good --> 01: return allowed 02: jmp @bad --> 02: return rejected ... ... 10: jge 0, 0, 1 10: jge 0, 0, 1 11: jmp @good --> 11: return allowed 12: jmp @bad --> 12: return rejected ... ... 100 [@good]: return allowed 100 [@good]: return allowed 101 [@bad]: return rejected 101 [@bad]: return rejected ``` gVisor then searches for `return` statements which can be entirely removed by seeing if it is possible to rewrite the rest of the program to jump or flow through to an equivalent `return` statement (without making the program longer in the process). In the above example: ```javascript Original bytecode New bytecode 00: jeq 0, 0, 1 --> 00: jeq 0, 99, 100 // Targets updated 01: return allowed 01: return allowed // Now dead code 02: return reject 02: return rejected // Now dead code ... ... 10: jge 0, 0, 1 --> 10: jge 0, 89, 90 // Targets updated 11: jmp @good 11: return allowed // Now dead code 12: jmp @bad 12: return rejected // Now dead code ... ... 100 [@good]: return allowed 100 [@good]: return allowed 101 [@bad]: return rejected 101 [@bad]: return rejected ``` Finally, the dead code removal pass cleans up the dead `return` statements and the program becomes shorter. ```javascript Original bytecode New bytecode 00: jeq 0, 99, 100 --> 00: jeq 0, 95, 96 // Targets updated 01: return allowed --> / Removed / 02: return reject --> / Removed / ... ... 10: jge 0, 89, 90 --> 08: jge 0, 87, 88 // Targets updated 11: return allowed --> / Removed / 12: return rejected --> / Removed / ... ... 100 [@good]: return allowed 96 [@good]: return allowed 101 [@bad]: return rejected 97 [@bad]: return rejected ``` While this search is expensive to perform, in a program full of predicates which is exactly what `seccomp-bpf` programs are this approach massively reduces program size. </details> Bytecode-level optimizations are cool, but why stop here? gVisor now also performs . In gVisor, a `seccomp` `RuleSet` is a mapping from syscall number to a logical expression named `SyscallRule`, along with a `seccomp-bpf` action (e.g. \"allow\") if a syscall with a given number matches its `SyscallRule`. <details markdown=\"1\"> <summary markdown=\"1\"> A `SyscallRule` is a predicate over the data contained in the `seccomp_data` struct (beyond its `nr`). A trivial implementation is `MatchAll`, which simply matches any"
},
{
"data": "Other implementations include `Or` and `And` (which do what they sound like), and `PerArg` which applies predicates to each specific argument of a `seccomp_data`, and forms the meat of actual syscall filtering rules. Some basic simplifications are already possible with these building blocks. </summary> gVisor implements the following basic optimizers, which look like they may be useless on their own but end up simplifying the logic of the more complex optimizer described in other sections quite a bit: `Or` and `And` rules with a single predicate within them are replaced with just that predicate. Duplicate predicates within `Or` and `And` rules are removed. `Or` rules within `Or` rules are flattened. `And` rules within `And` rules are flattened. An `Or` rule which contains a `MatchAll` predicate is replaced with `MatchAll`. `MatchAll` predicates within `And` rules are removed. `PerArg` rules with `MatchAll` predicates for each argument are replaced with a rule that matches anything. As with the bytecode-level optimizations, gVisor runs these in a loop until the structure of the rules no longer change. With the basic optimizations above, this silly-looking rule: ```go Or{ Or{ And{ MatchAll, PerArg{AnyValue, EqualTo(2), AnyValue}, }, MatchAll, }, PerArg{AnyValue, EqualTo(2), AnyValue}, PerArg{AnyValue, EqualTo(2), AnyValue}, } ``` ... is simplified down to just `PerArg{AnyValue, EqualTo(2), AnyValue}`. </details> <details markdown=\"1\"> <summary markdown=\"1\"> This is the main optimization that gVisor performs on rulesets. gVisor looks for common argument matchers that are repeated across all combinations of other argument matchers in branches of an `Or` rule. It removes them from these `PerArg` rules, and `And` the overall syscall rule with a single instance of that argument matcher. Sound complicated? Let's look at an example. </summary> In the , these are the rules for the : ```go rules = ...(map[uintptr]SyscallRule{ SYS_FCNTL: Or{ PerArg{ NonNegativeFD, EqualTo(F_GETFL), }, PerArg{ NonNegativeFD, EqualTo(F_SETFL), }, PerArg{ NonNegativeFD, EqualTo(F_GETFD), }, }, }) ``` ... This means that for the `fcntl(2)` system call, `seccomp_data.args[0]` may be any non-negative number, `seccompdata.args[1]` may be either `FGETFL`, `FSETFL`, or `FGETFD`, and all other `seccomp_data` fields may be any value. If rendered naively in BPF, this would iterate over each branch of the `Or` expression, and re-check the `NonNegativeFD` each time. Clearly wasteful. Conceptually, the ideal expression is something like this: ```go rules = ...(map[uintptr]SyscallRule{ SYS_FCNTL: PerArg{ NonNegativeFD, AnyOf(FGETFL, FSETFL, F_GETFD), }, }) ``` ... But going through all the syscall rules to look for this pattern would be quite tedious, and some of them are actually `Or`'d from multiple `map[uintptr]SyscallRule` in different files (e.g. platform-dependent syscalls), so they cannot be all specified in a single location with a single predicate on `seccomp_data.args[1]`. So gVisor needs to detect this programmatically at optimization time. Conceptually, gVisor goes from: ```go Or{ PerArg{A1, B1, C1, D}, PerArg{A2, B1, C1, D}, PerArg{A1, B2, C2, D}, PerArg{A2, B2, C2, D}, PerArg{A1, B3, C3, D}, PerArg{A2, B3, C3, D}, } ``` ... 
to (after one pass): ```go And{ Or{ PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, }, Or{ PerArg{AnyValue, B1, C1, D}, PerArg{AnyValue, B1, C1, D}, PerArg{AnyValue, B2, C2, D}, PerArg{AnyValue, B2, C2, D}, PerArg{AnyValue, B3, C3, D}, PerArg{AnyValue, B3, C3, D}, }, } ``` Then the will kick in and detect duplicate `PerArg` rules in `Or` expressions, and delete them: ```go And{ Or{ PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, }, Or{ PerArg{AnyValue, B1, C1, D}, PerArg{AnyValue, B2, C2, D}, PerArg{AnyValue, B3, C3, D}, }, } ```"
},
{
"data": "Then, on the next pass, the second inner `Or` rule gets recursively optimized: ```go And{ Or{ PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, }, And{ Or{ PerArg{AnyValue, AnyValue, AnyValue, D}, PerArg{AnyValue, AnyValue, AnyValue, D}, PerArg{AnyValue, AnyValue, AnyValue, D}, }, Or{ PerArg{AnyValue, B1, C1, AnyValue}, PerArg{AnyValue, B2, C2, AnyValue}, PerArg{AnyValue, B3, C3, AnyValue}, }, }, } ``` ... which, after other basic optimizers clean this all up, finally becomes: ```go And{ Or{ PerArg{A1, AnyValue, AnyValue, AnyValue}, PerArg{A2, AnyValue, AnyValue, AnyValue}, }, PerArg{AnyValue, AnyValue, AnyValue, D}, Or{ PerArg{AnyValue, B1, C1, AnyValue}, PerArg{AnyValue, B2, C2, AnyValue}, PerArg{AnyValue, B3, C3, AnyValue}, }, } ``` This has turned what would be 24 comparisons into just 9: `seccomp_data[0]` must either match predicate `A1` or `A2`. `seccomp_data[3]` must match predicate `D`. At least one of the following must be true: `seccompdata[1]` must match predicate `B1` and `seccompdata[2]` must match predicate `C1`. `seccompdata[1]` must match predicate `B2` and `seccompdata[2]` must match predicate `C2`. `seccompdata[1]` must match predicate `B3` and `seccompdata[2]` must match predicate `C3`. To go back to our `fcntl(2)` example, the rules would therefore be rewritten to: ```go rules = ...(map[uintptr]SyscallRule{ SYS_FCNTL: And{ // Check for args[0] exclusively: PerArg{NonNegativeFD, AnyValue}, // Check for args[1] exclusively: Or{ PerArg{AnyValue, EqualTo(F_GETFL)}, PerArg{AnyValue, EqualTo(F_SETFL)}, PerArg{AnyValue, EqualTo(F_GETFD)}, }, }, }) ``` ... thus we've turned 6 comparisons into 4. But we can do better still! </details> <details markdown=\"1\"> <summary markdown=\"1\"> We can apply the same optimization, but down to the 32-bit matching logic that underlies the 64-bit syscall argument matching predicates. </summary> As you may recall, . This means that when rendered, each of these argument comparisons are actually 2 operations each: one for the first 32-bit half of the argument, and one for the second 32-bit half of the argument. Let's look at the `FGETFL`, `FSETFL`, and `F_GETFD` constants: ```go F_GETFL = 0x3 F_SETFL = 0x4 F_GETFD = 0x1 ``` The cBPF bytecode for checking the arguments of this syscall may therefore look something like this: ```javascript // Check for `seccomp_data.args[0]`: 00: load32 16 // Load the first 32 bits of // `seccomp_data.args[0]` into register A. 01: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 02: load32 20 // Load the second 32 bits of // `seccomp_data.args[0]` into register A. 03: jset 0x80000000, @bad, 0 // If A & 0x80000000 != 0, jump to @bad, // otherwise continue. // Check for `seccomp_data.args[1]`: 04: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 05: jeq 0, 0, @next1 // If A == 0, continue, otherwise jump to @next1. 06: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 07: jeq 0x3, @good, @next1 // If A == 0x3, jump to @good, // otherwise jump to @next1. @next1: 08: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 09: jeq 0, 0, @next2 // If A == 0, continue, otherwise jump to @next2. 10: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 11: jeq 0x4, @good, @next2 // If A == 0x3, jump to @good, // otherwise jump to @next2. @next2: 12: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 
13: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 14: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 15: jeq 0x1, @good, @bad // If A == 0x1, jump to @good, // otherwise jump to"
},
{
"data": "// Good/bad jump targets for the checks above to jump to: @good: 16: return ALLOW @bad: 17: return REJECT ``` Clearly this could be better. The first 32 bits must be zero in all possible cases. So the syscall argument value-matching primitives (e.g. `EqualTo`) may be split into 2 32-bit value matchers: ```go rules = ...(map[uintptr]SyscallRule{ SYS_FCNTL: And{ PerArg{NonNegativeFD, AnyValue}, Or{ PerArg{ AnyValue, splitMatcher{ high32bits: EqualTo32Bits( F_GETFL & 0xffffffff00000000 / = 0 /), low32bits: EqualTo32Bits( F_GETFL & 0x00000000ffffffff / = 0x3 /), }, }, PerArg{ AnyValue, splitMatcher{ high32bits: EqualTo32Bits( F_SETFL & 0xffffffff00000000 / = 0 /), low32bits: EqualTo32Bits( F_SETFL & 0x00000000ffffffff / = 0x4 /), }, }, PerArg{ AnyValue, splitMatcher{ high32bits: EqualTo32Bits( F_GETFD & 0xffffffff00000000 / = 0 /), low32bits: EqualTo32Bits( F_GETFD & 0x00000000ffffffff / = 0x1 /), }, }, }, }, }) ``` gVisor then applies the same optimization as earlier, but this time going into each 32-bit half of each argument. This means it can extract the `EqualTo32Bits(0)` matcher from the `high32bits` part of each `splitMatcher` and move it up to the `And` expression like so: ```go rules = ...(map[uintptr]SyscallRule{ SYS_FCNTL: And{ PerArg{NonNegativeFD, AnyValue}, PerArg{ AnyValue, splitMatcher{ high32bits: EqualTo32Bits(0), low32bits: Any32BitsValue, }, }, Or{ PerArg{ AnyValue, splitMatcher{ high32bits: Any32BitsValue, low32bits: EqualTo32Bits( F_GETFL & 0x00000000ffffffff / = 0x3 /), }, }, PerArg{ AnyValue, splitMatcher{ high32bits: Any32BitsValue, low32bits: EqualTo32Bits( F_SETFL & 0x00000000ffffffff / = 0x4 /), }, }, PerArg{ AnyValue, splitMatcher{ high32bits: Any32BitsValue, low32bits: EqualTo32Bits( F_GETFD & 0x00000000ffffffff / = 0x1 /), }, }, }, }, }) ``` This looks bigger as a tree, but keep in mind that the `AnyValue` and `Any32BitsValue` matchers do not produce any bytecode. So now let's render that tree to bytecode: ```javascript // Check for `seccomp_data.args[0]`: 00: load32 16 // Load the first 32 bits of // `seccomp_data.args[0]` into register A. 01: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 02: load32 20 // Load the second 32 bits of // `seccomp_data.args[0]` into register A. 03: jset 0x80000000, @bad, 0 // If A & 0x80000000 != 0, jump to @bad, // otherwise continue. // Check for `seccomp_data.args[1]`: 04: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 05: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 06: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 07: jeq 0x3, @good, @next1 // If A == 0x3, jump to @good, // otherwise jump to @next1. @next1: 08: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 09: jeq 0x4, @good, @next2 // If A == 0x3, jump to @good, // otherwise jump to @next2. @next2: 10: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 11: jeq 0x1, @good, @bad // If A == 0x1, jump to @good, // otherwise jump to @bad. // Good/bad jump targets for the checks above to jump to: @good: 12: return ALLOW @bad: 13: return REJECT ``` This is where the bytecode-level optimization to remove redundant loads finally becomes relevant. We don't need to load the second 32 bits of `seccomp_data.args[1]` multiple times in a row: ```javascript // Check for `seccomp_data.args[0]`: 00: load32 16 // Load the first 32 bits of // `seccomp_data.args[0]` into register A. 
01: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 02: load32 20 // Load the second 32 bits of // `seccomp_data.args[0]` into register"
},
{
"data": "03: jset 0x80000000, @bad, 0 // If A & 0x80000000 != 0, jump to @bad, // otherwise continue. // Check for `seccomp_data.args[1]`: 04: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 05: jeq 0, 0, @bad // If A == 0, continue, otherwise jump to @bad. 06: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 07: jeq 0x3, @good, @next1 // If A == 0x3, jump to @good, // otherwise jump to @next1. @next1: 08: jeq 0x4, @good, @next2 // If A == 0x3, jump to @good, // otherwise jump to @next2. @next2: 09: jeq 0x1, @good, @bad // If A == 0x1, jump to @good, // otherwise jump to @bad. // Good/bad jump targets for the checks above to jump to: @good: 10: return ALLOW @bad: 11: return REJECT ``` Of course, in practice the `@good`/`@bad` jump targets would also be unified with rules from other system call filters in order to cut down on those too. And by having reduced the number of instructions in each individual filtering rule, the jumps to these targets can be deduplicated against that many more rules. This example demonstrates how optimizations build on top of each other, making each optimization more likely to make other optimizations useful in turn. </details> Beyond these, gVisor also has the following minor optimizations. <details markdown=\"1\"> <summary markdown=\"1\"> is by far the most-often-called system call that gVisor calls as part of its operation. It is used for synchronization, so it needs to be very efficient. </summary> Its rules used to look like this: ```go SYS_FUTEX: Or{ PerArg{ AnyValue, EqualTo(FUTEXWAIT | FUTEXPRIVATE_FLAG), }, PerArg{ AnyValue, EqualTo(FUTEXWAKE | FUTEXPRIVATE_FLAG), }, PerArg{ AnyValue, EqualTo(FUTEX_WAIT), }, PerArg{ AnyValue, EqualTo(FUTEX_WAKE), }, }, ``` Essentially a 4-way `Or` between 4 different values allowed for `seccomp_data.args[1]`. This is all well and good, and the above optimizations already optimize this down to the minimum amount of `jeq` comparison operations. But looking at the actual bit values of the `FUTEX_*` constants above: ```go FUTEX_WAIT = 0x00 FUTEX_WAKE = 0x01 FUTEXPRIVATEFLAG = 0x80 ``` ... We can see that this is equivalent to checking that no bits other than `0x01` and `0x80` may be set. Turns out that cBPF has an instruction for that. This is now optimized down to two comparison operations: ```javascript 01: load32 24 // Load the first 32 bits of // `seccomp_data.args[1]` into register A. 02: jeq 0, 0, @bad // If A == 0, continue, // otherwise jump to @bad. 03: load32 28 // Load the second 32 bits of // `seccomp_data.args[1]` into register A. 04: jset 0xffffff7e, @bad, @good // If A & ^(0x01 | 0x80) != 0, jump to @bad, // otherwise jump to @good. ``` </details> <details markdown=\"1\"> <summary markdown=\"1\"> A lot of syscall arguments are file descriptors (FD numbers), which we need to filter efficiently. </summary> An FD is a 32-bit positive integer, but is passed as a 64-bit value as all syscall arguments are. Instead of doing a \"less than\" operation, we can simply turn it into a bitwise check. We simply check that the first half of the 64-bit value is zero, and that the 31st bit of the second half of the 64-bit value is not set. </details> <details markdown=\"1\"> <summary markdown=\"1\"> When one syscall argument is checked consistently across all branches of an `Or`, enforcing that this is the case ensures that the remains effective. </summary> The `ioctl(2)` system call takes an FD as one of its"
},
{
"data": "Since it is a \"grab bag\" of a system call, gVisor's rules for `ioctl(2)` were similarly spread across many files and rules, and not all of them checked that the FD argument was non-negative; some of them simply accepted any value for the FD argument. Before this optimization work, this meant that the BPF program did less work for the rules which didn't check the value of the FD argument. However, now that gVisor , it is now actually cheaper if all `ioctl(2)` rules verify the value of the FD argument consistently, as that argument check can be performed exactly once for all possible branches of the `ioctl(2)` rules. So now gVisor has a test that verifies that this is the case. This is a good example that shows that optimization work can lead to improved security due to the efficiency gains that comes from applying security checks consistently. </details> To measure the effectiveness of the above improvements, measuring gVisor performance itself would be very difficult, because each improvement is a rather tiny part of the syscall hot path. At the scale of each of these optimizations, we need to zoom in a bit more. So now gVisor has . It works by taking a to try with it. It runs a subprocess that installs this program as `seccomp-bpf` filter for itself, replacing all actions (other than \"approve syscall\") with \"return error\" in order to avoid crashing. Then it measures the latency of each syscall. This is then measured against the latency of the very same syscalls in a subprocess that has an empty `seccomp-bpf` (i.e. the only instruction within it is `return ALLOW`). Let's measure the effect of the above improvements on a gVisor-like workload. <details markdown=\"1\"> <summary markdown=\"1\"> This can be done by running gVisor under `ptrace` to see what system calls the gVisor process is doing. </summary> Note that `ptrace` here refers to the mechanism by which we can inspect the system call that the gVisor Sentry is making. This is distinct from the system calls the sandboxed application is doing. It has also nothing to do with gVisor's former \"ptrace\" platform. For example, after running a Postgres benchmark inside gVisor with Systrap, the `ptrace` tool generated the following summary table: ```markdown % time seconds usecs/call calls errors syscall -- -- - 62.10 431.799048 496 870063 46227 futex 4.23 29.399526 106 275649 38 nanosleep 0.87 6.032292 37 160201 sendmmsg 0.28 1.939492 16 115769 fstat 27.96 194.415343 2787 69749 137 ppoll 1.05 7.298717 315 23131 fsync 0.06 0.446930 31 14096 pwrite64 3.37 23.398106 1907 12266 9 epoll_pwait 0.00 0.019711 9 1991 6 close 0.02 0.116739 82 1414 tgkill 0.01 0.068481 48 1414 201 rt_sigreturn 0.02 0.147048 104 1413 getpid 0.01 0.045338 41 1080 write 0.01 0.039876 37 1056 read 0.00 0.015637 18 836 24 openat 0.01 0.066699 81 814 madvise 0.00 0.029757 111 267 fallocate 0.00 0.006619 15 420 pread64 0.00 0.013334 35 375 sched_yield 0.00 0.008112 114 71 pwritev2 0.00 0.003005 57 52 munmap 0.00 0.000343 18 19 6 unlinkat 0.00 0.000249 15 16 shutdown 0.00 0.000100 8 12 getdents64 0.00 0.000045 4 10 newfstatat ... -- -- - 100.00 695.311111 447 1552214 46651 total ``` To mimic the syscall profile of this gVisor sandbox from the perspective of `seccomp-bpf` overhead, we need to have it call these system calls with the same relative"
},
{
"data": "Therefore, the dimension that matters here isn't `time` or `seconds` or even `usecs/call`; it is actually just the number of system calls (`calls`). In graph form: {:style=\"max-width:100%\"} The Pareto distribution of system calls becomes immediately clear. </details> The `secbench` library lets us take the top 10 system calls and measure their `seccomp-bpf` filtering overhead individually, as well as building a weighted aggregate of their overall overhead. Here are the numbers from before and after the filtering optimizations described in this post: {:style=\"max-width:100%\"} The `nanosleep(2)` system call is a bit of an oddball here. Unlike the others, this system call causes the current thread to be descheduled. To make the results more legible, here is the same data with the duration normalized to the `seccomp-bpf` filtering overhead from before optimizations: {:style=\"max-width:100%\"} This shows that most system calls have had their filtering overhead reduced, but others haven't significantly changed (10% or less change in either direction). This is to be expected: those that have not changed are the ones that are cacheable: `nanosleep(2)`, `fstat(2)`, `ppoll(2)`, `fsync(2)`, `pwrite64(2)`, `close(2)`, `getpid(2)`. The non-cacheable syscalls before the main BST, `futex(2)` and `sendmmsg(2)`, experienced the biggest boost. Lastly, `epoll_pwait(2)` is non-cacheable but doesn't have a dedicated check before the main BST, so while it still sees a small performance gain, that gain is lower than its counterparts. The \"Aggregate\" number comes from the `secbench` library and represents the total time difference spent in system calls after calling them using weighted randomness. It represents the average system call overhead that a Sentry using Systrap would incur. Therefore, per these numbers, these optimizations removed ~29% from gVisor's overall `seccomp-bpf` filtering overhead. Here is the same data for KVM, which has a slightly different syscall profile with `ioctl(2)` and `rt_sigreturn(2)` being critical for performance: {:style=\"max-width:100%\"} Lastly, let's look at GPU workload performance. This benchmark enables gVisor's . What matters for this workload is `ioctl(2)` performance, as this is the system call used to issue commands to the GPU. Here is the `seccomp-bpf` filtering overhead of various CUDA control commands issued via `ioctl(2)`: {:style=\"max-width:100%\"} As `nvproxy` adds a lot of complexity to the `ioctl(2)` filtering rules, this is where we see the most improvement from these optimizations. To ensure that the optimizations above don't accidentally end up producing a cBPF program that has different behavior from the unoptimized one used to do, gVisor also has . Because gVisor knows which high-level filters went into constructing the `seccomp-bpf` program, it also from these filters, and the fuzzer verifies that each line and every branch of the optimized cBPF bytecode is executed, and that the result is the same as giving the same input to the unoptimized program. (Line or branch coverage of the unoptimized program is not enforceable, because without optimizations, the bytecode contains many redundant checks for which later branches can never be reached.) gVisor supports sandboxed applications adding `seccomp-bpf` filters onto themselves, and for this purpose. 
Because the cBPF bytecode-level optimizations are lossless and are generally applicable to any cBPF program, they are applied onto programs uploaded by sandboxed applications to make filter evaluation faster in gVisor itself. Additionally, gVisor removed the use of Go interfaces previously used for loading data from the BPF \"input\" (i.e. the `seccomp_data` struct for `seccomp-bpf`). This used to require an endianness-specific interface due to how the BPF interpreter was used in two places in gVisor: network processing (which uses network byte ordering), and `seccomp-bpf` (which uses native byte ordering). This interface has now been replaced with , yielding a 2x speedup on"
},
{
"data": "The more `load` instructions are in the filter, the better the effect. *(Naturally, this also benefits network filtering performance!)* The graph below shows the gVisor cBPF interpreter's performance against three sample filters: , and optimized vs unoptimized versions of gVisor's own syscall filter (to represent a more complex filter). {:style=\"max-width:100%\"} Lastly, gVisor now also implements an in-sandbox caching mechanism for syscalls which do not depend on the `instruction_pointer` or syscall arguments. Unlike Linux's `seccomp-bpf` cache, gVisor's implementation also handles actions other than \"allow\", and supports the entire set of cBPF instructions rather than the restricted emulator Linux uses for caching evaluation purposes. This removes the interpreter from the syscall hot path entirely for cacheable syscalls, further speeding up system calls from applications that use `seccomp-bpf` within gVisor. {:style=\"max-width:100%\"} Due to these optimizations, the overall process of building the syscall filtering rules, rendering them to cBPF bytecode, and running all the optimizations, can take quite a while (~10ms). As one of gVisor's strengths is its startup latency being much faster than VMs, this is an unacceptable delay. Therefore, gVisor now to optimized cBPF bytecode for most possible gVisor configurations. This means the `runsc` binary contains cBPF bytecode embedded in it for some subset of popular configurations, and it will use this bytecode rather than compiling the cBPF program from scratch during startup. If `runsc` is invoked with a configuration for which the cBPF bytecode isn't embedded in the `runsc` binary, it will fall back to compiling the program from scratch. <details markdown=\"1\"> <summary markdown=\"1\"> </summary> One challenge with this approach is to support parts of the configuration that are only known at `runsc` startup time. For example, many filters act on a specific file descriptor used for interacting with the `runsc` process after startup over a Unix Domain Socket (called the \"controller FD\"). This is an integer that is only known at runtime, so its value cannot be embedded inside the optimized cBPF bytecode prepared at `runsc` compilation time. To address this, the `seccomp-bpf` precompilation tooling actually supports the notions of 32-bit \"variables\", and takes as input a function to render cBPF bytecode for a given key-value mapping of variables to placeholder 32-bit values. The precompiler calls this function twice with different arbitrary value mappings for each variable, and observes where these arbitrary values show up in the generated cBPF bytecode. This takes advantage of the fact that gVisor's `seccomp-bpf` program generation is deterministic. If the two cBPF programs are of the same byte length, and the placeholder values show up at exactly the same byte offsets within the cBPF bytecode both times, and the rest of the cBPF bytecode is byte-for-byte equivalent, the precompiler has very high confidence that these offsets are where the 32-bit variables are represented in the cBPF bytecode. It then stores these offsets as part of the embedded data inside the `runsc` binary. Finally, at `runsc` execution time, the bytes at these offsets are replaced with the now-known values of the variables. </details> The short answer is: yes, but only slightly. 
As we saw earlier, `seccomp-bpf` is only a small portion of gVisor's total overhead, and the `secbench` benchmark shows that this work only removes a portion of that overhead, so we should not expect large differences"
},
{
"data": "Let's come back to the trusty ABSL build benchmark, with a new build of gVisor with all of these optimizations turned on: {:style=\"max-width:100%\"} Let's zoom the vertical axis in on the gVisor variants to see the difference better: {:style=\"max-width:100%\"} This is about in line with what the earlier benchmarks showed. The initial benchmarks showed that `seccomp-bpf` filtering overhead for this benchmark was on the order of ~3.6% of total runtime, and the `secbench` benchmarks showed that the optimizations reduced `seccomp-bpf` filter evaluation time by ~29% in aggregate. The final absolute reduction in total runtime should then be around ~1%, which is just about what this result shows. Other benchmarks show a similar pattern. Here's gRPC build, similar to ABSL: {:style=\"max-width:100%\"} {:style=\"max-width:100%\"} Here's a benchmark running the test suite: {:style=\"max-width:100%\"} {:style=\"max-width:100%\"} Here's the 50th percentile of nginx serving latency for an empty webpage. , and here we've shaven off 20 of them. {:style=\"max-width:100%\"} {:style=\"max-width:100%\"} CUDA workloads also get a boost from this work. Since their gVisor-related overhead is already relatively small, `seccomp-bpf` filtering makes up a higher proportion of their overhead. Additionally, as the performance improvements described in this post disproportionately help the `ioctl(2)` system call, this cuts a larger portion of the `seccomp-bpf` filtering overhead of these workload, since CUDA uses the `ioctl(2)` system call to communicate with the GPU. {:style=\"max-width:100%\"} {:style=\"max-width:100%\"} While some of these results may not seem like much in absolute terms, it's important to remember: These improvements have resulted in gVisor being able to enforce more `seccomp-bpf` filters than it previously could; gVisor's `seccomp-bpf` filter was nearly half the maximum `seccomp-bpf` program size, so it could at most double in complexity. After optimizations, it is reduced to less than a fourth of this size. These improvements allow the gVisor filters to scale better. This is visible from the effects on `ioctl(2)` performance with `nvproxy` enabled. The resulting work has produced useful libraries for `seccomp-bpf` tooling which may be helpful for other projects: testing, fuzzing, and benchmarking `seccomp-bpf` filters. This overhead could not have been addressed in another way. Unlike other areas of gVisor, such as network overhead or file I/O, overhead from the host kernel evaluating `seccomp-bpf` filter lives outside of gVisor itself and therefore it can only be improved upon by this type of work. One potential source of work is to look into the performance gap between no `seccomp-bpf` filter at all versus performance with an empty `seccomp-bpf` filter (equivalent to an all-cacheable filter). This points to a potential inefficiency in the Linux kernel implementation of the `seccomp-bpf` cache. Another potential point of improvement is to port over the optimizations that went into searching for a syscall number into the . `ioctl(2)` is a \"grab-bag\" kind of system call, used by many drivers and other subsets of the Linux kernel to extend the syscall interface without using up valuable syscall numbers. For example, the subsystem is almost entirely controlled through `ioctl(2)` system calls issued against `/dev/kvm` or against per-VM file descriptors. 
For this reason, the first non-file-descriptor argument of (\"request\") usually encodes something analogous to what the syscall number usually represents: the type of request made to the kernel. Currently, gVisor performs a linear scan through all possible enumerations of this argument. This is usually fine, but with features like `nvproxy` which massively expand this list of possible values, this can take a long time. `ioctl` performance is also critical for gVisor's KVM platform. A binary search tree would make sense here. gVisor welcomes further contributions to its `seccomp-bpf` machinery. Thanks for reading! assembly-like code in this blog post is close to but diverges"
}
] |
{
"category": "Runtime",
"file_name": "2024-02-01-seccomp.md",
"project_name": "gVisor",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Firecracker allows users to customise how the vCPUs are represented to the guest software by changing the following configuration: CPUID (x86_64 only) MSRs (Model Specific Registers, x86_64 only) ARM registers (aarch64 only) vCPU features (aarch64 only) KVM capabilities (both x86_64 and aarch64) A combination of the changes to the above entities is called a CPU template. The functionality can be used when a user wants to mask a feature from the guest. A real world use case for this is representing a heterogeneous fleet (a fleet consisting of multiple CPU models) as a homogeneous fleet, so the guests will experience a consistent feature set supported by the host. Note Representing one CPU vendor as another CPU vendor is not supported. Note CPU templates shall not be used as a security protection against malicious guests. Disabling a feature in a CPU template does not generally make it completely unavailable to the guest. For example, disabling a feature related to an instruction set will indicate to the guest that the feature is not supported, but the guest may still be able to execute corresponding instructions if it does not obey the feature bit. Firecracker supports two types of CPU templates: Static CPU templates - a set of built-in CPU templates for users to choose from Custom CPU templates - users can create their own CPU templates in json format and pass them to Firecracker Note Static CPU templates are deprecated starting from v1.5.0 and will be removed in accordance with our deprecation policy. Even after the removal, custom CPU templates are available as an improved iteration of static CPU templates. For more information about the transition from static CPU templates to custom CPU templates, please refer to . Note CPU templates for ARM (both static and custom) require the following patch to be available in the host kernel: . Otherwise KVM will fail to write to the ARM registers. At the moment the following set of static CPU templates are supported: | CPU template | CPU vendor | CPU model | | | - | | | C3 | Intel | any | | T2 | Intel | any | | T2A | AMD | Milan | | T2CL | Intel | Cascade Lake or newer | | T2S | Intel | any | | V1N1 | ARM | Neoverse V1 | T2 and C3 templates are mapped as close as possible to AWS T2 and C3 instances in terms of CPU"
},
{
"data": "Note that on a microVM that is lauched with the C3 template and running on processors that do not enumerate FBSDPNO, PSDPNO and SBDRSSDPNO on IA32ARCHCAPABILITIES MSR, the kernel does not apply the mitigation against MMIO stale data vulnerability. The T2S template is designed to allow migrating between hosts with Intel Skylake and Intel Cascade Lake securely by further restricting CPU features for the guest, however this comes with a performance penalty. Users are encouraged to carry out a performance assessment if they wish to use the T2S template. Note that Firecracker expects the host to always be running the latest version of the microcode. The T2CL template is mapped to be close to Intel Cascade Lake. It is not safe to use it on Intel CPUs older than Cascade Lake (such as Skylake). The only AMD template is T2A. It is considered safe to be used with AMD Milan. Intel T2CL and AMD T2A templates together aim to provide instruction set feature parity between CPUs running them, so they can form a heterogeneous fleet exposing the same instruction sets to the application. The V1N1 template is designed to represent ARM Neoverse V1 as ARM Neoverse N1. Configuration of a static CPU template is performed via the `/machine-config` API endpoint: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/machine-config' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"vcpu_count\": 2, \"memsizemib\": 1024, \"cpu_template\": \"T2CL\" }' ``` Users can create their own CPU templates by creating a json file containing modifiers for CPUID, MSRs or ARM registers. Note Creating custom CPU templates requires expert knowledge of CPU architectures. Custom CPU templates must be tested thoroughly before use in production. An inappropriate configuration may lead to guest crashes or making guests vulnerable to security attacks. For example, if a CPU template signals a hardware vulnerability mitigation to the guest while the mitigation is in fact not supported by the hardware, the guest may decide to disable corresponding software mitigations which will make the guest vulnerable. Note Having MSRs or ARM registers in the custom CPU template does not affect access permissions that guests will have to those registers. The access control is handled by KVM and is not influenced by CPU templates. Note When setting guest configuration, KVM may reject setting some bits quietly. This is user's responsibility to make sure that their custom CPU template is applied as expected even if Firecracker does not report an error. In order to assist with creation and usage of CPU templates, there exists a CPU template helper tool. More details can be found"
},
{
"data": "Configuration of a custom CPU template is performed via the `/cpu-config` API endpoint. An example of configuring a custom CPU template on x86_64: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/cpu-config' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"kvm_capabilities\": [\"!56\"], \"cpuid_modifiers\": [ { \"leaf\": \"0x1\", \"subleaf\": \"0x0\", \"flags\": 0, \"modifiers\": [ { \"register\": \"eax\", \"bitmap\": \"0bxxxx000000000011xx00011011110010\" } ] } ], \"msr_modifiers\": [ { \"addr\": \"0x10a\", \"bitmap\": \"0b0000000000000000000000000000000000000000000000000000000000000000\" } ] }' ``` This CPU template will do the following: removes check for KVM capability: KVMCAPXCRS. This allows Firecracker to run on old cpus. See discussion. in leaf `0x1`, subleaf `0x0`, register `eax`: clear bits `0b00001111111111000011100100001101` set bits `0b00000000000000110000011011110010` leave bits `0b11110000000000001100000000000000` intact. in MSR `0x10`, it will clear all bits. An example of configuring a custom CPU template on ARM: ```bash curl --unix-socket /tmp/firecracker.socket -i \\ -X PUT 'http://localhost/cpu-config' \\ -H 'Accept: application/json' \\ -H 'Content-Type: application/json' \\ -d '{ \"kvm_capabilities\": [\"171\", \"172\"], \"vcpu_features\": [{ \"index\": 0, \"bitmap\": \"0b1100000\" }] \"reg_modifiers\": [ { \"addr\": \"0x603000000013c020\", \"bitmap\": \"0bxxxxxxxxxxxx0000xxxxxxxxxxxx0000xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\" } ] }' ``` This CPU template will do the following: add checks for KVM capabilities: KVMCAPARMPTRAUTHADDRESS and KVMCAPARMPTRAUTHGENERIC. These checks are to ensure that the host have capabilities needed for the vCPU features. enable additional vCPU features: KVMARMVCPUPTRAUTHADDRESS and KVMARMVCPUPTRAUTHGENERIC modify ARM register `0x603000000013c020`: clear bits `0b0000000000001111000000000000111100000000000000000000000000000000` leave bits `0b1111111111110000111111111111000011111111111111111111111111111111` intact. Information about KVM capabilities can be found in the . Information about vCPU features on aarch64 can be found in the . Information on how the ARM register addresses are constructed can be found in the . The full description of the custom CPU templates language can be found . Note You can also use `_` to visually separate parts of a bitmap. So instead of writing: `0b0000xxxx`, it can be `0b0000_xxxx`. If a contracted version of a bitmap is given, for example, `0b101` where a 32-bit bitmap is expected, missing characters are implied to be `x` (`0bxxxxxxxxxxxxxxxxxxxxxxxxxxxxx101`). Some of the configuration set by a custom CPU template may be overwritten by Firecracker. More details can be found and . For detailed information when working with custom CPU templates, please refer to hardware specifications from CPU vendors, for example: If a user configured both a static CPU template (via `/machine-config`) and a custom CPU template (via `/cpu-config`) in the same Firecracker process, only the configuration that was performed the last is applied. This means that if a static CPU template was configured first and a custom CPU template was configured later, only the custom CPU template configuration will be applied when starting a microVM."
}
] |
{
"category": "Runtime",
"file_name": "cpu-templates.md",
"project_name": "Firecracker",
"subcategory": "Container Runtime"
}
|
[
{
"data": "The `yaml` Project is released on an as-needed basis. The process is as follows: An issue is proposing a new release with a changelog since the last release All must LGTM this release An OWNER runs `git tag -s $VERSION` and inserts the changelog and pushes the tag with `git push $VERSION` The release issue is closed An announcement email is sent to `[email protected]` with the subject `[ANNOUNCE] kubernetes-template-project $VERSION is released`"
}
] |
{
"category": "Runtime",
"file_name": "release.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The 'compression translator' compresses and decompresses data in-flight between client and bricks. When a writev call occurs, the client compresses the data before sending it to brick. On the brick, compressed data is decompressed. Similarly, when a readv call occurs, the brick compresses the data before sending it to client. On the client, the compressed data is decompressed. Thus, the amount of data sent over the wire is minimized. Compression/Decompression is done using Zlib library. During normal operation, this is the format of data sent over wire: ~~~ <compressed-data> + trailer(8 bytes) ~~~ The trailer contains the CRC32 checksum and length of original uncompressed data. This is used for validation. Turning on compression xlator: ~~~ gluster volume set <vol_name> network.compression on ~~~ Compression level ~~~ gluster volume set <vol_name> network.compression.compression-level 8 ~~~ ~~~ 0 : no compression 1 : best speed 9 : best compression -1 : default compression ~~~ Minimum file size ~~~ gluster volume set <vol_name> network.compression.min-size 50 ~~~ Data is compressed only when its size exceeds the above value in bytes. Other paramaters Other less frequently used parameters include `network.compression.mem-level` and `network.compression.window-size`. More details can about these options can be found by running `gluster volume set help` command. Compression translator cannot work with striped volumes. Mount point hangs when writing a file with write-behind xlator turned on. To overcome this, turn off `performance.write-behind` entirely OR set`performance.strict-write-ordering` to on. For glusterfs versions <= 3.5, compression traslator can ONLY work with pure distribute volumes. This limitation is caused by AFR not being able to propagate xdata. This issue has been fixed in glusterfs versions > 3.5 Although zlib offers high compression ratio, it is very slow. We can make the translator pluggable to add support for other compression methods such as"
}
] |
{
"category": "Runtime",
"file_name": "network_compression.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Kata Containers supports creation of containers that are \"privileged\" (i.e. have additional capabilities and access that is not normally granted). Warning: Whilst this functionality is supported, it can decrease the security of Kata Containers if not configured correctly. By default, when privileged is enabled for a container, all the `/dev/*` block devices from the host are mounted into the guest. This will allow the privileged container inside the Kata guest to gain access to mount any block device from the host, a potentially undesirable side-effect that decreases the security of Kata. The following sections document how to configure this behavior in different container runtimes. The Containerd allows configuring the privileged host devices behavior for each runtime in the containerd config. This is done with the `privilegedwithouthost_devices` option. Setting this to `true` will disable hot plugging of the host devices into the guest, even when privileged is enabled. Support for configuring privileged host devices behaviour was added in containerd `1.3.0` version. See below example config: ```toml [plugins] [plugins.cri] [plugins.cri.containerd] [plugins.cri.containerd.runtimes.runc] runtime_type = \"io.containerd.runc.v2\" privilegedwithouthost_devices = false [plugins.cri.containerd.runtimes.kata] runtime_type = \"io.containerd.kata.v2\" privilegedwithouthost_devices = true [plugins.cri.containerd.runtimes.kata.options] ConfigPath = \"/opt/kata/share/defaults/kata-containers/configuration.toml\" ``` - Similar to containerd, CRI-O allows configuring the privileged host devices behavior for each runtime in the CRI config. This is done with the `privilegedwithouthost_devices` option. Setting this to `true` will disable hot plugging of the host devices into the guest, even when privileged is enabled. Support for configuring privileged host devices behaviour was added in CRI-O `1.16.0` version. See below example config: ```toml [crio.runtime.runtimes.runc] runtime_path = \"/usr/local/bin/crio-runc\" runtime_type = \"oci\" runtime_root = \"/run/runc\" privilegedwithouthost_devices = false [crio.runtime.runtimes.kata] runtime_path = \"/usr/bin/kata-runtime\" runtime_type = \"oci\" privilegedwithouthost_devices = true [crio.runtime.runtimes.kata-shim2] runtime_path = \"/usr/local/bin/containerd-shim-kata-v2\" runtime_type = \"vm\" privilegedwithouthost_devices = true ```"
}
] |
{
"category": "Runtime",
"file_name": "privileged.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "CubeFS is a growing community of volunteers, users, and vendors. The CubeFS community has adopted this security disclosures and response policy to ensure we responsibly handle critical issues. Security vulnerabilities should be handled quickly and sometimes privately. The primary goal of this process is to reduce the total time users are vulnerable to publicly known exploits. The PSC is responsible for organizing the entire response including internal communication and external disclosure but will need help from relevant developers and release leads to successfully run this process. The initial PSC will consist of volunteers who have been involved in the initial discussion: Xiaochun He () Liang Chang () Yao Hu () Weilong Guo () Liying Zhang () Mofei Zhang () The PSC members will share various tasks as listed below: Triage: make sure the people who should be in \"the know\" (aka notified) are notified, also responds to issues that are not actually issues and let the CubeFS maintainers know that. This person is the escalation path for a bug if it is one. Infra: make sure we can test the fixes appropriately. Disclosure: handles public messaging around the bug. Documentation on how to upgrade. Changelog. Explaining to public the severity. notifications of bugs sent to mailing lists etc. Requests CVEs. Release: Create new release addressing a security fix. Contact the team by sending email to The PSC should be consist of 2-4 members. New potential members to the PSC can express their interest to the PSC members. These individuals can be nominated by PSC members or CubeFS maintainers. If representation changes due to job shifts then PSC members are encouraged to grow the team or replace themselves through mentoring new members. Selection of new members will be done by lazy consensus amongst members for adding new people with fallback on majority vote. Members may step down at any time and propose a replacement from existing active contributors of CubeFS. Members must remain active and responsive. Members taking an extended leave of two weeks or more should coordinate with other members to ensure the role is adequately staffed during the leave. Members going on leave for 1-3 months may identify a temporary replacement. Members of a role should remove any other members that have not communicated a leave of absence and either cannot be reached for more than 1 month or are not fulfilling their documented responsibilities for more than 1 month. This may be done through a super-majority vote of members. The CubeFS Community asks that all suspected vulnerabilities be privately and responsibly disclosed as explained in the . If anyone knows of a publicly disclosed security vulnerability please IMMEDIATELY email to inform the PSC about the vulnerability so they may start the patch, release, and communication process. If possible the PSC will ask the person making the public report if the issue can be handled via a private disclosure process. If the reporter denies the PSC will move swiftly with the fix and release process. In extreme cases GitHub can be asked to delete the issue but this generally isn't necessary and is unlikely to make a public disclosure less damaging. For each vulnerability, the PSC members will coordinate to create the fix and release, and sending email to the rest of the"
},
{
"data": "All of the timelines below are suggestions and assume a Private Disclosure. The PSC drives the schedule using their best judgment based on severity, development time, and release work. If the PSC is dealing with a Public Disclosure all timelines become ASAP. If the fix relies on another upstream project's disclosure timeline, that will adjust the process as well. We will work with the upstream project to fit their timeline and best protect CubeFS users. These steps should be completed within the first 24 hours of Disclosure. The PSC will work quickly to identify relevant engineers from the affected projects and packages and CC those engineers into the disclosure thread. These selected developers are the Fix Team. A best guess is to invite all maintainers. These steps should be completed within the 1-7 days of Disclosure. The PSC and the Fix Team will create a using the to determine the effect and severity of the bug. The PSC makes the final call on the calculated risk; it is better to move quickly than make the perfect assessment. The PSC will request a . The Fix Team will notify the PSC that work on the fix branch is complete once there are LGTMs on all commits from one or more maintainers. If the CVSS score is under ~4.0 () or the assessed risk is low the Fix Team can decide to slow the release process down in the face of holidays, developer bandwidth, etc. Note: CVSS is convenient but imperfect. Ultimately, the PSC has discretion on classifying the severity of a vulnerability. The severity of the bug and related handling decisions must be discussed on the [email protected] mailing list. With the Fix Development underway, the PSC needs to come up with an overall communication plan for the wider community. This Disclosure process should begin after the Fix Team has developed a Fix or mitigation so that a realistic timeline can be communicated to users. Fix Release Day (Completed within 1-21 days of Disclosure) The PSC will cherry-pick the patches onto the master branch and all relevant release branches. The Fix Team will `lgtm` and `approve`. The CubeFS maintainers will merge these PRs as quickly as possible. The PSC will ensure all the binaries are built, publicly available, and functional. The PSC will announce the new releases, the CVE number, severity, and impact, and the location of the binaries to get wide distribution and user action. As much as possible this announcement should be actionable, and include any mitigating steps users can take prior to upgrading to a fixed version. The recommended target time is 4pm UTC on a non-Friday weekday. This means the announcement will be seen morning Pacific, early evening Europe, and late evening Asia. The announcement will be sent via the following channels: [email protected] These steps should be completed 1-3 days after the Release Date. The retrospective process . The PSC will send a retrospective of the process to [email protected] including details on everyone involved, the timeline of the process, links to relevant PRs that introduced the issue, if relevant, and any critiques of the response and release process. The PSC and Fix Team are also encouraged to send their own feedback on the process to [email protected]. Honest critique is the only way we are going to get good at this as a community."
}
] |
{
"category": "Runtime",
"file_name": "security-release-process.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->"
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "OpenEBS follows similar security policy as other CNCF projects, primarily inspired from the Kubernetes project. As the community and adoption increases, a much more detailed process will be put in place. Security related issues once fixed will be tracked publicly on . New issue announcements are sent to [email protected] If you find a security bug please report it privately to the maintainers listed in the MAINTAINERS of the relevant repository. We will fix the issue and coordinate a release date with you, acknowledging your effort and mentioning you by name if you want. Each report is acknowledged and analyzed by the maintainers within 3 working days. As the security issue moves from triage, to identified fix, to release planning we will keep the reporter updated. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. The Fix Lead drives the schedule using their best judgment based on severity, development time, and release manager feedback. If the Fix Lead is dealing with a Public Disclosure all timelines become ASAP."
}
] |
{
"category": "Runtime",
"file_name": "SECURITY.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Virtual machine live migration is the key feature provided by StratoVirt. It needs to execute virtual machine migration when any of the following happens: Server overload: when a source server is overloaded, a set of the VMs from this server is migrated to an underloaded server using VM migration technique. Server maintenance: if there is a need for server maintenance, VMs from the source server are migrated to another server. Server fault: whenever there is server fault, VMs are migrated from the faulty server to the target server. The migration stream can be passed over any transport as following: TCP mode migration: using tcp sockets to do the migration. UNIX mode migration: using unix sockets to do the migration. Note: UNIX mode only supports migrate two VMs on the same host OS. TCP mode supports migrate both on the same or different host OS. Launch the source VM: ```shell ./stratovirt \\ -machine q35 \\ -kernel ./vmlinux.bin \\ -append \"console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \\ -qmp unix:path/to/socket1,server,nowait \\ -serial stdio \\ ``` Launch the destination VM: ```shell ./stratovirt \\ -machine q35 \\ -kernel ./vmlinux.bin \\ -append \"console=ttyS0 pci=off reboot=k quiet panic=1 root=/dev/vda\" \\ -drive file=path/to/rootfs,id=rootfs,readonly=off,direct=off \\ -device virtio-blk-pci,drive=rootfs,id=rootfs,bus=pcie.0,addr=0 \\ -qmp unix:path/to/socket2,server,nowait \\ -serial stdio \\ -incoming tcp:192.168.0.1:4446 \\ ``` Note: The destination VM command line parameter needs to be consistent with the source VM. If it is necessary to change the data transmission from tcp network protocol to unix socket, the parameter `-incoming tcp:192.168.0.1:4446` needs to be replaced with `-incoming unix:/tmp/stratovirt-migrate.socket`. Unix socket protocol only supports migrate two VMs on the same host OS. Start to send migration for the source VM: ```shell $ ncat -U path/to/socket1 <- {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} -> {\"execute\":\"migrate\","
},
{
"data": "<- {\"return\":{}} ``` Note: If using unix socket protocol to migrate vm, you need to modify QMP command of `\"uri\":\"tcp:192.168.0.1:4446\"` to `\"uri\":\"unix:/tmp/stratovirt-migrate.socket\"`. When finish executing the command line, the live migration is start. in a moment, the source VM should be successfully migrated to the destination VM. If you want to cancel the live migration, executing the following command: ```shell $ ncat -U path/to/socket1 <- {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} -> {\"execute\":\"migrate_cancel\"} <- {\"return\":{}} ``` Use QMP command `query-migrate` to check migration state: ```shell $ ncat -U path/to/socket <- {\"QMP\":{\"version\":{\"StratoVirt\":{\"micro\":1,\"minor\":0,\"major\":0},\"package\":\"\"},\"capabilities\":[]}} -> {\"execute\":\"query-migrate\"} <- {\"return\":{\"status\":\"completed\"}} ``` Now there are 5 states during migration: `None`: Resource is not prepared all. `Setup`: Resource is setup, ready to migration. `Active`: In migration. `Completed`: Migration completed. `Failed`: Migration failed. `Canceled`: Migration canceled. Migration supports machine type: `q35` (on x86_64 platform) `virt` (on aarch64 platform) Some devices and feature don't support to be migration yet: `vhost-net` `vhost-user-net` `vfio` devices `balloon` `mem-shared`,`backend file of memory` `pmu` `sve` `gic-version=2` Some device attributes can't be changed: `virtio-net`: mac `virtio-blk`: file(only ordinary file or copy file), serial_num `device`: bus, addr `smp` `m` If hot plug device before migrate source vm, add newly replaced device command should be add to destination vm. Before live migration: source and destination host CPU needs to be the same architecture. the VMs image needs to be shared by source and destination. live migration may fail if the VM is performing lifecycle operations, such as reboot, shutdown. the command to startup the VM needs to be consistent on source and destination host. During live migration: source and destination networks cannot be disconnected. it is banned to operate VM lifecycle, includes using the QMP command and executing in the VM. live migration time is affected by network performance, total memory of VM and applications. After live migration: it needs to wait for the source VM to release resources before fetching back the live migration operation."
}
] |
{
"category": "Runtime",
"file_name": "migration.md",
"project_name": "StratoVirt",
"subcategory": "Container Runtime"
}
|
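The QMP transcripts above drive migration interactively with `ncat`. For completeness, here is a minimal, hypothetical Go sketch of the same `query-migrate` polling loop. It assumes the example socket path `path/to/socket1`, newline-delimited QMP responses as shown in the transcripts, and the lowercase status strings from the state list above; it is not part of StratoVirt itself.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"net"
	"time"
)

// qmpReply models only the part of the query-migrate response shown above,
// e.g. {"return":{"status":"completed"}}.
type qmpReply struct {
	Return struct {
		Status string `json:"status"`
	} `json:"return"`
}

func main() {
	// QMP socket of the source VM (same path as in the ncat examples).
	conn, err := net.Dial("unix", "path/to/socket1")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	r := bufio.NewReader(conn)
	if _, err := r.ReadString('\n'); err != nil { // discard the QMP greeting
		log.Fatal(err)
	}

	for {
		if _, err := fmt.Fprintln(conn, `{"execute":"query-migrate"}`); err != nil {
			log.Fatal(err)
		}
		line, err := r.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}
		var reply qmpReply
		if err := json.Unmarshal([]byte(line), &reply); err != nil {
			log.Fatal(err)
		}
		log.Printf("migration status: %q", reply.Return.Status)
		switch reply.Return.Status {
		case "completed", "failed", "canceled":
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```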
[
{
"data": "Name | Type | Description | Notes | - | - | - Config | | | State | string | | MemoryActualSize | Pointer to int64 | | [optional] DeviceTree | Pointer to | | [optional] `func NewVmInfo(config VmConfig, state string, ) *VmInfo` NewVmInfo instantiates a new VmInfo object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewVmInfoWithDefaults() *VmInfo` NewVmInfoWithDefaults instantiates a new VmInfo object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *VmInfo) GetConfig() VmConfig` GetConfig returns the Config field if non-nil, zero value otherwise. `func (o VmInfo) GetConfigOk() (VmConfig, bool)` GetConfigOk returns a tuple with the Config field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmInfo) SetConfig(v VmConfig)` SetConfig sets Config field to given value. `func (o *VmInfo) GetState() string` GetState returns the State field if non-nil, zero value otherwise. `func (o VmInfo) GetStateOk() (string, bool)` GetStateOk returns a tuple with the State field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmInfo) SetState(v string)` SetState sets State field to given value. `func (o *VmInfo) GetMemoryActualSize() int64` GetMemoryActualSize returns the MemoryActualSize field if non-nil, zero value otherwise. `func (o VmInfo) GetMemoryActualSizeOk() (int64, bool)` GetMemoryActualSizeOk returns a tuple with the MemoryActualSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmInfo) SetMemoryActualSize(v int64)` SetMemoryActualSize sets MemoryActualSize field to given value. `func (o *VmInfo) HasMemoryActualSize() bool` HasMemoryActualSize returns a boolean if a field has been set. `func (o *VmInfo) GetDeviceTree() map[string]DeviceNode` GetDeviceTree returns the DeviceTree field if non-nil, zero value otherwise. `func (o VmInfo) GetDeviceTreeOk() (map[string]DeviceNode, bool)` GetDeviceTreeOk returns a tuple with the DeviceTree field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *VmInfo) SetDeviceTree(v map[string]DeviceNode)` SetDeviceTree sets DeviceTree field to given value. `func (o *VmInfo) HasDeviceTree() bool` HasDeviceTree returns a boolean if a field has been set."
}
] |
{
"category": "Runtime",
"file_name": "VmInfo.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
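A short, hypothetical usage sketch of the accessors documented above. It assumes it is compiled inside the generated client package (named `openapi` here purely for illustration) that also defines `VmInfo`, `VmConfig` and `DeviceNode`; the import path is not shown in this document.

```go
package openapi

import "fmt"

// ExampleVmInfo exercises the constructor and the optional-field accessors
// described above.
func ExampleVmInfo() {
	info := NewVmInfo(VmConfig{}, "Running")

	// MemoryActualSize and DeviceTree are optional: Has* reports whether a
	// setter has been called, Get* returns the zero value otherwise.
	info.SetMemoryActualSize(2 << 30) // 2 GiB, for illustration only
	if info.HasMemoryActualSize() {
		fmt.Printf("state=%s actual=%d\n", info.GetState(), info.GetMemoryActualSize())
	}

	if !info.HasDeviceTree() {
		info.SetDeviceTree(map[string]DeviceNode{})
	}
}
```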
[
{
"data": "sidebar_position: 4 sidebar_label: \"Volume Provisioned IO\" In HwameiStor, it allows users to specify the maximum IOPS and throughput of a volume on a Kuberentes cluster. Please follow the steps below to create a volume with the maximum IOPS and throughput and create a workload to use it. cgroup v2 has the following requirements: OS distribution enables cgroup v2 Linux Kernel version is 5.8 or later More info, please refer to the By default, HwameiStor won't auto-create such a StorageClass during the installation, so you need to create it manually. A sample StorageClass is as follows: ```yaml allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd-sample parameters: convertible: \"false\" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR provision-iops-on-creation: \"100\" provision-throughput-on-creation: 1Mi replicaNumber: \"1\" striped: \"true\" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer ``` Compare to the regular StorageClass created by HwameiStor installer, the following parameters are added: provision-iops-on-creation: It specifies the maximum IOPS of the volume on creation. provision-throughput-on-creation: It specifies the maximum throughput of the volume on creation. After the StorageClass is created, you can use it to create a PVC. A sample PVC is as follows: ```yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-sample spec: accessModes: ReadWriteOnce resources: requests: storage: 10Gi storageClassName: hwameistor-storage-lvm-hdd-sample ``` After the PVC is created, you can create a deployment to use it. A sample Deployment is as follows: ```yaml apiVersion: apps/v1 kind: Deployment metadata: creationTimestamp: null labels: app: pod-sample name: pod-sample spec: replicas: 1 selector: matchLabels: app: pod-sample strategy: {} template: metadata: creationTimestamp: null labels: app: pod-sample spec: volumes: name: data persistentVolumeClaim: claimName: pvc-sample containers: command: sleep \"100000\" image: busybox name: busybox resources: {} volumeMounts: name: data mountPath: /data status: {} ``` After the Deployment is created, you can test the volume's IOPS and throughput by using the following command: shell 1: ```bash kubectl exec -it pod-sample-5f5f8f6f6f-5q4q5 -- /bin/sh dd if=/dev/zero of=/data/test bs=4k count=1000000 oflag=direct ``` shell 2: `/dev/LocalStorage_PoolHDD/pvc-c623054b-e7e9-41d7-a987-77acd8727e66` is the path of the volume on the node. you can find it by using the `kubectl get lvr` command. ```bash iostat -d /dev/LocalStorage_PoolHDD/pvc-c623054b-e7e9-41d7-a987-77acd8727e66 -x -k 2 ``` :::note Due to the cgroupv1 limitation, the settings of the maximum IOPS and throughput may not take effect on non-direct IO. However, it will take effect on non-direct IO in"
},
{
"data": "::: The maximum IOPS and throughput are specified on the parameters of the StorageClass, you can not change it directly because it is immutable today. Different from the other storage vendors, HwameiStor is a Native Kubernetes storage solution and it defines a set of operation primitives based on the Kubernetes CRDs. It means that you can modify the related CRD to change the actual maximum IOPS and throughput of a volume. The following steps show how to change the maximum IOPS and throughput of a volume. ```console $ kubectl get pvc pvc-sample NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE demo Bound pvc-c354a56a-5cf4-4ff6-9472-4e24c7371e10 10Gi RWO hwameistor-storage-lvm-hdd 5d23h pvc-sample Bound pvc-cac82087-6f6c-493a-afcd-09480de712ed 10Gi RWO hwameistor-storage-lvm-hdd-sample 5d23h $ kubectl get localvolume NAME POOL REPLICAS CAPACITY USED STATE RESOURCE PUBLISHED FSTYPE AGE pvc-c354a56a-5cf4-4ff6-9472-4e24c7371e10 LocalStorage_PoolHDD 1 10737418240 33783808 Ready -1 master xfs 5d23h pvc-cac82087-6f6c-493a-afcd-09480de712ed LocalStorage_PoolHDD 1 10737418240 33783808 Ready -1 master xfs 5d23h ``` According to the print out, the LocalVolume CR for the PVC is `pvc-cac82087-6f6c-493a-afcd-09480de712ed`. ```bash kubectl edit localvolume pvc-cac82087-6f6c-493a-afcd-09480de712ed ``` In the editor, find the `spec.volumeQoS` section and modify the `iops` and `throughput` fields. By the way, an empty value means no limit. At last, save the changes and exit the editor. The settings will take effect in a few seconds. :::note In the future, we will allow users to modify the maximum IOPS and throughput of a volume directly once the Kubernetes supports . ::: HwameiStor uses the or to limit the IOPS and throughput of a volume, so you can use the following command to check the actual IOPS and throughput of a volume. ```console $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 160G 0 disk sda1 8:1 0 1G 0 part /boot sda2 8:2 0 159G 0 part centos-root 253:0 0 300G 0 lvm / centos-swap 253:1 0 7.9G 0 lvm centos-home 253:2 0 101.1G 0 lvm /home sdb 8:16 0 100G 0 disk LocalStorage_PoolHDD-pvc--cac82087--6f6c--493a--afcd--09480de712ed 253:3 0 10G 0 lvm /var/lib/kubelet/pods/3d6bc980-68ae-4a65-a1c8-8b410b7d240f/v LocalStorage_PoolHDD-pvc--c354a56a--5cf4--4ff6--9472--4e24c7371e10 253:4 0 10G 0 lvm /var/lib/kubelet/pods/521fd7b4-3bef-415b-8720-09225f93f231/v sdc 8:32 0 300G 0 disk sdc1 8:33 0 300G 0 part centos-root 253:0 0 300G 0 lvm / sr0 11:0 1 973M 0 rom $ cat /sys/fs/cgroup/blkio/blkio.throttle.readiopsdevice 253:3 100 $ cat /sys/fs/cgroup/blkio/blkio.throttle.writeiopsdevice 253:3 100 $ cat /sys/fs/cgroup/blkio/blkio.throttle.readbpsdevice 253:3 1048576 $ cat /sys/fs/cgroup/blkio/blkio.throttle.writebpsdevice 253:3 1048576 253:0 rbps=1048576 wbps=1048576 riops=100 wiops=100 ```"
}
] |
{
"category": "Runtime",
"file_name": "volume_provisioned_io.md",
"project_name": "HwameiStor",
"subcategory": "Cloud Native Storage"
}
|
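The throttle files read with `cat` in the verification step above use a simple `major:minor value` format. The following small Go helper is a sketch only; it assumes the standard cgroup v1 file names `blkio.throttle.read_iops_device`, `blkio.throttle.write_iops_device`, `blkio.throttle.read_bps_device` and `blkio.throttle.write_bps_device` (written here with the underscores that the rendering above dropped).

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// readThrottle parses a cgroup v1 blkio throttle file, e.g. the line
// "253:3 100" becomes map["253:3"] = "100".
func readThrottle(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	limits := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 {
			limits[fields[0]] = fields[1]
		}
	}
	return limits, sc.Err()
}

func main() {
	for _, p := range []string{
		"/sys/fs/cgroup/blkio/blkio.throttle.read_iops_device",
		"/sys/fs/cgroup/blkio/blkio.throttle.write_iops_device",
		"/sys/fs/cgroup/blkio/blkio.throttle.read_bps_device",
		"/sys/fs/cgroup/blkio/blkio.throttle.write_bps_device",
	} {
		limits, err := readThrottle(p)
		if err != nil {
			log.Printf("skipping %s: %v", p, err)
			continue
		}
		fmt.Println(p, limits)
	}
}
```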
[
{
"data": "This document describes the requirements for committing to this repository. In order to contribute to this project, you must sign each of your commits to attest that you have the right to contribute that code. This is done with the `-s`/`--signoff` flag on `git commit`. More information about DCO can be found All code that is contributed to Krustlet must go through the Pull Request (PR) process. To contribute a PR, fork this project, create a new branch, make changes on that branch, and then use GitHub to open a pull request with your changes. Every PR must be reviewed by at least one Core Maintainer of the project. Once a PR has been marked \"Approved\" by a Core Maintainer (and no other core maintainer has an open \"Rejected\" vote), the PR may be merged. While it is fine for non-maintainers to contribute their own code reviews, those reviews do not satisfy the above requirement. This project has adopted the [CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Krustlet",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Date: 2024-02-01 Writing At the moment, flannel uses iptables to mask and route packets. Our implementation is based on the library from coreos (https://github.com/coreos/go-iptables). There are several issues with using iptables in flannel: performance: packets are matched using a list so performance is O(n). This isn't very important for flannel because use few iptables rules anyway. stability: rules must be purged then updated every time flannel needs to change a rule to keep the correct order there can be interferences with other k8s components using iptables as well (kube-proxy, kube-router...) deprecation: nftables is pushed as a replacement for iptables in the kernel and in future distros including the future RHEL. References: https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/3866-nftables-proxy/README.md#motivation In flannel code, all references to iptables are wrapped in the `iptables` package. The package provides the type `IPTableRule` to represent an individual rule. This type is almost entirely internal to the package so it would be easy to refactor the code to hide in favor of a more abstract type that would work for both iptables and nftables rules. Unfortunately the package doesn't provide an interface so in order to provide both an iptables-based and an nftables-based implementation this needs to be refactored. This package includes several Go interfaces (`IPTables`, `IPTablesError`) that are used for testing. Ideally, flannel will include both iptables and nftables implementation. These need to coexist in the code but will be mutually exclusive at runtime. The choice of which implementation to use will be triggered by an optional CLI flag. iptables will remain the default for the time being. Using nftables is an opportunity for optimising the rules deployed by flannel but we need to be careful about retro-compatibility with the current backend. Starting flannel in either mode should reset the other mode as best as possible to ensure that users don't need to reboot if they need to change mode. Currently, flannel uses two dedicated tables for its own rules: `FLANNEL-POSTRTG` and `FLANNEL-FWD`. flannel adds rules to the `FORWARD` and `POSTROUTING` tables to direct traffic to its own tables. rules in `FLANNEL-POSTRTG` are used to manage masquerading of the traffic to/from the pods rules in `FLANNEL-FWD` are used to ensure that traffic to and from the flannel network can be forwarded With nftables, flannel would have its own dedicated table (`flannel`) with arbitrary chains and rules as needed. 
see https://wiki.nftables.org/wiki-nftables/index.php/PerformingNetworkAddressTranslation(NAT) ``` table flannel { chain flannel-postrtg { type nat hook postrouting priority 0; meta mark 0x4000/0x4000 return ip saddr $podcidr ip daddr $clustercidr return ip saddr $clustercidr ip daddr $podcidr return ip saddr != $podcidr ip daddr $clustercidr return ip saddr $cluster_cidr ip daddr != 224.0.0.0/4 nat ip saddr != $clustercidr ip daddr $clustercidr nat } chain flannel-fwd { type filter hook input priority 0; policy drop; ip saddr flannelNetwork accept ip daddr flannelNetwork accept } } ``` We can either: call the `nft` executable directly use https://github.com/kubernetes-sigs/knftables which is developed for kube-proxy and should cover our use case refactor current iptables code to better encapsulate iptables calls in the dedicated package implement nftables mode that is the exact equivalent of the current iptables code add similar unit tests and e2e test coverage try to optimize the code using nftables-specific feature integrate the new flag in k3s"
}
] |
{
"category": "Runtime",
"file_name": "add-nftables-implementation.md",
"project_name": "Flannel",
"subcategory": "Cloud Native Network"
}
|
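Of the two implementation options listed in the proposal, calling the `nft` executable directly is the simpler one to sketch. Below is a hypothetical Go fragment that loads a dedicated `flannel` table by piping a ruleset into `nft -f -`; the chain name follows the proposal, but the masquerade rules and CIDRs are illustrative rather than flannel's actual rule set, and it assumes an `nft` binary that accepts a ruleset on stdin.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// applyFlannelTable loads a dedicated "flannel" table with a postrouting
// chain, mirroring the structure sketched in the proposal above.
func applyFlannelTable(podCIDR, clusterCIDR string) error {
	ruleset := fmt.Sprintf(`
table ip flannel {
	chain flannel-postrtg {
		type nat hook postrouting priority 0; policy accept;
		meta mark & 0x4000 == 0x4000 return
		ip saddr %[1]s ip daddr %[2]s return
		ip saddr %[2]s ip daddr != 224.0.0.0/4 masquerade
	}
}
`, podCIDR, clusterCIDR)

	cmd := exec.Command("nft", "-f", "-")
	cmd.Stdin = strings.NewReader(ruleset)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("nft -f failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Example CIDRs only; real values would come from flannel's subnet manager.
	if err := applyFlannelTable("10.244.1.0/24", "10.244.0.0/16"); err != nil {
		log.Fatal(err)
	}
}
```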
[
{
"data": "Copyright (C) 2018-2022 Matt Layher Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
}
] |
{
"category": "Runtime",
"file_name": "LICENSE.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "layout: home hero: name: \"Kanister\" text: \"Application-Specific Data Management\" tagline: Kanister is a data protection workflow management tool. It provides a set of cohesive APIs for defining and curating data operations by abstracting away tedious details around executing data operations on Kubernetes. It's extensible and easy to install, operate and scale. image: src: /kanister.svg alt: VitePress actions: theme: brand text: Overview link: /overview theme: alt text: Install link: /install features:"
}
] |
{
"category": "Runtime",
"file_name": "index.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "layout: global title: CephFS This guide describes how to configure Alluxio with {:target=\"_blank\"} as the under storage system. The Ceph File System (CephFS) is a POSIX-compliant file system built on top of Cephs distributed object store, RADOS. CephFS endeavors to provide a state-of-the-art, multi-use, highly available, and performant file store for a variety of applications, including traditional use-cases like shared home directories, HPC scratch space, and distributed workflow shared storage. Alluxio supports two different implementations of under storage system for CephFS. Fore more information, please read its documentation: {:target=\"_blank\"} {:target=\"_blank\"} If you haven't already, please see before you get started. In preparation for using CephFS with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPHFSCONFFILE>`</td> <td markdown=\"span\">Local path to Ceph configuration file ceph.conf</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPHFS_NAME>`</td> <td markdown=\"span\">Ceph URI that is used to identify dameon instances in the ceph.conf</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPHFS_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPHFSAUTHID>`</td> <td markdown=\"span\">Ceph user id</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<CEPHFSKEYRINGFILE>`</td> <td markdown=\"span\">Ceph keyring file that stores one or more Ceph authentication keys</td> </tr> </table> Follow {:target=\"_blank\"} to install below packages: ``` cephfs-java libcephfs_jni libcephfs2 ``` ```shell $ ln -s /usr/lib64/libcephfsjni.so.1.0.0 /usr/lib64/libcephfsjni.so $ ln -s /usr/lib64/libcephfs.so.2.0.0 /usr/lib64/libcephfs.so $ java_path=`which java | xargs readlink | sed 's#/bin/java##g'` $ ln -s /usr/share/java/libcephfs.jar $java_path/jre/lib/ext/libcephfs.jar ``` ```shell $ curl -o $java_path/jre/lib/ext/hadoop-cephfs.jar -s https://download.ceph.com/tarballs/hadoop-cephfs.jar ``` To use CephFS as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. 
```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties $ cp conf/core-site.xml.template conf/core-site.xml ``` Set the following property to define CephFS as the root mount ```properties alluxio.dora.client.ufs.root=cephfs://mon1\\;mon2\\;mon3/ ``` {% navtabs Setup %} {% navtab cephfs %} Modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.cephfs.conf.file=<CEPHFSCONFFILE> alluxio.underfs.cephfs.mds.namespace=<CEPHFS_NAME> alluxio.underfs.cephfs.mount.point=<CEPHFS_DIRECTORY> alluxio.underfs.cephfs.auth.id=<CEPHFSAUTHID> alluxio.underfs.cephfs.auth.keyring=<CEPHFSKEYRINGFILE> ``` {% endnavtab %} {% navtab cephfs-hadoop %} Modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.hdfs.configuration=${ALLUXIO_HOME}/conf/core-site.xml ``` Modify `conf/core-site.xml` to include: ```xml <configuration> <property> <name>fs.default.name</name> <value>ceph://mon1,mon2,mon3/</value> </property> <property> <name>fs.defaultFS</name> <value>ceph://mon1,mon2,mon3/</value> </property> <property> <name>ceph.data.pools</name> <value>${data-pools}</value> </property> <property> <name>ceph.auth.id</name> <value>${client-id}</value> </property> <property> <name>ceph.conf.options</name> <value>clientmountgid=${gid},clientmountuid=${uid},clientmdsnamespace=${ceph-fs-name}</value> </property> <property> <name>ceph.root.dir</name> <value>${ceph-fs-dir}</value> </property> <property> <name>ceph.mon.address</name> <value>mon1,mon2,mon3</value> </property> <property> <name>fs.AbstractFileSystem.ceph.impl</name> <value>org.apache.hadoop.fs.ceph.CephFs</value> </property> <property> <name>fs.ceph.impl</name> <value>org.apache.hadoop.fs.ceph.CephFileSystem</value> </property> <property> <name>ceph.auth.keyring</name> <value>${client-keyring-file}</value> </property> </configuration> ``` {% endnavtab %} {% endnavtabs %} Once you have configured Alluxio to CephFS, try to see that everything works. ```shell $ ./bin/alluxio init format $ ./bin/alluxio process start local ``` Run a simple example program: ```shell $ ./bin/alluxio exec basicIOTest ``` Visit your cephfs to verify the files and directories created by Alluxio exist. You should see files named like: ``` ${cephfs-dir}/defaulttestsfiles/BasicCACHETHROUGH ``` In cephfs, you can visit cephfs with ceph-fuse or mount by POSIX APIs. {:target=\"_blank\"} In Alluxio, you can visit the nested directory in the Alluxio. Alluxio's can be used for this purpose. ``` /mnt/cephfs/defaulttestsfiles/BasicCACHETHROUGH ``` CephFS and CephFS-Hadoop UFS integration is contributed and maintained by the Alluxio community. The source code for CephFS is located {:target=\"blank\"} and for CephFS-Hadoop is located {:target=\"blank\"}. Feel free submit pull requests to improve the integration and update the documentation {:target=\"_blank\"} if any information is missing or out of date."
}
] |
{
"category": "Runtime",
"file_name": "CephFS.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Show contents of table \"routes\" ``` cilium-dbg statedb routes [flags] ``` ``` -h, --help help for routes -w, --watch duration Watch for new changes with the given interval (e.g. --watch=100ms) ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Inspect StateDB"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_statedb_routes.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "<!-- toc --> - - - - <!-- /toc --> When an actionset triggers a long-running task like the `CopyVolumeData` function, the only way to determine any sort of progress is to gain direct access to the logs and events of the task pod. This may not always be feasible as the user may not have the appropriate RBAC permissions to access these subresource endpoints in the pod's namespace. If the export operation takes a long time, it's also possible that the pod might be terminated prematurely without leaving behind any traces of logs to indicate how far along the task was. Being able to persist progress data will be very helpful for both live reporting of task progress, as well as future retrospection (e.g., the latency of every phase of an action). The `ActionSet` CRD's `status` subresource will be updated with new fields to communicate the progress of the action and its phases to the user. The `kube.ExecOutput()` and `kube.Task()` interfaces will be updated to accept a new progress I/O writer. The progress computation should not compromise the main data protection task's latency nor lead to resource contention. Progress computation will be performed on a best-effort basis, where it may be de-prioritized with no guarantee on the accuracy of its result, or skipped entirely in the event of resource contention. An action overall progress will be reported under the new `status.progress` field of the `ActionSet` resource. The progress of each phase will be included in the phase's subsection as `status.actions[].phases[].progress`. For example, ```yaml status: progress: percentCompleted: 50.00 # 1 out of 2 actions are completed lastTransitionTime: 2022-04-06 14:23:34 actions: blueprint: my-blueprint name: action-00 phases: name: echo state: completed progress: percentCompleted: 100.00 lastTransitionTime: 2022-04-06 14:13:00 extraStats: transferredBytes: 20KiB processedBytes: 15KiB readBytes: 120KiB totalBytes: 120KiB name: echo state: completed progress: percentCompleted: 100.00 lastTransitionTime: 2022-04-06 14:23:34 name: action-01 phases: name: echo state: pending progress: percentCompleted: 30.00 lastTransitionTime: 2022-04-06 14:30:31 state: pending ``` Since progress tracking may not be meaningful for short-lived tasks, we will limit the initial implementation effort to the following Kanister Functions which normally are used to invoke long-running operations: `BackupData` `BackupDataAll` `RestoreData` `RestoreDataAll` `CopyVolumeData` `CreateRDSSnapshot` `ExportRDSSnapshotToLocation` `RestoreRDSSnapshot` Initially, the progress of an action is computed by checking the number of completed phases against the total number of phases within the action: ``` actionprogresspercent = numcompletedphases / totalnumphases * 100 ``` In subsequent implementation, the computation alogrithm can be updated to assign more weight to phases with long-running operations. It's also possible to post periodic progress updates using an exponential backoff mechanism as long as the underlying phases are still alive. When an action starts, its progress will be updated to 10%, instead of keeping it at 0%. This will help to distinguish between in-progress action from an inactive one. The action's progress will only be set to 100% after all its phases completed without failures. The action's progress should never exceed 100%. 
As each phase within a blueprint may involve executing different commands producing different outputs, this design proposes a phase progress tracking interface that can use different \"trackers\" to map command outputs to numeric progress"
},
{
"data": "Some example trackers include ones that can track progress by: checking the number of uploaded bytes against estimated total bytes checking the duration elapsed against the estimated duration to complete the operation parsing the log outputs for milestone events to indicate the 25%, 50%, 75% and 100% markers Currently, Kanister Functions do not utilize Kopia to perform their underlying work. Once the work to integrate Kopia into Kanister is completed, we can extract the progress status directly from the log outputs. Here's a sample log output of the Kopia create snapshot function: ```sh $ kopia snapshot create kanister Snapshotting isim@pop-os:/home/isim/workspace/kanisterio/kanister ... 5 hashing, 4186 hashed (329.1 MB), 0 cached (0 B), uploaded 309.8 MB, estimated 2 GB (16.3%) 3m38s left ``` Kanister Functions that are currently using Restic already have a set of library functions that can be used to extract progress status from the Restic logs. See e.g., the `restic.SnapshotStatsFromBackupLog()` function in Since all the long-running functions rely on the `KubeExec` and `KubeTask` functions, most implementation changes will be done on these two functions. Defer phase should also included in the phase-level progress tracking. Here's an example code snippet of the proposed interface written in Go: ```go // ./pkg/progress/phase package phase type ProgressTracker struct { t Tracker R Result } type Result struct { StatusInPercent chan string Err chan error } func (pt *ProgressTracker) Write(p []byte) (n int, err error) { if err := pt.t.Compute(string(p), pt); err != nil { return len(p), err } return len(p), nil } func (pt *ProgressTracker) Result() <-chan string { return pt.R.StatusInPercent } func (pt *ProgressTracker) Err() <-chan error { return pt.R.Err } type Tracker interface { Compute(cmdOutput string, p *ProgressTracker) error } ``` This is an example of what the client code would look like: ```go ctx, cancel := context.WithCancel(context.Background()) defer cancel() bytesTracker := BytesTracker { totalNumBytes: 268435456 } progressTracker := phase.NewProgressTracker(bytesTracker) go func() { for { select { case <-context.Done(): // handle context.Err() return case err := <-progressTracker.Err(): // handle err return case r := <-progressTracker.Result(): // update the actionset's status with r. // might need some more refactoring in order to return // this to ./pkg/controller/controller.go } } }() out := io.MultiWriter(os.Stdout, progressTracker) kube.ExecOutput(cli, namespace, pod, container, command, in, out, errw) ``` Here's an sample tracker that computes progress status based on the amount of data uploaded vs. the total amount of data: ```go var _ Tracker = (*BytesTracker)(nil) type BytesTracker struct { totalNumBytes int64 } func (b BytesTracker) Compute(cmdOutput string, t *ProgressTracker) error { totalNumBytesUploaded, err := parse(cmdOutput) if err != nil { return err } pt.R.StatusInPercent = totalNumBytesUploaded/totalNumBytes * 100 return nil } ``` If a phase failed, the progress tracking will cease immediately. The last reported progress will be retained in the actionset's `status` subresource. 
New unit tests to be added to the new `progress` package to cover blueprint progress, and phase progress calculated with a sample tracker: Blueprint with single phase: Completed successfully - assert that blueprint and phase progress are at 100% Failed to finish - assert that blueprint progress and phase progress are at 10% Blueprint with multiple phases: Completed successfully - assert that blueprint and phase progress are at 100% Failed to finish at different phases - assert that progress calculation is correct"
}
] |
{
"category": "Runtime",
"file_name": "progress-tracking.md",
"project_name": "Kanister",
"subcategory": "Cloud Native Storage"
}
|
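One tracker idea mentioned above is parsing log output for milestone events. The following is a sketch of such a tracker written against the proposed `Tracker`/`ProgressTracker` types from the design; the milestone strings are hypothetical, and the result is sent over the `StatusInPercent` channel rather than assigned to it.

```go
// ./pkg/progress/phase
package phase

import "strings"

// MilestoneTracker maps well-known substrings of a phase's log output to
// coarse progress markers, per the "milestone events" idea above.
type MilestoneTracker struct {
	milestones map[string]string // log substring -> progress in percent
}

var _ Tracker = MilestoneTracker{}

func NewMilestoneTracker() MilestoneTracker {
	return MilestoneTracker{milestones: map[string]string{
		"checkpoint 25%":  "25.00",
		"checkpoint 50%":  "50.00",
		"checkpoint 75%":  "75.00",
		"upload complete": "100.00",
	}}
}

// Compute is called with each chunk of command output written through the
// ProgressTracker's io.Writer.
func (m MilestoneTracker) Compute(cmdOutput string, pt *ProgressTracker) error {
	for marker, percent := range m.milestones {
		if strings.Contains(cmdOutput, marker) {
			// Non-blocking send so progress reporting never stalls the phase.
			select {
			case pt.R.StatusInPercent <- percent:
			default:
			}
		}
	}
	return nil
}
```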
[
{
"data": "A coroutines-based, cooperative multi-tasking framework. Glossary Lifecycle of a synctask Existing usage syncenv is an object that provides access to a pool of worker threads. synctasks execute in a syncenv. synctask can be informally defined as a pair of function pointers, namely _the call and the callback_ (see syncop.h for more details). synctaskfnt - 'the call' synctaskcbkt - 'the callback' synctask has two modes of operation, The calling thread waits for the synctask to complete. The calling thread schedules the synctask and continues. synctask guarantees that the callback is called after the call completes. A synctask could go into the following stages while in execution. CREATED - On calling synctaskcreate/synctasknew. RUNNABLE - synctask is queued in env->runq. RUNNING - When one of syncenv's worker threads calls synctaskswitchto. WAITING - When a synctask calls synctask_yield. DONE - When a synctask has run to completion. +-+ | CREATED | +-+ | | synctasknew/synctaskcreate v +-+ | RUNNABLE (in env->runq) | <+ +-+ | | | | synctaskswitchto | v | ++ on task completion +-+ | | DONE | <-- | RUNNING | | synctask_wake/wake ++ +-+ | | | | synctask_yield/yield | v | +-+ | | WAITING (in env->waitq) | -+ +-+ Note: A synctask is not guaranteed to run on the same thread throughout its lifetime. Every time a synctask yields, it is possible for it to run on a different thread."
}
] |
{
"category": "Runtime",
"file_name": "syncop.md",
"project_name": "Gluster",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Name | Type | Description | Notes | - | - | - Id | string | | Bdf | string | | `func NewPciDeviceInfo(id string, bdf string, ) *PciDeviceInfo` NewPciDeviceInfo instantiates a new PciDeviceInfo object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewPciDeviceInfoWithDefaults() *PciDeviceInfo` NewPciDeviceInfoWithDefaults instantiates a new PciDeviceInfo object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *PciDeviceInfo) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o PciDeviceInfo) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PciDeviceInfo) SetId(v string)` SetId sets Id field to given value. `func (o *PciDeviceInfo) GetBdf() string` GetBdf returns the Bdf field if non-nil, zero value otherwise. `func (o PciDeviceInfo) GetBdfOk() (string, bool)` GetBdfOk returns a tuple with the Bdf field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PciDeviceInfo) SetBdf(v string)` SetBdf sets Bdf field to given value."
}
] |
{
"category": "Runtime",
"file_name": "PciDeviceInfo.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "](https://travis-ci.org/kubernetes-sigs/yaml) kubernetes-sigs/yaml is a permanent fork of . A wrapper around designed to enable a better way of handling YAML when marshaling to and from structs. In short, this library first converts YAML to JSON using go-yaml and then uses `json.Marshal` and `json.Unmarshal` to convert to or from the struct. This means that it effectively reuses the JSON struct tags as well as the custom JSON methods `MarshalJSON` and `UnmarshalJSON` unlike go-yaml. For a detailed overview of the rationale behind this method, . This package uses and therefore supports . Caveat #1: When using `yaml.Marshal` and `yaml.Unmarshal`, binary data should NOT be preceded with the `!!binary` YAML tag. If you do, go-yaml will convert the binary data from base64 to native binary data, which is not compatible with JSON. You can still use binary in your YAML files though - just store them without the `!!binary` tag and decode the base64 in your code (e.g. in the custom JSON methods `MarshalJSON` and `UnmarshalJSON`). This also has the benefit that your YAML and your JSON binary data will be decoded exactly the same way. As an example: ``` BAD: exampleKey: !!binary gIGC GOOD: exampleKey: gIGC ... and decode the base64 data in your code. ``` Caveat #2: When using `YAMLToJSON` directly, maps with keys that are maps will result in an error since this is not supported by JSON. This error will occur in `Unmarshal` as well since you can't unmarshal map keys anyways since struct fields can't be keys. To install, run: ``` $ go get sigs.k8s.io/yaml ``` And import using: ``` import \"sigs.k8s.io/yaml\" ``` Usage is very similar to the JSON library: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) type Person struct { Name string `json:\"name\"` // Affects YAML field names too. Age int `json:\"age\"` } func main() { // Marshal a Person struct to YAML. p := Person{\"John\", 30} y, err := yaml.Marshal(p) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ // Unmarshal the YAML back into a Person struct. var p2 Person err = yaml.Unmarshal(y, &p2) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(p2) /* Output: {John 30} */ } ``` `yaml.YAMLToJSON` and `yaml.JSONToYAML` methods are also available: ```go package main import ( \"fmt\" \"sigs.k8s.io/yaml\" ) func main() { j := []byte(`{\"name\": \"John\", \"age\": 30}`) y, err := yaml.JSONToYAML(j) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(y)) /* Output: age: 30 name: John */ j2, err := yaml.YAMLToJSON(y) if err != nil { fmt.Printf(\"err: %v\\n\", err) return } fmt.Println(string(j2)) /* Output: {\"age\":30,\"name\":\"John\"} */ } ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
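Caveat #1 above suggests storing binary data as plain base64 and decoding it yourself in a custom `UnmarshalJSON`. A small self-contained sketch of that pattern follows; the `Secret` type and its `key` field are made up for illustration.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	"sigs.k8s.io/yaml"
)

// Secret keeps binary data as plain base64 text in YAML (no !!binary tag)
// and decodes it itself, so YAML and JSON behave identically.
type Secret struct {
	Key []byte
}

func (s *Secret) UnmarshalJSON(b []byte) error {
	var raw struct {
		Key string `json:"key"`
	}
	if err := json.Unmarshal(b, &raw); err != nil {
		return err
	}
	decoded, err := base64.StdEncoding.DecodeString(raw.Key)
	if err != nil {
		return err
	}
	s.Key = decoded
	return nil
}

func main() {
	y := []byte("key: gIGC\n") // base64 without the !!binary tag
	var s Secret
	if err := yaml.Unmarshal(y, &s); err != nil {
		panic(err)
	}
	fmt.Println(s.Key) // [128 129 130]
}
```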
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Generate the autocompletion script for zsh Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once: echo \"autoload -U compinit; compinit\" >> ~/.zshrc To load completions in your current shell session: source <(cilium-operator-azure completion zsh) To load completions for every new session, execute once: cilium-operator-azure completion zsh > \"${fpath[1]}/_cilium-operator-azure\" cilium-operator-azure completion zsh > $(brew --prefix)/share/zsh/site-functions/_cilium-operator-azure You will need to start a new shell for this setup to take effect. ``` cilium-operator-azure completion zsh [flags] ``` ``` -h, --help help for zsh --no-descriptions disable completion descriptions ``` - Generate the autocompletion script for the specified shell"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-azure_completion_zsh.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "We follow semantic versioning and try to the best of our abilities to maintain a stable interface between patch versions. For example, `v0.1.1` -> `v0.1.2` should be a perfectly safe upgrade path, without data service interruption. However, major (`vX.0.0`) and minor (`v0.Y.0`) version upgrades may contain breaking changes, which will be detailed here and in the release notes. First check if you are upgrading across one of the . If so, read the relevant section(s) first before proceeding with the general guidelines below. Here we will assume that you have the following in your kube-router DaemonSet: ```yaml imagePullPolicy: Always ``` If that's not the case, you will need to manually pull the desired image version on each of your nodes with a command like: `docker pull cloudnativelabs/kube-router:VERSION` This is the default situation with our DaemonSet manifests. We will soon be switching these manifests to use Rolling Updates though. The following example(s) show an upgrade from `v0.0.15` to `v0.0.16`. First we will modify the kube-router DaemonSet resource's image field: ```sh kubectl -n kube-system set image ds/kube-router kube-router=cloudnativelabs/kube-router:v0.0.16 ``` This does not actually trigger any version changes yet. It is recommended that you upgrade only one node and perform any tests you see fit to ensure nothing goes wrong. For example, we'll test upgrading kube-router on worker-01: ```sh TEST_NODE=\"worker-01\" TESTPOD=\"$(kubectl -n kube-system get pods -o wide|grep -E \"^kube-router.*${TESTNODE}\"|awk '{ print $1 }')\" kubectl -n kube-system delete pod \"${TEST_POD}\" ``` You can watch to make sure the new kube-router pod comes up and stays running with: ```sh kubectl -n kube-system get pods -o wide -w ``` Check the logs with: ```sh TEST_NODE=\"worker-01\" TESTPOD=\"$(kubectl -n kube-system get pods -o wide|grep -E"
},
{
"data": "'{ print $1 }')\" kubectl -n kube-system logs \"${TEST_POD}\" ``` If it all looks good, go ahead and upgrade kube-router on all nodes: ```sh kubectl -n kube-system delete pods -l k8s-app=kube-router ``` After updating a DaemonSet template, old DaemonSet pods will be killed, and new DaemonSet pods will be created automatically, in a controlled fashion If your global BGP peers supports gracefull restarts and has it enabled, can be used to upgrade your kube-router DaemonSet without network downtime. To enable gracefull BGP restart kube-router must be started with `--bgp-graceful-restart` To enable rolling updates on your kube-router DaemonSet modify it and add a updateStrategy ```yaml updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 ``` maxUnavailable controls the maximum number of pods to simultaneously upgrade Starting from the top of the DaemonSet, it should look like this after you are done editing ```yaml apiVersion: extensions/v1beta1 kind: DaemonSet metadata: labels: k8s-app: kube-router tier: node name: kube-router namespace: kube-system spec: updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 ... ``` This section covers version specific upgrade instructions. While kube-router is in its alpha stage changes can be expected to be rapid. Therefor we cannot guarantee that a new alpha release will not break previous expected behavior. This version brings changes to hairpin and BGP peering CLI/annotation configuration flags/keys. CLI flag changes: OLD: `--peer-router` -> NEW: `--peer-router-ips` OLD: `--peer-asn` -> NEW: `--peer-router-asns` CLI flag additions: NEW: `--peer-router-passwords` Annotation key changes: OLD: `kube-router.io/hairpin-mode=` -> NEW: `kube-router.io/service.hairpin=` OLD: `net.kuberouter.nodeasn=` -> NEW: `kube-router.io/node.asn=` OLD: `net.kuberouter.node.bgppeer.address=` -> NEW: `kube-router.io/peer.ips` OLD: `net.kuberouter.node.bgppeer.asn` -> NEW: `kube-router.io/peer.asns` Annotation key additions: NEW: `kube-router.io/peer.passwords` For CLI flag changes, all that is required is to change the flag names you use above to their new names at the same time that you change the image version. ```sh kubectl -n kube-system edit ds kube-router ``` For Annotations, the recommended approach is to copy all the values of your current annotations into new annotations with the updated keys. You can get a quick look at all your service and node annotations with these commands: ```sh kubectl describe services --all-namespaces |grep -E '^(Name:|Annotations:)' kubectl describe nodes |grep -E '^(Name:|Annotations:)' ``` For example if you have a service annotation to enable Hairpin mode like: ```sh Name: hairpin-service Annotations: kube-router.io/hairpin-mode= ``` You will then want to make a new annotation with the new key: ```sh kubectl annotate service hairpin-service \"kube-router.io/service.hairpin=\" ``` Once all new annotations are created, proceed with the . After the upgrades tested and complete, you can delete the old annotations. ```sh kubectl annotate service hairpin-service \"kube-router.io/hairpin-mode-\" ```"
}
] |
{
"category": "Runtime",
"file_name": "upgrading.md",
"project_name": "Kube-router",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: \"Development \" layout: docs Run `make update` to regenerate files if you make the following changes: Add/edit/remove command line flags and/or their help text Add/edit/remove commands or subcommands Add new API types Add/edit/remove plugin protobuf message or service definitions The following files are automatically generated from the source code: The clientset Listers Shared informers Documentation Protobuf/gRPC types You can run `make verify` to ensure that all generated files (clientset, listers, shared informers, docs) are up to date. You can run `make lint` which executes golangci-lint inside the build image, or `make local-lint` which executes outside of the build image. Both `make lint` and `make local-lint` will only run the linter against changes. Use `lint-all` to run the linter against the entire code base. The default linters are defined in the `Makefile` via the `LINTERS` variable. You can also override the default list of linters by running the command `$ make lint LINTERS=gosec` To run unit tests, use `make test`. If you are developing or using the main branch, note that you may need to update the Velero CRDs to get new changes as other development work is completed. ```bash velero install --crds-only --dry-run -o yaml | kubectl apply -f - ``` NOTE: You could change the default CRD API version (v1beta1 or v1) if Velero CLI can't discover the Kubernetes preferred CRD API version. The Kubernetes version < 1.16 preferred CRD API version is v1beta1; the Kubernetes version >= 1.16 preferred CRD API version is v1."
}
] |
{
"category": "Runtime",
"file_name": "development.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "This document defines a high level roadmap for Kuasar development. Define and develop new Sandbox API with containerd team Containerd (of Kuasar community) Containerd (of Containerd community with kuasar-shim) iSulad Cloud Hypervisor QEMU StratoVirt WasmEdge QuarkContainer Performance testing towards on startup time and memory overhead of kuasar vmm sandbox Containerd (of Containerd community) Wasmtime Runc Support Kubernetes Dynamic Resource Allocation (DRA) and Node Resource Interface (NRI) Support Evented PLEG Support CgroupV2 More observabilities to the project by opentracing Enhancement of sandboxer recovery Complete security vulnerability scanning Building e2e test workflow with more scenarios gVisor Firecracker Support CgroupV2 Running vm on Container OS Container checkpointing In-place Update of Pod Resources Develop CLI tool for operation and maintenance Image distribution eBPF observation"
}
] |
{
"category": "Runtime",
"file_name": "ROADMAP.md",
"project_name": "Kuasar",
"subcategory": "Container Runtime"
}
|
[
{
"data": "MDS is the center node of the system, responsible for managing metadata, collecting cluster status data and scheduling. MDS consists of the following components: Topology: Managing topology metadata of the cluster NameServer: Managing file metadata CopySet: Replica placement strategy Heartbeat: Receiving and replying to heartbeat messages from chunkservers, collecting load status and copyset info of chunkservers Schedule: Module for fault tolerance and load balance The Topology module is for managing and coordinating servers. It provides the business-oriented functional and non-functional services listed below by coordinating the network and the placement of servers. Failure domain isolation: placing replicas in different servers, different racks or under different network switches Isolation and sharing: data of different users can be isolated from each other or share certain physical resources Figure 1 shows the topological diagram of CURVE and the explanation of corresponding components. <p align=\"center\"> <img src=\"../images/mds-topology-all.png\" alt=\"mds-topology-all.png\" width=\"900\"><br> <font size=3>Figure 1: Topological diagram of CURVE</font> </p> chunkserver: A chunkserver is an abstraction of a physical disk (SSD in our scenario) in a physical server, and the disk is the service unit of the chunkserver. server: A server represents an actual physical server, and every chunkserver must belong to one of them. zone: Zone is the unit of failure isolation. In common cases, servers (physical machines) in different zones should at least be deployed under different racks. To be stricter for some scenarios, they should be deployed under different groups of racks (racks that share the same set of leaf switches). A server must be owned by a certain zone. pool: Pool is for implementing physical isolation of resources. Servers are not able to communicate across their pool. In the maintenance of the system, we can arrange a pool for a new set of machines, and extend the storage pool by pool. Extending storage by adding machines inside a pool is supported, but this is not recommended since it will affect the copyset number of every chunkserver. Learning from the design of Ceph, CURVE introduced the concept of a logical pool on top of a physical pool in order to satisfy the requirement of building a unified storage system. In our design, we support the coexistence of block storage (based on multi-replica), online object storage (based on three-replica storage that supports appends, to be implemented) and nearline object storage (based on Erasure Code storage that supports appends, to be implemented). <p align=\"center\"> <img src=\"../images/mds-topology-l.png\" alt=\"mds-topology-l.png\" width=\"600\"><br> <font size=3> Figure 2: An example of the relation between logical pool and physical pool</font> </p> Figure 2 is an example of the N:1 relation between logical pool and physical pool, and many types of files can be stored in a physical pool. Multiple pools are supported by CURVE, but you can also configure a single physical pool for only one logical pool. With the help of the CURVE user system, logical pools can achieve physical isolation of data from different users by specifying and restricting their behaviors (to be developed). logicalPool: A logical pool is for building pools of different characteristics on the logical aspect (e.g. the AppendECFile pool, AppendEC pool and PageFile pool shown in the figure above). This is for user-level data isolation and sharing. 
NameServer is for managing the metadata of the namespace, including the following (for more details please check"
},
{
"data": "``FileInfo:`` File information. ``PageFileSegment:`` A segment is the smallest unit of file space assignment. ``PageFileChunkInfo:`` Chunks are the smallest unit of data fragmentation. Figure 3 below shows the relation between segment and chunk: <p align=\"center\"> <img src=\"../images/mds-segment-chunk-en.png\" alt=\"mds-segment-chunk-en.png\" width=\"900\"><br> <font size=3> Figure 3: Relation between segment and chunk</font> </p> Namespace info is rather intuitive: it is the hierarchy of files: <p align=\"center\"> <img src=\"../images/mds-nameserver.png\" alt=\"mds-nameserver.png\" width=\"700\"><br> <font size=3> Figure 4: Example of namespace data and operations </font> </p> Figure 4 illustrates how namespace info is stored in the form of KV pairs. The key consists of the parent directory ID and the target name (separated by a '/'), and the value is the ID of the target file. In this way we strike a good balance among the workloads of the operations we implemented: List: List all files and directories under a certain directory. Find: Find a specific file under a given location. Rename: Rename a file or a directory. Currently, the encoded metadata is stored in etcd. The unit of CURVE fragmentation is called a chunk, which occupies 16MB of space by default. In the scenario of large scale storage, many chunks will be created. With such a great number of chunks, it becomes costly to store and manage the corresponding metadata. To solve this problem, we introduced copysets to our system. In the scenario of block devices based on replica storage, a copyset is a group of chunkservers that stores the same replicas, and a chunkserver can store different copysets. The concept of copyset was proposed by Asaf Cidon et al. in the paper Copysets: Reducing the Frequency of Data Loss in Cloud Storage. Basically it's for improving data persistency in distributed systems and reducing the rate of data loss. We introduced copysets for three reasons: Reduce metadata: If we store the replica relation for every chunk, each chunk needs chunk ID + 3 node IDs = 20 bytes of metadata. In the scenario of 1PB data and 5MB chunk size, there will be 5GB of metadata. But if we introduce copysets between chunks and replica groups, each chunk only needs 12 bytes of metadata (chunk ID + copyset ID = 12 bytes), which reduces the total metadata size to 3GB. Reduce the number of replica groups (a group of replicas for the same chunk): Imagine a scenario with 256K replica groups; in this case a huge amount of RAM will be occupied by their data. Also, massive data flow will be created when secondary replicas send regular heartbeats to their primary. With copysets introduced, we can do liveness probing and configuration changes at the granularity of a copyset. Improve the reliability of data: When replicas are scattered too randomly across different servers, data reliability will be challenged when large-scale correlated failures occur. For more details on this, please refer to the copyset paper. Figure 5 demonstrates the relation between ChunkServer, Copyset and Chunk: <p align=\"center\"> <img src=\"../images/mds-copyset.png\" alt=\"mds-copyset.png\" width=\"900\"><br> <font size=3> Figure 5: Relation between chunk, copyset and chunkserver</font> </p> Heartbeat is for data exchange between the center node and data nodes, and it works in the following ways: Monitor the online status (online/offline) of chunkservers by regular heartbeats from chunkservers. Record status information (disk capacity, disk load, copyset load"
},
{
"data": "reported by chunkservers for Ops tools. Serve as a reference, by receiving regular heartbeats, for the scheduler module to balance workload and change configurations. Detect the difference between the copyset info from chunkservers and MDS by comparing the copyset epoch reported by chunkservers, then synchronize them. Implement configuration changes by distributing them from MDS in replies to chunkserver heartbeats, and monitor the progress of the changes in upcoming heartbeats. From figure 6 you can see the structure of the heartbeat module: <p align=\"center\"> <img src=\"../images/mds-heartbeat.png\" alt=\"mds-heartbeat.png\" width=\"600\"><br> <font size=3> Figure 6: Structure of heartbeat module</font> </p> On the MDS side, the heartbeat module consists of three parts: TopoUpdater: This part updates info in the Topology module according to the copyset info reported by chunkservers. ConfGenerator: Forwards info reported by copysets to the scheduler, and fetches operations for copysets to execute. HealthyChecker: Updates chunkserver status by checking the time gap between the current time and the last heartbeat of a chunkserver. On the chunkserver side, there are two parts: ChunkServerInfo/CopySetInfo: Fetches the current copyset info on the chunkserver and reports it to MDS. Order ConfigChange: Submits operations distributed by MDS to the corresponding copyset. System scheduling is for implementing auto fault tolerance and load balancing, which are core issues of distributed systems, and are also two of the decisive features for whether or not CURVE can be deployed in a production environment. Auto fault tolerance promises that data loss caused by commonly seen abnormalities (e.g. disk failure and system outage) will be repaired automatically without human intervention. Load balancing and resource balancing make sure that the system can make the best use of hardware resources like disk, CPU and memory. <p align=\"center\"> <img src=\"../images/mds-schedule-en.png\" alt=\"mds-schedule-en.png\" width=\"700\" /><br> <font size=3> Figure 7: Structure of scheduler module</font> </p> Figure 7 shows the structure of the scheduler module. Coordinator: The coordinator serves as the interface of the scheduler module. After receiving the copyset info provided by heartbeats from chunkservers, the coordinator will decide whether there's any configuration change for the current copyset, and will distribute the change if there is. Task calculation: The task calculation module is for generating tasks by analysing the corresponding status data. This module consists of a few regular tasks and a triggerable task. Regular tasks include CopySetScheduler, LeaderScheduler, ReplicaScheduler and RecoverScheduler. CopySetScheduler is the scheduler for copyset balancing, generating copyset migration tasks according to their distribution. LeaderScheduler is the scheduler for leader balancing, which is responsible for changing leaders according to the leaders' distribution. ReplicaScheduler is for scheduling the replica number, managing the creation and deletion of replicas by analysing the current replica number of a copyset, while RecoverScheduler controls the migration of copysets according to their liveness. As for the triggerable task, RapidLeaderScheduler is for quick leader balancing: triggered by external events, it generates multiple leader-changing tasks at a time to balance the leaders of the cluster as quickly as possible. Another two modules are TopoAdapter and CommonStrategy. 
The former is for fetching the data required from the topology module, while the latter implements general strategies for adding and removing replicas. Task managing: The task managing module manages the tasks generated by the task calculation module. Inside this module are the components OperatorController, OperatorStateUpdate and Metric, responsible for fetching and storing tasks, updating status according to the reported copyset info, and measuring the number of tasks, respectively."
}
] |
{
"category": "Runtime",
"file_name": "mds_en.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
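The metadata-reduction argument for copysets in the MDS description above is easy to sanity-check. The shell sketch below is illustrative only: it plugs in the constants quoted in the text (1 PB of data, 5 MB chunks, 20 bytes of per-chunk metadata without copysets versus 12 bytes with them), and the totals it prints land in the same ballpark as the rounded figures given in the document.

```sh
# Back-of-the-envelope check of the copyset metadata savings described above.
# All constants come from the text; the result is approximate.
data_bytes=1000000000000000          # ~1 PB of user data
chunk_bytes=5000000                  # ~5 MB per chunk
chunks=$(( data_bytes / chunk_bytes ))
without_copysets=$(( chunks * 20 ))  # chunk ID + 3 node IDs per chunk
with_copysets=$(( chunks * 12 ))     # chunk ID + copyset ID per chunk
echo "chunks:                    ${chunks}"
echo "metadata without copysets: ${without_copysets} bytes"
echo "metadata with copysets:    ${with_copysets} bytes"
```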
[
{
"data": "`kg` is the Kilo agent that runs on every Kubernetes node in a Kilo mesh. It performs several key functions, including: adding the node to the Kilo mesh; installing CNI configuration on the node; configuring the WireGuard network interface; and maintaining routing table entries and iptables rules. `kg` is typically installed on all nodes of a Kubernetes cluster using a DaemonSet. Example manifests can be found . The behavior of `kg` can be configured using the command line flags listed below. ```txt kg is the Kilo agent. It runs on every node of a cluster, setting up the public and private keys for the VPN as well as the necessary rules to route packets between locations. Usage: kg [flags] kg [command] Available Commands: completion generate the autocompletion script for the specified shell help Help about any command version Print the version and exit. webhook webhook starts a HTTPS server to validate updates and creations of Kilo peers. Flags: --backend string The backend for the mesh. Possible values: kubernetes (default \"kubernetes\") --clean-up Should kilo clean up network modifications on shutdown? (default true) --clean-up-interface Should Kilo delete its interface when it shuts down? --cni Should Kilo manage the node's CNI configuration? (default true) --cni-path string Path to CNI config. (default \"/etc/cni/net.d/10-kilo.conflist\") --compatibility string Should Kilo run in compatibility mode? Possible values: flannel, cilium --create-interface Should kilo create an interface on startup? (default true) --encapsulate string When should Kilo encapsulate packets within a location? Possible values: never, crosssubnet, always (default \"always\") -h, --help help for kg --hostname string Hostname of the node on which this process is running. --interface string Name of the Kilo interface to use; if it does not exist, it will be created. (default \"kilo0\") --iptables-forward-rules Add default accept rules to the FORWARD chain in iptables. Warning: this may break firewalls with a deny all policy and is potentially insecure! --kubeconfig string Path to kubeconfig. --listen string The address at which to listen for health and metrics. (default \":1107\") --local Should Kilo manage routes within a location? (default true) --log-level string Log level to use. Possible values: all, debug, info, warn, error, none (default \"info\") --master string The address of the Kubernetes API server (overrides any value in kubeconfig). --mesh-granularity string The granularity of the network mesh to create. Possible values: location, full (default \"location\") --mtu uint The MTU of the WireGuard interface created by Kilo. (default 1420) --port int The port over which WireGuard peers should communicate. (default 51820) --prioritise-private-addresses Prefer to assign a private IP address to the node's endpoint. --resync-period duration How often should the Kilo controllers reconcile? (default 30s) --service-cidr strings The service CIDR for the Kubernetes cluster. Can be provided optionally to avoid masquerading packets sent to service IPs. Can be specified multiple times. --subnet string CIDR from which to allocate addresses for WireGuard interfaces. (default \"10.4.0.0/16\") --topology-label string Kubernetes node label used to group nodes into logical locations. (default \"topology.kubernetes.io/region\") --version Print version and exit. ```"
}
] |
{
"category": "Runtime",
"file_name": "kg.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
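As a concrete illustration of the flags documented in the `kg` help text above, an invocation along the following lines would build a full mesh with a custom WireGuard subnet; the values are placeholders for your own environment, and in practice these arguments are normally set in the DaemonSet manifest rather than typed by hand.

```sh
# Illustrative only: flag names are taken from the help text above,
# the values are placeholders.
kg \
  --mesh-granularity=full \
  --subnet=10.5.0.0/16 \
  --topology-label=topology.kubernetes.io/region \
  --log-level=debug
```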
[
{
"data": "| json type \\ dest type | bool | int | uint | float |string| | --- | --- | --- | --- |--|--| | number | positive => true <br/> negative => true <br/> zero => false| 23.2 => 23 <br/> -32.1 => -32| 12.1 => 12 <br/> -12.1 => 0|as normal|same as origin| | string | empty string => false <br/> string \"0\" => false <br/> other strings => true | \"123.32\" => 123 <br/> \"-123.4\" => -123 <br/> \"123.23xxxw\" => 123 <br/> \"abcde12\" => 0 <br/> \"-32.1\" => -32| 13.2 => 13 <br/> -1.1 => 0 |12.1 => 12.1 <br/> -12.3 => -12.3<br/> 12.4xxa => 12.4 <br/> +1.1e2 =>110 |same as origin| | bool | true => true <br/> false => false| true => 1 <br/> false => 0 | true => 1 <br/> false => 0 |true => 1 <br/>false => 0|true => \"true\" <br/> false => \"false\"| | object | true | 0 | 0 |0|original json| | array | empty array => false <br/> nonempty array => true| [] => 0 <br/> [1,2] => 1 | [] => 0 <br/> [1,2] => 1 |[] => 0<br/>[1,2] => 1|original json|"
}
] |
{
"category": "Runtime",
"file_name": "fuzzy_mode_convert_table.md",
"project_name": "Kilo",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Given a pod UUID, if you want to enter a running pod to explore its filesystem or see what's running, you can use rkt enter. ``` Pod contains multiple apps: redis etcd Unable to determine app name: specify app using \"rkt enter --app= ...\" No command specified, assuming \"/bin/bash\" root@rkt-76dc6286-f672-45f2-908c-c36dcd663560:/# ls bin data entrypoint.sh home lib64 mnt proc run selinux sys usr boot dev etc lib media opt root sbin srv tmp var ``` | Flag | Default | Options | Description | | --- | --- | --- | --- | | `--app` | `` | Name of an app | Name of the app to enter within the specified pod | See the table with ."
}
] |
{
"category": "Runtime",
"file_name": "enter.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
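Putting the `--app` flag and the error message above together, entering a specific app of the multi-app pod might look like the following; the UUID prefix is taken from the example listing and must be replaced with your own pod's UUID.

```sh
# Enter the redis app of the pod from the example above and start a shell.
# "76dc6286" is the UUID prefix from the listing; substitute your own pod UUID.
rkt enter --app=redis 76dc6286 /bin/bash
```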
[
{
"data": "% runc-resume \"8\" runc-resume - resume all processes that have been previously paused runc resume container-id The resume command resumes all processes in the instance of the container identified by container-id. Use runc list to identify instances of containers and their current status. runc-list(8), runc-pause(8), runc(8)."
}
] |
{
"category": "Runtime",
"file_name": "runc-resume.8.md",
"project_name": "runc",
"subcategory": "Container Runtime"
}
|
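For context, a typical pause/resume round trip using the commands referenced in the SEE ALSO section could look like this; `mycontainer` is a placeholder container ID.

```sh
# List containers, pause one, then resume it.
# "mycontainer" stands in for a real container ID shown by `runc list`.
runc list
runc pause mycontainer
runc resume mycontainer
runc list   # the container should be back in the "running" state
```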
[
{
"data": "The Scientific Filesystem is well suited for Singularity containers to allow you to build a container that has multiple entrypoints, along with modular environments, libraries, and executables. Here we will review the basics of building and using a Singularity container that implements SCIF. For more quick start tutorials, see the . Build your image ```sh sudo singularity build cowsay.simg Singularity.cowsay ``` What apps are installed? ```console $ singularity apps cowsay.simg cowsay fortune lolcat ``` Ask for help for a specific app! ```console $ singularity help --app fortune cowsay.simg fortune is the best app ``` Run a particular app ```console $ singularity run --app fortune cowsay.simg When I reflect upon the number of disagreeable people who I know who have gone to a better world, I am moved to lead a different life. -- Mark Twain, \"Pudd'nhead Wilson's Calendar\" ``` Inspect an app ```console $ singularity inspect --app fortune cowsay.simg { \"SCIF_APPNAME\": \"fortune\", \"SCIF_APPSIZE\": \"1MB\" } ```"
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Singularity",
"subcategory": "Container Runtime"
}
|
[
{
"data": "What type of PR is this? What this PR does / why we need it: Which issue(s) this PR fixes: <!-- *Automatically closes linked issue when PR is merged. Usage: `Fixes #<issue number>`, or `Fixes (paste link of issue)`. If PR is about `failing-tests or flakes`, please post the related issues/tests in a comment and do not use `Fixes`* --> Fixes # Special notes for your reviewer: Does this PR introduce an API-breaking change?: <!-- If no, just write \"NONE\" in the release-note block below. If yes, a release note is required: Enter your extended release note in the block below. If the PR requires additional action from users switching to the new release, include the string \"action required\". --> ```release-note none ```"
}
] |
{
"category": "Runtime",
"file_name": "PULL_REQUEST_TEMPLATE.md",
"project_name": "Container Storage Interface (CSI)",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the IPCache mappings for IP/CIDR <-> Identity ``` -h, --help help for ipcache ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Retrieve identity for an ip - List endpoint IPs (local and remote) and their corresponding security identities"
}
] |
{
"category": "Runtime",
"file_name": "cilium-dbg_bpf_ipcache.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
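Based on the related subcommands referenced on this page (listing the ipcache and retrieving the identity for an IP), a short session might look like the following; the IP address is a placeholder and the exact subcommand set and output format depend on your Cilium version.

```sh
# Illustrative only: list all IP/CIDR <-> identity mappings in the local
# BPF ipcache, then look up the identity for a single (placeholder) IP.
cilium-dbg bpf ipcache list
cilium-dbg bpf ipcache get 10.0.0.15
```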
[
{
"data": "This resource controls the state of the LINSTOR cluster and integration with Kubernetes. In particular, it controls: LINSTOR Controller LINSTOR CSI Driver `LinstorSatellite`, configured through `LinstorSatelliteConfiguration` resources. Configures the desired state of the cluster. Selects on which nodes Piraeus Datastore should be deployed. Nodes that are excluded by the selector will not be able to run any workload using a Piraeus volume. If empty (the default), Piraeus will be deployed on all nodes in the cluster. When this is used together with `.spec.nodeAffinity`, both need to match in order for a node to run Piraeus. This example restricts Piraeus Datastore to nodes matching `example.com/storage: \"yes\"`: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: nodeSelector: example.com/storage: \"yes\" ``` Selects on which nodes Piraeus Datastore should be deployed. Nodes that are excluded by the affinity will not be able to run any workload using a Piraeus volume. If empty (the default), Piraeus will be deployed on all nodes in the cluster. When this is used together with `.spec.nodeSelector`, both need to match in order for a node to run Piraeus. This example restricts Piraeus Datastore to nodes in zones `a` and `b`, but not on `control-plane` nodes: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: nodeAffinity: nodeSelectorTerms: matchExpressions: key: topology.kubernetes.io/zone operator: In values: a b key: node-role.kubernetes.io/control-plane operator: DoesNotExist ``` Sets the default image registry to use for all Piraeus images. The full image name is created by appending an image identifier and tag. If empty (the default), Piraeus will use `quay.io/piraeusdatastore`. The current list of default images is available . This example pulls all Piraeus images from `registry.example.com/piraeus-mirror` rather than `quay.io/piraeusdatastore`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: repository: registry.example.com/piraeus-mirror ``` Sets the given properties on the LINSTOR Controller level, applying them to the whole Cluster. This example sets the port range used for DRBD volumes to `10000-20000`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: properties: name: TcpPortAutoRange value: \"10000-20000\" ``` Configures the , used by LINSTOR when creating encrypted volumes and storing access credentials for backups. The referenced secret must exist in the same namespace as the operator (by default `piraeus-datastore`), and have a `MASTER_PASSPHRASE` entry. This example configures a passphrase `example-passphrase`. Please choose a different passphrase for your deployment. ```yaml apiVersion: v1 kind: Secret metadata: name: linstor-passphrase namespace: piraeus-datastore data: MASTER_PASSPHRASE: ZXhhbXBsZS1wYXNzcGhyYXNl apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: linstorPassphraseSecret: linstor-passphrase ``` The given patches will be applied to all resources controlled by the operator. The patches are forwarded to `kustomize` internally, and take the . The unpatched resources are available in the"
},
{
"data": "No checks are run on the result of user-supplied patches: the resources are applied as-is. Patching some fundamental aspect, such as removing a specific volume from a container, may lead to a degraded cluster. This example sets a CPU limit of `10m` on the CSI Node init container and changes the LINSTOR Controller service to run in `SingleStack` mode. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: patches: target: kind: Daemonset name: csi-node patch: |- op: add path: /spec/template/spec/initContainers/0/resources value: limits: cpu: 10m target: kind: Service name: linstor-controller patch: |- apiVersion: v1 kind: service metadata: name: linstor-controller spec: ipFamilyPolicy: SingleStack ``` Configures the Operator to use an external controller instead of deploying one in the Cluster. This example instructs the Operator to use the external LINSTOR Controller reachable at `http://linstor.example.com:3370`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: externalController: url: http://linstor.example.com:3370 ``` Controls the LINSTOR Controller Deployment: Setting `enabled: false` disables the controller deployment entirely. See also . Setting a `podTemplate:` allows for simple modification of the LINSTOR Controller Deployment. This example configures a resource request of `memory: 1Gi` for the LINSTOR Controller Deployment: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: controller: enabled: true podTemplate: spec: containers: name: linstor-controller resources: requests: memory: 1Gi ``` Controls the CSI Controller Deployment: Setting `enabled: false` disables the deployment entirely. Setting a `podTemplate:` allows for simple modification of the CSI Controller Deployment. This example configures a resource request of `memory: 1Gi` for the CSI Controller Deployment: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: csiController: enabled: true podTemplate: spec: containers: name: linstor-csi resources: requests: memory: 1Gi ``` Controls the CSI Node DaemonSet: Setting `enabled: false` disables the deployment entirely. Setting a `podTemplate:` allows for simple modification of the CSI Node DaemonSet. This example configures a resource request of `memory: 1Gi` for the CSI Node DaemonSet: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: csiNode: enabled: true podTemplate: spec: containers: name: linstor-csi resources: requests: memory: 1Gi ``` Controls the High Availability Controller DaemonSet: Setting `enabled: false` disables the deployment entirely. Setting a `podTemplate:` allows for simple modification of the High Availability Controller DaemonSet. This example configures a resource request of `memory: 1Gi` for the High Availability Controller DaemonSet: ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: highAvailabilityController: enabled: true podTemplate: spec: containers: name: ha-controller resources: requests: memory: 1Gi ``` Configures a TLS secret used by the LINSTOR Controller to: Validate the certificate of the LINSTOR Satellites, that is the Satellites must have certificates signed by `ca.crt`. Provide a client certificate for authentication with LINSTOR Satellites, that is `tls.key` and `tls.crt` must be accepted by the Satellites. To configure TLS communication between Satellite and Controller, must be set"
},
{
"data": "Setting a `secretName` is optional; it will default to `linstor-controller-internal-tls`. Optional, a reference to a can be provided to let the operator create the required secret. This example creates a manually provisioned TLS secret and references it in the LinstorCluster configuration. ```yaml apiVersion: v1 kind: Secret metadata: name: my-linstor-controller-tls namespace: piraeus-datastore data: ca.crt: LS0tLS1CRUdJT... tls.crt: LS0tLS1CRUdJT... tls.key: LS0tLS1CRUdJT... apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: internalTLS: secretName: my-linstor-controller-tls ``` This example sets up automatic creation of the LINSTOR Controller TLS secret using a cert-manager issuer named `piraeus-root`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: internalTLS: certManager: kind: Issuer name: piraeus-root ``` Configures the TLS secrets used to secure the LINSTOR API. There are four different secrets to configure: `apiSecretName`: sets the name of the secret used by the LINSTOR Controller to enable HTTPS. Defaults to `linstor-api-tls`. All clients of the API must have certificates signed by the `ca.crt` of this secret. `clientSecretName`: sets the name of the secret used by the Operator to connect to the LINSTOR API. Defaults to `linstor-client-tls`. Must be trusted by `ca.crt` in the API Secret. Also used by the LINSTOR Controller to configure the included LINSTOR CLI. `csiControllerSecretName` sets the name of the secret used by the CSI Controller. Defaults to `linstor-csi-controller-tls`. Must be trusted by `ca.crt` in the API Secret. `csiNodeSecretName` sets the name of the secret used by the CSI Node. Defaults to `linstor-csi-node-tls`. Must be trusted by `ca.crt` in the API Secret. Optional, a reference to a can be provided to let the operator create the required secrets. This example creates a manually provisioned TLS secret and references it in the LinstorCluster configuration. It uses the same secret for all clients of the LINSTOR API. ```yaml apiVersion: v1 kind: Secret metadata: name: my-linstor-api-tls namespace: piraeus-datastore data: ca.crt: LS0tLS1CRUdJT... tls.crt: LS0tLS1CRUdJT... tls.key: LS0tLS1CRUdJT... apiVersion: v1 kind: Secret metadata: name: my-linstor-client-tls namespace: piraeus-datastore data: ca.crt: LS0tLS1CRUdJT... tls.crt: LS0tLS1CRUdJT... tls.key: LS0tLS1CRUdJT... apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: apiTLS: apiSecretName: my-linstor-api-tls clientSecretName: my-linstor-client-tls csiControllerSecretName: my-linstor-client-tls csiNodeSecretName: my-linstor-client-tls ``` This example sets up automatic creation of the LINSTOR API and LINSTOR Client TLS secrets using a cert-manager issuer named `piraeus-root`. ```yaml apiVersion: piraeus.io/v1 kind: LinstorCluster metadata: name: linstorcluster spec: apiTLS: certManager: kind: Issuer name: piraeus-root ``` Reports the actual state of the cluster. The Operator reports the current state of the Cluster through a set of conditions. Conditions are identified by their `type`. | `type` | Explanation | |--|-| | `Applied` | All Kubernetes resources controlled by the Operator are applied and up to date. | | `Available` | The LINSTOR Controller is deployed and responding to requests. | | `Configured` | The LINSTOR Controller is configured with the properties from `.spec.properties` |"
}
] |
{
"category": "Runtime",
"file_name": "linstorcluster.md",
"project_name": "Piraeus Datastore",
"subcategory": "Cloud Native Storage"
}
|
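Because the status conditions listed above are ordinary Kubernetes conditions, one convenient way to wait for a healthy deployment is sketched below; it assumes the default resource name `linstorcluster`, `kubectl` access, and that the conditions are exposed under `.status.conditions`, so treat it as a convenience sketch rather than part of the official workflow.

```sh
# Wait until the LINSTOR Controller reports the Available condition,
# then dump the full condition list for inspection.
kubectl wait linstorcluster/linstorcluster --for=condition=Available --timeout=5m
kubectl get linstorcluster linstorcluster -o jsonpath='{.status.conditions}'
```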
[
{
"data": "title: Using JuiceFS in Docker sidebar_position: 6 slug: /juicefsondocker description: Using JuiceFS in Docker in different ways, including volume mapping, volume plugin, and mounting in containers. You can use the JuiceFS file system in Docker by running the client directly in the container or using a volume plugin. If you have specific requirements for mount management, such as managing mount points through Docker to facilitate different application containers using different JuiceFS file systems, you can use a . Docker plugins are usually provided in the form of images. The contains the and clients. After installation, you can run the volume plugin to create JuiceFS volumes in Docker. Install the plugin using the following command and provide the necessary permissions for FUSE as prompted: ```shell docker plugin install juicedata/juicefs ``` You can use the following commands to manage the volume plugin: ```shell docker plugin disable juicedata/juicefs docker plugin upgrade juicedata/juicefs docker plugin enable juicedata/juicefs docker plugin rm juicedata/juicefs ``` Replace `<VOLUMENAME>`, `<METAURL>`, `<STORAGETYPE>`, `<BUCKETNAME>`, `<ACCESSKEY>`, and `<SECRETKEY>` in the following command with your own file system configuration: ```shell docker volume create -d juicedata/juicefs \\ -o name=<VOLUME_NAME> \\ -o metaurl=<META_URL> \\ -o storage=<STORAGE_TYPE> \\ -o bucket=<BUCKET_NAME> \\ -o access-key=<ACCESS_KEY> \\ -o secret-key=<SECRET_KEY> \\ jfsvolume ``` For pre-created file systems, you only need to specify the file system name and database address when creating the volume plugin, for example: ```shell docker volume create -d juicedata/juicefs \\ -o name=<VOLUME_NAME> \\ -o metaurl=<META_URL> \\ jfsvolume ``` If you need to pass additional environment variables when mounting the file system, such as in , you can append parameters similar to `-o env=FOO=bar,SPAM=egg` to the above command. ```shell docker run -it -v jfsvolume:/opt busybox ls /opt docker volume rm jfsvolume ``` Here is an example of using the JuiceFS volume plugin in `docker-compose`: ```yaml version: '3' services: busybox: image: busybox command: \"ls /jfs\" volumes: jfsvolume:/jfs volumes: jfsvolume: driver: juicedata/juicefs driver_opts: name: ${VOL_NAME} metaurl: ${META_URL} storage: ${STORAGE_TYPE} bucket: ${BUCKET} access-key: ${ACCESS_KEY} secret-key: ${SECRET_KEY} ``` Usage and management: ```shell docker-compose up docker-compose down --volumes ``` If it is not working properly, it is recommended to first , and then check the logs based on the problem. Collect JuiceFS client logs. The logs are located inside the Docker volume plugin container and need to be accessed by entering the container: ```shell ls /run/docker/plugins/runtime-root/plugins.moby runc --root /run/docker/plugins/runtime-root/plugins.moby list runc --root /run/docker/plugins/runtime-root/plugins.moby exec 452d2c0cf3fd45e73a93a2f2b00d03ed28dd2bc0c58669cca9d4039e8866f99f cat /var/log/juicefs.log ``` If the container does not exist (`ls` finds an empty directory) or the `juicefs.log` does not exist in the final log printing stage, it is likely that the mount itself failed. Continue to check the plugin's own logs to find the"
},
{
"data": "Collect plugin logs, using systemd as an example: ```shell journalctl -f -u docker | grep \"plugin=\" ``` If there is an error when the plugin calls `juicefs` or if the plugin itself reports an error, it will be reflected in the logs. Compared to the volume plugin, using the JuiceFS client directly in the container is more flexible. You can directly mount the JuiceFS file system in the container or access it through S3 Gateway or WebDAV. The JuiceFS client is a standalone binary program that provides versions for both AMD64 and ARM64 architectures. You can define the command to download and install the JuiceFS client in the Dockerfile, for example: ```Dockerfile FROM ubuntu:22.04 ... RUN curl -sSL https://d.juicefs.com/install | sh - ``` For more information, see . The JuiceFS officially maintained image is tagged to specify the desired version. The community edition tags include `latest` and `ce`, such as `ce-v1.1.2` and `ce-nightly`. The `latest` tag represents the latest community edition, and the `nightly` tag points to the latest development version. For details, see the on Docker Hub. Before you start, you need to prepare and . Create a file system through a temporary container, for example: ```sh docker run --rm \\ juicedata/mount:ce-v1.1.2 juicefs format \\ --storage s3 \\ --bucket https://xxx.your-s3-endpoint.com \\ --access-key=ACCESSKEY \\ --secret-key=SECRETKEY \\ rediss://user:[email protected]:6379/1 myjfs ``` Replace `--storage`, `--bucket`, `--access-key`, `--secret-key`, and the metadata engine URL with your own configuration. Create a container and mount the JuiceFS file system in the container, for example: ```sh docker run --privileged --name myjfs \\ juicedata/mount:ce-v1.1.2 juicefs mount \\ rediss://user:[email protected]:6379/1 /mnt ``` Replace the metadata engine URL with your own configuration. `/mnt` is the mount point and can be modified as needed. Since FUSE is used, `--privileged` permission is also required. Here is an example using Docker Compose. Replace the metadata engine URL and mount point with your own configuration. ```yaml version: \"3\" services: juicefs: image: juicedata/mount:ce-v1.1.2 container_name: myjfs volumes: ./mnt:/mnt:rw,rshared cap_add: SYS_ADMIN devices: /dev/fuse security_opt: apparmor:unconfined command: [\"juicefs\", \"mount\", \"rediss://user:[email protected]:6379/1\", \"/mnt\"] restart: unless-stopped ``` In the container, the JuiceFS file system is mounted to the `/mnt` directory, and the volumes section in the configuration file maps the `/mnt` in the container to the `./mnt` directory on the host, allowing direct access to the JuiceFS file system mounted in the container from the host. Here is an example of exposing JuiceFS for access through S3 Gateway. Replace `MINIOROOTUSER`, `MINIOROOTPASSWORD`, the metadata engine URL, and the address and port number to listen on with your own configuration. ```yaml version: \"3\" services: s3-gateway: image: juicedata/mount:ce-v1.1.2 container_name: juicefs-s3-gateway environment: MINIOROOTUSER=your-username MINIOROOTPASSWORD=your-password ports: \"9090:9090\" command: [\"juicefs\", \"gateway\", \"rediss://user:[email protected]:6379/1\", \"0.0.0.0:9090\"] restart: unless-stopped ``` Use port `9090` on the host to access the S3 Gateway console, and use the same address to read and write the JuiceFS file system through the S3 client or SDK."
}
] |
{
"category": "Runtime",
"file_name": "juicefs_on_docker.md",
"project_name": "JuiceFS",
"subcategory": "Cloud Native Storage"
}
|
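After installing the volume plugin and creating a volume as described above, a quick sanity check with plain Docker commands (nothing JuiceFS-specific is assumed here) confirms that the plugin is enabled and the volume was registered:

```sh
# Confirm the volume plugin is installed and enabled,
# then inspect the volume created in the earlier example.
docker plugin ls
docker volume inspect jfsvolume
```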
[
{
"data": "orphan: true nosearch: true myst: substitutions: reuse_key: \"This is included text.\" advancedreusekey: \"This is a substitution that includes a code block: ``` code block ```\" The documentation files use a mixture of and syntax. See the following sections for syntax help and conventions. ```{list-table} :header-rows: 1 - Input Description - `# Title` Page title and H1 heading - `## Heading` H2 heading - `### Heading` H3 heading - `#### Heading` H4 heading - ... Further headings ``` Adhere to the following conventions: Do not use consecutive headings without intervening text. Use sentence style for headings (capitalize only the first word). Do not skip levels (for example, always follow an H2 with an H3, not an H4). ```{list-table} :header-rows: 1 - Input Output - `` {guilabel}`UI element` `` {guilabel}`UI element` - `` `code` `` `code` - `` {command}`command` `` {command}`command` `Italic*` Italic `Bold*` Bold ``` Adhere to the following conventions: Use italics sparingly. Common uses for italics are titles and names (for example, when referring to a section title that you cannot link to, or when introducing the name for a concept). Use bold sparingly. A common use for bold is UI elements (\"Click OK\"). Avoid using bold for emphasis and rather rewrite the sentence to get your point across. Start and end a code block with three back ticks: ``` You can specify the code language after the back ticks to enforce a specific lexer, but in many cases, the default lexer works just fine. ```{list-table} :header-rows: 1 - Input Output - ```` ``` code: example: true ``` ```` ``` code: example: true ``` - ```` ```yaml code: example: true ``` ```` ```yaml code: example: true ``` ``` To include back ticks in a code block, increase the number of surrounding back ticks: ```{list-table} :header-rows: 1 - Input Output - ````` ```` ``` ```` ````` ```` ``` ```` ``` How to link depends on if you are linking to an external URL or to another page in the documentation. For external links, use only the URL, or Markdown syntax if you want to override the link text. ```{list-table} :header-rows: 1 - Input Output - `https://linuxcontainers.org/incus` - `` ``` To display a URL as text and prevent it from being linked, add a `<span></span>`: ```{list-table} :header-rows: 1 - Input Output - `https:/<span></span>/linuxcontainers.org/incus` {spellexception}`https:/<span></span>/linuxcontainers.org/incus` ``` For internal references, both Markdown and MyST syntax are supported. In most cases, you should use MyST syntax though, because it resolves the link text automatically and gives an indication of the link in GitHub rendering. To reference a documentation page, use MyST syntax to automatically extract the link text. When overriding the link text, use Markdown syntax. ```{list-table} :header-rows: 1 - Input Output Output on GitHub Status - `` {doc}`index` `` {doc}`index` {doc}<span></span>`index` Preferred. - `` - Do not use. - `` - Preferred when overriding the link text. - `` {doc}`Incus documentation <index>` `` {doc}`Incus documentation <index>` {doc}<span></span>`Incus documentation <index>` Alternative when overriding the link text. ``` Adhere to the following conventions: Override the link text only when it is"
},
{
"data": "If you can use the document title as link text, do so, because the text will then update automatically if the title changes. Never \"override\" the link text with the same text that would be generated automatically. (asectiontarget)= To reference a section within the documentation (on the same page or on another page), you can either add a target to it and reference that target, or you can use an automatically generated anchor in combination with the file name. Adhere to the following conventions: Add targets for sections that are central and a \"typical\" place to link to, so you expect they will be linked frequently. For \"one-off\" links, you can use the automatically generated anchors. Override the link text only when it is necessary. If you can use the section title as link text, do so, because the text will then update automatically if the title changes. Never \"override\" the link text with the same text that would be generated automatically. You can add targets at any place in the documentation. However, if there is no heading or title for the targeted element, you must specify a link text. (arandomtarget)= ```{list-table} :header-rows: 1 - Input Output Output on GitHub Description - `(target_ID)=` - \\(target_ID\\)= Adds the target ``target_ID``. - `` {ref}`asectiontarget` `` {ref}`asectiontarget` \\{ref\\}`asectiontarget` References a target that has a title. - `` {ref}`link text <arandomtarget>` `` {ref}`link text <arandomtarget>` \\{ref\\}`link text <arandomtarget>` References a target and specifies a title. - ```` - (link is broken) Use Markdown syntax if you need markup on the link text. ``` You must use Markdown syntax to use automatically generated anchors. You can leave out the file name when linking within the same file. ```{list-table} :header-rows: 1 - Input Output Output on GitHub Description - `` - Do not use. - `` - Preferred when overriding the link text. ``` Every documentation page must be included as a subpage to another page in the navigation. This is achieved with the directive in the parent page: <!-- wokeignore:rule=master --> ```` ```{toctree} :hidden: subpage1 subpage2 ``` ```` If a page should not be included in the navigation, you can suppress the resulting build warning by putting the following instruction at the top of the file: ``` orphan: true ``` Use orphan pages sparingly and only if there is a clear reason for it. ```{list-table} :header-rows: 1 - Input Output - ``` Item 1 Item 2 Item 3 ``` - Item 1 Item 2 Item 3 - ``` Step 1 Step 2 Step 3 ``` Step 1 Step 2 Step 3 - ``` Step 1 Item 1 Subitem Item 2 Step 2 Substep 1 Substep 2 ``` Step 1 Item 1 Subitem Item 2 Step 2 Substep 1 Substep 2 ``` Adhere to the following conventions: In numbered lists, use ``1.`` for all items to generate the step numbers automatically. Use `-` for unordered lists. When using nested lists, you can use `*` for the nested level. ```{list-table} :header-rows: 1 - Input Output - ``` Term 1 : Definition Term 2 : Definition ``` Term 1 : Definition Term 2 : Definition ``` You can use standard Markdown tables. However, using the rST syntax is usually much"
},
{
"data": "Both markups result in the following output: ```{list-table} :header-rows: 1 - Header 1 Header 2 - Cell 1 Second paragraph cell 1 Cell 2 - Cell 3 Cell 4 ``` ``` | Header 1 | Header 2 | ||-| | Cell 1<br><br>2nd paragraph cell 1 | Cell 2 | | Cell 3 | Cell 4 | ``` ```` ```{list-table} :header-rows: 1 - Header 1 Header 2 - Cell 1 2nd paragraph cell 1 Cell 2 - Cell 3 Cell 4 ``` ```` ```{list-table} :header-rows: 1 - Input Output - ```` ```{note} A note. ``` ```` ```{note} A note. ``` - ```` ```{tip} A tip. ``` ```` ```{tip} A tip. ``` - ```` ```{important} Important information ``` ```` ```{important} Important information. ``` - ```` ```{caution} This might damage your hardware! ``` ```` ```{caution} This might damage your hardware! ``` ``` Adhere to the following conventions: Use notes sparingly. Only use the following note types: `note`, `tip`, `important`, `caution` Only use a caution if there is a clear hazard of hardware damage or data loss. ```{list-table} :header-rows: 1 - Input Output - ``` ``` - ```` ```{figure} https://linuxcontainers.org/incus/docs/main/_static/tag.png :width: 100px :alt: Alt text Figure caption ``` ```` ```{figure} https://linuxcontainers.org/incus/docs/main/_static/tag.png :width: 100px :alt: Alt text Figure caption ``` ``` Adhere to the following conventions: For pictures in the `doc` directory, start the path with `/` (for example, `/images/image.png`). Use PNG format for screenshots and SVG format for graphics. A big advantage of MyST in comparison to plain Markdown is that it allows to reuse content. To reuse sentences or paragraphs without too much markup and special formatting, use substitutions. Substitutions can be defined in the following locations: In the `substitutions.yaml` file. Substitutions defined in this file are available in all documentation pages. At the top of a single file in the following format: ```` myst: substitutions: reuse_key: \"This is included text.\" advancedreusekey: \"This is a substitution that includes a code block: ``` code block ```\" ```` You can combine both options by defining a default substitution in `reuse/substitutions.py` and overriding it at the top of a file. ```{list-table} :header-rows: 1 - Input Output - `{{reuse_key}}` {{reuse_key}} - `{{advancedreusekey}}` {{advancedreusekey}} ``` Adhere to the following convention: Substitutions do not work on GitHub. Therefore, use key names that indicate the included text (for example, `notenotsupported` instead of `reuse_note`). To reuse longer sections or text with more advanced markup, you can put the content in a separate file and include the file or parts of the file in several locations. You cannot put any targets into the content that is being reused (because references to this target would be ambiguous then). You can, however, put a target right before including the file. By combining file inclusion and substitutions, you can even replace parts of the included text. `````{list-table} :header-rows: 1 - Input Output - ```` % Include parts of the content from file ```{include} ../README.md :start-after: <!-- Include start Incus intro --> :end-before: <!-- Include end Incus intro --> ``` ```` % Include parts of the content from file ```{include} ../README.md :start-after: <!-- Include start Incus intro --> :end-before: <!-- Include end Incus intro --> ``` ````` Adhere to the following convention: File inclusion does not work on GitHub. Therefore, always add a comment linking to the included"
},
{
"data": "To select parts of the text, add HTML comments for the start and end points and use `:start-after:` and `:end-before:`, if possible. You can combine `:start-after:` and `:end-before:` with `:start-line:` and `:end-line:` if required. Using only `:start-line:` and `:end-line:` is error-prone though. ``````{list-table} :header-rows: 1 - Input Output - ````` ````{tabs} ```{group-tab} Tab 1 Content Tab 1 ``` ```{group-tab} Tab 2 Content Tab 2 ``` ```` ````` ````{tabs} ```{group-tab} Tab 1 Content Tab 1 ``` ```{group-tab} Tab 2 Content Tab 2 ``` ```` `````` There is no support for details sections in rST, but you can insert HTML to create them. ```{list-table} :header-rows: 1 - Input Output - ``` <details> <summary>Details</summary> Content </details> ``` <details> <summary>Details</summary> Content </details> ``` You can define glossary terms in any file. Ideally, all terms should be collected in one glossary file though, and they can then be referenced from any file. `````{list-table} :header-rows: 1 - Input Output - ```` ```{glossary} example term Definition of the example term. ``` ```` ```{glossary} example term Definition of the example term. ``` - ``{term}`example term` `` {term}`example term` ````` `````{list-table} :header-rows: 1 - Input Output - ```` ```{versionadded} X.Y ``` ```` ```{versionadded} X.Y ``` - `` {abbr}`API (Application Programming Interface)` `` {abbr}`API (Application Programming Interface)` ````` The documentation uses some custom extensions. You can add links to related websites to the sidebar by adding the following field at the top of the page: relatedlinks: https://github.com/canonical/lxd-sphinx-extensions, To override the title, use Markdown syntax. Note that spaces are ignored; if you need spaces in the title, replace them with ` `, and include the value in quotes if Sphinx complains about the metadata value because it starts with `[`. To add a link to a Discourse topic, add the following field at the top of the page (where `12345` is the ID of the Discourse topic): discourse: 12345 To add a link to a YouTube video, use the following directive: `````{list-table} :header-rows: 1 - Input Output - ```` ```{youtube} https://www.youtube.com/watch?v=iMLiK1fX4I0 :title: Demo ``` ```` ```{youtube} https://www.youtube.com/watch?v=iMLiK1fX4I0 :title: Demo ``` ````` The video title is extracted automatically and displayed when hovering over the link. To override the title, add the `:title:` option. If you need to use a word that does not comply to the spelling conventions, but is correct in a certain context, you can exempt it from the spelling checker by surrounding it with `{spellexception}`. ```{list-table} :header-rows: 1 - Input Output - `` {spellexception}`PurposelyWrong` `` {spellexception}`PurposelyWrong` ``` To show a terminal view with commands and output, use the following directive: `````{list-table} :header-rows: 1 - Input Output - ```` ```{terminal} :input: command number one :user: root :host: vm output line one output line two :input: another command more output ``` ```` ```{terminal} :input: command number one :user: root :host: vm output line one output line two :input: another command more output ``` ````` Input is specified as the `:input:` option (or prefixed with `:input:` as part of the main content of the directive). Output is the main content of the directive. To override the prompt (`user@host:~$` by default), specify the `:user:` and/or `:host:` options. 
To make the terminal scroll horizontally instead of wrapping long lines, add `:scroll:`."
}
] |
{
"category": "Runtime",
"file_name": "doc-cheat-sheet.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "All notable changes to this project will be documented in this file. Fix to allow running x86 apps in Sysbox containers (issue #350). Fix sysbox-fs nsenter mount leak. Fix sysbox emulation of /proc and /sys in containers for kernels 6.5+. Add hardening against CVE-2024-21626. Fix ordering of mounts under /run for containers with systemd (issue #767). Fix to ensure \"docker --net=host\" works inside Sysbox containers (issue #712). Fix bug when mounting host kernel headers into containers (issue #727). Fix emulation of /sys/devices/virtual/ inside containers (issue #719). Don't intercept xattr syscalls by default (improves performance). Add feature to skip shiftfs and idmapping on specific container files/dirs (via `SYSBOX_SKIP_UID_SHIFT` container env var). Fix bug with fsuid-map-fail-on-error config option. Fix bug with pivot-root inside Sysbox containers (ensures docker:24-dind image can run inside Sysbox containers). sysbox-deploy-k8s: add support for Kubernetes v1.27 and v1.28. sysbox-deploy-k8s: automatically detect installation on GKE clusters and set up configs accordingly. sysbox-deploy-k8s: support installation on Debian-based K8s nodes. sysbox-deploy-k8s: don't install shiftfs on K8s nodes with kernel >= 5.19. sysbox-deploy-k8s: deprecated support for K8s v1.24 and v1.25 (EOL'd). Fix bug in Sysbox's checking of host support for idmapping and shiftfs. Fix storage leak in /var/lib/sysbox when using Sysbox on K8s clusters. Fix bug in Sysbox's handling of \"docker run -w\" flag. Change disable-inner-image-preload flag to allow running (but not committing) sysbox containers with preloaded inner images. Set disable-inner-image-preload flag in Sysbox K8s deployments to improve performance when stopping pods. Added support for ID-mapped overlayfs lower layers; eliminates need for shiftfs and Sysbox rootfs chown; requires kernel 5.19+. Have Sysbox perform shiftfs and ID-mapping functional checks during init (issue #596). Fixed rootfs cloning to prevent inode leakage (for hosts with kernel < 5.19 and no shiftfs) (issue #570). Added support for Kubernetes v1.24 to v1.26. Added --disable-inner-image-preload flag to sysbox-mgr (speeds up Sysbox container startup). Added --syscont-mode flag to sysbox-mgr; allows Sysbox to work in system container mode (default) or regular container mode; the latter is meant for running microservices with stronger isolation. Added --disable-shiftfs-on-fuse flag to sysbox-mgr; prevents Sysbox from mounting shiftfs on top of FUSE-backed filesystems (some of which don't work with shiftfs). Added a few optimizations to expedite I/O operations in procfs/sysfs emulated resources. Enhanced life-cycle management of Sysbox daemons in Systemd-free scenarios. Prevented concurrent execution of Sysbox daemons (multi-instance problem). Improved the handling of ungraceful shutdown scenarios. Eliminated Sysbox dependencies on configfs kernel module presence. Fixed emulation of /sys/module/nf_conntrack/parameters inside containers. Added emulation of /sys/devices/virtual/dmi branch inside containers (for hosts where this branch or its inner resources are not present). Hide /sys/kernel/security inside containers (issue #662) Don't assign more capabilities to the container than those given to Sysbox itself. Don't fail in kernel distros without /lib/modules/<kernel-release>. Increased the pods-per-node limit from 16 to 4K (Sysbox-CE now matches Sysbox-EE in this regard). Extended kubelet config-detection process to multiple drop-in files in sysbox-deploy-k8s daemon-set. 
Incorporated taints during sysbox-deploy-k8s installation process. Fixed issue preventing sysbox-deploy-k8s installation in rke2 environments (issue #614). Fixed issue preventing proper sysbox-deploy-k8s installation in Azure (issue #612). Fixed"
},
{
"data": "#544 preventing containers initialization within sysbox containers when running latest oci-runc releases (1.1.0-rc.1+). Added support to allow CIFS mounts within Sysbox containers (Sysbox-EE only). Fixed issue to allow shiftfs mounts over files that are themselves bind-mounts. Added support for Linux ID-mapped mounts (shiftfs alternative in kernels >= 5.12). Added support for ARM64 hosts. Added support for running buildx/buildkit inside Sysbox containers. Added support for running Rancher RKE2 and Mirantis K0s inside Sysbox containers. Added configs to disable trapping chown and xattr syscalls (improves performance but may reduce functionality). Added config to strictly honor container capabilities from higher-level container manager. Added support for per-container configs via `SYSBOX_` env vars. Improved performance of Sysbox's syscall interception code. Improved the way Sysbox releases the seccomp-fd handles for intercept syscalls (kernels >= 5.8). Improved Sysbox's cross-compilation support (artifacts can now be generated from/to either AMD64 or ARM64 hosts). Update to golang 1.16. Replaced the per-distro .deb installation packages with a single deb bundle package. Allow alternative Docker data-root inside a Sysbox container (if Docker is pre-installed in the Sysbox container image). Fixed segfault when building Docker image inside Sysbox container (issue #484). Fixed segfault when running python pip install inside nested sysbox container (issue #485). Fixed issue with running KinD inside a Sysbox container (issue #415). Fixed problem with shiftfs mounts on Kubernetes persistent volumes (issue #431). None. Added important optimization to expedite the container creation cycle. Enhanced uid-shifting logic to perform shifting operations of Sysbox's special dirs on a need basis. Added support for Kinvolk's Flatcar Linux distribution (Sysbox-EE only). Added basic building-blocks to allow Sysbox support on ARM platforms. Fixed issue preventing Sysbox folders from being eliminated from HDD when Sysbox is shutdown. Enable sys container processes to set 'trusted.overlay.opaque' xattr on files (issue #254). Fixed bug resulting in the failure of \"mount\" operation within a sys container. Made various enhancements to Sysbox's kubernetes installer to simplify its operation. Extend Sysbox's kubernetes installer to support Rancher's RKE k8s distribution. Added support to create secure Kubernetes PODs with Sysbox (sysbox-pods). Added support for Cgroups-v2 systems. Added support to allow K3s execution within Sysbox containers. Extended Sysbox support to Fedora-33 and Fedora-34 releases. Extended Sysbox support to Flatcar Linux distribution. Modified Sysbox binaries' installation path (\"/usr/local/sbin\" -> \"/usr/bin\"). Enhanced generation and handling of logging output by relying on systemd (journald) subsystem. Multiple enhancements in /proc & /sys file-system's emulation logic. Extended installer to allow it to deploy Sysbox in non-strictly-supported distros / releases. Improved security of shiftfs mounts. Fixed issue impacting sysbox-fs stability in scaling scenarios (issue #266). Fixed issue preventing sys-container initialization due a recent change in oci-runc (issue #291). Fixed issue with \"--mountpoint\" cli knob being ignored (sysbox issue #310). Fixed issue causing sysbox-fs handlers to stall upon access to a procfs node (issue #306). Fixed issue preventing write access to 'domainname' procfs node (issue #287). 
Fixed issue preventing systemd-based containers from being able to initialize (issue #273). Made changes to allow Docker network sharing between"
},
{
"data": "Ensure that Sysbox mounts in read-only containers are mounted as read only. Deprecated EOL'd Fedora-31 and Fedora-32 releases. Secured system container initial mounts (mount/remount/unmounts on these from within the container are now restricted). See for details. Improved Sysbox systemd service unit files (dependencies, open-file limits). Improved logging by sysbox-mgr and sysbox-fs (json logging, more succint logs). Added support for systemd-managed cgroups v1 on the host (cgroups v2 still not supported). Added support for read-only Docker containers. Synced-up sysbox-runc to include the latest changes from the OCI runc. Added support for Debian distribution (Buster and Bullseye). Added ground-work to support Sysbox on RedHat, Fedora, and CentOS (next step is creating a package manager for these). Added config option to configure the Sysbox work directory (defaults to /var/lib/sysbox). Added support and required automation for Sysbox-in-Docker deployments. Fixed sporadic session stalling issue during syscall interception handling. Fixed sysbox-mgr file descriptor leak (sysbox issue #195). Fixed problem with \"docker --restart\" on Sysbox containers (sysbox issue #184). Fixed race condition in sysbox-fs procfs & sysfs emulation. Fixed problem preventing kernel-headers from being properly imported within sys containers. Fixed inappropriate handling of mount instructions in chroot jail environments. None. Created debian packages for first community-edition release. Fixed package installer bug preventing 'shiftfs' feature from being properly utilized. Enhanced package installer to prevent network overlaps between inner and outer containers. Deprecated support of Ubuntu's EOL release: Eoan (19.10). Added initial Kubernetes-in-Docker support to enable secure, flexible and portable K8s clusters. Added support for running privileged-containers within secure system containers. Added support for containerd to run within system containers. Made multiple performance improvements to expedite container initialization and i/o operations. Added support for Ubuntu-Eoan (19.10) and Ubuntu-Focal (20.04). Extended support for Ubuntu-Cloud releases (Bionic, Eoan, Focal). Enhanced Sysbox documentation. Deprecated support of Ubuntu's EOL releases: Ubuntu-Disco (19.04) and Ubuntu-Cosmic (18.10). Created Sysbox Quick Start Guide document (with several examples on how to use system containers). Added support for running Systemd in a system container. Added support for the Ubuntu shiftfs filesytem (replaces the Nestybox shiftfs). Using `docker build` to create a system container image that includes inner container images. Using `docker commit` to create a system container image that includes inner container images. Added support for mounts over a system container's `/var/lib/docker` (for persistency of inner container images). Made multiple improvements to the Sysbox User's Guide and Design Guide docs. Rebranded 'sysboxd' to 'sysbox'. Deprecated Nestybox shiftfs module. Extend installer support to latest Ubuntu kernel (5.0.0-27). Initial public release. Added external documentation: README, user-guide, design-guide, etc. Extend support to Ubuntu-Bionic (+5.x kernel) with userns-remap disabled. Added consistent versioning to all sysboxd components. Increased list of kernels supported by nbox-shiftfs module (refer to nbox-shiftfs module documentation). Add changelog info to the debian package installer. Internal release (non-public). Supports launching system containers with Docker. 
Supports running Docker inside a system container. Supports exclusive uid(gid) mappings per system container. Supports partially virtualized procfs. Supports docker with or without userns-remap. Supports Ubuntu Disco (with userns-remap disabled). Supports Ubuntu Disco, Cosmic, and Bionic (with userns-remap enabled). Includes the Nestybox shiftfs kernel module for uid(gid) shifting."
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG.md",
"project_name": "Sysbox",
"subcategory": "Container Runtime"
}
|
[
{
"data": "title: OpenEBS link: https://github.com/openebs/velero-plugin objectStorage: false volumesnapshotter: true localStorage: true To do backup/restore of OpenEBS CStor volumes through Velero utility, you need to install and configure OpenEBS velero-plugin."
}
] |
{
"category": "Runtime",
"file_name": "05-openebs.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "(cluster-config-storage)= All members of a cluster must have identical storage pools. The only configuration keys that may differ between pools on different members are , , , and . See {ref}`clustering-member-config` for more information. Incus creates a default `local` storage pool for each cluster member during initialization. Creating additional storage pools is a two-step process: Define and configure the new storage pool across all cluster members. For example, for a cluster that has three members: incus storage create --target server1 data zfs source=/dev/vdb1 incus storage create --target server2 data zfs source=/dev/vdc1 incus storage create --target server3 data zfs source=/dev/vdb1 size=10GiB ```{note} You can pass only the member-specific configuration keys `source`, `size`, `zfs.poolname`, `lvm.thinpoolname` and `lvm.vg_name`. Passing other configuration keys results in an error. ``` These commands define the storage pool, but they don't create it. If you run , you can see that the pool is marked as \"pending\". Run the following command to instantiate the storage pool on all cluster members: incus storage create data zfs ```{note} You can add configuration keys that are not member-specific to this command. ``` If you missed a cluster member when defining the storage pool, or if a cluster member is down, you get an error. Also see {ref}`storage-pools-cluster`. Running shows the cluster-wide configuration of the storage pool. To view the member-specific configuration, use the `--target` flag. For example: incus storage show data --target server2 For most storage drivers (all except for Ceph-based storage drivers), storage volumes are not replicated across the cluster and exist only on the member for which they were created. Run to see on which member a certain volume is located. When creating a storage volume, use the `--target` flag to create a storage volume on a specific cluster member. Without the flag, the volume is created on the cluster member on which you run the command. For example, to create a volume on the current cluster member `server1`: incus storage volume create local vol1 To create a volume with the same name on another cluster member: incus storage volume create local vol1 --target server2 Different volumes can have the same name as long as they live on different cluster members. Typical examples for this are image volumes. You can manage storage volumes in a cluster in the same way as you do in non-clustered deployments, except that you must pass the `--target` flag to your commands if more than one cluster member has a volume with the given name. For example, to show information about the storage volumes: incus storage volume show local vol1 --target server1 incus storage volume show local vol1 --target server2"
}
] |
{
"category": "Runtime",
"file_name": "cluster_config_storage.md",
"project_name": "lxd",
"subcategory": "Container Runtime"
}
|
[
{
"data": "name: Bug report about: Create a report to help us improve title: '' labels: community, triage assignees: '' If this case is urgent, please subscribe to so that our 24/7 support team may help you faster. <! Provide a general summary of the issue in the Title above --> <! If you're describing a bug, tell us what should happen --> <! If you're suggesting a change/improvement, tell us how it should work --> <! If describing a bug, tell us what happens instead of the expected behavior --> <! If suggesting a change/improvement, explain the difference from current behavior --> <! Not obligatory, but suggest a fix/reason for the bug, --> <! or ideas how to implement the addition or change --> <! Provide a link to a live example, or an unambiguous set of steps to --> <! reproduce this bug. Include code to reproduce, if relevant --> <! and make sure you have followed https://github.com/minio/minio/tree/release/docs/debugging to capture relevant logs --> <! How has this issue affected you? What are you trying to accomplish? --> <! Providing context helps us come up with a solution that is most useful in the real world --> <!-- Is this issue a regression? (Yes / No) --> <!-- If Yes, optionally please include minio version or commit id or PR# that caused this regression, if you have these details. --> <! Include as many relevant details about the environment you experienced the bug in --> Version used (`minio --version`): Server setup and configuration: Operating System and version (`uname -a`):"
}
] |
{
"category": "Runtime",
"file_name": "bug_report.md",
"project_name": "MinIO",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "title: \"ark delete schedule\" layout: docs Delete a schedule Delete a schedule ``` ark delete schedule NAME [flags] ``` ``` -h, --help help for schedule ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Delete ark resources"
}
] |
{
"category": "Runtime",
"file_name": "ark_delete_schedule.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Applications like Kafka will have a deployment with multiple running instances. Each service instance will create a new claim and is expected to be located in a different zone. Since the application has its own redundant instances, there is no requirement for redundancy at the data layer. A storage class is created that will provision storage from replica 1 Ceph pools that are located in each of the separate zones. Add the required flags to the script: `create-external-cluster-resources.py`: `--topology-pools`: (optional) Comma-separated list of topology-constrained rbd pools `--topology-failure-domain-label`: (optional) K8s cluster failure domain label (example: zone, rack, or host) for the topology-pools that match the ceph domain `--topology-failure-domain-values`: (optional) Comma-separated list of the k8s cluster failure domain values corresponding to each of the pools in the `topology-pools` list The import script will then create a new storage class named `ceph-rbd-topology`. Determine the names of the zones (or other failure domains) in the Ceph CRUSH map where each of the pools will have corresponding CRUSH rules. Create a zone-specific CRUSH rule for each of the pools. For example, this is a CRUSH rule for `zone-a`: ```console $ ceph osd crush rule create-replicated <rule-name> <root> <failure-domain> <class> { \"rule_id\": 5, \"rulename\": \"rulehost-zone-a-hdd\", \"type\": 1, \"steps\": [ { \"op\": \"take\", \"item\": -10, \"item_name\": \"zone-a~hdd\" }, { \"op\": \"choose_firstn\", \"num\": 0, \"type\": \"osd\" }, { \"op\": \"emit\" } ] } ``` Create replica-1 pools based on each of the CRUSH rules from the previous step. Each pool must be created with a CRUSH rule to limit the pool to OSDs in a specific zone. !!! note Disable the ceph warning for replica-1 pools: `ceph config set global monallowpoolsizeone true` Determine the zones in the K8s cluster that correspond to each of the pools in the Ceph pool. The K8s nodes require labels as defined with the . Some environments already have nodes labeled in zones. Set the topology labels on the nodes if not already present. Set the flags of the external cluster configuration script based on the pools and failure domains. --topology-pools=pool-a,pool-b,pool-c --topology-failure-domain-label=zone --topology-failure-domain-values=zone-a,zone-b,zone-c Then run the python script to generate the settings which will be imported to the Rook cluster: ```console python3 create-external-cluster-resources.py --rbd-data-pool-name replicapool --topology-pools pool-a,pool-b,pool-c --topology-failure-domain-label zone --topology-failure-domain-values zone-a,zone-b,zone-c ``` Output: ```console export ROOKEXTERNALFSID=8f01d842-d4b2-11ee-b43c-0050568fb522 .... .... .... export TOPOLOGY_POOLS=pool-a,pool-b,pool-c export TOPOLOGYFAILUREDOMAIN_LABEL=zone export TOPOLOGYFAILUREDOMAIN_VALUES=zone-a,zone-b,zone-c ``` Check the external cluster is created and connected as per the installation steps. Review the new storage class: ```console $ kubectl get sc ceph-rbd-topology -o yaml allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: creationTimestamp: \"2024-03-07T12:10:19Z\" name: ceph-rbd-topology resourceVersion: \"82502\" uid: 68448a14-3a78-42c5-ac29-261b6c3404af parameters: ... ... 
topologyConstrainedPools: | [ {\"poolName\":\"pool-a\", \"domainSegments\":[ {\"domainLabel\":\"zone\",\"value\":\"zone-a\"}]}, {\"poolName\":\"pool-b\", \"domainSegments\":[ {\"domainLabel\":\"zone\",\"value\":\"zone-b\"}]}, {\"poolName\":\"pool-c\", \"domainSegments\":[ {\"domainLabel\":\"zone\",\"value\":\"zone-c\"}]}, ] provisioner: rook-ceph.rbd.csi.ceph.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer ``` Set two values in the : `CSIENABLETOPOLOGY: \"true\"`: Enable the feature `CSITOPOLOGYDOMAIN_LABELS: \"topology.kubernetes.io/zone\"`: Set the topology domain labels that the CSI driver will analyze on the nodes during scheduling. The topology-based storage class is ready to be consumed! Create a PVC from the `ceph-rbd-topology` storage class above, and watch the OSD usage to see how the data is spread only among the topology-based CRUSH buckets."
}
] |
{
"category": "Runtime",
"file_name": "topology-for-external-mode.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
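To make the structure of the `topologyConstrainedPools` storage-class parameter shown in the Rook excerpt above a bit more concrete, here is a hedged Go sketch that rebuilds that JSON from the same pool and failure-domain lists passed to `create-external-cluster-resources.py`. It is not part of the Rook scripts; the struct and variable names are illustrative assumptions.

```go
// Sketch: build the topologyConstrainedPools JSON shown in the storage class above
// from the --topology-pools / --topology-failure-domain-* inputs. Not part of Rook.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type domainSegment struct {
	DomainLabel string `json:"domainLabel"`
	Value       string `json:"value"`
}

type constrainedPool struct {
	PoolName       string          `json:"poolName"`
	DomainSegments []domainSegment `json:"domainSegments"`
}

func main() {
	// Same example inputs as in the document above (assumed, not authoritative).
	pools := []string{"pool-a", "pool-b", "pool-c"}
	zones := []string{"zone-a", "zone-b", "zone-c"}
	label := "zone"

	var out []constrainedPool
	for i, p := range pools {
		out = append(out, constrainedPool{
			PoolName:       p,
			DomainSegments: []domainSegment{{DomainLabel: label, Value: zones[i]}},
		})
	}

	b, err := json.MarshalIndent(out, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// This output matches the topologyConstrainedPools value in the storage class.
	fmt.Println(string(b))
}
```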
[
{
"data": "Longhorn will take advantage of SPDK to launch the second version engine with higher performance. https://github.com/longhorn/longhorn/issues/5406 https://github.com/longhorn/longhorn/issues/5282 https://github.com/longhorn/longhorn/issues/5751 Have a set of APIs that talks with spdk_tgt to operate SPDK components. Launch a control panel that manage and operate SPDK engines and replica. The SPDK engine architecture is different from the legacy engine: Unlike the legacy engine, the data flow will be taken over by SPDK. The new engine or replica won't directly touch the data handling. The new engine or replica is actually one or a set of SPDK components handled by spdk_tgt. Since the main task is to manage SPDK components and abstract them as Longhorn engines or replicas, we can use a single service rather than separate processes to launch and manage engine or replicas. As SPDK handles the disks by itself, the disk management logic should be moved to SPDK engine service as well. The abstraction of SPDK engine and replica: A data disk will be abstracted as an aio bdev + a lvstore. Each snapshot or volume head file is a logical volume (lvol) inside a lvstore. A remote replica is finally exposed as a NVMe-oF subsystem, in which the corresponding SPDK lvol stand behind. While a local replica is just a lvol. An engine backend is actually a SPDK RAID1 bdev, which may consist of multiple attached replica NVMe-oF subsystems and local lvol. An engine frontend is typically a NVMe-oF initiator plus a NVMe-oF subsystem of the RAID bdev. Do spdk_tgt initializations during instance manager startup. Before the enhancement, users need to launch a RAID1 bdev then expose it as a NVMe-oF initiator as the Longhorn SPDK engine manually by following . Besides, rebuilding replicas would be pretty complicated. After the enhancement, users can directly launch and control Longhorn SPDK engine via the gRPC SPDK engine service. And the rebuilding can be triggered and handled automatically. The new gRPC SPDK engine service: Replica: | API | Caller | Input | Output | Comments | | | | | | | | Create | Instance manager proxy | name, lvsName, lvsUUID string, specSize uint64, exposeRequired bool | err error | Create a new replica or start an existing one | | Delete | Instance manager proxy | name string, cleanupRequired bool | err error | Remove or stop an existing replica | | List | Instance manager proxy | | replicas map\\[string\\]Replica, err error | Get all abstracted replica info from the cache of the SPDK engine service | | Get | Instance manager proxy | | replica Replica, err error | Get the abstracted replica info from the cache of the SPDK engine service | | Watch | Instance manager proxy | | ReplicaStream, err error | Establish a streaming for the replica update notification | | SnapshotCreate | Instance manager proxy | name, snapshotName string | err error | | | SnapshotDelete | Instance manager proxy | name, snapshotName string | err error | | | Rebuilding APIs | The engine inside one gRPC SPDK engine service | | | This set of APIs is responsible for starting and finishing the rebuilding for source replica or destination"
},
{
"data": "And it help start data transmission from src to dst | Engine: | API | Caller | Input | Output | Comments | | | | | | | | Create | Instance manager proxy | name, lvsName, lvsUUID string, specSize uint64, exposeRequired bool | err error | Start a new engine and connect it with corresponding replicas | | Delete | Instance manager proxy | name string, cleanupRequired bool | err error | Stop an existing engine | | List | Instance manager proxy | | engines map\\[string\\]Engine, err error | Get the abstracted engine info from the cache of the SPDK engine service | | Get | Instance manager proxy | | engine Engine, err error | Get the abstracted engine info from the cache of the SPDK engine service | | Watch | Instance manager proxy | | EngineStream, err error | Establish a streaming for the engine update notification | | SnapshotCreate | Instance manager proxy | name, snapshotName string | err error | | | SnapshotDelete | Instance manager proxy | name, snapshotName string | err error | | | ReplicaAdd | Instance manager proxy | engineName, replicaName, replicaAddress string | err error | Find a healthy RW replica as source replica then rebuild the destination replica. To rebuild a replica, the engine will call rebuilding start and finish APIs for both replicas and launch data transmission | | ReplicaDelete | Instance manager proxy | engineName, replicaName, replicaAddress string | err error | Remove a replica from the engine | Disk: | API | Caller | Input | Output | Comments | | | | | | | | Create | Instance manager proxy | diskName, diskUUID, diskPath string, blockSize int64 | disk Disk, err error | Use the specified block device as blob store | | Delete | Instance manager proxy | diskName, diskUUID string | err error | Remove a store from spdk_tgt | | Get | Instance manager proxy | diskName string | disk Disk, err error | Detect the store status and get the abstracted disk info from spdk_tgt | The SPDK Target is exposed as a . Instead of using the existing sample python script , we will have a helper repo similar to to talk with spdk_tgt over Unix domain socket `/var/tmp/spdk.sock`.. The SPDK target config and launching. Then live upgrade, and shutdown if necessary/possible. The JSON RPC client that directly talks with spdk_tgt. The exposed Golang SPDK component operating APIs. e.g., lvstore, lvol, RAID creation, deletion, and list. The NVMe initiator handling Golang APIs (for the engine frontend). Launch a gRPC server as the control panel. Have a goroutine that periodically check and update engine/replica caches. Implement the engine/replica/disk APIs listed above. Notify upper layers about the engine/replica update via streaming. Start spdk_tgt on demand. Update the proxy service so that it forwards SPDK engine/replica requests to the gRPC service. Starting and stopping related tests: If Longhorn can start or stop one engine + multiple replicas correctly. Basic IO tests: If Data can be r/w correctly. And if data still exists after restart. Basic snapshot tests: If snapshots can be created and keeps identical among all replicas. If a snapshot can be deleted from all replicas. If snapshot revert work. SPDK volume creation/deletion/attachment/detachment tests. Basic IO tests: If Data can be r/w correctly when volume is degraded or healthy. And if data still exists after restart. Basic offline rebuilding tests. This is an experimental engine. We do not need to consider the upgrade or compatibility issues now."
}
] |
{
"category": "Runtime",
"file_name": "20230619-spdk-engine.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
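The Longhorn SPDK engine design above relies on a JSON-RPC client that talks to `spdk_tgt` over the Unix domain socket `/var/tmp/spdk.sock`. As a rough illustration (not the helper repo the proposal refers to), the Go sketch below issues a single JSON-RPC 2.0 call; `bdev_get_bdevs` is a standard SPDK RPC method, while the struct names and error handling here are illustrative assumptions.

```go
// Sketch: minimal JSON-RPC 2.0 call to spdk_tgt over its Unix socket.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
)

// rpcRequest/rpcResponse model the JSON-RPC framing spdk_tgt expects.
type rpcRequest struct {
	Version string      `json:"jsonrpc"`
	Method  string      `json:"method"`
	ID      int         `json:"id"`
	Params  interface{} `json:"params,omitempty"`
}

type rpcResponse struct {
	Version string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Result  json.RawMessage `json:"result,omitempty"`
	Error   *struct {
		Code    int    `json:"code"`
		Message string `json:"message"`
	} `json:"error,omitempty"`
}

func main() {
	// Connect to the spdk_tgt RPC socket mentioned in the design above.
	conn, err := net.Dial("unix", "/var/tmp/spdk.sock")
	if err != nil {
		log.Fatalf("dial spdk_tgt: %v", err)
	}
	defer conn.Close()

	// bdev_get_bdevs lists all block devices (aio bdevs, lvols, RAID1 bdevs, ...)
	// currently known to spdk_tgt.
	req := rpcRequest{Version: "2.0", Method: "bdev_get_bdevs", ID: 1}
	if err := json.NewEncoder(conn).Encode(req); err != nil {
		log.Fatalf("send request: %v", err)
	}

	var resp rpcResponse
	if err := json.NewDecoder(conn).Decode(&resp); err != nil {
		log.Fatalf("decode response: %v", err)
	}
	if resp.Error != nil {
		log.Fatalf("rpc error %d: %s", resp.Error.Code, resp.Error.Message)
	}
	fmt.Printf("bdevs: %s\n", resp.Result)
}
```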
[
{
"data": "The is used by Rook to allow creation and customization of storage pools. The is used by Rook to allow creation of Ceph RADOS Namespaces. CephClient CRD is used by Rook to allow and updating clients. The is used by Rook to allow creation and customization of storage clusters through the custom resource definitions (CRDs). The implement an interface between a CSI-enabled Container Orchestrator (CO) and Ceph clusters. The is used by Rook to allow creation and customization of shared filesystems through the custom resource definitions (CRDs). The is used by Rook to allow creation and updating the Ceph fs-mirror daemon. CephFilesystemMirror CRD is used by Rook to allow of Ceph Filesystem SubVolumeGroups. CephNFS CRD is used by Rook to allow exporting NFS shares of a CephFilesystem or CephObjectStore through the CephNFS custom resource definition. For further information please refer to the example . CephObjectStore CRD is used by Rook to allow and customization of object stores. CephObjectStoreUser CRD is used by Rook to allow creation and customization of object store users. For more information and examples refer to this . CephObjectRealm CRD is used by Rook to allow creation of a realm in a Ceph Object Multisite configuration. For more information and examples refer to this . CephObjectZoneGroup CRD is used by Rook to allow creation of zone groups in a Ceph Object Multisite configuration. For more information and examples refer to this . CephObjectZone CRD is used by Rook to allow creation of zones in a ceph cluster for a Ceph Object Multisite configuration. For more information and examples refer to this . CephRBDMirror CRD is used by Rook to allow creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). For more information and examples refer to this . An is a Ceph configuration that is managed outside of the local K8s cluster. A is where Rook configures Ceph to store data directly on the host devices. The is a tool to help troubleshoot your Rook cluster. An Object Bucket Claim (OBC) is custom resource which requests a bucket (new or existing) from a Ceph object store. For further reference please refer to . An Object Bucket (OB) is a custom resource automatically generated when a bucket is provisioned. It is a global resource, typically not visible to non-admin users, and contains information specific to the bucket. Container Platform is a distribution of the Kubernetes container platform. In a , the Ceph persistent data is stored on volumes requested from a storage class of your choice. A stretched cluster is a deployment model in which two datacenters with low latency are available for storage in the same K8s cluster, rather than three or more. To support this scenario, Rook has integrated support for . The is a container with common tools used for rook debugging and testing. is a distributed network storage and file system with distributed metadata management and POSIX semantics. See also the . Here are a few of the important terms to understand: (MON) (MGR) (MDS) (OSD) (RBD) (RGW) Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. For further information see also the for more definitions. Here are a few of the important terms to understand: for Kubernetes"
}
] |
{
"category": "Runtime",
"file_name": "glossary.md",
"project_name": "Rook",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Antrea supports encrypting traffic across Linux Nodes with IPsec ESP or WireGuard. Traffic encryption is not supported on Windows Nodes yet. IPsec encryption works for all tunnel types supported by OVS including Geneve, GRE, VXLAN, and STT tunnel. Note that GRE is not supported for IPv6 clusters (IPv6-only or dual-stack clusters). For such clusters, please choose a different tunnel type such as Geneve or VXLAN. IPsec requires a set of Linux kernel modules. Check the required kernel modules listed in the . Make sure the required kernel modules are loaded on the Kubernetes Nodes before deploying Antrea with IPsec encryption enabled. If you want to enable IPsec with Geneve, please make sure is included in the kernel. For Ubuntu 18.04, kernel version should be at least `4.15.0-128`. For Ubuntu 20.04, kernel version should be at least `5.4.70`. You can simply apply the to deploy Antrea with IPsec encryption enabled. To deploy a released version of Antrea, pick a version from the . Note that IPsec support was added in release 0.3.0, which means you can not pick a release older than 0.3.0. For any given release `<TAG>` (e.g. `v0.3.0`), get the Antrea IPsec deployment yaml at: ```text https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea-ipsec.yml ``` To deploy the latest version of Antrea (built from the main branch), get the IPsec deployment yaml at: ```text https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea-ipsec.yml ``` Antrea leverages strongSwan as the IKE daemon, and supports using pre-shared key (PSK) for IKE authentication. The deployment yaml creates a Kubernetes Secret `antrea-ipsec` to store the PSK string. For security consideration, we recommend to change the default PSK string in the yaml file. You can edit the yaml file, and update the `psk` field in the `antrea-ipsec` Secret spec to any string you want to use. Check the `antrea-ipsec` Secret spec below: ```yaml apiVersion: v1 kind: Secret metadata: name: antrea-ipsec namespace: kube-system stringData: psk: changeme type: Opaque ``` After updating the PSK value, deploy Antrea with: ```bash kubectl apply -f antrea-ipsec.yml ``` By default, the deployment yaml uses GRE as the tunnel type, which you can change by editing the file. You will need to change the tunnel type to another one if your cluster supports IPv6. Antrea can leverage to encrypt Pod traffic between Nodes. WireGuard encryption works like another tunnel type, and when it is enabled the `tunnelType` parameter in the `antrea-agent` configuration file will be ignored. WireGuard encryption requires the `wireguard` kernel module be present on the Kubernetes Nodes. `wireguard` module is part of mainline kernel since Linux 5.6. Or, you can compile the module from source code with a kernel version >= 3.10. documents how to install WireGuard together with the kernel module on various operating systems. First, download the . To deploy a released version of Antrea, pick a version from the . Note that WireGuard support was added in release 1.3.0, which means you can not pick a release older than 1.3.0. For any given release `<TAG>` (e.g. 
`v1.3.0`), get the Antrea deployment yaml at: ```text https://github.com/antrea-io/antrea/releases/download/<TAG>/antrea.yml ``` To deploy the latest version of Antrea (built from the main branch), get the deployment yaml at: ```text https://raw.githubusercontent.com/antrea-io/antrea/main/build/yamls/antrea.yml ``` To enable WireGuard encryption, set the `trafficEncryptionMode` config parameter of `antrea-agent` to `wireGuard`. The `trafficEncryptionMode` config parameter is defined in `antrea-agent.conf` of the `antrea` ConfigMap in the Antrea deployment yaml: ```yaml kind: ConfigMap apiVersion: v1 metadata: name: antrea-config namespace: kube-system data: antrea-agent.conf: | trafficEncryptionMode: wireGuard ``` After saving the yaml file change, deploy Antrea with: ```bash kubectl apply -f antrea.yml ```"
}
] |
{
"category": "Runtime",
"file_name": "traffic-encryption.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "The virtcontainers 1.0 API operates on two high level objects: and : The virtcontainers 1.0 sandbox API manages hardware virtualized . The virtcontainers sandbox semantics strictly follow the ones. The sandbox API allows callers to VM (Virtual Machine) based sandboxes. To initially create a sandbox, the API caller must prepare a and pass it to either . Upon successful sandbox creation, the virtcontainers API will return a interface back to the caller. The `VCSandbox` interface is a sandbox abstraction hiding the internal and private virtcontainers sandbox structure. It is a handle for API callers to manage the sandbox lifecycle through the rest of the . * * * * * ```Go // SandboxConfig is a Sandbox configuration. type SandboxConfig struct { ID string Hostname string HypervisorType HypervisorType HypervisorConfig HypervisorConfig AgentConfig KataAgentConfig NetworkConfig NetworkConfig // Volumes is a list of shared volumes between the host and the Sandbox. Volumes []types.Volume // Containers describe the list of containers within a Sandbox. // This list can be empty and populated by adding containers // to the Sandbox a posteriori. Containers []ContainerConfig // SandboxBindMounts - list of paths to mount into guest SandboxBindMounts []string // Experimental features enabled Experimental []exp.Feature // Cgroups specifies specific cgroup settings for the various subsystems that the container is // placed into to limit the resources the container has available Cgroups *configs.Cgroup // Annotations keys must be unique strings and must be name-spaced // with e.g. reverse domain notation (org.clearlinux.key). Annotations map[string]string ShmSize uint64 // SharePidNs sets all containers to share the same sandbox level pid namespace. SharePidNs bool // SystemdCgroup enables systemd cgroup support SystemdCgroup bool // SandboxCgroupOnly enables cgroup only at podlevel in the host SandboxCgroupOnly bool DisableGuestSeccomp bool } ``` ```Go // HypervisorType describes an hypervisor type. type HypervisorType string const ( // FirecrackerHypervisor is the FC hypervisor. FirecrackerHypervisor HypervisorType = \"firecracker\" // QemuHypervisor is the QEMU hypervisor. QemuHypervisor HypervisorType = \"qemu\" // AcrnHypervisor is the ACRN hypervisor. AcrnHypervisor HypervisorType = \"acrn\" // ClhHypervisor is the ICH hypervisor. ClhHypervisor HypervisorType = \"clh\" // MockHypervisor is a mock hypervisor for testing purposes MockHypervisor HypervisorType = \"mock\" ) ``` ```Go // HypervisorConfig is the hypervisor configuration. type HypervisorConfig struct { // NumVCPUs specifies default number of vCPUs for the VM. NumVCPUs uint32 //DefaultMaxVCPUs specifies the maximum number of vCPUs for the VM. DefaultMaxVCPUs uint32 // DefaultMem specifies default memory size in MiB for the VM. MemorySize uint32 // DefaultBridges specifies default number of bridges for the VM. // Bridges can be used to hot plug devices DefaultBridges uint32 // Msize9p is used as the msize for 9p shares Msize9p uint32 // MemSlots specifies default memory slots the VM. MemSlots uint32 // MemOffset specifies memory space for nvdimm device MemOffset uint32 // VirtioFSCacheSize is the DAX cache size in MiB VirtioFSCacheSize uint32 // KernelParams are additional guest kernel parameters. KernelParams []Param // HypervisorParams are additional hypervisor parameters. HypervisorParams []Param // KernelPath is the guest kernel host path. KernelPath string // ImagePath is the guest image host path. 
ImagePath string // InitrdPath is the guest initrd image host path. // ImagePath and InitrdPath cannot be set at the same"
},
{
"data": "InitrdPath string // FirmwarePath is the bios host path FirmwarePath string // MachineAccelerators are machine specific accelerators MachineAccelerators string // CPUFeatures are cpu specific features CPUFeatures string // HypervisorPath is the hypervisor executable host path. HypervisorPath string // HypervisorPathList is the list of hypervisor paths names allowed in annotations HypervisorPathList []string // HypervisorCtlPathList is the list of hypervisor control paths names allowed in annotations HypervisorCtlPathList []string // HypervisorCtlPath is the hypervisor ctl executable host path. HypervisorCtlPath string // JailerPath is the jailer executable host path. JailerPath string // JailerPathList is the list of jailer paths names allowed in annotations JailerPathList []string // BlockDeviceDriver specifies the driver to be used for block device // either VirtioSCSI or VirtioBlock with the default driver being defaultBlockDriver BlockDeviceDriver string // HypervisorMachineType specifies the type of machine being // emulated. HypervisorMachineType string // MemoryPath is the memory file path of VM memory. Used when either BootToBeTemplate or // BootFromTemplate is true. MemoryPath string // DevicesStatePath is the VM device state file path. Used when either BootToBeTemplate or // BootFromTemplate is true. DevicesStatePath string // EntropySource is the path to a host source of // entropy (/dev/random, /dev/urandom or real hardware RNG device) EntropySource string // EntropySourceList is the list of valid entropy sources EntropySourceList []string // Shared file system type: // - virtio-9p (default) // - virtio-fs SharedFS string // VirtioFSDaemon is the virtio-fs vhost-user daemon path VirtioFSDaemon string // VirtioFSDaemonList is the list of valid virtiofs names for annotations VirtioFSDaemonList []string // VirtioFSCache cache mode for fs version cache VirtioFSCache string // VirtioFSExtraArgs passes options to virtiofsd daemon VirtioFSExtraArgs []string // File based memory backend root directory FileBackedMemRootDir string // FileBackedMemRootList is the list of valid root directories values for annotations FileBackedMemRootList []string // PFlash image paths PFlash []string // customAssets is a map of assets. // Each value in that map takes precedence over the configured assets. // For example, if there is a value for the \"kernel\" key in this map, // it will be used for the sandbox's kernel path instead of KernelPath. customAssets map[types.AssetType]*types.Asset // BlockDeviceCacheSet specifies cache-related options will be set to block devices or not. BlockDeviceCacheSet bool // BlockDeviceCacheDirect specifies cache-related options for block devices. // Denotes whether use of O_DIRECT (bypass the host page cache) is enabled. BlockDeviceCacheDirect bool // BlockDeviceCacheNoflush specifies cache-related options for block devices. // Denotes whether flush requests for the device are ignored. BlockDeviceCacheNoflush bool // DisableBlockDeviceUse disallows a block device from being used. DisableBlockDeviceUse bool // EnableIOThreads enables IO to be processed in a separate thread. // Supported currently for virtio-scsi driver. EnableIOThreads bool // Debug changes the default hypervisor and kernel parameters to // enable debug output where available. 
Debug bool // MemPrealloc specifies if the memory should be pre-allocated MemPrealloc bool // HugePages specifies if the memory should be pre-allocated from huge pages HugePages bool // VirtioMem is used to enable/disable virtio-mem VirtioMem bool // IOMMU specifies if the VM should have a vIOMMU IOMMU bool // IOMMUPlatform is used to indicate if IOMMU_PLATFORM is enabled for supported devices IOMMUPlatform bool // DisableNestingChecks is used to override customizations performed // when running on top of another"
},
{
"data": "DisableNestingChecks bool // DisableImageNvdimm is used to disable guest rootfs image nvdimm devices DisableImageNvdimm bool // HotPlugVFIO is used to indicate if devices need to be hotplugged on the // root port, switch, bridge or no port HotPlugVFIO hv.PCIePort // ColdPlugVFIO is used to indicate if devices need to be coldplugged on the // root port, switch, bridge or no port ColdPlugVFIO hv.PCIePort // BootToBeTemplate used to indicate if the VM is created to be a template VM BootToBeTemplate bool // BootFromTemplate used to indicate if the VM should be created from a template VM BootFromTemplate bool // DisableVhostNet is used to indicate if host supports vhost_net DisableVhostNet bool // EnableVhostUserStore is used to indicate if host supports vhost-user-blk/scsi EnableVhostUserStore bool // GuestSwap Used to enable/disable swap in the guest GuestSwap bool // VhostUserStorePath is the directory path where vhost-user devices // related folders, sockets and device nodes should be. VhostUserStorePath string // VhostUserStorePathList is the list of valid values for vhost-user paths VhostUserStorePathList []string // GuestHookPath is the path within the VM that will be used for 'drop-in' hooks GuestHookPath string // VMid is the id of the VM that create the hypervisor if the VM is created by the factory. // VMid is \"\" if the hypervisor is not created by the factory. VMid string // SELinux label for the VM SELinuxProcessLabel string // RxRateLimiterMaxRate is used to control network I/O inbound bandwidth on VM level. RxRateLimiterMaxRate uint64 // TxRateLimiterMaxRate is used to control network I/O outbound bandwidth on VM level. TxRateLimiterMaxRate uint64 // SGXEPCSize specifies the size in bytes for the EPC Section. // Enable SGX. Hardware-based isolation and memory encryption. SGXEPCSize int64 // Enable annotations by name EnableAnnotations []string // GuestCoredumpPath is the path in host for saving guest memory dump GuestMemoryDumpPath string // GuestMemoryDumpPaging is used to indicate if enable paging // for QEMU dump-guest-memory command GuestMemoryDumpPaging bool // Enable confidential guest support. // Enable or disable different hardware features, ranging // from memory encryption to both memory and CPU-state encryption and integrity. ConfidentialGuest bool // Enables SEV-SNP guests in case both AMD SEV and SNP are supported. // SEV is default. SevSnpGuest bool } ``` ```Go // NetworkConfig is the network configuration related to a network. type NetworkConfig struct { NetworkID string InterworkingModel NetInterworkingModel NetworkCreated bool DisableNewNetwork bool } ``` ```Go // NetInterworkingModel defines the network model connecting // the network interface to the virtual machine. type NetInterworkingModel int const ( // NetXConnectDefaultModel Ask to use DefaultNetInterworkingModel NetXConnectDefaultModel NetInterworkingModel = iota // NetXConnectMacVtapModel can be used when the Container network // interface can be bridged using macvtap NetXConnectMacVtapModel // NetXConnectTCFilterModel redirects traffic from the network interface // provided by the network plugin to a tap interface. // This works for ipvlan and macvlan as well. 
NetXConnectTCFilterModel // NetXConnectNoneModel can be used when the VM is in the host network namespace NetXConnectNoneModel // NetXConnectInvalidModel is the last item to check valid values by IsValid() NetXConnectInvalidModel ) ``` ```Go // Volume is a shared volume between the host and the VM, // defined by its mount tag and its host path. type Volume struct { // MountTag is a label used as a hint to the"
},
{
"data": "MountTag string // HostPath is the host filesystem path for this volume. HostPath string } ``` ```Go // ContainerConfig describes one container runtime configuration. type ContainerConfig struct { ID string // RootFs is the container workload image on the host. RootFs RootFs // ReadOnlyRootfs indicates if the rootfs should be mounted readonly ReadonlyRootfs bool // Cmd specifies the command to run on a container Cmd types.Cmd // Annotations allow clients to store arbitrary values, // for example to add additional status values required // to support particular specifications. Annotations map[string]string Mounts []Mount // Device configuration for devices that must be available within the container. DeviceInfos []config.DeviceInfo // Resources container resources Resources specs.LinuxResources // Raw OCI specification, it won't be saved to disk. CustomSpec *specs.Spec `json:\"-\"` } ``` ```Go // Cmd represents a command to execute in a running container. type Cmd struct { Args []string Envs []EnvVar SupplementaryGroups []string // Note that these fields MUST remain as strings. // // The reason being that we want runtimes to be able to support CLI // operations like \"exec --user=\". That option allows the // specification of a user (either as a string username or a numeric // UID), and may optionally also include a group (groupame or GID). // // Since this type is the interface to allow the runtime to specify // the user and group the workload can run as, these user and group // fields cannot be encoded as integer values since that would imply // the runtime itself would need to perform a UID/GID lookup on the // user-specified username/groupname. But that isn't practically // possible given that to do so would require the runtime to access // the image to allow it to interrogate the appropriate databases to // convert the username/groupnames to UID/GID values. // // Note that this argument applies solely to the runtime supporting // a \"--user=\" option when running in a \"standalone mode\" - there is // no issue when the runtime is called by a container manager since // all the user and group mapping is handled by the container manager // and specified to the runtime in terms of UID/GID's in the // configuration file generated by the container manager. User string PrimaryGroup string WorkDir string Console string Capabilities *specs.LinuxCapabilities Interactive bool Detach bool NoNewPrivileges bool } ``` ```Go // Mount describes a container mount. type Mount struct { Source string Destination string // Type specifies the type of filesystem to mount. Type string // Options list all the mount options of the filesystem. Options []string // HostPath used to store host side bind mount path HostPath string // ReadOnly specifies if the mount should be read only or not ReadOnly bool // BlockDeviceID represents block device that is attached to the // VM in case this mount is a block device file or a directory // backed by a block device. BlockDeviceID string } ``` ```Go // DeviceInfo is an embedded type that contains device data common to all types of"
},
{
"data": "type DeviceInfo struct { // Hostpath is device path on host HostPath string // ContainerPath is device path inside container ContainerPath string `json:\"-\"` // Type of device: c, b, u or p // c , u - character(unbuffered) // p - FIFO // b - block(buffered) special file // More info in mknod(1). DevType string // Major, minor numbers for device. Major int64 Minor int64 // Pmem enabled persistent memory. Use HostPath as backing file // for a nvdimm device in the guest. Pmem bool // If applicable, should this device be considered RO ReadOnly bool // ColdPlug specifies whether the device must be cold plugged (true) // or hot plugged (false). ColdPlug bool // FileMode permission bits for the device. FileMode os.FileMode // id of the device owner. UID uint32 // id of the device group. GID uint32 // ID for the device that is passed to the hypervisor. ID string // DriverOptions is specific options for each device driver // for example, for BlockDevice, we can set DriverOptions[\"block-driver\"]=\"virtio-blk\" DriverOptions map[string]string } ``` ```Go // VCSandbox is the Sandbox interface // (required since virtcontainers.Sandbox only contains private fields) type VCSandbox interface { Annotations(key string) (string, error) GetNetNs() string GetAllContainers() []VCContainer GetAnnotations() map[string]string GetContainer(containerID string) VCContainer ID() string SetAnnotations(annotations map[string]string) error Stats(ctx context.Context) (SandboxStats, error) Start(ctx context.Context) error Stop(ctx context.Context, force bool) error Release(ctx context.Context) error Monitor(ctx context.Context) (chan error, error) Delete(ctx context.Context) error Status() SandboxStatus CreateContainer(ctx context.Context, contConfig ContainerConfig) (VCContainer, error) DeleteContainer(ctx context.Context, containerID string) (VCContainer, error) StartContainer(ctx context.Context, containerID string) (VCContainer, error) StopContainer(ctx context.Context, containerID string, force bool) (VCContainer, error) KillContainer(ctx context.Context, containerID string, signal syscall.Signal, all bool) error StatusContainer(containerID string) (ContainerStatus, error) StatsContainer(ctx context.Context, containerID string) (ContainerStats, error) PauseContainer(ctx context.Context, containerID string) error ResumeContainer(ctx context.Context, containerID string) error EnterContainer(ctx context.Context, containerID string, cmd types.Cmd) (VCContainer, *Process, error) UpdateContainer(ctx context.Context, containerID string, resources specs.LinuxResources) error WaitProcess(ctx context.Context, containerID, processID string) (int32, error) SignalProcess(ctx context.Context, containerID, processID string, signal syscall.Signal, all bool) error WinsizeProcess(ctx context.Context, containerID, processID string, height, width uint32) error IOStream(containerID, processID string) (io.WriteCloser, io.Reader, io.Reader, error) AddDevice(ctx context.Context, info config.DeviceInfo) (api.Device, error) AddInterface(ctx context.Context, inf pbTypes.Interface) (pbTypes.Interface, error) RemoveInterface(ctx context.Context, inf pbTypes.Interface) (pbTypes.Interface, error) ListInterfaces(ctx context.Context) ([]*pbTypes.Interface, error) UpdateRoutes(ctx context.Context, routes []pbTypes.Route) ([]pbTypes.Route, error) ListRoutes(ctx context.Context) ([]*pbTypes.Route, error) GetOOMEvent(ctx context.Context) (string, error) GetHypervisorPid() (int, error) UpdateRuntimeMetrics() error GetAgentMetrics(ctx 
context.Context) (string, error) GetAgentURL() (string, error) } ``` ```Go // CreateSandbox is the virtcontainers sandbox creation entry point. // CreateSandbox creates a sandbox and its containers. It does not start them. func (impl *VCImpl) CreateSandbox(ctx context.Context, sandboxConfig SandboxConfig) (VCSandbox, error) ``` ```Go // CleanupContainer is used by shimv2 to stop and delete a container exclusively, once there is no container // in the sandbox left, do stop the sandbox and delete it. Those serial operations will be done exclusively by // locking the sandbox. func (impl *VCImpl) CleanupContainer(ctx context.Context, sandboxID, containerID string, force bool) error ``` ```Go // SetFactory implements the VC function of the same name. func (impl *VCImpl) SetFactory(ctx context.Context, factory Factory) ``` ```Go // SetLogger implements the VC function of the same name. func (impl VCImpl) SetLogger(ctx context.Context, logger logrus.Entry) ``` The virtcontainers 1.0 container API manages sandbox . A virtcontainers container is process running inside a containerized environment, as part of a hardware virtualized"
},
{
"data": "In other words, a virtcontainers container is just a regular container running inside a virtual machine's guest OS. A virtcontainers container always belong to one and only one virtcontainers sandbox, again following the . logic and semantics. The container API allows callers to , , , , and containers. It also allows for running inside a specific container. As a virtcontainers container is always linked to a sandbox, the entire container API always takes a sandbox ID as its first argument. To create a container, the API caller must prepare a and pass it to together with a sandbox ID. Upon successful container creation, the virtcontainers API will return a interface back to the caller. The `VCContainer` interface is a container abstraction hiding the internal and private virtcontainers container structure. It is a handle for API callers to manage the container lifecycle through the rest of the . * * ```Go // ContainerConfig describes one container runtime configuration. type ContainerConfig struct { ID string // RootFs is the container workload image on the host. RootFs RootFs // ReadOnlyRootfs indicates if the rootfs should be mounted readonly ReadonlyRootfs bool // Cmd specifies the command to run on a container Cmd types.Cmd // Annotations allow clients to store arbitrary values, // for example to add additional status values required // to support particular specifications. Annotations map[string]string Mounts []Mount // Device configuration for devices that must be available within the container. DeviceInfos []config.DeviceInfo // Resources container resources Resources specs.LinuxResources // Raw OCI specification, it won't be saved to disk. CustomSpec *specs.Spec `json:\"-\"` } ``` ```Go // Cmd represents a command to execute in a running container. type Cmd struct { Args []string Envs []EnvVar SupplementaryGroups []string // Note that these fields MUST remain as strings. // // The reason being that we want runtimes to be able to support CLI // operations like \"exec --user=\". That option allows the // specification of a user (either as a string username or a numeric // UID), and may optionally also include a group (groupame or GID). // // Since this type is the interface to allow the runtime to specify // the user and group the workload can run as, these user and group // fields cannot be encoded as integer values since that would imply // the runtime itself would need to perform a UID/GID lookup on the // user-specified username/groupname. But that isn't practically // possible given that to do so would require the runtime to access // the image to allow it to interrogate the appropriate databases to // convert the username/groupnames to UID/GID values. // // Note that this argument applies solely to the runtime supporting // a \"--user=\" option when running in a \"standalone mode\" - there is // no issue when the runtime is called by a container manager since // all the user and group mapping is handled by the container manager // and specified to the runtime in terms of UID/GID's in the // configuration file generated by the container manager. User string PrimaryGroup string WorkDir string Console string Capabilities *specs.LinuxCapabilities Interactive bool Detach bool NoNewPrivileges bool } ``` ```Go // Mount describes a container"
},
{
"data": "type Mount struct { Source string Destination string // Type specifies the type of filesystem to mount. Type string // Options list all the mount options of the filesystem. Options []string // HostPath used to store host side bind mount path HostPath string // ReadOnly specifies if the mount should be read only or not ReadOnly bool // BlockDeviceID represents block device that is attached to the // VM in case this mount is a block device file or a directory // backed by a block device. BlockDeviceID string } ``` ```Go // DeviceInfo is an embedded type that contains device data common to all types of devices. type DeviceInfo struct { // Hostpath is device path on host HostPath string // ContainerPath is device path inside container ContainerPath string `json:\"-\"` // Type of device: c, b, u or p // c , u - character(unbuffered) // p - FIFO // b - block(buffered) special file // More info in mknod(1). DevType string // Major, minor numbers for device. Major int64 Minor int64 // Pmem enabled persistent memory. Use HostPath as backing file // for a nvdimm device in the guest. Pmem bool // If applicable, should this device be considered RO ReadOnly bool // ColdPlug specifies whether the device must be cold plugged (true) // or hot plugged (false). ColdPlug bool // FileMode permission bits for the device. FileMode os.FileMode // id of the device owner. UID uint32 // id of the device group. GID uint32 // ID for the device that is passed to the hypervisor. ID string // DriverOptions is specific options for each device driver // for example, for BlockDevice, we can set DriverOptions[\"block-driver\"]=\"virtio-blk\" DriverOptions map[string]string } ``` ```Go // Process gathers data related to a container process. type Process struct { // Token is the process execution context ID. It must be // unique per sandbox. // Token is used to manipulate processes for containers // that have not started yet, and later identify them // uniquely within a sandbox. Token string // Pid is the process ID as seen by the host software // stack, e.g. CRI-O, containerd. This is typically the // shim PID. Pid int StartTime time.Time } ``` ```Go // ContainerStatus describes a container status. type ContainerStatus struct { ID string State types.ContainerState PID int StartTime time.Time RootFs string Spec *specs.Spec // Annotations allow clients to store arbitrary values, // for example to add additional status values required // to support particular specifications. Annotations map[string]string } ``` ```Go // VCContainer is the Container interface // (required since virtcontainers.Container only contains private fields) type VCContainer interface { GetAnnotations() map[string]string GetPid() int GetToken() string ID() string Sandbox() VCSandbox Process() Process } ``` ```Go // CreateContainer is the virtcontainers container creation entry point. // CreateContainer creates a container on a given sandbox. func (s *Sandbox) CreateContainer(ctx context.Context, contConfig ContainerConfig) (VCContainer, error) ``` ```Go // DeleteContainer is the virtcontainers container deletion entry point. // DeleteContainer deletes a Container from a Sandbox. If the container is running, // it needs to be stopped first. func (s *Sandbox) DeleteContainer(ctx context.Context, containerID string) (VCContainer, error) ``` ```Go // StartContainer is the virtcontainers container starting entry"
},
{
"data": "// StartContainer starts an already created container. func (s *Sandbox) StartContainer(ctx context.Context, containerID string) (VCContainer, error) ``` ```Go // StopContainer is the virtcontainers container stopping entry point. // StopContainer stops an already running container. func (s *Sandbox) StopContainer(ctx context.Context, containerID string, force bool) (VCContainer, error) ``` ```Go // EnterContainer is the virtcontainers container command execution entry point. // EnterContainer enters an already running container and runs a given command. func (s Sandbox) EnterContainer(ctx context.Context, containerID string, cmd types.Cmd) (VCContainer, Process, error) ``` ```Go // StatusContainer is the virtcontainers container status entry point. // StatusContainer returns a detailed container status. func (s *Sandbox) StatusContainer(containerID string) (ContainerStatus, error) ``` ```Go // KillContainer is the virtcontainers entry point to send a signal // to a container running inside a sandbox. If all is true, all processes in // the container will be sent the signal. func (s *Sandbox) KillContainer(ctx context.Context, containerID string, signal syscall.Signal, all bool) error ``` ```Go // StatsContainer return the stats of a running container func (s *Sandbox) StatsContainer(ctx context.Context, containerID string) (ContainerStats, error) ``` ```Go // PauseContainer pauses a running container. func (s *Sandbox) PauseContainer(ctx context.Context, containerID string) error ``` ```Go // ResumeContainer resumes a paused container. func (s *Sandbox) ResumeContainer(ctx context.Context, containerID string) error ``` ```Go // UpdateContainer update a running container. func (s *Sandbox) UpdateContainer(ctx context.Context, containerID string, resources specs.LinuxResources) error ``` ```Go // WaitProcess waits on a container process and return its exit code func (s *Sandbox) WaitProcess(ctx context.Context, containerID, processID string) (int32, error) ``` ```Go // SignalProcess sends a signal to a process of a container when all is false. // When all is true, it sends the signal to all processes of a container. func (s *Sandbox) SignalProcess(ctx context.Context, containerID, processID string, signal syscall.Signal, all bool) error ``` ```Go // WinsizeProcess resizes the tty window of a process func (s *Sandbox) WinsizeProcess(ctx context.Context, containerID, processID string, height, width uint32) error ``` ```Go // IOStream returns stdin writer, stdout reader and stderr reader of a process func (s *Sandbox) IOStream(containerID, processID string) (io.WriteCloser, io.Reader, io.Reader, error) ``` ```Go import ( \"context\" \"fmt\" \"strings\" vc \"github.com/kata-containers/kata-containers/src/runtime/virtcontainers\" \"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types\" ) var containerRootfs = vc.RootFs{Target: \"/var/lib/container/bundle/\", Mounted: true} // This example creates and starts a single container sandbox, // using qemu as the hypervisor and kata as the VM agent. func Example_createAndStartSandbox() { envs := []types.EnvVar{ { Var: \"PATH\", Value: \"/bin:/usr/bin:/sbin:/usr/sbin\", }, } cmd := types.Cmd{ Args: strings.Split(\"/bin/sh\", \" \"), Envs: envs, WorkDir: \"/\", } // Define the container command and bundle. container := vc.ContainerConfig{ ID: \"1\", RootFs: containerRootfs, Cmd: cmd, } // Sets the hypervisor configuration. 
hypervisorConfig := vc.HypervisorConfig{ KernelPath: \"/usr/share/kata-containers/vmlinux.container\", ImagePath: \"/usr/share/kata-containers/clear-containers.img\", HypervisorPath: \"/usr/bin/qemu-system-x86_64\", MemorySize: 1024, MemSlots: 10, } // Use kata default values for the agent. agConfig := vc.KataAgentConfig{} // The sandbox configuration: // - One container // - Hypervisor is QEMU // - Agent is kata sandboxConfig := vc.SandboxConfig{ ID: \"sandbox-abc\", HypervisorType: vc.QemuHypervisor, HypervisorConfig: hypervisorConfig, AgentConfig: agConfig, Containers: []vc.ContainerConfig{container}, } // Create the sandbox s, err := vc.CreateSandbox(context.Background(), sandboxConfig, nil) if err != nil { fmt.Printf(\"Could not create sandbox: %s\", err) return } // Start the sandbox err = s.Start() if err != nil { fmt.Printf(\"Could not start sandbox: %s\", err) } } ```"
}
] |
{
"category": "Runtime",
"file_name": "api.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
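To complement the sandbox example in the virtcontainers API excerpt above, here is a hedged Go sketch of the container-level calls (`CreateContainer`, `StartContainer`, `EnterContainer`, `StatusContainer`) against an existing `VCSandbox`. It follows the signatures listed above, but the container ID, bundle path, and commands are illustrative assumptions rather than values from the upstream docs.

```go
// Sketch: container lifecycle calls on an existing sandbox, following the
// VCSandbox interface documented above. IDs and paths are placeholders.
package example

import (
	"context"
	"fmt"
	"strings"

	vc "github.com/kata-containers/kata-containers/src/runtime/virtcontainers"
	"github.com/kata-containers/kata-containers/src/runtime/virtcontainers/types"
)

func addAndExec(ctx context.Context, s vc.VCSandbox) error {
	// Define a second container for the already-running sandbox.
	contConfig := vc.ContainerConfig{
		ID:     "2",
		RootFs: vc.RootFs{Target: "/var/lib/container/bundle2/", Mounted: true}, // illustrative path
		Cmd: types.Cmd{
			Args:    strings.Split("/bin/sh", " "),
			WorkDir: "/",
		},
	}

	// CreateContainer only defines the container; StartContainer runs it.
	if _, err := s.CreateContainer(ctx, contConfig); err != nil {
		return fmt.Errorf("could not create container: %w", err)
	}
	if _, err := s.StartContainer(ctx, contConfig.ID); err != nil {
		return fmt.Errorf("could not start container: %w", err)
	}

	// EnterContainer runs an additional command inside the running container.
	cmd := types.Cmd{Args: strings.Split("/bin/ps", " "), WorkDir: "/"}
	if _, _, err := s.EnterContainer(ctx, contConfig.ID, cmd); err != nil {
		return fmt.Errorf("could not enter container: %w", err)
	}

	// StatusContainer reports the container state as seen by virtcontainers.
	status, err := s.StatusContainer(contConfig.ID)
	if err != nil {
		return err
	}
	fmt.Printf("container %s state: %+v\n", status.ID, status.State)
	return nil
}
```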
[
{
"data": "The sequence diagrams are generated with `seqdiag`: http://blockdiag.com/en/seqdiag/index.html An easy way to work on them is to automatically update the generated files with https://github.com/cespare/reflex : reflex -g 'doc/[^.]*.seq' -- seqdiag -T svg -o '{}.svg' '{}' & reflex -g 'doc/[^.]*.seq' -- seqdiag -T png -o '{}.png' '{}' & The markdown files refer to PNG images because of Github limitations, but the SVG is generally more pleasant to view."
}
] |
{
"category": "Runtime",
"file_name": "writing-docs.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "A SpiderSubnet resource represents a collection of IP addresses from which Spiderpool expects SpiderIPPool IPs to be assigned. For details on using this CRD, please read the . ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderSubnet metadata: name: default-v4-subnet spec: ipVersion: 4 ips: 172.22.40.2-172.22.40.254 subnet: 172.22.0.0/16 excludeIPs: 172.22.40.10-172.22.40.20 gateway: 172.22.40.1 ``` | Field | Description | Schema | Validation | |-|-|--|| | name | the name of this SpiderSubnet resource | string | required | This is the SpiderSubnet spec for users to configure. | Field | Description | Schema | Validation | Values | Default | |-||-|||| | ipVersion | IP version of this subnet | int | optional | 4,6 | | | subnet | subnet of this resource | string | required | IPv4 or IPv6 CIDR.<br/>Must not overlap | | | ips | IP ranges for this resource to use | list of strings | optional | array of IP ranges and single IP address | | | excludeIPs | isolated IP ranges for this resource to filter | list of strings | optional | array of IP ranges and single IP address | | | gateway | gateway for this resource | string | optional | an IP address | | | vlan | vlan ID(deprecated) | int | optional | [0,4094] | 0 | | routes | custom routes in this resource | list of | optional | | | The Subnet status is a subresource that processed automatically by the system to summarize the current state. | Field | Description | Schema | |-|-|--| | controlledIPPools | current IP allocations in this subnet resource | string | | totalIPCount | total IP addresses counts of this subnet resource to use | int | | allocatedIPCount | current allocated IP addresses counts | int |"
}
] |
{
"category": "Runtime",
"file_name": "crd-spidersubnet.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
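As a further illustration of the SpiderSubnet fields described above, the sketch below shows an IPv6 subnet that also sets `routes`. All addresses and the route destination are placeholders, and the exact route field names should be checked against your Spiderpool release.

```yaml
apiVersion: spiderpool.spidernet.io/v2beta1
kind: SpiderSubnet
metadata:
  name: default-v6-subnet
spec:
  ipVersion: 6
  subnet: fd00:172:22::/64
  ips:
    - fd00:172:22::100-fd00:172:22::fff
  excludeIPs:
    - fd00:172:22::100-fd00:172:22::110
  gateway: fd00:172:22::1
  routes:
    - dst: fd00:10::/64
      gw: fd00:172:22::1
```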
[
{
"data": "|Author | | | | - | | Date | 2022-09-19 | | Emial | [email protected]| The isulad process and the container process managed by isulad can run independently. When isulad exits, the container process can continue to run unaffected; when isulad restarts, the restore module can restore the state of all containers to the isulad process, and let the isulad process re-manage these containers. The overall flow chart of the restore module is as follows: The restore module provides an interface to complete all restore work when isulad starts. In general, the entire restore phase will do two things: restore: First, build a container object from the persisted data. The container object is a structure named `containert`. All persistent data is stored in a directory named after the container ID. This deserialization process is done by the function `containerload`. After that, the successfully restored container objects will be put into a map for unified management. handle: After restoring all container objects, the next thing to do is to synchronize the container objects according to the state of the actual container process on the host. A container object should correspond to a specific container process on the host. If isulad wants to manage this specific container process, some additional operations are required. ````c // 1. Container state restore interface; extern void containers_restore(void); ```` Here are some key processes: container_load: An interface provided by the container module is used here, and a container object is constructed by parsing various configuration files in the directory named by the container id. checkcontainerimage_exist: Check whether the image layer of the container still exists. If it has been deleted, the restore of the container fails. restore_state: The container status of the persistent storage may have expired, so try to use the runtime interface to obtain the runtime container status, and use the real status to modify the container status. containerstoreadd: Use the map and interface provided by the container store sub module to manage successfully restored container objects. The main process is to complete some operations according to different container states: gc state: No need to do any processing, the gc thread will complete the resource recovery of the container. Running state: Try to restore supervisor and init health checker. isulad requires supervisor and health checker to manage real container processes. When the two steps are completed, a running container is successfully restored. For other states: For example, if the container is in the stopped state, check whether it is set to automatically remove after exit, if set, execute the remove operation, otherwise execute the restart operation. The restart operation is briefly described here. For detailed documentation, please refer to the restart manager design document. Since the container has a restart strategy, the restart operation can only be completed by isulad. Therefore, when isulad exits, the container that needs to be restarted cannot complete the restart operation. After the restore operation is completed, the restart is completed according to the customized restart policy of the container."
}
] |
{
"category": "Runtime",
"file_name": "restore_design.md",
"project_name": "iSulad",
"subcategory": "Container Runtime"
}
|
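To make the restore-then-handle flow above more concrete, here is a heavily simplified, self-contained C sketch. The types and helper functions are stand-ins named after the interfaces mentioned in the design (container_load, restore_state, container_store_add, etc.); they are not the real iSulad implementations, only placeholders that show the order of operations.

```c
// Simplified sketch of the restore flow; all helpers are illustrative stubs.
#include <stdio.h>
#include <stdbool.h>

typedef struct container { const char *id; int status; } container_t;

/* Stubs standing in for the real module interfaces. */
static container_t *container_load(const char *id) { static container_t c; c.id = id; c.status = 0; return &c; }
static bool check_container_image_exist(const container_t *c) { (void)c; return true; }
static void restore_state(container_t *c) { (void)c; /* query runtime, overwrite stale status */ }
static void container_store_add(container_t *c) { (void)c; /* register object in the containers map */ }
static void handle_restored_container(container_t *c) { (void)c; /* supervisor / health check / restart policy */ }

/* Conceptual equivalent of containers_restore(): rebuild each persisted
 * container object, then reconcile it with the real process on the host. */
static void containers_restore_sketch(const char **ids, int n)
{
    for (int i = 0; i < n; i++) {
        container_t *cont = container_load(ids[i]);
        if (cont == NULL || !check_container_image_exist(cont)) {
            fprintf(stderr, "restore failed for %s\n", ids[i]);
            continue;
        }
        restore_state(cont);        /* trust the runtime, not the persisted status */
        container_store_add(cont);  /* hand the object to the container store */
        handle_restored_container(cont);
    }
}

int main(void)
{
    const char *ids[] = { "abc123", "def456" };
    containers_restore_sketch(ids, 2);
    return 0;
}
```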
[
{
"data": "`rkt` is designed to cooperate with init systems, like . rkt implements a simple CLI that directly executes processes, and does not interpose a long-running daemon, so the lifecycle of rkt pods can be directly managed by systemd. Standard systemd idioms like `systemctl start` and `systemctl stop` work out of the box. In the shell excerpts below, a `#` prompt indicates commands that require root privileges, while the `$` prompt denotes commands issued as an unprivileged user. The utility is a convenient shortcut for testing a service before making it permanent in a unit file. To start a \"daemonized\" container that forks the container processes into the background, wrap the invocation of `rkt` with `systemd-run`: ``` Running as unit run-29486.service. ``` The `--slice=machine` option to `systemd-run` places the service in `machine.slice` rather than the host's `system.slice`, isolating containers in their own cgroup area. Invoking a rkt container through systemd-run in this way creates a transient service unit that can be managed with the usual systemd tools: ``` $ systemctl status run-29486.service run-29486.service - /bin/rkt run coreos.com/etcd:v2.2.5 Loaded: loaded (/run/systemd/system/run-29486.service; static; vendor preset: disabled) Drop-In: /run/systemd/system/run-29486.service.d 50-Description.conf, 50-ExecStart.conf, 50-Slice.conf Active: active (running) since Wed 2016-02-24 12:50:20 CET; 27s ago Main PID: 29487 (ld-linux-x86-64) Memory: 36.1M CPU: 1.467s CGroup: /machine.slice/run-29486.service 29487 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot -Zsystemu:systemr:svirtlxcnet_t:s0:c46... 29535 /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --log-level=warning --show-status=0 system.slice etcd.service 29544 /etcd systemd-journald.service 29539 /usr/lib/systemd/systemd-journald ``` Since every pod is registered with with a machine name of the form `rkt-$UUID`, the systemd tools can inspect pod logs, or stop and restart pod \"machines\". Use the `machinectl` tool to print the list of rkt pods: ``` $ machinectl list MACHINE CLASS SERVICE rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 container nspawn 1 machines listed. ``` Given the name of this rkt machine, `journalctl` can inspect its logs, or `machinectl` can shut it down: ``` ... Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518030 I | raft: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 2 Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518073 I | raft: ce2a822cea30bfca became leader at term 2 Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518086 I | raft: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at te Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518720 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379 h Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.518955 I | etcdserver: setting up the initial cluster version to 2.2 Feb 24 12:50:22 rkt-2b0b2cec-8f63-4451-9431-9f8e9b265a23 etcd[4]: 2016-02-24 11:50:22.521680 N | etcdserver: set the initial cluster version to 2.2 $ machinectl list MACHINE CLASS SERVICE 0 machines listed. ``` Note that, for the \"coreos\" and \"kvm\" stage1 flavors, journald integration is only supported if systemd is compiled with `lz4` compression enabled. 
To inspect this, use `systemctl`: ``` $ systemctl --version systemd 235 [...] +LZ4 [...] ``` If the output contains `-LZ4`, journal entries will not be available. Sometimes, defining dependencies between containers makes sense. An example use case of this is a container running a server, and another container running a client. We want the server to start before the client tries to connect. This can be accomplished by using systemd services and dependencies. However, for this to work in rkt containers, we need special support. systemd inside stage1 can notify systemd on the host that it is ready, to make sure that stage1 systemd send the notification at the right time you can use the"
},
{
"data": "To make use of this feature, you need to set the annotation `appc.io/executor/supports-systemd-notify` to true in the image manifest whenever the app supports sd\\_notify (see example manifest below). If you build your image with you can use the command: `acbuild annotation add appc.io/executor/supports-systemd-notify true`. ``` { \"acKind\": \"ImageManifest\", \"acVersion\": \"0.8.4\", \"name\": \"coreos.com/etcd\", ... \"app\": { \"exec\": [ \"/etcd\" ], ... }, \"annotations\": [ \"name\": \"appc.io/executor/supports-systemd-notify\", \"value\": \"true\" ] } ``` This feature is always available when using the \"coreos\" stage1 flavor. If you use the \"host\" stage1 flavor (e.g. Fedora RPM or Debian deb package), you will need systemd >= v231. To verify how it works, run in a terminal the command: `sudo systemd-run --unit=test --service-type=notify rkt run --insecure-options=image /path/to/your/app/image`, then periodically check the status with `systemctl status test`. If the pod uses a stage1 image with systemd v231 (or greater), then the pod will be seen active form the host when systemd inside stage1 will reach default target. Instead, before it was marked as active as soon as it started. In this way it is possible to easily set up dependencies between pods and host services. Moreover, using in the application it is possible to make the pod marked as ready when all the apps or a particular one is ready. For more information check documentation. This is how the sd_notify signal is propagated to the host system: Below there is a simple example of an app using the systemd notification mechanism via binding library. ```go package main import ( \"log\" \"net\" \"net/http\" \"github.com/coreos/go-systemd/daemon\" ) func main() { http.HandleFunc(\"/\", func(w http.ResponseWriter, r *http.Request) { log.Printf(\"request from %v\\n\", r.RemoteAddr) w.Write([]byte(\"hello\\n\")) }) ln, err := net.Listen(\"tcp\", \":5000\") if err != nil { log.Fatalf(\"Listen failed: %s\", err) } sent, err := daemon.SdNotify(true, \"READY=1\") if err != nil { log.Fatalf(\"Notification failed: %s\", err) } if !sent { log.Fatalf(\"Notification not supported: %s\", err) } log.Fatal(http.Serve(ln, nil)) } ``` You can run an app that supports `sd\\_notify()` with this command: ``` Running as unit run-29486.service. ``` The following is a simple example of a unit file using `rkt` to run an `etcd` instance under systemd service management: ``` [Unit] Description=etcd [Service] Slice=machine.slice ExecStart=/usr/bin/rkt run coreos.com/etcd:v2.2.5 KillMode=mixed Restart=always ``` This unit can now be managed using the standard `systemctl` commands: ``` ``` Note that no `ExecStop` clause is required. Setting means that running `systemctl stop etcd.service` will send `SIGTERM` to `stage1`'s `systemd`, which in turn will initiate orderly shutdown inside the pod. Systemd is additionally able to send the cleanup `SIGKILL` to any lingering service processes, after a timeout. This comprises complete pod lifecycle management with familiar, well-known system init tools. A more advanced unit example takes advantage of a few convenient `systemd` features: Inheriting environment variables specified in the unit with `--inherit-env`. This feature helps keep units concise, instead of layering on many flags to `rkt run`. Using the dependency graph to start our pod after networking has come online. This is helpful if your application requires outside connectivity to fetch remote configuration (for example, from `etcd`). 
Set resource limits for this `rkt` pod. This can also be done in the unit file, rather than flagged to `rkt run`. Set `ExecStopPost` to invoke `rkt gc --mark-only` to record the timestamp when the pod exits. (Run `rkt gc --help` to see more details about this"
},
{
"data": "After running `rkt gc --mark-only`, the timestamp can be retrieved from rkt API service in pod's `gcmarkedat` field. The timestamp can be treated as the finished time of the pod. Here is what it looks like all together: ``` [Unit] Description=MyApp Documentation=https://myapp.com/docs/1.3.4 Requires=network-online.target After=network-online.target [Service] Slice=machine.slice Delegate=true CPUShares=512 MemoryLimit=1G Environment=HTTP_PROXY=192.0.2.3:5000 Environment=STORAGE_PATH=/opt/myapp Environment=TMPDIR=/var/tmp ExecStartPre=/usr/bin/rkt fetch myapp.com/myapp-1.3.4 ExecStart=/usr/bin/rkt run --inherit-env --port=http:8888 myapp.com/myapp-1.3.4 ExecStopPost=/usr/bin/rkt gc --mark-only KillMode=mixed Restart=always ``` rkt must be the main process of the service in order to support correctly and to be well-integrated with . To ensure that rkt is the main process of the service, the pattern `/bin/sh -c \"foo ; rkt run ...\"` should be avoided, because in that case the main process is `sh`. In most cases, the parameters `Environment=` and `ExecStartPre=` can simply be used instead of starting a shell. If shell invocation is unavoidable, use `exec` to ensure rkt replaces the preceding shell process: ``` ExecStart=/bin/sh -c \"foo ; exec rkt run ...\" ``` `rkt` inherits resource limits configured in the systemd service unit file. The systemd documentation explains various , and settings to restrict the CPU, IO, and memory resources. For example to restrict the CPU time quota, configure the corresponding setting: ``` [Service] ExecStart=/usr/bin/rkt run s-urbaniak.github.io/images/stress:0.0.1 CPUQuota=30% ``` ``` $ ps -p <PID> -o %cpu% CPU 30.0 ``` Moreover to pin the rkt pod to certain CPUs, configure the corresponding setting: ``` [Service] ExecStart=/usr/bin/rkt run s-urbaniak.github.io/images/stress:0.0.1 CPUAffinity=0,3 ``` ``` $ top Tasks: 235 total, 1 running, 234 sleeping, 0 stopped, 0 zombie %Cpu0 : 100.0/0.0 100[|||||||||||||||||||||||||||||||||||||||||||||| %Cpu1 : 6.0/0.7 7[||| %Cpu2 : 0.7/0.0 1[ %Cpu3 : 100.0/0.0 100[|||||||||||||||||||||||||||||||||||||||||||||| GiB Mem : 25.7/19.484 [ GiB Swap: 0.0/8.000 [ PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND 11684 root 20 0 3.6m 1.1m 200.0 0.0 8:58.63 S stress ``` `rkt` supports . This means systemd will listen on a port on behalf of a container, and start the container when receiving a connection. An application needs to be able to accept sockets from systemd's native socket passing interface in order to handle socket activation. To make socket activation work, add a to the app container manifest: ```json ... { ... \"app\": { ... \"ports\": [ { \"name\": \"80-tcp\", \"protocol\": \"tcp\", \"port\": 80, \"count\": 1, \"socketActivated\": true } ] } } ``` Then you will need a pair of `.service` and `.socket` unit files. In this example, we want to use the port 8080 on the host instead of the app's default 80, so we use rkt's `--port` option to override it. 
``` [Unit] Description=My socket-activated app's socket [Socket] ListenStream=8080 ``` ``` [Unit] Description=My socket-activated app [Service] ExecStart=/usr/bin/rkt run --port 80-tcp:8080 myapp.com/my-socket-activated-app:v1.0 KillMode=mixed ``` Finally, start the socket unit: ``` $ systemctl status my-socket-activated-app.socket my-socket-activated-app.socket - My socket-activated app's socket Loaded: loaded (/etc/systemd/system/my-socket-activated-app.socket; static; vendor preset: disabled) Active: active (listening) since Thu 2015-07-30 12:24:50 CEST; 2s ago Listen: [::]:8080 (Stream) Jul 30 12:24:50 locke-work systemd[1]: Listening on My socket-activated app's socket. ``` Now, a new connection to port 8080 will start your container to handle the request. `rkt` also supports the . Much like socket activation, with socket-proxyd systemd provides a listener on a given port on behalf of a container, and starts the container when a connection is received. Socket-proxy listening can be useful in environments that lack native support for socket activation. The LKVM stage1 flavor is an example of such an"
},
{
"data": "To set up socket proxyd, create a network template consisting of three units, like the example below. This example uses the redis app and the PTP network template in `/etc/rkt/net.d/ptp0.conf`: ```json { \"name\": \"ptp0\", \"type\": \"ptp\", \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"172.16.28.0/24\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } } ``` ``` [Unit] Description=Socket-proxyd redis server [Service] ExecStart=/usr/bin/rkt --insecure-options=image run --net=\"ptp:IP=172.16.28.101\" docker://redis KillMode=process ``` Note that you have to specify IP manually in systemd unit. Then you will need a pair of `.service` and `.socket` unit files. We want to use the port 6379 on the localhost instead of the remote container IP, so we use next systemd unit to override it. ``` [Unit] Requires=rkt-redis.service After=rkt-redis.service [Service] ExecStart=/usr/lib/systemd/systemd-socket-proxyd 172.16.28.101:6379 ``` Lastly the related socket unit, ``` [Socket] ListenStream=6371 [Install] WantedBy=sockets.target ``` Finally, start the socket unit: ``` $ sudo systemctl start proxy-to-redis.socket proxy-to-rkt-redis.socket Loaded: loaded (/etc/systemd/system/proxy-to-rkt-redis.socket; enabled; vendor preset: disabled) Active: active (listening) since Mon 2016-03-07 11:53:32 CET; 8s ago Listen: [::]:6371 (Stream) Mar 07 11:53:32 user-host systemd[1]: Listening on proxy-to-rkt-redis.socket. Mar 07 11:53:32 user-host systemd[1]: Starting proxy-to-rkt-redis.socket. ``` Now, a new connection to localhost port 6371 will start your container with redis, to handle the request. ``` $ curl http://localhost:6371/ ``` Let us assume the service from the simple example unit file, above, is started on the host. The snippet below taken from output of `ps auxf` shows several things: `rkt` `exec`s stage1's `systemd-nspawn` instead of using `fork-exec` technique. That is why rkt itself is not listed by `ps`. `systemd-nspawn` runs a typical boot sequence - it spawns `systemd` inside the container, which in turn spawns our desired service(s). There can be also other services running, which may be `systemd`-specific, like `systemd-journald`. ``` $ ps auxf USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 7258 0.2 0.0 19680 2664 ? Ss 12:38 0:02 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn --boot --register=true --link-journal=try-guest --quiet --keep-unit --uuid=6d0d9608-a744-4333-be21-942145a97a5a --machine=rkt-6d0d9608-a744-4333-be21-942145a97a5a --directory=stage1/rootfs -- --default-standard-output=tty --log-target=null --log-level=warning --show-status=0 root 7275 0.0 0.0 27348 4316 ? Ss 12:38 0:00 \\_ /usr/lib/systemd/systemd --default-standard-output=tty --log-target=null --log-level=warning --show-status=0 root 7277 0.0 0.0 23832 6100 ? Ss 12:38 0:00 \\_ /usr/lib/systemd/systemd-journald root 7343 0.3 0.0 10652 7332 ? Ssl 12:38 0:04 \\_ /etcd ``` The `systemd-cgls` command prints the list of cgroups active on the system. The inner `system.slice` shown in the excerpt below is a cgroup in rkt's `stage1`, below which an in-container systemd has been started to shepherd pod apps with complete process lifecycle management: ``` $ systemd-cgls 1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 machine.slice etcd.service 1204 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/s... 1421 /usr/lib/systemd/systemd --default-standard-output=tty --log-targe... 
system.slice etcd.service 1436 /etcd systemd-journald.service 1428 /usr/lib/systemd/systemd-journald ``` To display all active cgroups, use the `--all` flag. This will show two cgroups for `mount` in the host's `system.slice`. One mount cgroup is for the `stage1` root filesystem, the other for the `stage2` root (the pod's filesystem). Inside the pod's `system.slice` there are more `mount` cgroups -- mostly for bind mounts of standard `/dev`-tree device files. ``` $ systemd-cgls --all 1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22 machine.slice etcd.service 1204 stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/s... 1421 /usr/lib/systemd/systemd --default-standard-output=tty --log-targe... system.slice proc-sys-kernel-random-boot_id.mount opt-stage2-etcd-rootfs-proc-kmsg.mount opt-stage2-etcd-rootfs-sys.mount opt-stage2-etcd-rootfs-dev-shm.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-perf_event.mount etcd.service 1436 /etcd opt-stage2-etcd-rootfs-proc-sys-kernel-random-boot_id.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-devices.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-freezer.mount shutdown.service -.mount opt-stage2-etcd-rootfs-data\\x2ddir.mount system-prepare\\x2dapp.slice tmp.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-cpuset.mount opt-stage2-etcd-rootfs-proc.mount systemd-journald.service 1428 /usr/lib/systemd/systemd-journald opt-stage2-etcd-rootfs.mount opt-stage2-etcd-rootfs-dev-random.mount opt-stage2-etcd-rootfs-dev-pts.mount opt-stage2-etcd-rootfs-sys-fs-cgroup.mount run-systemd-nspawn-incoming.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-systemd-machine.slice-etcd.service.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-memory-machine.slice-etcd.service-system.slice-etcd.service-cgroup.procs.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-blkio.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-netcls\\x2cnetprio.mount opt-stage2-etcd-rootfs-dev-net-tun.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-memory-machine.slice-etcd.service-system.slice-etcd.service-memory.limitinbytes.mount opt-stage2-etcd-rootfs-dev-tty.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-pids.mount reaper-etcd.service opt-stage2-etcd-rootfs-sys-fs-selinux.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-memory.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct-machine.slice-etcd.service-system.slice-etcd.service-cpu.cfsquotaus.mount opt-stage2-etcd-rootfs-dev-urandom.mount opt-stage2-etcd-rootfs-dev-zero.mount opt-stage2-etcd-rootfs-dev-null.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-systemd.mount opt-stage2-etcd-rootfs-dev-console.mount opt-stage2-etcd-rootfs-dev-full.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-cpu\\x2ccpuacct-machine.slice-etcd.service-system.slice-etcd.service-cgroup.procs.mount opt-stage2-etcd-rootfs-proc-sys.mount opt-stage2-etcd-rootfs-sys-fs-cgroup-hugetlb.mount ```"
}
] |
{
"category": "Runtime",
"file_name": "using-rkt-with-systemd.md",
"project_name": "rkt",
"subcategory": "Container Runtime"
}
|
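Building on the dependency discussion above, the host-side units below sketch a server pod that must be ready before a client pod starts. The image names are placeholders, and they assume the server image carries the `appc.io/executor/supports-systemd-notify` annotation; depending on the stage1 flavor, `NotifyAccess=all` may be needed so that the readiness notification from a child process is accepted.

```
# myserver.service -- illustrative unit for the pod that signals readiness
[Unit]
Description=Server pod (signals readiness via sd_notify)

[Service]
Slice=machine.slice
Type=notify
# Allow notifications from the whole cgroup, not only the main PID.
NotifyAccess=all
ExecStart=/usr/bin/rkt run myapp.com/my-server:v1.0
KillMode=mixed
Restart=always
```

```
# myclient.service -- only started once the server pod reports ready
[Unit]
Description=Client pod
Requires=myserver.service
After=myserver.service

[Service]
Slice=machine.slice
ExecStart=/usr/bin/rkt run myapp.com/my-client:v1.0
KillMode=mixed
Restart=always
```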
[
{
"data": "This directory contains an example `prometheus_adapter.lua` on how to use to push metrics from the RGW requests to , specifically to collect information on object sizes. As every single run of a lua script is short-lived, so should be used as an intermediate service to enable Prometheus to scrape data from RGW. Install and run Pushgateway using docker: ```bash docker pull prom/pushgateway docker run -p 9091:9091 -it prom/pushgateway ``` Install and run Prometheus using docker: ```bash docker pull prom/prometheus docker run --network host -v ${CEPH_DIR}/examples/lua/config/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus ``` Upload the script: ```bash radosgw-admin script put --infile=prometheus_adapter.lua --context=postRequest ``` Add the packages used in the script: ```bash radosgw-admin script-package add --package='luasocket' --allow-compilation ``` Restart radosgw. Send a request: ```bash s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" --accesskey=0555b35654ad1656d804 --secretkey=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== mb s3://mybucket s3cmd --host=localhost:8000 --host-bucket=\"localhost:8000/%(bucket)\" --accesskey=0555b35654ad1656d804 --secretkey=h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q== put -P /etc/hosts s3://mybucket curl http://localhost:8000/mybucket/hosts ``` Open `http://localhost:9090` by browser and search for `rgwrequestcontent_length` Lua 5.3 or higher"
}
] |
{
"category": "Runtime",
"file_name": "prometheus_adapter.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
}
|
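The walkthrough above mounts `${CEPH_DIR}/examples/lua/config/prometheus.yml` into the Prometheus container; a minimal configuration along the following lines is sufficient for this setup (the file actually shipped in the repository may differ). It simply tells Prometheus to scrape the Pushgateway that the Lua script pushes RGW request metrics to.

```yaml
# Illustrative prometheus.yml: scrape the local Pushgateway (adjust the
# target if Pushgateway runs on another host or port).
global:
  scrape_interval: 5s
scrape_configs:
  - job_name: pushgateway
    honor_labels: true
    static_configs:
      - targets: ['localhost:9091']
```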
[
{
"data": "Talos Linux isnt based on any other distribution. We think of ourselves as being the second-generation of container-optimised operating systems, where things like CoreOS, Flatcar, and Rancher represent the first generation (but the technology is not derived from any of those.) Talos Linux is actually a ground-up rewrite of the userspace, from PID 1. We run the Linux kernel, but everything downstream of that is our own custom code, written in Go, rigorously-tested, and published as an immutable, integrated image. The Linux kernel launches what we call machined, for instance, not systemd. There is no systemd on our system. There are no GNU utilities, no shell, no SSH, no packages, nothing you could associate with any other distribution. Currently, Longhorn (at version v1.5.x) does not support Talos Linux as one of the due to reliance on host binaries like BASH and iscsiadm. The goal of this proposal is to enable Longhorn to be installed and functional on Talos Linux clusters. https://github.com/longhorn/longhorn/issues/3161 We have been approached by and observed a number of Talos Linux users who wish to utilize Longhorn. However, the lack of support for Talos Linux prevents them from using Longhorn as their storage solution. The primary goal of this proposal is to introduce support for Talos Linux, allowing users to installed and operate Longhorn on Talos Linux clusters. `None` Whenever possible, replace host binary dependencies with alternative solutions: Develop a common thread namespace switch package that can be utilized by projects requiring interaction with the host. This is necessary due to the absence of GNU utilities in Talos Linux. For example, Longhorn is unable to utilize nsenter and execute binaries in . Modify the invocation of the `iscsiadm` binary to execute within the `iscsid` process namespace, In cases where replacing binary invocations is not feasible, leverage the Talos Linux `kubelet` namespace specifically for Talos Linux. For other operating systems, maintain the existing approach. As a user running Talos Linux, I want to be able to use Longhorn as my storage solution. Currently, Longhorn does not support Talos Linux, so I am unable to utilize its features. With this enhancement, I will be able to install and use Longhorn on my Talos Linux cluster. Provision a Talos Linux cluster. Apply the required machine configurations to the cluster nodes: Install the iscsi-tool system extension. Install the util-linux system extension. Add the /var/lib/longhorn extra mount. Install Longhorn on the Talos Linux"
},
{
"data": "Once Longhorn is successfully installed, access and utilize Longhorn just like users on other supported operating systems. `None` Create a package in the repository that can be imported by other projects. The targeting function execute within a goroutine and utilize the `unix.Setns` to switch to different namespaces. As Golang being a multi-threaded language, to ensure exclusive use of the thread, lock the thread when switching to a different namespace. Once the targeting function completes its execution, the thread will switch back to the original namespace and unlock the thread to allow other goroutines to execute. The primary goal of this implementation is to replace the usage of GNU utilities with Go-based implementations where applicable, particularly in areas such as file handling. Longhorn currently assumes that `iscsid` is running on the host. However, in Talos Linux, `iscsid` runs as a Talos extension service in a different namespace. To address this, modify the `iscsiadm` binary invocation to execute within the `iscsid` namespace using `nsenter`. ~Instead of rewriting the `fstrim` binary dependency, leverage the existing `fstrim` binary within the `kubelet` namespace. However, considering that the `kubelet` namespace might not be present in other operating systems, Longhorn needs to switch to the host namespace to retrieve the OS distribution and determine if it is a Talos Linux. If the host is identified as Talos Linux, Longhorn can then switch to the `kubelet` namespace to utilize the existing `fstrim` binary.~ Keep the binary execution in the host namespace, following a discussion with @frezbo from @siderolabs. The Talos team has proposed an approach to include the fstrim in the host as part of `util-linux` extension. Keep the current implementation of the binary execution in the host namespace, as the `cryptsetup` binary comes in Talos Linux, for the . Replace the `nsenter` path with the `iscsid` proc path when invoking `iscsiadm` binary. Replace the `nsenter` path with the `kubelet` proc path when invoking other dependency binaries. This change will be applied only if the operating system distribution is identified as Talos Linux, as other distributions may not have a running `kubelet` process. For binary invocations are not part of the , replace them with appropriate Golang libraries. Utilize the to handle the necessary namespace switching when interacting with the host. Update existing test cases that rely on `nsenter` to either correct the namespace or replace them with appropriate Python libraries. Perform regression testing by running existing test cases to ensure that no regressions are introduced. Introduce a new pipeline for testing Longhorn on Talos Linux clusters in https://ci.longhorn.io/. `None` `None`"
}
] |
{
"category": "Runtime",
"file_name": "20230814-talos-linux-support.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
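The thread-locked namespace switch described in the proposal can be illustrated with the sketch below. It is not the actual go-common-libs implementation: a dedicated goroutine locks its OS thread, unshares the `CLONE_FS` attributes (otherwise `setns()` into a mount namespace fails with `EINVAL` because Go threads share them), enters the target namespace such as `/proc/1/ns/mnt`, runs the callback, and then lets the runtime retire the tainted thread. Root privileges are required.

```go
package main

import (
	"fmt"
	"os"
	"runtime"

	"golang.org/x/sys/unix"
)

// runInMountNS runs fn inside the mount namespace referenced by nsPath
// (e.g. /proc/1/ns/mnt for the host, or the iscsid process namespace).
func runInMountNS(nsPath string, fn func() error) error {
	errCh := make(chan error, 1)
	go func() {
		// Keep the thread locked for its whole life: it is discarded when
		// this goroutine returns, so the modified namespace never leaks
		// back into the scheduler's thread pool.
		runtime.LockOSThread()

		target, err := os.Open(nsPath)
		if err != nil {
			errCh <- fmt.Errorf("open %s: %w", nsPath, err)
			return
		}
		defer target.Close()

		// Stop sharing fs attributes with other runtime threads first.
		if err := unix.Unshare(unix.CLONE_FS); err != nil {
			errCh <- fmt.Errorf("unshare fs: %w", err)
			return
		}
		if err := unix.Setns(int(target.Fd()), unix.CLONE_NEWNS); err != nil {
			errCh <- fmt.Errorf("setns %s: %w", nsPath, err)
			return
		}
		errCh <- fn()
	}()
	return <-errCh
}

func main() {
	// Example: list /var/lib/longhorn as seen from the host mount namespace.
	err := runInMountNS("/proc/1/ns/mnt", func() error {
		entries, err := os.ReadDir("/var/lib/longhorn")
		if err != nil {
			return err
		}
		for _, e := range entries {
			fmt.Println(e.Name())
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```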
[
{
"data": "name: Proposal about: Describe a feature you are planning to implement title: '' labels: kind/design assignees: '' Describe what you are trying to solve <!-- A description of the current limitation/problem/challenge that you are experiencing. --> Describe the solution you have in mind <!-- A clear and concise description of what you want to happen. --> Describe how your solution impacts user flows <!-- Does your solution impact how users interact with Antrea? If your proposal introduces a new user-facing feature, describe how it can be consumed. --> Describe the main design/architecture of your solution <!-- A clear and concise description of what does your solution look like. Rich text and diagrams are preferred. --> Alternative solutions that you considered <!-- A list of the alternative solutions that you considered, and why they fell short. You can list the pros and cons of each solution. --> Test plan <!-- Describe what kind of tests you plan on adding to exercise your changes. --> Additional context <!-- Any other relevant information.-->"
}
] |
{
"category": "Runtime",
"file_name": "proposal.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "title: Launching Weave Net menu_order: 20 search_type: Documentation Weave Net provides a simple to deploy networking solution for containerized apps. Here, we describe how to manage a Weave container network using a sample application which consists of two simple `netcat` services deployed to containers on two separate hosts. This section contains the following topics: * * Before launching Weave Net and deploying your apps, ensure that Docker is on both hosts. On `$HOST1` run: host1$ weave launch host1$ eval $(weave env) host1$ docker run --name a1 -ti weaveworks/ubuntu Where, The first line runs Weave Net. The second line configures the Weave Net environment, so that containers launched via the Docker command line are automatically attached to the Weave network, and, The third line runs the using . Note If the first command results in an error like ``` Cannot connect to the Docker daemon. Is the docker daemon running on this host? ``` or ``` http:///var/run/docker.sock/v1.19/containers/create: dial unix/var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS? ``` then you likely need to be 'root' in order to connect to the Docker daemon. If so, run the above and all subsequent commands in a single root shell: ``` host1$ sudo -s host1# weave launch host1# eval $(weave env) host1# docker run --name a1 -ti weaveworks/ubuntu ``` Do not prefix individual commands with `sudo`, since some commands modify environment entries and hence they all need to be executed from the same shell. Weave Net must be launched once per host. The relevant container images will be pulled down from Docker Hub on demand during `weave launch`. You can also preload the images by running `weave setup`. Preloaded images are useful for automated deployments, and ensure there are no delays during later operations. If you are deploying an application that consists of more than one container to the same host, launch them one after another using `docker run`, as appropriate. To launch Weave Net on an additional host and create a peer connection, run the following: host2$ weave launch $HOST1 host2$ eval $(weave env) host2$ docker run --name a2 -ti weaveworks/ubuntu As noted above, the same steps are repeated for `$HOST2`. The only difference, besides the application containers name, is that `$HOST2` is told to peer with Weave Net on `$HOST1` during"
},
{
"data": "You can also peer with other hosts by specifying the IP address, and a `:port` by which `$HOST2` can reach `$HOST1`. Note: If there is a firewall between `$HOST1` and `$HOST2`, you must permit traffic to flow through TCP 6783 and UDP 6783/6784, which are Weaves control and data ports. There are a number of different ways that you can specify peers on a Weave network. You can launch Weave Net on `$HOST1` and then peer with `$HOST2`, or you can launch on `$HOST2` and peer with `$HOST1` or you can tell both hosts about each other at launch. The order in which peers are specified is not important. Weave Net automatically (re)connects to peers when they become available. To specify multiple peers, supply a list of addresses to which you want to connect, all separated by spaces. For example: host2$ weave launch <ip address> <ip address> Peers can also be dynamically added. See for more information. By default Weave Net listens on all host IPs (i.e. 0.0.0.0). This can be altered with the `--host` parameter to `weave launch`, for example, to ensure that Weave Net only listens on IPs on an internal network. Standard firewall rules can be deployed to restrict access to the Weave Net control and data ports. For communication across untrusted networks, connections can be . With two containers running on separate hosts, test that both containers are able to find and communicate with one another using ping. From the container started on `$HOST1`... root@a1:/# ping -c 1 -q a2 PING a2.weave.local (10.40.0.2) 56(84) bytes of data. a2.weave.local ping statistics 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms Similarly, in the container started on `$HOST2`... root@a2:/# ping -c 1 -q a1 PING a1.weave.local (10.32.0.2) 56(84) bytes of data. a1.weave.local ping statistics 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms The `netcat` service can be started using the following commands: root@a1:/# nc -lk -p 4422 and then connected to from the another container on `$HOST2` using: root@a2:/# echo 'Hello, world.' | nc a1 4422 Weave Net supports any protocol, and it doesn't have to be over TCP/IP. For example, a netcat UDP service can also be run by using the following: root@a1:/# nc -lu -p 5533 root@a2:/# echo 'Hello, world.' | nc -u a1 5533 See Also * *"
}
] |
{
"category": "Runtime",
"file_name": "using-weave.md",
"project_name": "Weave Net",
"subcategory": "Cloud Native Network"
}
|
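If a firewall sits between the hosts, the control and data ports mentioned above (TCP 6783, UDP 6783/6784) must be opened. As an illustration with iptables (adapt to whatever firewall tooling you use; `$HOST2` stands for the peer's address):

```
host1# iptables -A INPUT -p tcp --dport 6783 -s $HOST2 -j ACCEPT
host1# iptables -A INPUT -p udp -m multiport --dports 6783,6784 -s $HOST2 -j ACCEPT
```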
[
{
"data": "name: Bug Report about: Report a bug encountered <!-- Please use this template while reporting a bug and provide as much info as possible. Not doing so may result in your bug not being addressed in a timely manner. Thanks!--> What happend: What you expected to happen: How to reproduce it (as minimally and precisely as possible): Anything else we need to know?: Environment: Multus version image path and image ID (from 'docker images') Kubernetes version (use `kubectl version`): Primary CNI for Kubernetes cluster: OS (e.g. from /etc/os-release): File of '/etc/cni/net.d/' File of '/etc/cni/multus/net.d' NetworkAttachment info (use `kubectl get net-attach-def -o yaml`) Target pod yaml info (with annotation, use `kubectl get pod <podname> -o yaml`) Other log outputs (if you use multus logging)"
}
] |
{
"category": "Runtime",
"file_name": "bug-report.md",
"project_name": "Multus",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "layout: global title: NFS This guide describes the instructions to configure {:target=\"_blank\"} as Alluxio's under storage system. Network File System (NFS) is a distributed file system protocol that allows a client computer to access files over a network as if they were located on its local storage. NFS enables file sharing and remote file access between systems in a networked environment. You'll need to have a configured and running installation of NFS for the rest of this guide. If you need to get your own NFS installation up and running, we recommend taking a look at the {:target=\"_blank\"}. If you haven't already, please see before you get started. Make sure that you have a version of {:target=\"_blank\"} installed. Turn on remote login service so that `ssh localhost` can succeed. To avoid the need to repeatedly input the password, you can add the public SSH key for the host into `~/.ssh/authorizedkeys`. See {:target=\"blank\"} for more details. Before Alluxio master and workers can access the NFS server, mount points to the NFS server need to be created. Typically, all the machines will have the NFS shares located at the same path, such as `/mnt/nfs`. NFS client cache can interfere with the correct operation of Alluxio, specifically if Alluxio master creates a file on the NFS but the NFS client on the Alluxio worker continue to use the cached file listing, it will not see the newly created file. Thus we highly recommend setting the attribute cache timeout to 0. Please mount your nfs share like this. ```shell $ sudo mount -o actimeo=0 nfshost:/nfsdir /mnt/nfs ``` To use NFS as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Assume we have mounted NFS share at `/mnt/nfs` on all Alluxio masters and workers, modify `conf/alluxio-site.properties` to include: ```properties alluxio.master.hostname=localhost alluxio.dora.client.ufs.root=/mnt/nfs ``` Once you have configured Alluxio to NFS, try to see that everything works. Visit your NFS volume at `/mnt/nfs` to verify the files and directories created by Alluxio exist. For this test, you should see files named: ``` /mnt/nfs/defaulttestsfiles/BASICCACHETHROUGH ```"
}
] |
{
"category": "Runtime",
"file_name": "NFS.md",
"project_name": "Alluxio",
"subcategory": "Cloud Native Storage"
}
|
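To make the `actimeo=0` mount survive reboots on every Alluxio master and worker, an `/etc/fstab` entry along the following lines can be used; `nfs_host` and `/nfs_dir` are placeholders for your NFS server and export. After editing the file, `sudo mount -a` applies the entry without a reboot.

```
nfs_host:/nfs_dir   /mnt/nfs   nfs   actimeo=0,_netdev   0   0
```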
[
{
"data": "<!-- toc --> - - - <!-- /toc --> Add the new /metrics/resource endpoint in the virtual-kubelet to support the metrics server update for new Kubernetes versions `>=1.24` The Kubernetes metrics server now tries to get metrics from the kubelet using the new metrics endpoint , while Virtual Kubelet is still exposing the earlier metrics endpoint . This causes metrics to break when using virtual kubelet with newer Kubernetes versions (>=1.24). To support the new metrics server, this document proposes adding a new handler to handle the updated metrics endpoint. This will be an additive update, and the old endpoint will still be available to maintain backward compatibility with the older metrics server version. Support metrics for kubernetes version `>=1.24` through adding /metrics/resource endpoint handler. Ensure pod autoscaling works as expected with the newer kubernetes versions `>=1.24` as expected Add a new handler for `/metrics/resource` endpoint that calls a new `GetMetricsResource` method in the provider, which in-turn returns metrics using the prometheus `model.Samples` data structure as expected by the new metrics server. The provider will need to implement the `GetMetricsResource` method in order to add support for the new `/metrics/resource` endpoint with Kubernetes version >=1.24 Currently the virtual kubelet code uses the `PodStatsSummaryHandler` method to set up a http handler for serving pod metrics via the `/stats/summary` endpoint. To support the updated metrics server, we need to add another handler `PodMetricsResourceHandler` which can serve metrics via the `/metrics/resource` endpoint. The `PodMetricsResourceHandler` calls the new `GetMetricsResource` method of the provider to get the metrics from the specific provider. Add `GetMetricsResource` to `PodHandlerConfig` ```go type PodHandlerConfig struct { //nolint:golint RunInContainer ContainerExecHandlerFunc GetContainerLogs ContainerLogsHandlerFunc // GetPods is meant to enumerate the pods that the provider knows about GetPods PodListerFunc // GetPodsFromKubernetes is meant to enumerate the pods that the node is meant to be running GetPodsFromKubernetes PodListerFunc GetStatsSummary PodStatsSummaryHandlerFunc GetMetricsResource PodMetricsResourceHandlerFunc StreamIdleTimeout time.Duration StreamCreationTimeout time.Duration } ``` Add endpoint to `PodHandler` method ```go const MetricsResourceRouteSuffix = \"/metrics/resource\" func PodHandler(p PodHandlerConfig, debug bool) http.Handler { r := mux.NewRouter() // This matches the behaviour in the reference kubelet r.StrictSlash(true) if debug { r.HandleFunc(\"/runningpods/\", HandleRunningPods(p.GetPods)).Methods(\"GET\") } r.HandleFunc(\"/pods\", HandleRunningPods(p.GetPodsFromKubernetes)).Methods(\"GET\") r.HandleFunc(\"/containerLogs/{namespace}/{pod}/{container}\", HandleContainerLogs(p.GetContainerLogs)).Methods(\"GET\") r.HandleFunc( \"/exec/{namespace}/{pod}/{container}\", HandleContainerExec( p.RunInContainer, WithExecStreamCreationTimeout(p.StreamCreationTimeout), WithExecStreamIdleTimeout(p.StreamIdleTimeout), ), ).Methods(\"POST\", \"GET\") if p.GetStatsSummary != nil { f := HandlePodStatsSummary(p.GetStatsSummary) r.HandleFunc(\"/stats/summary\", f).Methods(\"GET\") r.HandleFunc(\"/stats/summary/\", f).Methods(\"GET\") } if p.GetMetricsResource != nil { f := HandlePodMetricsResource(p.GetMetricsResource) r.HandleFunc(MetricsResourceRouteSuffix, f).Methods(\"GET\") r.HandleFunc(MetricsResourceRouteSuffix+\"/\", f).Methods(\"GET\") } r.NotFoundHandler = 
http.HandlerFunc(NotFound) return r } ``` New `PodMetricsResourceHandler` method, that uses the new `PodMetricsResourceHandlerFunc` definition. ```go // PodMetricsResourceHandler creates an http handler for serving pod metrics. // // If the passed in handler func is nil this will create handlers which only // serves http.StatusNotImplemented func PodMetricsResourceHandler(f PodMetricsResourceHandlerFunc) http.Handler { if f == nil { return http.HandlerFunc(NotImplemented) } r := mux.NewRouter() h := HandlePodMetricsResource(f) r.Handle(MetricsResourceRouteSuffix, ochttp.WithRouteTag(h, \"PodMetricsResourceHandler\")).Methods(\"GET\") r.Handle(MetricsResourceRouteSuffix+\"/\", ochttp.WithRouteTag(h, \"PodMetricsResourceHandler\")).Methods(\"GET\") r.NotFoundHandler = http.HandlerFunc(NotFound) return r } ``` `HandlePodMetricsResource` method returns a HandlerFunc which serves the metrics encoded in prometheus' text format encoding as expected by the metrics-server ```go // HandlePodMetricsResource makes an HTTP handler for implementing the kubelet /metrics/resource endpoint func HandlePodMetricsResource(h PodMetricsResourceHandlerFunc) http.HandlerFunc { if h == nil { return NotImplemented } return handleError(func(w http.ResponseWriter, req *http.Request) error { metrics, err := h(req.Context()) if err != nil { if isCancelled(err) { return err } return errors.Wrap(err, \"error getting status from provider\") } b, err := json.Marshal(metrics) if err != nil { return errors.Wrap(err, \"error marshalling metrics\") } if _, err :="
},
{
"data": "err != nil { return errors.Wrap(err, \"could not write to client\") } return nil }) } ``` The `PodMetricsResourceHandlerFunc` returns the metrics data using Prometheus' `MetricFamily` data structure. More details are provided in the Data subsection ```go // PodMetricsResourceHandlerFunc defines the handler for getting pod metrics type PodMetricsResourceHandlerFunc func(context.Context) ([]*dto.MetricFamily, error) ``` The updated metrics server does not add any new fields to the metrics data but uses the Prometheus textparse series parser to parse and reconstruct the data structure. Currently virtual-kubelet is sending data to the server using the data structure. The Prometheus text parser expects a series of bytes as in the Prometheus data structure, similar to the test . Examples of how the new metrics are defined may be seen in the Kubernetes e2e test that calls the /metrics/resource endpoint , and the kubelet metrics defined in the Kubernetes/kubelet code . ```go var ( nodeCPUUsageDesc = metrics.NewDesc(\"nodecpuusagesecondstotal\", \"Cumulative cpu time consumed by the node in core-seconds\", nil, nil, metrics.ALPHA, \"\") nodeMemoryUsageDesc = metrics.NewDesc(\"nodememoryworkingsetbytes\", \"Current working set of the node in bytes\", nil, nil, metrics.ALPHA, \"\") containerCPUUsageDesc = metrics.NewDesc(\"containercpuusagesecondstotal\", \"Cumulative cpu time consumed by the container in core-seconds\", []string{\"container\", \"pod\", \"namespace\"}, nil, metrics.ALPHA, \"\") containerMemoryUsageDesc = metrics.NewDesc(\"containermemoryworkingsetbytes\", \"Current working set of the container in bytes\", []string{\"container\", \"pod\", \"namespace\"}, nil, metrics.ALPHA, \"\") podCPUUsageDesc = metrics.NewDesc(\"podcpuusagesecondstotal\", \"Cumulative cpu time consumed by the pod in core-seconds\", []string{\"pod\", \"namespace\"}, nil, metrics.ALPHA, \"\") podMemoryUsageDesc = metrics.NewDesc(\"podmemoryworkingsetbytes\", \"Current working set of the pod in bytes\", []string{\"pod\", \"namespace\"}, nil, metrics.ALPHA, \"\") resourceScrapeResultDesc = metrics.NewDesc(\"scrape_error\", \"1 if there was an error while getting container metrics, 0 otherwise\", nil, nil, metrics.ALPHA, \"\") containerStartTimeDesc = metrics.NewDesc(\"containerstarttime_seconds\", \"Start time of the container since unix epoch in seconds\", []string{\"container\", \"pod\", \"namespace\"}, nil, metrics.ALPHA, \"\") ) ``` The kubernetes/kubelet code implements Prometheus' interface which is used along with the k8s.io/component-base implementation of the interface in order to collect and return the metrics data using the Prometheus' data structure. The Gather method in the registry calls the kubelet collector's Collect method, and returns the data using the MetricFamily data structure. The metrics server expects metrics to be encoded in prometheus' text format, and the kubelet uses the http handler from prometheus' promhttp module which returns the metrics data encoded in prometheus' text format encoding. 
```go type KubeRegistry interface { // Deprecated RawMustRegister(...prometheus.Collector) // CustomRegister is our internal variant of Prometheus registry.Register CustomRegister(c StableCollector) error // CustomMustRegister is our internal variant of Prometheus registry.MustRegister CustomMustRegister(cs ...StableCollector) // Register conforms to Prometheus registry.Register Register(Registerable) error // MustRegister conforms to Prometheus registry.MustRegister MustRegister(...Registerable) // Unregister conforms to Prometheus registry.Unregister Unregister(collector Collector) bool // Gather conforms to Prometheus gatherer.Gather Gather() ([]*dto.MetricFamily, error) // Reset invokes the Reset() function on all items in the registry // which are added as resettables. Reset() } ``` Prometheus MetricsFamily data structure: ```go type MetricFamily struct { Name *string `protobuf:\"bytes,1,opt,name=name\" json:\"name,omitempty\"` Help *string `protobuf:\"bytes,2,opt,name=help\" json:\"help,omitempty\"` Type *MetricType `protobuf:\"varint,3,opt,name=type,enum=io.prometheus.client.MetricType\" json:\"type,omitempty\"` Metric []*Metric `protobuf:\"bytes,4,rep,name=metric\" json:\"metric,omitempty\"` XXX_NoUnkeyedLiteral struct{} `json:\"-\"` XXX_unrecognized []byte `json:\"-\"` XXX_sizecache int32 `json:\"-\"` } ``` Therefore the provider's GetMetricsResource method should use the same return type as the Gather method in the registry interface. In order to support the new metrics endpoint the Provider must implement the GetMetricsResource method with definition ```golang import ( dto \"github.com/prometheus/client_model/go\" \"context\" ) func GetMetricsResource(context.Context) ([]*dto.MetricsFamily, error) { ... } ``` Write a provider implementation for GetMetricsResource method in ACI Provider and deploy pods get metrics using kubectl Run end-to-end tests with the provider implementation"
}
] |
{
"category": "Runtime",
"file_name": "MetricsUpdateProposal.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
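As a rough illustration of what a provider-side `GetMetricsResource` could look like, the sketch below hand-builds a single `MetricFamily`. The `ExampleProvider` type, the pod name, and the hard-coded value are placeholders; a real provider would emit one sample per pod and container from its own usage statistics, following the metric names listed earlier (e.g. `pod_memory_working_set_bytes`).

```go
package provider

import (
	"context"

	dto "github.com/prometheus/client_model/go"
	"google.golang.org/protobuf/proto"
)

// ExampleProvider is a stand-in for a real virtual-kubelet provider.
type ExampleProvider struct{}

// GetMetricsResource returns the metrics in the []*dto.MetricFamily shape
// expected by the /metrics/resource handler described above.
func (p *ExampleProvider) GetMetricsResource(ctx context.Context) ([]*dto.MetricFamily, error) {
	label := func(name, value string) *dto.LabelPair {
		return &dto.LabelPair{Name: proto.String(name), Value: proto.String(value)}
	}

	podMemory := &dto.MetricFamily{
		Name: proto.String("pod_memory_working_set_bytes"),
		Help: proto.String("Current working set of the pod in bytes"),
		Type: dto.MetricType_GAUGE.Enum(),
		Metric: []*dto.Metric{{
			Label: []*dto.LabelPair{label("pod", "my-pod"), label("namespace", "default")},
			Gauge: &dto.Gauge{Value: proto.Float64(64 << 20)},
		}},
	}
	return []*dto.MetricFamily{podMemory}, nil
}
```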
[
{
"data": "oep-number: Pod Disruption Budget title: Pod Disruption Budget for cStor authors: \"@mittachaitu\" owners: \"@vishnuitta\" \"@kmova\" \"@AmitKumarDas\" editor: \"@mittachaitu\" creation-date: 2020-01-13 last-updated: 2019-01-13 status: Implementable - - - - - - - - - This proposal includes design details of the Pod Disruption Budget(PDB) for cStor pool pods. Not to allow more than quorum no.of HighAvailable(HA i.e volume provisioned with >= 3 replicas) volume replica pool pods to go down during valid voluntary disruptions. There are cases arising where OpenEBS users upgrade cluster or take out the multiple nodes for maintenance due to this HA volume is going to ReadOnly mode without informing users/admin[How? Multiple pool pod nodes are taken at a time]. Create a PodDisruptionBudget among the cStor pool pods where HA volume replicas exist. Below are examples of valid voluntary disruptions Draining a node for repair or upgrade. Draining a node from a cluster to scale the cluster down. Invalid/Other valid voluntary disruptions are not supported via PDB. Below are examples of invalid voluntary & involuntary disruptions Involuntary disruptions All kinds of system failures. Invalid voluntary disruptions Deleting the deployment or other controller that manages the pod. Updating a deployments pod template causing a restart. Directly deleting a pod (e.g. by accident). Removing a pod from a node to permit something else to fit on that node. Updating a deployments pod template causing a restarts. As an OpenEBS user, HA volumes shouldn't be disturbed for cluster upgrade or maintenance activities. Kubernetes will send volume creation request to CSI driver when PVC is created. CSI driver will send a cStorVolume provision request via CVC CR to cStorVolumeConfig(CVC) controller. CVC controller will schedule the replicas among the pools after that CVC controller will create a PDB among the pool pods where the volume replicas are scheduled. PDB will be created only if PDB among those pools is not yet available. If PDB already exists, a label will be added on CVC with the PDB name. Example: Provisioned a CSPC with five pool specs that intern creates 5 cStor Pools. Now, when 'n' no.of HA volumes are provisioned(with replica count as 3) then volume replicas can be schedule in any available cStor pools. After volume replicas are placed then cStorVolumeConfig controller will create PDB among those cStor pools if one such PDB doesn't exists. After successful creation of PDB corresponding cStorVolumeConfig will be updated with PDB label. So when `n` no.of HA volumes are provisioned then sigma(nCr)[where n is no.of pools, r is no.of replicas] no.of PDB's will be"
},
{
"data": "For example if `n` pools are created then at max nC3 + nC4 + nC5 PDB's will be created(Where 3, 4 & 5 are replicas). Note: Further optimization is done on PDB creation. PDB creation can be avoided when the current volume replica pools are already subset of any existing PDB then current CVC will refer to existing superset PDB(instead of creating new PDB). But not vice versa. Example: If PDB2 needs to be created among pool1, pool2 and pool3 this can be avoided if there is any existing PDB created among pool1, pool2, pool3 and pool4. PDB will be created only for HA volumes(i.e only for the volumes greater than or equal to 3 replica count). During deprovision of volume PDB will be deleted automatically if no cStorVolume refers to PDB. User has to create HighAvailable volumes on cStor then PDB will be created by default. User/Admin provisioned HA volume, Now CVC controller schedules the replicas in the cStor pools. After scheduling the replicas CVC controller will try to get PDB that was created among those cStor pools. If PDB doesn't exist then PDB will be created with following details: All the pool names where the volume replicas are scheduled will be added as labels on PDB[Why? to identify the PDB without iterating over all the existing PDBS]. Once PDB exists for those pools, PDB name will be added as label(openebs.io/pod-disruption-budget: <pdb_name>) on CVC[why? To identify how many volumes are referring particular PDB. So during deprovisioning time if there are no reference to PDB then PDB can be deleted]. During deprovision of volume CVC controller will verify is there any other cStor volume referring to this PDB(i.e PDB created among current volume replica pools). If no such volume exists then PDB will be destroyed and then finalizers will be removed from CVC or else finalizers will be removed from CVC. Example PDB yaml: ```yaml apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: name: cspc-name-<hash> labels: openebs.io/<cstor-pool1>: \"true\" openebs.io/<cstor-pool2>: \"true\" openebs.io/<cstor-pool3>: \"true\" spec: maxUnavailable: 1 selector: matchLabels: app: cstor-pool matchExpression: key: openebs.io/cstor-pool-instance Operator: In Values: {pool1, pool2, pool3} ``` PodDisruptionBudget is supported but not guaranteed! For more detailed conversation please take a look into doc https://docs.google.com/document/d/1Pq2ZDE7K1ttmqdJl4LgZW1B6JImxzJsLGrydOpjV7rs/edit?usp=sharing. Provision a cStorVolume with less than 3 replicas then verify PDB shouldn't be created. Provision a cStorVolume with greaterthan or equal to 3 replicas then PDB should be created among those pools. Create multiple volumes referring to same PDB then PDB should be deleted only after last referring volume deprovision time. Induce network error during PDB creation time then PDB should be created during next reconciliation time. Provision a cStorVolume with replica count equal to pool count(i.e count > 3) of CSPC and provision another HA volume and verify only one PDB should exist. NA NA NA"
}
] |
{
"category": "Runtime",
"file_name": "20200113-cstor-poddisruptionbudget.md",
"project_name": "OpenEBS",
"subcategory": "Cloud Native Storage"
}
|
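For a quick sanity check of the behaviour described above (for example, with n = 5 pools and only 3-replica volumes, at most 5C3 = 10 distinct PDBs can exist; allowing 4- and 5-replica volumes raises the bound to 5C3 + 5C4 + 5C5 = 10 + 5 + 1 = 16), the following commands can be used to observe the created PDBs and the CVC label. Resource short names may differ across OpenEBS releases; use `kubectl get cstorvolumeconfigs` if `cvc` is not recognized.

```shell
kubectl get pdb -n openebs --show-labels        # PDBs created among cStor pool pods
kubectl get cvc -n openebs --show-labels        # look for openebs.io/pod-disruption-budget=<pdb-name>
kubectl get pdb <pdb-name> -n openebs -o yaml   # maxUnavailable: 1 and the pool selector
```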
[
{
"data": "Stop using `projects.registry.vmware.com` for user-facing images. (, [@antoninbas]) Persist TLS certificate and key of antrea-controller and periodically sync the CA cert to improve robustness. (, [@tnqn]) Disable cgo for all Antrea binaries. (, [@antoninbas]) Disable `libcapng` to make logrotate run as root in UBI images to fix an OVS crash issue. (, [@xliuxu]) Fix nil pointer dereference when ClusterGroup/Group is used in NetworkPolicy controller. (, [@tnqn]) Fix race condition in agent Traceflow controller when a tag is associated again with a new Traceflow before the old Traceflow deletion event is processed. (, [@tnqn]) Change the maximum flags from 7 to 255 to fix the wrong TCP flags validation issue in Traceflow CRD. (, [@gran-vmv]) Update maximum number of buckets to 700 in OVS group add/insert_bucket message. (, [@hongliangl]) Use 65000 MTU upper bound for interfaces in encap mode in case of large packets being dropped unexpectedly. (, [@antoninbas]) Skip loading openvswitch kernel module if it's already built-in. (, [@antoninbas]) Support Egress using IPs from a subnet that is different from the default Node subnet . (, [@tnqn]) Refer to for more information about this feature. Add a migration tool to support migrating from other CNIs to Antrea. (, [@hjiajing]) Refer to for more information about this tool. Add L7 network flow export support in Antrea that enables exporting network flows with L7 protocol information. (, [@tushartathgur]) Refer to for more information about this feature. Add a new feature `NodeNetworkPolicy` that allows users to apply `ClusterNetworkPolicy` to Kubernetes Nodes. ( , [@hongliangl] [@Atish-iaf]) Refer to for more information about this feature. Add Antrea flexible IPAM support for the Multicast feature. (, [@ceclinux]) Support Talos clusters to run Antrea as the CNI, and add Talos to the K8s installers document. ( , [@antoninbas]) Support secondary network when the network configuration in `NetworkAttachmentDefinition` does not include IPAM configuration. (, [@jianjuns]) Add instructions to install Antrea in `encap` mode in AKS. (, [@antoninbas]) Change secondary network Pod controller to subscribe to CNIServer events to support bridging and VLAN network. (, [@jianjuns]) Use Antrea IPAM for secondary network support. (, [@jianjuns]) Create different images for antrea-agent and antrea-controller to minimize the overall image size, speeding up the startup of both antrea-agent and antrea-controller. ( , [@jainpulkit22]) Don't create tunnel interface (antrea-tun0) when using Wireguard encryption mode. ( , [@antoninbas]) Record an event when Egress IP assignment changes for better troubleshooting. (, [@jainpulkit22]) Update Windows documentation with clearer installation guide and instructions. (, [@antoninbas]) Enable IPv4/IPv6 forwarding on demand automatically to eliminate the need for user intervention or dependencies on other"
},
{
"data": "(, [@tnqn]) Add ability to skip loading kernel modules in antrea-agent to support some specialized distributions (e.g.: Talos). (, [@antoninbas]) Add NetworkPolicy rule name in Traceflow observation. (, [@Atish-iaf]) Use Traceflow API v1beta1 instead of the deprecated API version in `antctl traceflow`. (, [@Atish-iaf]) Replace `net.IP` with `netip.Addr` in FlowExporter which optimizes the memory usage and improves the performance of the FlowExporter. (, [@antoninbas]) Update kubemark from v1.18.4 to v1.29.0 for antrea-agent-simulator. (, [@luolanzone]) Upgrade CNI plugins to v1.4.0. ( , [@antoninbas] [@luolanzone]) Update the document for Egress feature's options and usage on AWS cloud. (, [@tnqn]) Add Flexible IPAM design details in `antrea-ipam.md`. (, [@gran-vmv]) Fix incorrect MTU configurations for the WireGuard encryption mode and GRE tunnel mode. ( , [@hjiajing] [@tnqn]) Prioritize L7 NetworkPolicy flows over `TrafficControl` to avoid a potential issue that a `TrafficControl` CR with a redirect action to the same Pod could bypass the L7 engine. (, [@hongliangl]) Delete OVS port and flows before releasing Pod IP. (, [@tnqn]) Store NetworkPolicy in filesystem as fallback data source to let antre-agent fallback to use the files if it can't connect to antrea-controller on startup. (, [@tnqn]) Enable Pod network after realizing initial NetworkPolicies to avoid traffic from/to Pods bypassing NetworkPolicy when antrea-agent restarts. (, [@tnqn]) Fix Clean-AntreaNetwork.ps1 invocation in Prepare-AntreaAgent.ps1 for containerized OVS on Windows. (, [@antoninbas]) Add missing space to kubelet args in Prepare-Node.ps1 so that kubelet can start successfully on Windows. (, [@antoninbas]) Fix `antctl trace-packet` command failure which is caused by missing arguments. (, [@luolanzone]) Support Local ExternalTrafficPolicy for Services with ExternalIPs when Antrea proxyAll mode is enabled. (, [@tnqn]) Set `net.ipv4.conf.antrea-gw0.arp_announce` to 1 to fix an ARP request leak when a Node or hostNetwork Pod accesses a local Pod and AntreaIPAM is enabled. (, [@gran-vmv]) Skip enforcement of ingress NetworkPolicies rules for hairpinned Service traffic (Pod accessing itself via a Service). ( , [@GraysonWu]) Add host-local IPAM GC on startup to avoid potential IP leak issue after antrea-agent restart. (, [@antoninbas]) Fix the CrashLookBackOff issue when using the UBI-based image. (, [@antoninbas]) Remove redundant log in `fillPodInfo`/`fillServiceInfo` to fix log flood issue, and update `DestinationServiceAddress` for deny connections. ( , [@yuntanghsu]) Enhance HNS network initialization on Windows to avoid some corner cases. (, [@XinShuYang]) Fix endpoint querier rule index in response to improve troubleshooting. (, [@qiyueyao]) Avoid unnecessary rule reconciliations in FQDN controller. (, [@Dyanngg]) Update Windows OVS download link to remove the invalid certificate preventing unsigned OVS driver installation. (, [@XinShuYang]) Fix IP annotation not working on StatefulSets for Antrea FlexibleIPAM. (, [@gran-vmv]) Add DHCP IP retries in `PrepareHNSNetwork` to fix potential IP retrieving failure. (, [@XinShuYang]) Revise `antctl mc deploy` to support Antrea Multi-cluster deployment update when the manifests are changed. (, [@luolanzone])"
}
] |
{
"category": "Runtime",
"file_name": "CHANGELOG-1.15.md",
"project_name": "Antrea",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Ensure the bug was not already reported by searching on GitHub under . If you're unable to find an open issue addressing the problem, . Be sure to include a title and clear description, as much relevant information as possible, and a code sample or an executable test case demonstrating the expected behavior that is not occurring. Check if you're using the latest version of Curve. If you're unable to find an open issue addressing the problem, . Please follow Write Test Code. Make sure you pass the Unit test and Integration test. The test lines coverage rate must 80% abovebranches coverage rate must 70% above. Please create pull request to opencurve/curve master branch. Please follow these steps to have your contribution considered by the maintainers: Follow all instructions in the template Follow the styleguides After you submit your pull request, verify that all status checks are passing While the prerequisites above must be satisfied prior to having your pull request reviewed, the reviewer(s) may ask you to complete additional design work, tests, or other changes before your pull request can be ultimately accepted."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Curve",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "The most common reason for this issue is improper use of struct tags (eg. `yaml` or `json`). Viper uses under the hood for unmarshaling values which uses `mapstructure` tags by default. Please refer to the library's documentation for using other struct tags. Viper installation seems to fail a lot lately with the following (or a similar) error: ``` cannot find package \"github.com/hashicorp/hcl/tree/hcl1\" in any of: /usr/local/Cellar/go/1.15.7_1/libexec/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOROOT) /Users/user/go/src/github.com/hashicorp/hcl/tree/hcl1 (from $GOPATH) ``` As the error message suggests, Go tries to look up dependencies in `GOPATH` mode (as it's commonly called) from the `GOPATH`. Viper opted to use to manage its dependencies. While in many cases the two methods are interchangeable, once a dependency releases new (major) versions, `GOPATH` mode is no longer able to decide which version to use, so it'll either use one that's already present or pick a version (usually the `master` branch). The solution is easy: switch to using Go Modules. Please refer to the on how to do that. tl;dr* `export GO111MODULE=on` This is a YAML 1.1 feature according to . Potential solutions are: Quoting values resolved as boolean Upgrading to YAML v3 (for the time being this is possible by passing the `viper_yaml3` tag to your build)"
}
] |
{
"category": "Runtime",
"file_name": "TROUBLESHOOTING.md",
"project_name": "Spiderpool",
"subcategory": "Cloud Native Network"
}
|
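The troubleshooting entry above says Viper decodes into structs with `mapstructure` tags, so `yaml`/`json` tags are ignored by `viper.Unmarshal`. A small, self-contained sketch of that behaviour follows; the `ServerConfig` struct and its keys are invented for illustration and are not taken from the Viper or Spiderpool code bases.

```go
// Hedged sketch: shows why yaml/json struct tags are ignored by viper.Unmarshal
// and mapstructure tags are needed instead. Config keys are invented.
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/spf13/viper"
)

type ServerConfig struct {
	// Wrong: viper.Unmarshal does not consult yaml tags by default.
	// ListenAddr string `yaml:"listen_addr"`

	// Right: mapstructure is what Viper hands to the underlying decoder.
	ListenAddr string `mapstructure:"listen_addr"`
	MaxConns   int    `mapstructure:"max_conns"`
}

func main() {
	raw := []byte("listen_addr: 0.0.0.0:8080\nmax_conns: 128\n")

	v := viper.New()
	v.SetConfigType("yaml")
	if err := v.ReadConfig(bytes.NewReader(raw)); err != nil {
		log.Fatal(err)
	}

	var cfg ServerConfig
	if err := v.Unmarshal(&cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg) // {ListenAddr:0.0.0.0:8080 MaxConns:128}
}
```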
[
{
"data": "<!-- This file was autogenerated via cilium-operator --cmdref, do not edit manually--> Access metric status of the operator ``` -h, --help help for metrics ``` - Run cilium-operator-generic - List all metrics for the operator"
}
] |
{
"category": "Runtime",
"file_name": "cilium-operator-generic_metrics.md",
"project_name": "Cilium",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "Contributions to VPP-Agent are welcome. We use the standard pull request model. You can either pick an open issue and assign it to yourself or open a new issue and discuss your feature. In any case, before submitting your pull request please check the and cover the newly added code with tests and documentation. The dependencies are managed using Go modules. On any change of dependencies, run `go mod tidy` to update `go.mod`/`go.sum` files. We follow the ."
}
] |
{
"category": "Runtime",
"file_name": "CONTRIBUTING.md",
"project_name": "Ligato",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "[toc] Cloud Clustera standard k8s cluster, located at the cloud side, providing the cloud computing capability. Edge Cluster: a standard k8s cluster, located at the edge side, providing the edge computing capability. Edge Node: a k8s node, located at the edge side, joining the cloud cluster using the framework, such as KubeEdge. Host Cluster: a selective cloud cluster, used to manage the cross-cluster communication. The 1st cluster deployed by FabEdge must be host cluster. Member Cluster: an edge cluster, registered into the host cluster, reports the network information to host cluster. CommunityK8S CRD defined by FabEdgethere are two types Node Type to define the communication between nodes within the same cluster Cluster Typeto define the cross-cluster communication Kubernetes (v1.18.81.22.7) Flannel (v0.14.0) or Calico (v3.16.5) KubeEdge v1.5or SuperEdgev0.5.0or OpenYurt v0.4.1 Make sure the following ports are allowed by firewall or security group. ESP(50)UDP/500UDP/4500 Collect the configuration of the current cluster ```shell $ curl -s http://116.62.127.76/installer/v0.5.0/getclusterinfo.sh | bash - This may take some time. Please wait. clusterDNS : 169.254.25.10 clusterDomain : root-cluster cluster-cidr : 10.233.64.0/18 service-cluster-ip-range : 10.233.0.0/18 ``` Label all edge nodes ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready <none> 22h v1.18.2 ``` Deploy FabEdge ```shell $ curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name beijing --cluster-role host --cluster-zone beijing --cluster-region china --connectors node1 --connector-public-addresses 10.22.46.47 --chart http://116.62.127.76/fabedge-0.5.0.tgz ``` > Note > --connectors: names of k8s nodes which connectors are located > --connector-public-addresses: ip addresses of k8s nodes which connectors are located Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` Create community for edges which need to communicate with each other ```shell $ cat > node-community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: beijing-edge-nodes # community name spec: members: 
beijing.edge1 # format{cluster name}.{edge node name} beijing.edge2 EOF $ kubectl apply -f"
},
{
"data": "``` Update the dependent configuration Update the dependent configuration If any member cluster, register it in the host cluster first, then deploy FabEdge in it. in the host clustercreate an edge cluster named \"shanghai\". Get the token for registration. ```shell $ cat > shanghai.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Cluster metadata: name: shanghai # cluster name EOF $ kubectl apply -f shanghai.yaml $ kubectl get cluster shanghai -o go-template --template='{{.spec.token}}' | awk 'END{print}' eyJomitted--9u0 ``` Label all edge nodes ```shell $ kubectl label node --overwrite=true edge1 node-role.kubernetes.io/edge= node/edge1 labeled $ kubectl label node --overwrite=true edge2 node-role.kubernetes.io/edge= node/edge2 labeled $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 22h v1.18.2 edge2 Ready edge 22h v1.18.2 master Ready master 22h v1.18.2 node1 Ready <none> 22h v1.18.2 ``` Deploy FabEdge in the member cluster ```shell curl 116.62.127.76/installer/v0.5.0/install.sh | bash -s -- --cluster-name shanghai --cluster-role member --cluster-zone shanghai --cluster-region china --connectors node1 --chart http://116.62.127.76/fabedge-0.5.0.tgz --server-serviceHub-api-server https://10.22.46.47:30304 --host-operator-api-server https://10.22.46.47:30303 --connector-public-addresses 10.22.46.26 --init-token eyJomitted--9u0 ``` > Note > --server-serviceHub-api-server: endpoint of serviceHub in the host cluster > --host-operator-api-server: endpoint of operator-api in the host cluster > --connector-public-addresses: ip address of k8s nodes on which connectors are located in the member cluster > --init-token: token when the member cluster is added in the host cluster Verify the deployment ```shell $ kubectl get no NAME STATUS ROLES AGE VERSION edge1 Ready edge 5h22m v1.18.2 edge2 Ready edge 5h21m v1.18.2 master Ready master 5h29m v1.18.2 node1 Ready connector 5h23m v1.18.2 $ kubectl get po -n kube-system NAME READY STATUS RESTARTS AGE calico-kube-controllers-8b5ff5d58-lqg66 1/1 Running 0 17h calico-node-7dkwj 1/1 Running 0 16h calico-node-q95qp 1/1 Running 0 16h coredns-86978d8c6f-qwv49 1/1 Running 0 17h kube-apiserver-master 1/1 Running 0 17h kube-controller-manager-master 1/1 Running 0 17h kube-proxy-ls9d7 1/1 Running 0 17h kube-proxy-wj8j9 1/1 Running 0 17h kube-scheduler-master 1/1 Running 0 17h metrics-server-894c64767-f4bvr 2/2 Running 0 17h nginx-proxy-node1 1/1 Running 0 17h nodelocaldns-fmx7f 1/1 Running 0 17h nodelocaldns-kcz6b 1/1 Running 0 17h nodelocaldns-pwpm4 1/1 Running 0 17h $ kubectl get po -n fabedge NAME READY STATUS RESTARTS AGE fabdns-7b768d44b7-bg5h5 1/1 Running 0 9m19s fabedge-agent-edge1 2/2 Running 0 8m18s fabedge-cloud-agent-hxjtb 1/1 Running 4 9m19s fabedge-connector-8c949c5bc-7225c 2/2 Running 0 8m18s fabedge-operator-dddd999f8-2p6zn 1/1 Running 0 9m19s service-hub-74d5fcc9c9-f5t8f 1/1 Running 0 9m19s ``` in the host clustercreate a community for all clusters which need to communicate with each other ```shell $ cat > community.yaml << EOF apiVersion: fabedge.io/v1alpha1 kind: Community metadata: name: all-clusters spec: members: shanghai.connector # format: {cluster name}.connector beijing.connector # format: {cluster name}.connector EOF $ kubectl apply -f community.yaml ``` the DNS components need to be modified if `nodelocaldns` is usedmodify `nodelocaldns` only, if SuperEdge `edge-coredns` is usedmodify `coredns` and `edge-coredns`, modify `coredns` for others Update `nodelocaldns` ```shell $ kubectl -n kube-system edit cm 
nodelocaldns global:53 { errors cache 30 reload bind 169.254.25.10 # local bind address forward . 10.233.12.205 # cluster-ip of fab-dns service } ``` Update `edge-coredns` ```shell $ kubectl -n edge-system edit cm edge-coredns global { forward ."
},
{
"data": "# cluset-ip of fab-dns service } ``` Update `coredns ` ```shell $ kubectl -n kube-system edit cm coredns global { forward . 10.109.72.43 # cluset-ip of fab-dns service } ``` Reboot corednsedge-coredns or nodelocaldns to take effect Make sure `nodelocaldns` is running on all edge nodes ```shell $ kubectl get po -n kube-system -o wide | grep nodelocaldns nodelocaldns-cz5h2 1/1 Running 0 56m 10.22.46.47 master <none> <none> nodelocaldns-nk26g 1/1 Running 0 47m 10.22.46.23 edge1 <none> <none> nodelocaldns-wqpbw 1/1 Running 0 17m 10.22.46.20 node1 <none> <none> ``` Update `edgecore` for all edge nodes ```shell $ vi /etc/kubeedge/config/edgecore.yaml edgeMesh: enable: false edged: enable: true cniBinDir: /opt/cni/bin cniCacheDirs: /var/lib/cni/cache cniConfDir: /etc/cni/net.d networkPluginName: cni networkPluginMTU: 1500 clusterDNS: 169.254.25.10 # clusterDNS of getclusterinfo script output clusterDomain: \"root-cluster\" # clusterDomain of getclusterinfo script output ``` > clusterDNSif no nodelocaldnscoredns service can be used. Reboot `edgecore` on all edge nodes ```shell $ systemctl restart edgecore ``` Verify the serviceif not readyto rebuild the Pod ```shell $ kubectl get po -n edge-system application-grid-controller-84d64b86f9-29svc 1/1 Running 0 15h application-grid-wrapper-master-pvkv8 1/1 Running 0 15h application-grid-wrapper-node-dqxwv 1/1 Running 0 15h application-grid-wrapper-node-njzth 1/1 Running 0 15h edge-coredns-edge1-5758f9df57-r27nf 0/1 Running 8 15h edge-coredns-edge2-84fd9cfd98-79hzp 0/1 Running 8 15h edge-coredns-master-f8bf9975c-77nds 1/1 Running 0 15h edge-health-7h29k 1/1 Running 3 15h edge-health-admission-86c5c6dd6-r65r5 1/1 Running 0 15h edge-health-wcptf 1/1 Running 3 15h tunnel-cloud-6557fcdd67-v9h96 1/1 Running 1 15h tunnel-coredns-7d8b48c7ff-hhc29 1/1 Running 0 15h tunnel-edge-dtb9j 1/1 Running 0 15h tunnel-edge-zxfn6 1/1 Running 0 15h $ kubectl delete po -n edge-system edge-coredns-edge1-5758f9df57-r27nf pod \"edge-coredns-edge1-5758f9df57-r27nf\" deleted $ kubectl delete po -n edge-system edge-coredns-edge2-84fd9cfd98-79hzp pod \"edge-coredns-edge2-84fd9cfd98-79hzp\" deleted ``` By default the master node has the taint of `node-role.kubernetes.io/master:NoSchedule`which prevents fabedge-cloud-agent to start. It caused pods on the master node cannot communicate with the other Pods on the other nodes. If needed, to modify the DamonSet of fabedge-cloud-agent to tolerant this taint Regardless the cluster role, add all Pod and Service network segments of all other clusters to the cluster with Calico, which prevents Calico from doing source address translation. one example with the clusters of: host (Calico) + member1 (Calico) + member2 (Flannel) on the host (Calico) cluster, to add the addresses of the member (Calico) cluster and the member(Flannel) cluster on the member1 (Calico) cluster, to add the addresses of the host (Calico) cluster and the member(Flannel) cluster on the member2 (Flannel) cluster, there is NO any configuration required. 
```shell $ cat > cluster-cidr-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-cluster-cidr spec: blockSize: 26 cidr: 10.233.64.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f cluster-cidr-pool.yaml $ cat > service-cluster-ip-range-pool.yaml << EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: cluster-beijing-service-cluster-ip-range spec: blockSize: 26 cidr: 10.233.0.0/18 natOutgoing: false disabled: true ipipMode: Always EOF $ calicoctl.sh create -f service-cluster-ip-range-pool.yaml ``` If asymmetric routes exist, disable rp_filter on all cloud nodes ```shell $ for i in /proc/sys/net/ipv4/conf/*/rp_filter; do echo 0 | sudo tee $i; done $ sudo vi /etc/sysctl.conf net.ipv4.conf.default.rp_filter=0 net.ipv4.conf.all.rp_filter=0 ``` If you see the error `Error: cannot re-use a name that is still in use`, uninstall FabEdge and try again. ```shell $ helm uninstall -n fabedge fabedge release \"fabedge\" uninstalled ```"
}
] |
{
"category": "Runtime",
"file_name": "get-started-v0.5.0.md",
"project_name": "FabEdge",
"subcategory": "Cloud Native Network"
}
|
[
{
"data": "This document describes the versioning policy for this repository. This policy is designed so the following goals can be achieved. Users are provided a codebase of value that is stable and secure. Versioning of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be used. Versions will comply with [semver 2.0](https://semver.org/spec/v2.0.0.html) with the following exceptions. New methods may be added to exported API interfaces. All exported interfaces that fall within this exception will include the following paragraph in their public documentation. > Warning: methods may be added to this interface in minor releases. If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/otel/v2`, `require go.opentelemetry.io/otel/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/otel/v2/trace\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/otel/[email protected]`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. Modules will be used to encapsulate signals and components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API will be versioned with a major version greater than `v0`. The decision to make a module stable will be made on a case-by-case basis by the maintainers of this project. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. All stable modules that use the same major version number will use the same entire version number. Stable modules may be released with an incremented minor or patch version even though that module has not been changed, but rather so that it will remain at the same version as other stable modules that did undergo change. When an experimental module becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable modules as well as the newly stable module being released. Versioning of the associated [contrib repository](https://github.com/open-telemetry/opentelemetry-go-contrib) of this project will be idiomatic of a Go project using [Go modules](https://github.com/golang/go/wiki/Modules). [Semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) will be"
},
{
"data": "Versions will comply with . If a module is version `v2` or higher, the major version of the module must be included as a `/vN` at the end of the module paths used in `go.mod` files (e.g., `module go.opentelemetry.io/contrib/instrumentation/host/v2`, `require go.opentelemetry.io/contrib/instrumentation/host/v2 v2.0.1`) and in the package import path (e.g., `import \"go.opentelemetry.io/contrib/instrumentation/host/v2\"`). This includes the paths used in `go get` commands (e.g., `go get go.opentelemetry.io/contrib/instrumentation/host/[email protected]`. Note there is both a `/v2` and a `@v2.0.1` in that example. One way to think about it is that the module name now includes the `/v2`, so include `/v2` whenever you are using the module name). If a module is version `v0` or `v1`, do not include the major version in either the module path or the import path. In addition to public APIs, telemetry produced by stable instrumentation will remain stable and backwards compatible. This is to avoid breaking alerts and dashboard. Modules will be used to encapsulate instrumentation, detectors, exporters, propagators, and any other independent sets of related components. Experimental modules still under active development will be versioned at `v0` to imply the stability guarantee defined by . > Major version zero (0.y.z) is for initial development. Anything MAY > change at any time. The public API SHOULD NOT be considered stable. Mature modules for which we guarantee a stable public API and telemetry will be versioned with a major version greater than `v0`. Experimental modules will start their versioning at `v0.0.0` and will increment their minor version when backwards incompatible changes are released and increment their patch version when backwards compatible changes are released. Stable contrib modules cannot depend on experimental modules from this project. All stable contrib modules of the same major version with this project will use the same entire version as this project. Stable modules may be released with an incremented minor or patch version even though that module's code has not been changed. Instead the only change that will have been included is to have updated that modules dependency on this project's stable APIs. When an experimental module in contrib becomes stable a new stable module version will be released and will include this now stable module. The new stable module version will be an increment of the minor version number and will be applied to all existing stable contrib modules, this project's modules, and the newly stable module being released. Contrib modules will be kept up to date with this project's releases. Due to the dependency contrib modules will implicitly have on this project's modules the release of stable contrib modules to match the released version number will be staggered after this project's release. There is no explicit time guarantee for how long after this projects release the contrib release will be. Effort should be made to keep them as close in time as"
},
{
"data": "No additional stable release in this project can be made until the contrib repository has a matching stable release. No release can be made in the contrib repository after this project's stable release except for a stable release of the contrib repository. GitHub releases will be made for all releases. Go modules will be made available at Go package mirrors. To better understand the implementation of the above policy the following example is provided. This project is simplified to include only the following modules and their versions: `otel`: `v0.14.0` `otel/trace`: `v0.14.0` `otel/metric`: `v0.14.0` `otel/baggage`: `v0.14.0` `otel/sdk/trace`: `v0.14.0` `otel/sdk/metric`: `v0.14.0` These modules have been developed to a point where the `otel/trace`, `otel/baggage`, and `otel/sdk/trace` modules have reached a point that they should be considered for a stable release. The `otel/metric` and `otel/sdk/metric` are still under active development and the `otel` module depends on both `otel/trace` and `otel/metric`. The `otel` package is refactored to remove its dependencies on `otel/metric` so it can be released as stable as well. With that done the following release candidates are made: `otel`: `v1.0.0-RC1` `otel/trace`: `v1.0.0-RC1` `otel/baggage`: `v1.0.0-RC1` `otel/sdk/trace`: `v1.0.0-RC1` The `otel/metric` and `otel/sdk/metric` modules remain at `v0.14.0`. A few minor issues are discovered in the `otel/trace` package. These issues are resolved with some minor, but backwards incompatible, changes and are released as a second release candidate: `otel`: `v1.0.0-RC2` `otel/trace`: `v1.0.0-RC2` `otel/baggage`: `v1.0.0-RC2` `otel/sdk/trace`: `v1.0.0-RC2` Notice that all module version numbers are incremented to adhere to our versioning policy. After these release candidates have been evaluated to satisfaction, they are released as version `v1.0.0`. `otel`: `v1.0.0` `otel/trace`: `v1.0.0` `otel/baggage`: `v1.0.0` `otel/sdk/trace`: `v1.0.0` Since both the `go` utility and the Go module system support [the semantic versioning definition of precedence](https://semver.org/spec/v2.0.0.html#spec-item-11), this release will correctly be interpreted as the successor to the previous release candidates. Active development of this project continues. The `otel/metric` module now has backwards incompatible changes to its API that need to be released and the `otel/baggage` module has a minor bug fix that needs to be released. The following release is made: `otel`: `v1.0.1` `otel/trace`: `v1.0.1` `otel/metric`: `v0.15.0` `otel/baggage`: `v1.0.1` `otel/sdk/trace`: `v1.0.1` `otel/sdk/metric`: `v0.15.0` Notice that, again, all stable module versions are incremented in unison and the `otel/sdk/metric` package, which depends on the `otel/metric` package, also bumped its version. This bump of the `otel/sdk/metric` package makes sense given their coupling, though it is not explicitly required by our versioning policy. As we progress, the `otel/metric` and `otel/sdk/metric` packages have reached a point where they should be evaluated for stability. The `otel` module is reintegrated with the `otel/metric` package and the following release is made: `otel`: `v1.1.0-RC1` `otel/trace`: `v1.1.0-RC1` `otel/metric`: `v1.1.0-RC1` `otel/baggage`: `v1.1.0-RC1` `otel/sdk/trace`: `v1.1.0-RC1` `otel/sdk/metric`: `v1.1.0-RC1` All the modules are evaluated and determined to a viable stable release. They are then released as version `v1.1.0` (the minor version is incremented to indicate the addition of new signal). 
`otel`: `v1.1.0` `otel/trace`: `v1.1.0` `otel/metric`: `v1.1.0` `otel/baggage`: `v1.1.0` `otel/sdk/trace`: `v1.1.0` `otel/sdk/metric`: `v1.1.0`"
}
] |
{
"category": "Runtime",
"file_name": "VERSIONING.md",
"project_name": "Kata Containers",
"subcategory": "Container Runtime"
}
|
[
{
"data": "- - - - - - Main changes and tasks for OCP are: On OCP / OKD, the Operating System is Managed by the Cluster OCP Imposes This requires everything to run with the least privilege possible. For the moment every component has been given access to run as higher privilege. Something to circle back on is network polices and which components can have their privileges reduced without impacting functionality. The UI probably can be for example. openshift/oauth-proxy for authentication to the Longhorn Ui Currently Scoped to Authenticated Users that can delete a longhorn settings object. Since the UI it self is not protected, network policies will need to be created to prevent namespace <--> namespace communication against the pod or service object directly. Anyone with access to the UI Deployment can remove the route restriction. (Namespace Scoped Admin) Option to use separate disk in /var/mnt/longhorn & MachineConfig file to mount /var/mnt/longhorn Adding finalizers for mount propagation General Feature/Issue Thread 4.10 / 1.23: 4.10.0-0.okd-2022-03-07-131213 to 4.10.0-0.okd-2022-07-09-073606 Tested, No Known Issues 4.11 / 1.24: 4.11.0-0.okd-2022-07-27-052000 to 4.11.0-0.okd-2022-11-19-050030 Tested, No Known Issues 4.11.0-0.okd-2022-12-02-145640, 4.11.0-0.okd-2023-01-14-152430: Workaround: 4.12 / 1.25: 4.12.0-0.okd-2022-12-05-210624 to 4.12.0-0.okd-2023-01-20-101927 Tested, No Known Issues 4.12.0-0.okd-2023-01-21-055900 to 4.12.0-0.okd-2023-02-18-033438: Workaround: - 4.12.0-0.okd-2023-03-05-022504 - 4.12.0-0.okd-2023-04-16-041331: Tested, No Known Issues 4.13 / 1.26: 4.13.0-0.okd-2023-05-03-001308 - 4.13.0-0.okd-2023-08-18-135805: Tested, No Known Issues 4.14 / 1.27: 4.14.0-0.okd-2023-08-12-022330 - 4.14.0-0.okd-2023-10-28-073550: Tested, No Known Issues Only required if you require additional customizations, such as storage-less nodes, or secondary disks. Label each node for storage with: ```bash oc get nodes --no-headers | awk '{print $1}' export NODE=\"worker-0\" oc label node \"${NODE}\" node.longhorn.io/create-default-disk=true ``` On the storage nodes create a filesystem with the label longhorn: ```bash oc get nodes --no-headers | awk '{print $1}' export NODE=\"worker-0\" oc debug node/${NODE} -t -- chroot /host bash lsblk export DRIVE=\"sdb\" #vdb sudo mkfs.ext4 -L longhorn /dev/${DRIVE} ``` Note: If you add New Nodes After the below Machine Config is applied, you will need to also reboot the node. The Secondary Drive needs to be mounted on every boot. 
Save the Concents and Apply the MachineConfig with `oc apply -f`: This will trigger an machine config profile update and reboot all worker nodes on the cluster ```yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 71-mount-storage-worker spec: config: ignition: version: 3.2.0 systemd: units: name: var-mnt-longhorn.mount enabled: true contents: | [Unit] Before=local-fs.target [Mount] Where=/var/mnt/longhorn What=/dev/disk/by-label/longhorn Options=rw,relatime,discard [Install] WantedBy=local-fs.target ``` Label and annotate storage nodes like this: ```bash oc get nodes --no-headers | awk '{print $1}' export NODE=\"worker-0\" oc annotate node ${NODE} --overwrite node.longhorn.io/default-disks-config='[{\"path\":\"/var/mnt/longhorn\",\"allowScheduling\":true}]' oc label node ${NODE} node.longhorn.io/create-default-disk=config ``` Minimum Adjustments Required ```yaml openshift: oauthProxy: repository: quay.io/openshift/origin-oauth-proxy tag: 4.14 # Use Your OCP/OKD 4.X Version, Current Stable is 4.14 openshift: enabled: true ui: route: \"longhorn-ui\" port: 443 proxy: 8443 ``` ```bash helm template longhorn --namespace longhorn-system --values values.yaml --no-hooks > longhorn.yaml oc create namespace longhorn-system -o yaml --dry-run=client | oc apply -f - oc apply -f longhorn.yaml -n longhorn-system ``` <https://docs.openshift.com/container-platform/4.11/storage/persistent_storage/persistent-storage-iscsi.html> <https://docs.okd.io/4.11/storage/persistent_storage/persistent-storage-iscsi.html> okd 4.5: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-702690613> okd 4.6: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-765884631> oauth-proxy: <https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml> <https://github.com/longhorn/longhorn/issues/1831>"
}
] |
{
"category": "Runtime",
"file_name": "ocp-readme.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
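The guide above labels and annotates storage nodes with `oc`. For completeness, a hedged client-go sketch of the same node patch is shown below; the kubeconfig path and node name `worker-0` are placeholders, and this snippet is not part of Longhorn or the OpenShift tooling.

```go
// Hedged sketch: programmatic equivalent of the `oc label` / `oc annotate`
// node-preparation commands shown above, using client-go. Paths and names
// are placeholders.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of:
	//   oc label node worker-0 node.longhorn.io/create-default-disk=config
	//   oc annotate node worker-0 node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
	patch := []byte(`{"metadata":{` +
		`"labels":{"node.longhorn.io/create-default-disk":"config"},` +
		`"annotations":{"node.longhorn.io/default-disks-config":` +
		`"[{\"path\":\"/var/mnt/longhorn\",\"allowScheduling\":true}]"}}}`)

	_, err = client.CoreV1().Nodes().Patch(context.TODO(), "worker-0",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("node worker-0 labeled and annotated for Longhorn")
}
```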
[
{
"data": "Longhorn should support Kubernetes Cluster Autoscaler. Currently, Longhorn pods are . This proposes to introduce a new global setting `kubernetes-cluster-autoscaler-enabled` that will annotate Longhorn components and also add logic for instance-manager PodDisruptionBudget management. https://github.com/longhorn/longhorn/issues/2203 Longhorn should block CA from scaling down if a node met ANY condition: Any volume attached Contains a backing image manager pod Contains a share manager pod Longhorn should not block CA from scaling down if a node met ALL conditions: All volume detached and there is another schedulable node with volume replica and replica IM PDB. Not contain a backing image manager pod Not contain a share manager pod CA setup. CA blocked by kube-system components. CA blocked by backing image manager pod. (TODO) CA blocked by share manager pod. (TODO) Set `kubernetes-cluster-autoscaler-enabled` adds `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation to Longhorn pods that are not backed by a controller, or with local storage volume mounts. To avoid data loss, Longhorn does not annotate the backing image manager and share manager pods. Currently, Longhorn creates instance-manager PDBs for replica/engine regardless of the volume state. During scale down, CA tries to find a removable node but failed by those instance-manager PDBs. We can add IM PDB handling to create and retained when the PDB is required: There are volumes/engines running on the node. We need to guarantee that the volumes won't crash. The only available/valid replica of a volume is on the node. Here we need to prevent the volume data from being lost. Before the enhancement, CA will be blocked by Pods that are not backed by a controller (engine/replica instance manager). Pods with (longhorn-ui, longhorn-csi-plugin, csi-attacher, csi-provisioner, csi-resizer, csi-snapshotter). After enhancement, instance manager PDB will be actively managed by Longhorn: Creates all engine/replica instance manager PDB when the volume is attached. Delete engine instance manager PDB when the volume is detached. Delete but keep 1 replica instance manager PDB when the volume is detached. the user can set a new global setting `kubernetes-cluster-autoscaler-enabled` to unblock CA scaling. This allows Longhorn to annotate Longhorn-managed deployments and engine/replica instance manager pods with `cluster-autoscaler.kubernetes.io/safe-to-evict`. Configure the setting via Longhorn UI or kubectl. Ensure all volume replica count is set to more than 1. CA is not blocked by Longhorn components when the node doesn't contain volume replica, backing image manager pod, and share manager pod. Engine/Replica instance-manager PDB will block the node if the volume is attached. Replica instance-manager PDB will block the node when CA tries to delete the last node with the volume replica. `None` Add new global setting `Kubernetes Cluster Autoscaler Enabled (Experimental)`. The setting is `boolean`. The default value is `false`. When setting `kubernetes-cluster-autoscaler-enabled` is `true`, Longhorn will add annotation"
},
{
"data": "for the following pods: The engine and replica instance-manager pods because those are not backed by a controller and use local storage mounts. The deployment workloads are managed by the longhorn manager and using any local storage mount. The managed components are labeled with `longhorn.io/managed-by: longhorn-manager`. No change to the logic to cleanup PDB if instance-manager doesn't exist. Engine IM PDB: Delete PDB if volumes are detached; There is no instance process in IM (im.Status.Instance). The same logic applies when a node is un-schedulable. Node is un-schedulable when marked in spec or with CA tainted `ToBeDeletedByClusterAutoscaler`; Create PDB if volumes are attached; there are instance processes in IM (im.Status.Instance). Replica IM PDB: Delete PDB if setting `allow-node-drain-with-last-healthy-replica` is enabled. Delete PDB if volumes are detached; There is no instance process in IM (im.Status.Instance) There are other schedulable nodes with healthy volume replica and have replica IM PDB. Delete PDB when a node is un-schedulable. Node is un-schedulable when marked in spec or with CA tainted `ToBeDeletedByClusterAutoscaler`; Check if the condition is met to delete PDB (same check as to when volumes are detached). Enqueue the replica instance-manager of another schedulable node with the volume replica. Delete PDB. Create PDB if volumes are attached: There are instance processes in IM (im.Status.Instance). Create PDB when volumes are detached; There is no instance process in IM (im.Status.Instance) The replica has been started. There are no other schedulable nodes with healthy volume replica and have replica IM PDB. Given Cluster with Kubernetes cluster-autoscaler. And Longhorn installed. And Set `kubernetes-cluster-autoscaler-enabled` to `true`. And Create deployment with cpu request. ``` resources: limits: cpu: 300m memory: 30Mi requests: cpu: 150m memory: 15Mi ``` When Trigger CA to scale-up by increase deployment replicas. (double the node number, not including host node) ``` 10 math.ceil(allocatable_millicpu/cpu_requestnode_number/10) ``` Then Cluster should have double the node number. When Trigger CA to scale-down by decrease deployment replicas. (original node number) Then Cluster should have original node number. Given Cluster with Kubernetes cluster-autoscaler. And Longhorn installed. And Set `kubernetes-cluster-autoscaler-enabled` to `true`. And Create volume. And Attach the volume. And Write some data to volume. And Detach the volume. And Create deployment with cpu request. When Trigger CA to scale-up by increase deployment replicas. (double the node number, not including host node) Then Cluster should have double the node number. When Annotate new nodes with `cluster-autoscaler.kubernetes.io/scale-down-disabled`. (this ensures scale-down only the old nodes) And Trigger CA to scale-down by decrease deployment replicas. (original node number) Then Cluster should have original node number + 1 blocked node. When Attach the volume to a new node. This triggers replica rebuild. And Volume data should be the same. And Detach the volume. Then Cluster should have original node number. And Volume data should be the same. Similar to `Scenario: test CA scale down all nodes containing volume replicas`. `N/A` `N/A`"
}
] |
{
"category": "Runtime",
"file_name": "20220408-support-kubernetes-ca.md",
"project_name": "Longhorn",
"subcategory": "Cloud Native Storage"
}
|
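The proposal above hinges on adding the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation to eligible pods when the `kubernetes-cluster-autoscaler-enabled` setting is true. A minimal Go sketch of that step follows; the helper name `markSafeToEvict` and its return convention are assumptions for illustration, not the longhorn-manager implementation.

```go
// Hedged sketch, not longhorn-manager code: mark a pod as evictable by Cluster
// Autoscaler when the kubernetes-cluster-autoscaler-enabled setting is true.
package autoscaler

import (
	corev1 "k8s.io/api/core/v1"
)

const safeToEvictAnnotation = "cluster-autoscaler.kubernetes.io/safe-to-evict"

// markSafeToEvict reports whether the pod object was modified, so a caller
// could decide whether an update needs to be written back to the API server.
func markSafeToEvict(pod *corev1.Pod, caSettingEnabled bool) bool {
	if !caSettingEnabled {
		return false
	}
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	if pod.Annotations[safeToEvictAnnotation] == "true" {
		return false
	}
	pod.Annotations[safeToEvictAnnotation] = "true"
	return true
}
```

Per the proposal, this would only be applied to Longhorn-managed deployments and engine/replica instance-manager pods, never to backing image manager or share manager pods.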
[
{
"data": "::: tip Note For a quick experience, please refer to . ::: ``` bash $ git clone https://github.com/cubefs/cubefs.git $ source build/cgo_env.sh $ make blobstore ``` After successful building, the following executable files will be generated in the `build/bin/blobstore` directory: ```text build/bin/blobstore access clustermgr proxy scheduler blobnode blobstore-cli ``` Due to the interdependence of the modules, the deployment should be carried out in the following order to avoid deployment failure caused by service dependencies. Supported Platforms > Linux Dependent Components > > > Language Environment > (1.17.x) ::: tip Note Deploying Clustermgr requires at least three nodes to ensure service availability. ::: The node startup example is as follows. The node startup requires changing the corresponding configuration file and ensuring that the associated configuration between the cluster nodes is consistent. Startup (three-node cluster) ```bash nohup ./clustermgr -f clustermgr.conf nohup ./clustermgr -f clustermgr1.conf nohup ./clustermgr -f clustermgr2.conf ``` Cluster configuration of the three nodes, example node 1: `clustermgr.conf` ```json { \"bind_addr\":\":9998\", \"cluster_id\":1, \"idc\":[\"z0\"], \"chunk_size\": 16777216, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/clustermgr.log\" }, \"auth\": { \"enable_auth\": false, \"secret\": \"testsecret\" }, \"region\": \"test-region\", \"db_path\":\"./run/db0\", \"codemodepolicies\": [ {\"modename\":\"EC3P3\",\"minsize\":0,\"maxsize\":50331648,\"sizeratio\":1,\"enable\":true} ], \"raft_config\": { \"server_config\": { \"nodeId\": 1, \"listen_port\": 10110, \"raftwaldir\": \"./run/raftwal0\" }, \"raftnodeconfig\":{ \"node_protocol\": \"http://\", \"members\": [ {\"id\":1, \"host\":\"127.0.0.1:10110\", \"learner\": false, \"node_host\":\"127.0.0.1:9998\"}, {\"id\":2, \"host\":\"127.0.0.1:10111\", \"learner\": false, \"node_host\":\"127.0.0.1:9999\"}, {\"id\":3, \"host\":\"127.0.0.1:10112\", \"learner\": false, \"node_host\":\"127.0.0.1:10000\"}] } }, \"volumemgrconfig\": { \"allocatable_size\": 10485760 }, \"diskmgrconfig\": { \"refreshintervals\": 10, \"rack_aware\":false, \"host_aware\":false } } ``` Set the initial value of background task according to the mentioned in ```bash $> curl -X POST http://127.0.0.1:9998/config/set -d '{\"key\":\"balance\",\"value\":\"false\"}' --header 'Content-Type: application/json' ``` `proxy` depends on the Kafka component and requires the creation of corresponding topics for `blobdeletetopic`, `shardrepairtopic`, and `shardrepairpriority_topic` in advance. ::: tip Note Kafka can also use other topic names, but it is necessary to ensure that the Kafka of the Proxy and Scheduler service modules are consistent. ::: ```bash bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic blobdelete shardrepair shardrepairpriority ``` Start the service. To ensure availability, at least one proxy node needs to be deployed in each IDC. 
```bash nohup ./proxy -f proxy.conf & ``` Example `proxy.conf`: ```json { \"bind_addr\": \":9600\", \"host\": \"http://127.0.0.1:9600\", \"idc\": \"z0\", \"cluster_id\": 1, \"clustermgr\": { \"hosts\": [ \"http://127.0.0.1:9998\", \"http://127.0.0.1:9999\", \"http://127.0.0.1:10000\" ] }, \"auth\": { \"enable_auth\": false, \"secret\": \"test\" }, \"mq\": { \"blobdeletetopic\": \"blob_delete\", \"shardrepairtopic\": \"shard_repair\", \"shardrepairprioritytopic\": \"shardrepair_prior\", \"msg_sender\": { \"broker_list\": [\"127.0.0.1:9092\"] } }, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/proxy.log\" } } ``` Start the service. ```bash nohup ./scheduler -f scheduler.conf & ``` Example `scheduler.conf`: Note that the Scheduler module is deployed on a single node. ```json { \"bind_addr\": \":9800\", \"cluster_id\": 1, \"services\": { \"leader\": 1, \"node_id\": 1, \"members\": {\"1\": \"127.0.0.1:9800\"} }, \"service_register\": { \"host\": \"http://127.0.0.1:9800\", \"idc\": \"z0\" }, \"clustermgr\": { \"hosts\": [\"http://127.0.0.1:9998\", \"http://127.0.0.1:9999\", \"http://127.0.0.1:10000\"] }, \"kafka\": { \"broker_list\": [\"127.0.0.1:9092\"] }, \"blob_delete\": { \"maxbatchsize\": 10, \"batchintervals\": 2, \"delete_log\": { \"dir\": \"./run/delete_log\" } }, \"shard_repair\": { \"orphanshardlog\": { \"dir\": \"./run/orphanshardlog\" } }, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/scheduler.log\" }, \"task_log\": { \"dir\": \"./run/task_log\" } } ``` Create the relevant directories in the compiled `blobnode` binary directory. ```bash mkdir -p ./run/disks/disk{1..8} # Each directory needs to mount a disk to ensure data collection accuracy ``` Start the service. ```bash nohup ./blobnode -f blobnode.conf ``` Example `blobnode.conf`: ```json { \"bind_addr\": \":8899\", \"cluster_id\": 1, \"idc\": \"z0\", \"rack\": \"testrack\", \"host\": \"http://127.0.0.1:8899\", \"droppedbidrecord\": { \"dir\": \"./run/logs/blobnode_dropped\" }, \"disks\": [ { \"path\": \"./run/disks/disk1\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk2\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk3\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk4\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\":"
},
{
"data": "\"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk6\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk7\", \"auto_format\": true, \"max_chunks\": 1024 }, { \"path\": \"./run/disks/disk8\", \"auto_format\": true, \"max_chunks\": 1024 } ], \"clustermgr\": { \"hosts\": [ \"http://127.0.0.1:9998\", \"http://127.0.0.1:9999\", \"http://127.0.0.1:10000\" ] }, \"disk_config\":{ \"diskreservedspace_B\":1 }, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/blobnode.log\" } } ``` ::: tip Note The Access module is a stateless service node and can be deployed on multiple nodes. ::: Start the service. ```bash nohup ./access -f access.conf ``` Example `access.conf`: ```json { \"bind_addr\": \":9500\", \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/access.log\" }, \"stream\": { \"idc\": \"z0\", \"cluster_config\": { \"region\": \"test-region\", \"clusters\":[ {\"cluster_id\":1,\"hosts\":[\"http://127.0.0.1:9998\",\"http://127.0.0.1:9999\",\"http://127.0.0.1:10000\"]}] } } } ``` After the deployment of Clustermgr and BlobNode fails, residual data needs to be cleaned up before redeployment to avoid registration disk failure or data display errors. The command is as follows: ```bash rm -f -r ./run/disks/disk/. rm -f -r ./run/disks/disk/ rm -f -r /tmp/raftdb0 rm -f -r /tmp/volumedb0 rm -f -r /tmp/clustermgr rm -f -r /tmp/normaldb0 rm -f -r /tmp/normalwal0 ``` Clustermgr adds `learner` nodes. ::: tip Note Learner nodes are generally used for data backup and fault recovery. ::: Enable the Clustermgr service on the new node and add the member information of the current node to the configuration of the new service. Call the to add the newly started learner node to the cluster. ```bash curl -X POST --header 'Content-Type: application/json' -d '{\"peerid\": 4, \"host\": \"127.0.0.1:10113\",\"nodehost\": \"127.0.0.1:10001\", \"member_type\": 1}' \"http://127.0.0.1:9998/member/add\" ``` After the addition is successful, the data will be automatically synchronized. The reference configuration is as follows: `clustermgr-learner.conf`: ```json { \"bind_addr\":\":10001\", \"cluster_id\":1, \"idc\":[\"z0\"], \"chunk_size\": 16777216, \"log\": { \"level\": \"info\", \"filename\": \"./run/logs/clustermgr3.log\" }, \"auth\": { \"enable_auth\": false, \"secret\": \"testsecret\" }, \"region\": \"test-region\", \"db_path\":\"./run/db3\", \"codemodepolicies\": [ {\"modename\":\"EC3P3\",\"minsize\":0,\"maxsize\":50331648,\"sizeratio\":1,\"enable\":true} ], \"raft_config\": { \"server_config\": { \"nodeId\": 4, \"listen_port\": 10113, \"raftwaldir\": \"./run/raftwal3\" }, \"raftnodeconfig\":{ \"node_protocol\": \"http://\", \"members\": [ {\"id\":1, \"host\":\"127.0.0.1:10110\", \"learner\": false, \"node_host\":\"127.0.0.1:9998\"}, {\"id\":2, \"host\":\"127.0.0.1:10111\", \"learner\": false, \"node_host\":\"127.0.0.1:9999\"}, {\"id\":3, \"host\":\"127.0.0.1:10112\", \"learner\": false, \"node_host\":\"127.0.0.1:10000\"}, {\"id\":4, \"host\":\"127.0.0.1:10113\", \"learner\": true, \"node_host\": \"127.0.0.1:10001\"}] } }, \"diskmgrconfig\": { \"refreshintervals\": 10, \"rack_aware\":false, \"host_aware\":false } } ``` After all modules are deployed successfully, the upload verification needs to be delayed for a period of time to wait for the successful creation of the volume. Refer to for details. Modify the `ebsAddr` configuration item in the Master configuration file () to the Consul address registered by the Access node. Refer to . 
Encoding strategy: commonly used strategy table | Category | Description | |--|--| | EC12P4 | {N: 12, M: 04, L: 0, AZCount: 1, PutQuorum: 15, GetQuorum: 0, MinShardSize: 2048} | | EC3P3 | {N: 3, M: 3, L: 0, AZCount: 1, PutQuorum: 5, GetQuorum: 0, MinShardSize: 2048} | | EC16P20L2 | {N: 16, M: 20, L: 2, AZCount: 2, PutQuorum: 34, GetQuorum: 0, MinShardSize: 2048} | | EC6P10L2 | {N: 6, M: 10, L: 2, AZCount: 2, PutQuorum: 14, GetQuorum: 0, MinShardSize: 2048} | | EC12P9 | {N: 12, M: 9, L: 0, AZCount: 3, PutQuorum: 20, GetQuorum: 0, MinShardSize: 2048} | | EC15P12 | {N: 15, M: 12, L: 0, AZCount: 3, PutQuorum: 24, GetQuorum: 0, MinShardSize: 2048} | | EC6P6 | {N: 6, M: 6, L: 0, AZCount: 3, PutQuorum: 11, GetQuorum: 0, MinShardSize: 2048} | Where N: number of data blocks, M: number of check blocks, L: number of local check blocks, AZCount: number of AZs PutQuorum: `(N + M) / AZCount + N \\<= PutQuorum \\<= M + N` MinShardSize: minimum shard size, the data is continuously filled into the `0-N` shards. If the data size is less than `MinShardSize*N`, it is aligned with zero bytes. See for details."
}
] |
{
"category": "Runtime",
"file_name": "blobstore.md",
"project_name": "CubeFS",
"subcategory": "Cloud Native Storage"
}
|
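The deployment walkthrough above toggles a Clustermgr background task with a `curl` call to `/config/set`. The sketch below issues the same request from Go with `net/http`; the host `127.0.0.1:9998` and the `balance=false` payload are simply the values from the example configuration, not fixed requirements.

```go
// Hedged sketch: the Clustermgr background-task switch shown with curl above
// (POST /config/set), issued from Go with the standard library.
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	body := []byte(`{"key":"balance","value":"false"}`)
	resp, err := http.Post(
		"http://127.0.0.1:9998/config/set", // Clustermgr bind address from the example
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}
```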
[
{
"data": "](https://pkg.go.dev/github.com/virtual-kubelet/virtual-kubelet) Virtual Kubelet is an open source implementation that masquerades as a kubelet for the purposes of connecting Kubernetes to other APIs. This allows the nodes to be backed by other services like ACI, AWS Fargate, , etc. The primary scenario for VK is enabling the extension of the Kubernetes API into serverless container platforms like ACI and Fargate, though we are open to others. However, it should be noted that VK is explicitly not intended to be an alternative to Kubernetes federation. Virtual Kubelet features a pluggable architecture and direct use of Kubernetes primitives, making it much easier to build on. We invite the Kubernetes ecosystem to join us in empowering developers to build upon our base. Join our slack channel named, virtual-kubelet, within the . The best description is \"Kubernetes API on top, programmable back.\" + + + + + + + The diagram below illustrates how Virtual-Kubelet works. Virtual Kubelet is focused on providing a library that you can consume in your project to build a custom Kubernetes node agent. See godoc for up to date instructions on consuming this project: https://godoc.org/github.com/virtual-kubelet/virtual-kubelet There are implementations available for , see those repos for details on how to deploy. create, delete and update pods container logs, exec, and metrics get pod, pods and pod status capacity node addresses, node capacity, node daemon endpoints operating system bring your own virtual network This project features a pluggable provider interface developers can implement that defines the actions of a typical kubelet. This enables on-demand and nearly instantaneous container compute, orchestrated by Kubernetes, without having VM infrastructure to manage and while still leveraging the portable Kubernetes API. Each provider may have its own configuration file, and required environmental variables. Providers must provide the following functionality to be considered a supported integration with Virtual Kubelet. Provides the back-end plumbing necessary to support the lifecycle management of pods, containers and supporting resources in the context of Kubernetes. Conforms to the current API provided by Virtual Kubelet. Does not have access to the Kubernetes API Server and has a well-defined callback mechanism for getting data like secrets or configmaps. Admiralty Multi-Cluster Scheduler mutates annotated pods into \"proxy pods\" scheduled on a virtual-kubelet node and creates corresponding \"delegate pods\" in remote clusters (actually running the containers). A feedback loop updates the statuses and annotations of the proxy pods to reflect the statuses and annotations of the delegate pods. You can find more details in the . Alibaba Cloud ECI(Elastic Container Instance) is a service that allow you run containers without having to manage servers or clusters. You can find more details in the . The alibaba ECI provider will read configuration file specified by the `--provider-config` flag. The example configure file is in the . The Azure Container Instances Provider allows you to utilize both typical pods on VMs and Azure Container instances simultaneously in the same Kubernetes"
},
{
"data": "You can find detailed instructions on how to set it up and how to test it in the . The Azure connector can use a configuration file specified by the `--provider-config` flag. The config file is in TOML format, and an example lives in `providers/azure/example.toml`. is a technology that allows you to run containers without having to manage servers or clusters. The AWS Fargate provider allows you to deploy pods to . Your pods on AWS Fargate have access to VPC networking with dedicated ENIs in your subnets, public IP addresses to connect to the internet, private IP addresses to connect to your Kubernetes cluster, security groups, IAM roles, CloudWatch Logs and many other AWS services. Pods on Fargate can co-exist with pods on regular worker nodes in the same Kubernetes cluster. Easy instructions and a sample configuration file is available in the . Please note that this provider is not currently supported. is a provider that runs pods in cloud instances, allowing a Kubernetes cluster to transparently scale workloads into a cloud. When a pod is scheduled onto the virtual node, Kip starts a right-sized cloud instance for the pod's workload and dispatches the pod onto the instance. When the pod is finished running, the cloud instance is terminated. When workloads run on Kip, your cluster size naturally scales with the cluster workload, pods are strongly isolated from each other and the user is freed from managing worker nodes and strategically packing pods onto nodes. HashiCorp provider for Virtual Kubelet connects your Kubernetes cluster with Nomad cluster by exposing the Nomad cluster as a node in Kubernetes. By using the provider, pods that are scheduled on the virtual Nomad node registered on Kubernetes will run as jobs on Nomad clients as they would on a Kubernetes node. For detailed instructions, follow the guide . provides an abstraction for the execution of a Kubernetes pod on any remote resource that has the capability to manage a container's execution lifecycle. The use cases that drove the initial development of the tool are the Slurm-powered HPC centers, regardless the plugin based design is enabling several additional use cases to provide Kubernetes-API based access to infrastracture that cannot host a Kubelet processes. InterLink is a Virtual Kubelet provider that can manage container lifecycle through a well defined API specification, allowing for any resource provider to be integrated with a simple http server and a handful of methods. In other words, this is an attempt to streamline the process of creating custom Virtual Kubelet providers, avoiding the need for any resource provider to implement its own version of a Kubelet workflow, which would require having some domain expertise in the Kubernetes internals. For detailed instruction, follow the guide . implements a provider for Virtual Kubelet designed to transparently offload pods and services to \"peered\" Kubernetes remote"
},
{
"data": "Liqo is capable of discovering neighbor clusters (using DNS, mDNS) and \"peer\" with them, or in other words, establish a relationship to share part of the cluster resources. When a cluster has established a peering, a new instance of the Liqo Virtual Kubelet is spawned to seamlessly extend the capacity of the cluster, by providing an abstraction of the resources of the remote cluster. The provider combined with the Liqo network fabric extends the cluster networking by enabling Pod-to-Pod traffic and multi-cluster east-west services, supporting endpoints on both clusters. For detailed instruction, follow the guide OpenStack provider for Virtual Kubelet connects your Kubernetes cluster with OpenStack in order to run Kubernetes pods on OpenStack Cloud. Your pods on OpenStack have access to OpenStack tenant networks because they have Neutron ports in your subnets. Each pod will have private IP addresses to connect to other OpenStack resources (i.e. VMs) within your tenant, optionally have floating IP addresses to connect to the internet, and bind-mount Cinder volumes into a path inside a pod's container. ```bash ./bin/virtual-kubelet --provider=\"openstack\" ``` For detailed instructions, follow the guide . is contributed by [tencent games](https://game.qq.com), which is provider for Virtual Kubelet connects your Kubernetes cluster with other Kubernetes clusters. This provider enables us extending Kubernetes to an unlimited one. By using the provider, pods that are scheduled on the virtual node registered on Kubernetes will run as jobs on other Kubernetes clusters' nodes. Providers consume this project as a library which implements the core logic of a Kubernetes node agent (Kubelet), and wire up their implementation for performing the neccessary actions. There are 3 main interfaces: When pods are created, updated, or deleted from Kubernetes, these methods are called to handle those actions. ```go type PodLifecycleHandler interface { // CreatePod takes a Kubernetes Pod and deploys it within the provider. CreatePod(ctx context.Context, pod *corev1.Pod) error // UpdatePod takes a Kubernetes Pod and updates it within the provider. UpdatePod(ctx context.Context, pod *corev1.Pod) error // DeletePod takes a Kubernetes Pod and deletes it from the provider. DeletePod(ctx context.Context, pod *corev1.Pod) error // GetPod retrieves a pod by name from the provider (can be cached). GetPod(ctx context.Context, namespace, name string) (*corev1.Pod, error) // GetPodStatus retrieves the status of a pod by name from the provider. GetPodStatus(ctx context.Context, namespace, name string) (*corev1.PodStatus, error) // GetPods retrieves a list of all pods running on the provider (can be cached). GetPods(context.Context) ([]*corev1.Pod, error) } ``` There is also an optional interface `PodNotifier` which enables the provider to asynchronously notify the virtual-kubelet about pod status changes. If this interface is not implemented, virtual-kubelet will periodically check the status of all pods. It is highly recommended to implement `PodNotifier`, especially if you plan to run a large number of pods. ```go type PodNotifier interface { // NotifyPods instructs the notifier to call the passed in function when // the pod status changes. // // NotifyPods should not block callers. NotifyPods(context.Context,"
},
{
"data": "} ``` `PodLifecycleHandler` is consumed by the `PodController` which is the core logic for managing pods assigned to the node. ```go pc, _ := node.NewPodController(podControllerConfig) // <-- instatiates the pod controller pc.Run(ctx) // <-- starts watching for pods to be scheduled on the node ``` NodeProvider is responsible for notifying the virtual-kubelet about node status updates. Virtual-Kubelet will periodically check the status of the node and update Kubernetes accordingly. ```go type NodeProvider interface { // Ping checks if the node is still active. // This is intended to be lightweight as it will be called periodically as a // heartbeat to keep the node marked as ready in Kubernetes. Ping(context.Context) error // NotifyNodeStatus is used to asynchronously monitor the node. // The passed in callback should be called any time there is a change to the // node's status. // This will generally trigger a call to the Kubernetes API server to update // the status. // // NotifyNodeStatus should not block callers. NotifyNodeStatus(ctx context.Context, cb func(*corev1.Node)) } ``` Virtual Kubelet provides a `NaiveNodeProvider` that you can use if you do not plan to have custom node behavior. `NodeProvider` gets consumed by the `NodeController`, which is core logic for managing the node object in Kubernetes. ```go nc, _ := node.NewNodeController(nodeProvider, nodeSpec) // <-- instantiate a node controller from a node provider and a kubernetes node spec nc.Run(ctx) // <-- creates the node in kubernetes and starts up he controller ``` One of the roles of a Kubelet is to accept requests from the API server for things like `kubectl logs` and `kubectl exec`. Helpers for setting this up are provided If you want to use HPA(Horizontal Pod Autoscaler) in your cluster, the provider should implement the `GetStatsSummary` function. Then metrics-server will be able to get the metrics of the pods on virtual-kubelet. Otherwise, you may see `No metrics for pod ` on metrics-server, which means the metrics of the pods on virtual-kubelet are not collected. Running the unit tests locally is as simple as `make test`. Check out for more details. Kubernetes 1.9 introduces a new flag, `ServiceNodeExclusion`, for the control plane's Controller Manager. Enabling this flag in the Controller Manager's manifest allows Kubernetes to exclude Virtual Kubelet nodes from being added to Load Balancer pools, allowing you to create public facing services with external IPs without issue. Cluster requirements: Kubernetes 1.9 or above Enable the ServiceNodeExclusion flag, by modifying the Controller Manager manifest and adding `--feature-gates=ServiceNodeExclusion=true` to the command line arguments. Virtual Kubelet follows the . Sign the to be able to make Pull Requests to this repo. Monthly Virtual Kubelet Office Hours are held at 10am PST on the second Thursday of every month in this . Check out the calendar . Our google drive with design specifications and meeting notes are . We also have a community slack channel named virtual-kubelet in the Kubernetes slack. You can also connect with the Virtual Kubelet community via the ."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Virtual Kubelet",
"subcategory": "Container Runtime"
}
|
[
{
"data": "Installation ============ The LTTng libraries that ship with Ubuntu 12.04 have been very buggy, and the generated header files using `lttng-gen-tp` have needed to be fixed just to compile in the Ceph tree. The packages available in Ubuntu 14.04 seem to work alright, and for older versions please install LTTng from the LTTng PPA. https://launchpad.net/~lttng/+archive/ppa Then install as normal apt-get install lttng-tools liblttng-ust-dev Add/Update Provider =================== Add tracepoint definitions for the provider into a `.tp` file. Documentation on defining a tracepoint can be found in `man lttng-ust`. By convention files are named according to the logical sub-system they correspond to (e.g. `mutex.tp`, `pg.tp`). And add a C source file to be compiled into the tracepoint provider shared object, in which `TRACEPOINT_DEFINE` should be defined. See for details. Place the `.tp` and the `.c` files into the `src/tracing` directory and modify the CMake file `src/tracing/CMakeLists.txt` accordingly. Function Instrumentation ======================== Ceph supports instrumentation using GCC's `-finstrument-functions` flag. Supported CMake flags are: `-DWITHOSDINSTRUMENT_FUNCTIONS=ON`: instrument OSD code Note that this instrumentation adds an extra function call on each function entry and exit of Ceph code. This option is currently only supported with GCC. Using it with Clang has no effect. The only function tracing implementation at the moment is done using LTTng UST. In order to use it, Ceph needs to be configured with LTTng using `-DWITH_LTTNG=ON`. can be used to generate flame charts/graphs and other metrics. It is also possible to use to write custom analysis. The entry and exit tracepoints are called `lttngustcygprofile:funcenter` and `lttngustcygprofile:funcexit` respectively. The payload variable `addr` holds the address of the function called and the payload variable `call_site` holds the address where it is called. `nm` can be used to resolve function addresses (`addr` to function name)."
}
] |
{
"category": "Runtime",
"file_name": "README.md",
"project_name": "Ceph",
"subcategory": "Cloud Native Storage"
}
|
[
{
"data": "Currently, the Velero CLI tool has a `install` command that configures numerous major and minor aspects of Velero. As a result, the combined set of flags for this `install` command makes it hard to intuit and reason about the different Velero components. This document proposes changes to improve the UX for installation and configuration in a way that would make it easier for the user to discover what needs to be configured by looking at what is available in the CLI rather then having to rely heavily on our documentation for the usage. At the same time, it is expected that the documentation update to reflect these changes will also make the documentation flow easier to follow. This proposal prioritizes discoverability and self-documentation over minimizing length or number of commands and flags. Split flags currently under the `velero install` command into multiple commands, and group flags under commands in a way that allows a good level of discovery and self-documentation Maintain compatibility with gitops practices (i.e. ability to generate a full set of yaml for install that can be stored in source control) Have a clear path for deprecating commands Introduce new CLI features Propose changes to the CLI that go beyond the functionality of install and configure Optimize for shorter length or number of commands/flags This document proposes users could benefit from a more intuitive and self-documenting CLI setup as compared to our existing CLI UX. Ultimately, it is proposed that a recipe-style CLI flow for installation, configuration and use would greatly contribute to this purpose. Also, the `install` command currently can be reused to update Velero deployment configurations. For server and restic related install and configurations, settings will be moved to under `velero config`. The naming and organization of the proposed new CLI commands below have been inspired on the `kubectl` commands, particularly `kubectl set` and `kubectl config`. These are improvements that are part of this proposal: Go over all flags and document what is optional, what is required, and default values. Capitalize all help messages The organization of the commands follows this format: ``` velero [resource] [operation] [flags] ``` To conform with Velero's current practice: commands will also work by swapping the operation/resource. the \"object\" of a command is an argument, and flags are strictly for modifiers (example: `backup get my-backup` and not `backup get --name my-backup`) All commands will include the `--dry-run` flag, which can be used to output yaml files containing the commands' configuration for resource creation or patching. `--dry-run generate resources, but don't send them to the cluster. Use with -o. Optional.` The `--help` and `--output` flags will also be included for all commands, omitted below for brevity. Below is the proposed set of new commands to setup and configure Velero. 1) `velero config` ``` server Configure up the namespace, RBAC, deployment, etc., but does not add any external plugins, BSL/VSL definitions. This would be the minimum set of commands to get the Velero server up and running and ready to accept other configurations. --label-columns stringArray a comma-separated list of labels to be displayed as columns --show-labels show labels in the last column --image string image to use for the Velero and restic server pods. Optional. (default \"velero/velero:latest\") --pod-annotations mapStringString annotations to add to the Velero and restic pods. Optional. 
Format is key1=value1,key2=value2 --restore-only run the server in restore-only mode."
},
{
"data": "--pod-cpu-limit string CPU limit for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"1000m\") --pod-cpu-request string CPU request for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"500m\") --pod-mem-limit string memory limit for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"256Mi\") --pod-mem-request string memory request for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"128Mi\") --client-burst int maximum number of requests by the server to the Kubernetes API in a short period of time (default 30) --client-qps float32 maximum number of requests per second by the server to the Kubernetes API once the burst limit has been reached (default 20) --default-backup-ttl duration how long to wait by default before backups can be garbage collected (default 720h0m0s) --disable-controllers strings list of controllers to disable on startup. Valid values are backup,backup-sync,schedule,gc,backup-deletion,restore,download-request,restic-repo,server-status-request --log-format the format for log output. Valid values are text, json. (default text) --log-level the level at which to log. Valid values are debug, info, warning, error, fatal, panic. (default info) --metrics-address string the address to expose prometheus metrics (default \":8085\") --plugin-dir string directory containing Velero plugins (default \"/plugins\") --profiler-address string the address to expose the pprof profiler (default \"localhost:6060\") --restore-only run in a mode where only restores are allowed; backups, schedules, and garbage-collection are all disabled. DEPRECATED: this flag will be removed in v2.0. Use read-only backup storage locations instead. --restore-resource-priorities strings desired order of resource restores; any resource not in the list will be restored alphabetically after the prioritized resources (default [namespaces,storageclasses,persistentvolumes,persistentvolumeclaims,secrets,configmaps,serviceaccounts,limitranges,pods,replicaset,customresourcedefinitions]) --terminating-resource-timeout duration how long to wait on persistent volumes and namespaces to terminate during a restore before timing out (default 10m0s) restic Configuration for restic operations. --default-prune-frequency duration how often 'restic prune' is run for restic repositories by default. Optional. --pod-annotations mapStringString annotations to add to the Velero and restic pods. Optional. Format is key1=value1,key2=value2 --pod-cpu-limit string CPU limit for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-cpu-request string CPU request for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-mem-limit string memory limit for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-mem-request string memory request for restic pod. A value of \"0\" is treated as unbounded. Optional. 
(default \"0\") --timeout duration how long backups/restores of pod volumes should be allowed to run before timing out (default 1h0m0s) repo get Get restic repositories ``` The `velero config server` command will create the following resources: ``` Namespace Deployment backups.velero.io backupstoragelocations.velero.io deletebackuprequests.velero.io downloadrequests.velero.io podvolumebackups.velero.io podvolumerestores.velero.io resticrepositories.velero.io restores.velero.io schedules.velero.io serverstatusrequests.velero.io volumesnapshotlocations.velero.io ``` Note: Velero will maintain the `velero server` command run by the Velero pod, which starts the Velero server deployment. 2) `velero backup-location` Commands/flags for backup locations. ``` set --default string sets the default backup storage location (default \"default\") (NEW, -- was `server --default-backup-storage-location; could be set as an annotation on the BSL) --credentials mapStringString sets the name of the corresponding credentials secret for a provider. Format is provider:credentials-secret-name. (NEW) --cacert-file mapStringString configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file. (NEW) create NAME [flags] --default Sets this new location to be the new default backup location. Default is false. (NEW) --access-mode access mode for the backup storage"
},
{
"data": "Valid values are ReadWrite,ReadOnly (default ReadWrite) --backup-sync-period 0s how often to ensure all Velero backups in object storage exist as Backup API objects in the cluster. Optional. Set this to 0s to disable sync --bucket string name of the object storage bucket where backups should be stored. Required. --config mapStringString configuration to use for creating a backup storage location. Format is key1=value1,key2=value2 (was also in `velero install --backup-location-config`). Required for Azure. --provider string provider name for backup storage. Required. --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the backup storage location --prefix string prefix under which all Velero data should be stored within the bucket. Optional. --provider string name of the backup storage provider (e.g. aws, azure, gcp) --show-labels show labels in the last column --credentials mapStringString sets the name of the corresponding credentials secret for a provider. Format is provider:credentials-secret-name. (NEW) --cacert-file mapStringString configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file. (NEW) get Display backup storage locations --default displays the current default backup storage location (NEW) --label-columns stringArray a comma-separated list of labels to be displayed as columns -l, --selector string only show items matching this label selector --show-labels show labels in the last column ``` 3) `velero snapshot-location` Commands/flags for snapshot locations. ``` set --default mapStringString sets the list of unique volume providers and default volume snapshot location (provider1:location-01,provider2:location-02,...) (NEW, -- was `server --default-volume-snapshot-locations; could be set as an annotation on the VSL) --credentials mapStringString sets the list of name of the corresponding credentials secret for providers. Format is (provider1:credentials-secret-name1,provider2:credentials-secret-name2,...) (NEW) create NAME [flags] --default Sets these new locations to be the new default snapshot locations. Default is false. (NEW) --config mapStringString configuration to use for creating a volume snapshot location. Format is key1=value1,key2=value2 (was also in `velero install --`snapshot-location-config`). Required. --provider string provider name for volume storage. Required. --label-columns stringArray a comma-separated list of labels to be displayed as columns --labels mapStringString labels to apply to the volume snapshot location --provider string name of the volume snapshot provider (e.g. aws, azure, gcp) --show-labels show labels in the last column --credentials mapStringString sets the list of name of the corresponding credentials secret for providers. Format is (provider1:credentials-secret-name1,provider2:credentials-secret-name2,...) (NEW) get Display snapshot locations --default list of unique volume providers and default volume snapshot location (provider1:location-01,provider2:location-02,...) (NEW -- was `server --default-volume-snapshot-locations`)) ``` 4) `velero plugin` Configuration for plugins. 
``` add stringArray IMAGES [flags] - add plugin container images to install into the Velero Deployment get get information for all plugins on the velero server (was `get`) --timeout duration maximum time to wait for plugin information to be reported (default 5s) remove Remove a plugin [NAME | IMAGE] set --credentials-file mapStringString configuration to use for creating a secret containing the IAM credentials for a plugin provider. Format is provider:path-to-file. (was `secret-file`) --no-secret flag indicating if a secret should be created. Must be used as confirmation if create --secret-file is not provided. Optional. (MOVED FROM install -- not sure we need it?) --sa-annotations mapStringString annotations to add to the Velero ServiceAccount for GKE. Add iam.gke.io/gcp-service-account=[GSANAME]@[PROJECTNAME].iam.gserviceaccount.com for workload identity."
},
{
"data": "Format is key1=value1,key2=value2 ``` Considering this proposal, let's consider what a high-level documentation for getting Velero ready to do backups could look like for Velero users: After installing the Velero CLI: ``` velero config server [flags] (required) velero config restic [flags] velero plugin add IMAGES [flags] (add/config provider plugins) velero backup-location/snapshot-location create NAME [flags] (run `velero plugin --get` to see what kind of plugins are available; create locations) velero backup/restore/schedule create/get/delete NAME [flags] ``` The above recipe-style documentation should highlight 1) the main components of Velero, and, 2) the relationship/dependency between the main components In order to maintain compatibility with the current Velero version for a sufficient amount of time, and give users a chance to upgrade any install scripts they might have, we will keep the current `velero install` command in parallel with the new commands until the next major Velero version, which will be Velero 2.0. In the mean time, ia deprecation warning will be added to the `velero install` command. `velero install (DEPRECATED)` Flags moved to... ...`velero config server`: ``` --image string image to use for the Velero and restic server pods. Optional. (default \"velero/velero:latest\") --label-columns stringArray a comma-separated list of labels to be displayed as columns --pod-annotations mapStringString annotations to add to the Velero and restic pods. Optional. Format is key1=value1,key2=value2 --show-labels show labels in the last column --pod-cpu-limit string CPU limit for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"1000m\") --pod-cpu-request string CPU request for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"500m\") --pod-mem-limit string memory limit for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"256Mi\") --pod-mem-request string memory request for Velero pod. A value of \"0\" is treated as unbounded. Optional. (default \"128Mi\") ``` ...`velero config restic` ``` --default-prune-frequency duration how often 'restic prune' is run for restic repositories by default. Optional. --pod-cpu-limit string CPU limit for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-cpu-request string CPU request for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-mem-limit string memory limit for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") --pod-mem-request string memory request for restic pod. A value of \"0\" is treated as unbounded. Optional. (default \"0\") ``` ...`backup-location create` ``` --backup-location-config mapStringString configuration to use for the backup storage location. Format is key1=value1,key2=value2 --bucket string name of the object storage bucket where backups should be stored --prefix string prefix under which all Velero data should be stored within the bucket. Optional. ``` ...`snapshot-location create` ``` --snapshot-location-config mapStringString configuration to use for the volume snapshot location. Format is key1=value1,key2=value2 ``` ...both `backup-location create` and `snapshot-location create` ``` --provider string provider name for backup and volume storage ``` ...`plugin` ``` --plugins stringArray Plugin container images to install into the Velero Deployment --sa-annotations mapStringString annotations to add to the Velero ServiceAccount. 
Add iam.gke.io/gcp-service-account=[GSANAME]@[PROJECTNAME].iam.gserviceaccount.com for workload identity. Optional. Format is key1=value1,key2=value2 --no-secret flag indicating if a secret should be created. Must be used as confirmation if --secret-file is not provided. Optional. --secret-file string (renamed `credentials-file`) file containing credentials for backup and volume provider. If not specified, --no-secret must be used for confirmation. Optional. ``` Flags to deprecate: ``` --no-default-backup-location flag indicating if a default backup location should be created. Must be used as confirmation if --bucket or --provider are not provided. Optional. --use-volume-snapshots whether or not to create snapshot location"
},
{
"data": "Set to false if you do not plan to create volume snapshots via a storage provider. (default true) --wait wait for Velero deployment to be ready. Optional. --use-restic (obsolete since now we have `velero config restic`) ``` These flags will be moved to under `velero config server`: `velero server --default-backup-storage-location (DEPRECATED)` changed to `velero backup-location set --default` `velero server --default-volume-snapshot-locations (DEPRECATED)` changed to `velero snapshot-location set --default` The value for these flags will be stored as annotations. In anticipation of a new configuration implementation to handle custom CA certs (as per design doc https://github.com/vmware-tanzu/velero/blob/main/design/custom-ca-support.md), a new flag `velero storage-location create/set --cacert-file mapStringString` is proposed. It sets the configuration to use for creating a secret containing a custom certificate for an S3 location of a plugin provider. Format is provider:path-to-file. See discussion https://github.com/vmware-tanzu/velero/pull/2259#discussion_r384700723 for more clarification. As part of this change, we should change to use the term `location-plugin` instead of `provider`. The reasoning: in practice, we usually have 1 plugin per provider, and if there is an implementation for both object store and volume snapshotter for that provider, it will all be contained in the same plugin. When we handle plugins, we follow this logic. In other words, there's a plugin name (ex: `velero.io/aws`) and it can contain implementations of kind `ObjectStore` and/or `VolumeSnapshotter`. But when we handle BSL or VSL (and the CLI commands/flags that configure them), we use the term `provider`, which can cause ambiguity as if that is a kind of thing different from a plugin. If the plugin is the \"thing\" that contains the implementation for the desired provider, we should make it easier for the user to guess that and change BackupStorageLocation/VolumeSnapshotLocation `Spec.Provider` field to be called `Spec.Location-Plugin` and all related CLI command flags to `location-plugin`, and update the docs accordingly. This change will require a CRD version bump and deprecation cycle. To maintain compatibility with gitops practices, each of the new commands will generate `yaml` output that can be stored in source control. For content examples, please refer to the files here: https://github.com/carlisia/velero/tree/c-cli-design/design/CLI/PoC Note: actual `yaml` file names are defined by the user. `velero config server` - base/deployment.yaml `velero config restic` - overlays/plugins/restic.yaml `velero backup-location create` - base/backupstoragelocations.yaml `velero snapshot-location create` - base/volumasnapshotlocations.yaml `velero plugin add velero/velero-plugin-for-aws:v1.0.1` - overlays/plugins/aws-plugin.yaml `velero plugin add velero/velero-plugin-for-microsoft-azure:v1.0.1` - overlay/plugins/azure-plugin.yaml These resources can be deployed/deleted using the included kustomize setup and running: ``` kubectl apply -k design/CLI/PoC/overlays/plugins/ kubectl delete -k design/CLI/PoC/overlays/plugins/ ``` Note: All CRDs, including the `ResticRepository`, may continue to be deployed at startup as it is now, or together with their respective instantiation. To recap, this proposal redesigns the Velero CLI to make `velero install` obsolete, and instead breaks down the installation and configuration into separate commands. 
These are the major highlights: Plugins will only be installed separately via `velero plugin add` BSL/VSL will continue to be configured separately, and now each will have an associated secret Since each BSL/VSL will have its own association with a secret, the user will no longer need to upload a new secret whenever changing to, or adding, a BSL/VSL for a provider that is different from the one in use. This will be done at setup time. This will make it easier to support any number of BSL/VSL combinations, with different providers each. The user will start up the Velero server on a cluster by using the command `velero config"
},
{
"data": "This will create the Velero deployment resource with default values or values overwritten with flags, create the Velero CRDs, and anything else that is not specific to plugins or BSL/VSL. The Velero server will start up, verify that the deployment is running, that all CRDs were found, and log a message that it is waiting for a BSL to be configured. at this point, other operations, such as configuring restic, will be allowed. Velero should keep track of its status, ie, if it is ready to create backups or not. This could be a field `ServerStatus` added to `ServerStatusRequest`. Possible values could be [ready|waiting]. \"ready\" would mean there is at least 1 valid BSL, and \"waiting\" would be anything but that. When adding/configuring a BSL or VSL, we will allow creating locations, and continuously verify if there is a corresponding, valid plugin. When a valid match is found, mark the BSL/VSL as \"ready\". This would require adding a field to the BSL/VSL, or using the existing `Phase` field, and keep track of its status, possibly: [ready|waiting]. With the first approach: the server would transition into \"ready\" (to create backups) as soon as there is one BSL. It would require a set sequence of actions, ie, first install the plugin, only then the user can successfully configure a BSL. With the second approach, the Velero server would continue looping and checking all existing BSLs for at least 1 with a \"ready\" status. Once it found that, it would set itself to \"ready\" also. Another new behavior that must be added: the server needs to identify when there no longer exists a valid BSL. At this point, it should change its status from \"ready\" to one that indicates it is not ready, maybe \"waiting\". With the first approach above, this would mean checking if there is still at least one BSL. With the second approach, it would require checking the status of all BSLs to find at least one with the status of \"ready\". As it is today, a valid VSL would not be required to create backups, unless the backup included a PV. To make it easier for the user to identify if their Velero server is ready to create backups or not, a `velero status` command should be added. This issue has been created some time ago for this purpose: https://github.com/vmware-tanzu/velero/issues/1094. It seems that the vast majority of tools document their usage with `kubectl` and `yaml` files to install and configure their Kubernetes resources. Many of them also make use of Helm, and to a lesser extent some of them have their own CLI tools. Amongst the tools that have their own CLI, not enough examples were found to establish a clear pattern of usage. It seems the most relevant priority should be to have output in `yaml` format. Any set of `yaml` files can also be arranged to use with Kustomize by creating/updating resources, and patching them using Kustomize functionalities. The way the Velero commands were arranged in this proposal with the ability to output corresponding `yaml` files, and the included Kustomize examples, makes it in line with the widely used practices for installation and configuration. Some CLI tools do not document their usage with Kustomize, one could assume it is because anyone with knowledge of Kustomize and `yaml` files would know how to use it. 
Here are some examples: https://github.com/jetstack/kustomize-cert-manager-demo https://github.com/istio/installer/tree/master/kustomize https://github.com/weaveworks/flagger/tree/master/kustomize https://github.com/jpeach/contour/tree/1c575c772e9fd747fba72ae41ab99bdae7a01864/kustomize (RFC) N/A"
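To make the readiness states discussed above more tangible, here is a sketch of how the proposed `[ready|waiting]` values might be expressed as Go API types. The package, type, and field names are illustrative assumptions drawn from this proposal, not definitions from the actual Velero API:

```go
// Package sketch illustrates the proposed readiness field; it intentionally
// does not mirror the real Velero API package.
package sketch

// ServerStatus reports whether the server is ready to accept backups.
type ServerStatus string

const (
	// ServerStatusReady would mean at least one valid BackupStorageLocation exists.
	ServerStatusReady ServerStatus = "Ready"
	// ServerStatusWaiting would mean no valid BackupStorageLocation exists yet.
	ServerStatusWaiting ServerStatus = "Waiting"
)

// ServerStatusRequestStatus sketches where the new field could live on the
// ServerStatusRequest status, alongside its existing fields.
type ServerStatusRequestStatus struct {
	// ServerStatus flips to Ready once a usable BSL is found and back to
	// Waiting if no valid BSL remains.
	ServerStatus ServerStatus `json:"serverStatus,omitempty"`
}
```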
}
] |
{
"category": "Runtime",
"file_name": "cli-install-changes.md",
"project_name": "Velero",
"subcategory": "Cloud Native Storage"
}
|