[ { "data": "](https://pkg.go.dev/sigs.k8s.io/json) This library is a subproject of . It provides case-sensitive, integer-preserving JSON unmarshaling functions based on `encoding/json` `Unmarshal()`. The `UnmarshalCaseSensitivePreserveInts()` function behaves like `encoding/json#Unmarshal()` with the following differences: JSON object keys are treated case-sensitively. Object keys must exactly match json tag names (for tagged struct fields) or struct field names (for untagged struct fields). JSON integers are unmarshaled into `interface{}` fields as an `int64` instead of a `float64` when possible, falling back to `float64` on any parse or overflow error. Syntax errors do not return an `encoding/json` `*SyntaxError` error. Instead, they return an error which can be passed to `SyntaxErrorOffset()` to obtain an offset. The `UnmarshalStrict()` function decodes identically to `UnmarshalCaseSensitivePreserveInts()`, and also returns non-fatal strict errors encountered while decoding: Duplicate fields encountered Unknown fields encountered You can reach the maintainers of this project via the . Participation in the Kubernetes community is governed by the . tails. Example ```Go package main import ( \"fmt\" \"log\" \"gopkg.in/yaml.v2\" ) var data = ` a: Easy! b: c: 2 d: [3, 4] ` // Note: struct fields must be public in order for unmarshal to // correctly populate the data. type T struct { A string B struct { RenamedC int `yaml:\"c\"` D []int `yaml:\",flow\"` } } func main() { t := T{} err := yaml.Unmarshal([]byte(data), &t) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" t:\\n%v\\n\\n\", t) d, err := yaml.Marshal(&t) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" t dump:\\n%s\\n\\n\", string(d)) m := make(map[interface{}]interface{}) err = yaml.Unmarshal([]byte(data), &m) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" m:\\n%v\\n\\n\", m) d, err = yaml.Marshal(&m) if err != nil { log.Fatalf(\"error: %v\", err) } fmt.Printf(\" m dump:\\n%s\\n\\n\", string(d)) } ``` This example will generate the following output: ``` t: {Easy! {2 [3 4]}} t dump: a: Easy! b: c: 2 d: [3, 4] m: map[a:Easy! b:map[c:2 d:[3 4]]] m dump: a: Easy! b: c: 2 d: 3 4 ```" } ]
{ "category": "Runtime", "file_name": "README.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "BR: Backup & Restore Backup Storage: See the same definition in . Backup Repository: See the same definition in . BIA/RIA V2: Backup Item Action/Restore Item Action V2 that supports asynchronized operations, see the for details. As a Kubernetes BR solution, Velero is pursuing the capability to back up data from the volatile and limited production environment into the durable, heterogeneous and scalable backup storage. This relies on two parts: Data Movement: Move data from various production workloads, including the snapshots of the workloads or volumes of the workloads Data Persistency and Management: Persistent the data in backup storage and manage its security, redundancy, accessibility, etc. through backup repository. This has been covered by the At present, Velero supports moving file system data from PVs through Pod Volume Backup (a.k.a. file system backup). However, it backs up the data from the live file system, so it should be the last option when more consistent data movement (i.e., moving data from snapshot) is not available. Moreover, we would like to create a general workflow to variations during the data movement, e.g., data movement plugins, different snapshot types, different snapshot accesses and different data accesses. Create components and workflows for Velero to move data based on volume snapshots Create components and workflows for Velero built-in data mover Create the mechanism to support data mover plugins from third parties Implement CSI snapshot data movement on file system level Support different data accesses, i.e., file system level and block level Support different snapshot types, i.e., CSI snapshot, volume snapshot API from storage vendors Support different snapshot accesses, i.e., through PV generated from snapshots, and through direct access API from storage vendors Reuse the existing Velero generic data path as creatd in The current support for block level access is through file system uploader, so it is not aimed to deliver features of an ultimate block level backup. Block level backup will be included in a future design Most of the components are generic, but the Exposer is snapshot type specific or snapshot access specific. The current design covers the implementation details for exposing CSI snapshot to host path access only, for other types or accesses, we may need a separate design The current workflow focuses on snapshot-based data movements. For some application/SaaS level data sources, snapshots may not be taken explicitly. We dont take them into consideration, though we believe that some workflows or components may still be reusable. Here are the diagrams that illustrate components and workflows for backup and restore respectively. For backup, we intend to create an extensive architecture for various snapshot types, snapshot accesses and various data accesses. For example, the snapshot specific operations are isolated in Data Mover Plugin and Exposer. In this way, we only need to change the two modules for variations. Likely, the data access details are isolated into uploaders, so different uploaders could be plugged into the workflow seamlessly. For restore, we intend to create a generic workflow that could for all backups. This means the restore is backup source independent. Therefore, for example, we can restore a CSI snapshot backup to another cluster with no CSI facilities or with CSI facilities different from the source cluster. We still have the Exposer module for restore and it is to expose the target volume to the data path. 
Therefore, we still have the flexibility to introduce different ways to expose the target" }, { "data": "Likely, the data downloading details are isolated in uploaders, so we can still create multiple types of uploaders. Below is the backup workflow: Below is the restore workflow: Below are the generic components in the data movement workflow: Velero: Velero controls the backup/restore workflow, it calls BIA/RIA V2 to backup/restore an object that involves data movement, specifically, a PVC or a PV. BIA/RIA V2: BIA/RIA V2 are the protocols between Velero and the data mover plugins. They support asynchronized operations so that Velero backup/restore is not marked as completion until the data movement is done and in the meantime, Velero is free to process other backups during the data movement. Data Mover Plugin (DMP): DMP implement BIA/RIA V2 and it invokes the corresponding data mover by creating the DataUpload/DataDownload CRs. DMP is also responsible to take snapshot of the source volume, so it is a snapshot type specific module. For CSI snapshot data movement, the CSI plugin could be extended as a DMP, this also means that the CSI plugin will fully implement BIA/RIA V2 and support some more methods like Progress, Cancel, etc. DataUpload CR (DUCR)/ DataDownload CR (DDCR): DUCR/DDCR are Kubernetes CRs that act as the protocol between data mover plugins and data movers. The parties who want to provide a data mover need to watch and process these CRs. Data Mover (DM): DM is a collective of modules to finish the data movement, specifically, data upload and data download. The modules may include the data mover controllers to reconcile DUCR/DDCR and the data path to transfer data. DMs take the responsibility to handle DUCR/DDCRs, Velero provides a built-in DM and meanwhile Velero supports plugin DMs. Below shows the components for the built-in DM: Velero Built-in Data Mover (VBDM): VBDM is the built-in data mover shipped along with Velero, it includes Velero data mover controllers and Velero generic data path. Node-Agent: Node-Agent is an existing Velero module that will be used to host VBDM. Exposer: Exposer is to expose the snapshot/target volume as a path/device name/endpoint that are recognizable by Velero generic data path. For different snapshot types/snapshot accesses, the Exposer may be different. This isolation guarantees that when we want to support other snapshot types/snapshot accesses, we only need to replace with a new Exposer and keep other components as is. Velero Generic Data Path (VGDP): VGDP is the collective of modules that is introduced in . Velero uses these modules to finish data transmission for various purposes. In includes uploaders and the backup repository. Uploader: Uploader is the module in VGDP that reads data from the source and writes to backup repository for backup; while read data from backup repository and write to the restore target for restore. At present, only file system uploader is supported. In future, the block level uploader will be added. For file system and basic block uploader, only Kopia uploader will be used, Restic will not be integrated with VBDM. 3rd parties could integrate their own data movement into Velero by replacing VBDM with their own DMs. The DMs should process DUCR/DDCRs appropriately and finally put them into one of the terminal states as shown in the DataUpload CRD and DataDownload CRD sections. Theoretically, replacing the DMP is also allowed. 
In this way, the entire workflow is customized, so this is out of the scope of this" }, { "data": "Below are the data movement actions and sequences during backup: Below are actions from Velero and DMP: BIA Execute This is the existing logic in Velero. For a source PVC/PV, Velero delivers it to the corresponding BackupItemAction plugin, the plugin then takes the related actions to back it up. For example, the existing CSI plugin takes a CSI snapshot to the volume represented by the PVC and then returns additional items (i.e., VolumeSnapshot, VolumeSnapshotContent and VolumeSnapshotClass) for Velero to further backup. To support data movement, we will use BIA V2 which supports asynchronous operation management. Here is the Execute method from BIA V2: ``` Execute(item runtime.Unstructured, backup *api.Backup) (runtime.Unstructured, []velero.ResourceIdentifier, string, []velero.ResourceIdentifier, error) ``` Besides ```additionalItem``` (as the 2nd return value), Execute method will return one more resource list called ```itemToUpdate```, which means the items to be updated and persisted when the async operation completes. For details, visit . Specifically, this mechanism will be used to persist DUCR into the persisted backup data, in another words, DUCR will be returned as ```itemToUpdate``` from Execute method. DUCR contains all the information the restore requires, so during restore, DUCR will be extracted from the backup data. Additionally, in the same way, a DMP could add any other items into the persisted backup data. Execute method also returns the ```operationID``` which uniquely identifies the asynchronous operation. This ```operationID``` is generated by plugins. The doesn't restrict the format of the ```operationID```, for Velero CSI plugin, the ```operationID``` is a combination of the backup CR UID and the source PVC (represented by the ```item``` parameter) UID. Create Snapshot The DMP creates a snapshot of the requested volume and deliver it to DM through DUCR. After that, the DMP leaves the snapshot and its related objects (e.g., VolumeSnapshot and VolumeSnapshotContent for CSI snapshot) to the DM, DM then has full control of the snapshot and its related objects, i.e., deciding whether to delete the snapshot or its related objects and when to do it. This also indicates that the DUCR should contain the snapshot type specific information because different snapshot types may have their unique information. For Velero built-in implementation, the existing logics to create the snapshots will be reused, specifically, for CSI snapshot, the related logics in CSI plugin are fully reused. Create DataUpload CR A DUCR is created for as the result of each Execute call, then Execute method will return and leave DUCR being processed asynchronously. Set Backup As WaitForAsyncOperations Persist Backup After ```Execute``` returns, the backup is set to ```WaitingForPluginOperations```, and then Velero is free to process other items or backups. Before Velero moves to other items/backups, it will persist the backup data. This is the same as the existing behavior. The backup then is left as ```WaitForAsyncOperations``` until the DM completes or timeout. BIA Progress Velero keeps monitoring the status of the backup by calling BIA V2s Progress method. Below is the Progress method from BIA V2: ``` Progress(operationID string, backup *api.Backup) (velero.OperationProgress, error) ``` On the call of this method, DMP will query the DUCRs status. 
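To illustrate the shape of that query, here is a minimal, self-contained sketch of how a DMP might map a DataUpload's status into the `OperationProgress` it returns from `Progress()`. The struct types below are local stand-ins for illustration, not the real Velero or plugin-framework types.

```go
package main

import "fmt"

// dataUploadStatus is a local stand-in for the status block of a DUCR.
type dataUploadStatus struct {
	Phase      string // e.g. New, InProgress, Completed, Failed, Canceled
	TotalBytes int64
	BytesDone  int64
}

// operationProgress is a local stand-in for the BIA V2 OperationProgress.
type operationProgress struct {
	Completed  bool
	NCompleted int64
	NTotal     int64
}

// progressFromDataUpload copies the progress counters from the DUCR status
// and marks the async operation as completed once the phase is Completed.
// Error and cancellation handling is omitted from this sketch.
func progressFromDataUpload(s dataUploadStatus) operationProgress {
	p := operationProgress{NCompleted: s.BytesDone, NTotal: s.TotalBytes}
	if s.Phase == "Completed" {
		p.Completed = true
	}
	return p
}

func main() {
	fmt.Printf("%+v\n", progressFromDataUpload(dataUploadStatus{
		Phase: "InProgress", TotalBytes: 1 << 30, BytesDone: 256 << 20,
	}))
}
```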
Some critical progress data is transferred from DUCR to the ```OperationProgress``` which is the return value of BIA V2s Progress method. For example, NCompleted indicates the size/number of data that have been completed and NTotal indicates the total size/number of data. When the async operation completes, the Progress method returns an OperationProgress with ```Completed``` set as true. Then Velero will persist DUCR as well as any other items returned by DUP as" }, { "data": "Finally, then backup is as ```Completed```. To help BIA Progress find the corresponding DUCR, the ```operationID``` is saved along with the DUCR as a label ```velero.io/async-operation-id```. DUCRs are handled by the data movers, so how to handle them are totally decided by the data movers. Below covers the details of VBDM, plugging data movers should have their own actions and workflows. Persist DataUpload CR As mentioned above, the DUCR will be persisted when it is completed under the help of BIA V2 async operation finalizing mechanism. This means the backup tarball will be uploaded twice, this is as the designed behavior of . Conclusively, as a result of the above executions: A DataUpload CR is created and persisted to the backup tarball. The CR will be left there after the backup completes because the CR includes many information connecting to the backup that may be useful to end users or upper level modules. A snapshot as well as the objects representing it are created. For CSI snapshot, a VolumeSnapshot object and a VolumeSnapshotContent object is created. The DMP leaves the snapshot as well as its related objects to DM for further processing. VBDM creates a Data Uploader Controller to handle the DUCRs in node-agent daemonset, therefore, on each node, there will be an instance of this controller. The controller connects to the backup repository and calls the uploader. Below are the VBDM actions. Acquire Object Lock Release Object Lock There are multiple instances of Data Uploader Controllers and when a DUCR is created, there should be only one of the instances handle the CR. Therefore, an operation called Acquired Object Lock is used to reach a consensus among the controller instances so that only one controller instance takes over the CR and tries the next action Expose for the CR. After the CR is completed in the Expose phase, the CR is released with the operation of Release Object Lock. We fulfil the Acquired Object Lock and Release Object Lock under the help of Kubernetes API server and the etcd in the background, which guarantees strong write consistency among all the nodes. Expose For some kinds of snapshot, it may not be usable directly after it is taken. For example, a CSI snapshot is represented by the VolumeSnapshot and VolumeSnapshotContent object, if we dont do anything, we dont see any PV really exists in the cluster, so VGDP has no way to access it. Meanwhile, when we have a PV representing the snapshot data, we still need a way to make it accessible by the VGDP. The details of the expose process are snapshot specific, and for one kind of snapshot, we may have different methods to expose it to VGDP. Later, we will have a specific section to explain the current design of the Exposer. Backup From Data Path After a snapshot is exposed, VGDP will be able to access the snapshot data, so the controller calls the uploader to start the data backup. To support cancellation and concurrent backup, the call to the VGDP is done asynchronously. 
How this asynchronization is implemented may be related to the Exposer. as the current design of Exposer, the asynchronization is implemented by the controller with go routines. We keep VGDP reused for VBDM, so everything inside VGDP are kept as is. For details of VGDP, refer to the" }, { "data": "Update Repo Snapshot ID When VGDP completes backup, it returns an ID that represent the root object saved into the backup repository for this backup, through the root object, we will be able to enumerate the entire backup data. This Repo Snapshot ID will be saved along with the DUCR. Below are the essential fields of DataUpload CRD. The CRD covers below information: The information to manipulate the specified snapshot The information to manipulate the specified data mover The information to manipulate the specified backup repository The progress of the current data upload The result of the current data upload once it finishes For snapshot manipulation: ```snapshotType``` indicates the type of the snapshot, at present, the only valid value is ```CSI```. If ```snapshotType``` is ```CSI```, ```csiSnapshot``` which is a pointer to a ```CSISnapshotSpec``` must not be absent. ```CSISnapshotSpec``` specifies the information of the CSI snapshot, e.g., ```volumeSnapshot``` is the name of VolumeSnapshot object representing the CSI snapshot; ```storageClass``` specifies the name of the StorageClass of the source PVC, which will be used to create the backupPVC during the data upload. For data mover manipulation: ```datamover``` indicates the name of the data mover, if it is empty or ```velero```, it means the built-in data mover will be used for this data upload For backup repository manipulation, ```backupStorageLocation``` is the name of the related BackupStorageLocation, where we can find all the required information. For the progress, it includes the ```totalBytes``` and ```doneBytes``` so that other modules could easily cuclulate a progress. For data upload result, ```snapshotID``` in the ```status``` field is the Repo Snapshot ID. Data movers may have their private outputs as a result of the DataUpload, they will be put in the ```dataMoverResult``` map of the ```status``` field. 
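To make these fields concrete, a DataUpload object consistent with this description (and with the full CRD spec that follows) might look like the sketch below; all names, sizes and the label value are illustrative only.

```yaml
apiVersion: velero.io/v1
kind: DataUpload
metadata:
  name: backup-01-pvc-data-xxxxx          # illustrative name
  namespace: velero
  labels:
    velero.io/async-operation-id: "backup-uid.pvc-uid"   # operation ID set by the DMP
spec:
  snapshotType: CSI
  csiSnapshot:
    volumeSnapshot: velero-mysql-data-snapshot
    storageClass: standard
    snapshotClass: csi-snapclass
  sourceNamespace: my-app
  datamover: velero                        # "" or "velero" selects the built-in data mover
  backupStorageLocation: default
  operationTimeout: 10m
status:                                    # filled in by the data mover while it runs
  phase: InProgress
  progress:
    totalBytes: 1073741824
    bytesDone: 268435456
```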
Here are the statuses of DataUpload CRD and their descriptions: New: The DUCR has been created but not processed by a controller Accepted: the Object lock has been acquired for this DUCR and the elected controller is trying to expose the snapshot Prepared: the snapshot has been exposed, the related controller is starting to process the upload InProgress: the data upload is in progress Canceling: the data upload is being canceled Canceled: the data upload has been canceled Completed: the data upload has completed Failed: the data upload has failed Below is the full spec of DataUpload CRD: ``` apiVersion: apiextensions.k8s.io/v1alpha1 kind: CustomResourceDefinition metadata: labels: component: velero name: datauploads.velero.io spec: conversion: strategy: None group: velero.io names: kind: DataUpload listKind: DataUploadList plural: datauploads singular: dataupload scope: Namespaced versions: additionalPrinterColumns: description: DataUpload status such as New/InProgress jsonPath: .status.phase name: Status type: string description: Time duration since this DataUpload was started jsonPath: .status.startTimestamp name: Started type: date description: Completed bytes format: int64 jsonPath: .status.progress.bytesDone name: Bytes Done type: integer description: Total bytes format: int64 jsonPath: .status.progress.totalBytes name: Total Bytes type: integer description: Name of the Backup Storage Location where this backup should be stored jsonPath: .spec.backupStorageLocation name: Storage Location type: string description: Time duration since this DataUpload was created jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: properties: spec: description: DataUploadSpec is the specification for a DataUpload. properties: backupStorageLocation: description: BackupStorageLocation is the name of the backup storage location where the backup repository is stored. type: string csiSnapshot: description: If SnapshotType is CSI, CSISnapshot provides the information of the CSI" }, { "data": "properties: snapshotClass: description: SnapshotClass is the name of the snapshot class that the volume snapshot is created with type: string storageClass: description: StorageClass is the name of the storage class of the PVC that the volume snapshot is created from type: string volumeSnapshot: description: VolumeSnapshot is the name of the volume snapshot to be backed up type: string required: storageClass volumeSnapshot type: object datamover: description: DataMover specifies the data mover to be used by the backup. If DataMover is \"\" or \"velero\", the built-in data mover will be used. type: string operationTimeout: description: OperationTimeout specifies the time used to wait internal operations, e.g., wait the CSI snapshot to become readyToUse. type: string snapshotType: description: SnapshotType is the type of the snapshot to be backed up. type: string sourceNamespace: description: SourceNamespace is the original namespace where the volume is backed up from. type: string required: backupStorageLocation csiSnapshot snapshotType sourceNamespace type: object status: description: DataUploadStatus is the current status of a DataUpload. properties: completionTimestamp: description: CompletionTimestamp records the time a backup was completed. Completion time is recorded even on failed backups. Completion time is recorded before uploading the backup object. 
The server's time is used for CompletionTimestamps format: date-time nullable: true type: string dataMoverResult: additionalProperties: type: string description: DataMoverResult stores data-mover-specific information as a result of the DataUpload. nullable: true type: object message: description: Message is a message about the DataUpload's status. type: string node: description: Node is the name of the node where the DataUpload is running. type: string path: description: Path is the full path of the snapshot volume being backed up. type: string phase: description: Phase is the current state of the DataUpload. enum: New Accepted Prepared InProgress Canceling Canceled Completed Failed type: string progress: description: Progress holds the total number of bytes of the volume and the current number of backed up bytes. This can be used to display progress information about the backup operation. properties: bytesDone: format: int64 type: integer totalBytes: format: int64 type: integer type: object snapshotID: description: SnapshotID is the identifier for the snapshot in the backup repository. type: string startTimestamp: description: StartTimestamp records the time a backup was started. Separate from CreationTimestamp, since that value changes on restores. The server's time is used for StartTimestamps format: date-time nullable: true type: string type: object type: object ``` Below are the data movement actions sequences during restore: Many of the actions are the same with backup, here are the different ones. Query Backup Result The essential information to be filled into DataDownload all comes from the DataUpload CR. For example, the Repo Snapshot ID is stored in the status fields of DataUpload CR. However, we don't want to restore the DataUpload CR and leave it in the cluster since it is useless after the restore. Therefore, we will retrieve the necessary information from DataUpload CR and store it in a temporary ConfigMap for the DM to use. There is one ConfigMap for each DataDownload CR and the ConfigMaps belong to a restore will be deleted when the restore finishes. Prepare Volume Readiness As the current pattern, Velero delivers an object representing a volume, either a PVC or a PV, to DMP and Velero will create the object after DMP's Execute call returns. However, by this time, DM should have not finished the restore, so the volume is not ready for use. In this step, DMP needs to mark the object as unready to use so as to prevent others from using it, i.e., a pod mounts the" }, { "data": "Additionlly, DMP needs to provide an approach for DM to mark it as ready when the data movement finishes. How to mark the volume as unready or ready varying from the type of the object, specifically, a PVC or a PV; and there are more than one ways to achieve this. Below show the details of how to do this for CSI snapshot data movement. After the DMP submits the DataDownload CR, it does below modifications to the PVC spec: Set spec.VolumeName to empty (\"\") Add a selector with a matchLabel ```velero.io/dynamic-pv-restore``` With these two steps, it tells Kubernetes that the PVC is not bound and it only binds a PV with the ```velero.io/dynamic-pv-restore``` label. As a result, even after the PVC object is created by Velero later and is used by other resources, it is not usable until the DM creates the target PV. Expose The purpose of expose process for restore is to create the target PV and make the PV accessible by VGDP. Later the Expose section will cover the details. 
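To make the Prepare Volume Readiness step above concrete, the PVC restored by Velero might end up looking like the sketch below after the DMP's two modifications; the names, sizes and the label value are illustrative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data                 # illustrative
  namespace: my-app
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard
  volumeName: ""                   # cleared by the DMP so the claim stays unbound
  selector:
    matchLabels:
      velero.io/dynamic-pv-restore: "my-app.mysql-data.xxxxx"   # only a PV carrying this label can bind
  resources:
    requests:
      storage: 10Gi
```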
Finish Volume Readiness By the data restore finishes, the target PV is ready for use but it is not delivered to the outside world. This step is the follow up of Prepare Volume Readiness, which does necessary work to mark the volume ready to use. For CSI snapshot restore, DM does below steps: Set the target PV's claim reference (the ```claimRef``` filed) to the target PVC Add the ```velero.io/dynamic-pv-restore``` label to the target PV By the meantime, the target PVC should have been created in the source user namespace and waiting for binding. When the above steps are done, the target PVC will be bound immediately by Kubernetes. This also means that Velero should not restore the PV if a data movement restore is involved, this follows the existing CSI snapshot behavior. For restore, VBDM doesnt need to persist anything. Below are the essential fields of DataDownload CRD. The CRD covers below information: The information to manipulate the target volume The information to manipulate the specified data mover The information to manipulate the specified backup repository Target volume information includes PVC and PV that represents the volume and the target namespace. The data mover information and backup repository information are the same with DataUpload CRD. DataDownload CRD defines the same status as DataUpload CRD with nearly the same meanings. Below is the full spec of DataDownload CRD: ``` apiVersion: apiextensions.k8s.io/v1alpha1 kind: CustomResourceDefinition metadata: labels: component: velero name: datadownloads.velero.io spec: conversion: strategy: None group: velero.io names: kind: DataDownload listKind: DataDownloadList plural: datadownloads singular: datadownload scope: Namespaced versions: DataDownload: description: DataDownload status such as New/InProgress jsonPath: .status.phase name: Status type: string description: Time duration since this DataDownload was started jsonPath: .status.startTimestamp name: Started type: date description: Completed bytes format: int64 jsonPath: .status.progress.bytesDone name: Bytes Done type: integer description: Total bytes format: int64 jsonPath: .status.progress.totalBytes name: Total Bytes type: integer description: Time duration since this DataDownload was created jsonPath: .metadata.creationTimestamp name: Age type: date name: v1 schema: openAPIV3Schema: properties: spec: description: SnapshotDownloadSpec is the specification for a SnapshotDownload. properties: backupStorageLocation: description: BackupStorageLocation is the name of the backup storage location where the backup repository is stored. type: string datamover: description: DataMover specifies the data mover to be used by the backup. If DataMover is \"\" or \"velero\", the built-in data mover will be used. type: string operationTimeout: description: OperationTimeout specifies the time used to wait internal operations, before returning error as" }, { "data": "type: string snapshotID: description: SnapshotID is the ID of the Velero backup snapshot to be restored from. type: string sourceNamespace: description: SourceNamespace is the original namespace where the volume is backed up from. type: string targetVolume: description: TargetVolume is the information of the target PVC and PV. 
properties: namespace: description: Namespace is the target namespace type: string pv: description: PV is the name of the target PV that is created by Velero restore type: string pvc: description: PVC is the name of the target PVC that is created by Velero restore type: string required: namespace pv pvc type: object required: backupStorageLocation restoreName snapshotID sourceNamespace targetVolume type: object status: description: SnapshotRestoreStatus is the current status of a SnapshotRestore. properties: completionTimestamp: description: CompletionTimestamp records the time a restore was completed. Completion time is recorded even on failed restores. The server's time is used for CompletionTimestamps format: date-time nullable: true type: string message: description: Message is a message about the snapshot restore's status. type: string node: description: Node is the name of the node where the DataDownload is running. type: string phase: description: Phase is the current state of theSnapshotRestore. enum: New Accepted Prepared InProgress Canceling Canceled Completed Failed type: string progress: description: Progress holds the total number of bytes of the snapshot and the current number of restored bytes. This can be used to display progress information about the restore operation. properties: bytesDone: format: int64 type: integer totalBytes: format: int64 type: integer type: object startTimestamp: description: StartTimestamp records the time a restore was started. The server's time is used for StartTimestamps format: date-time nullable: true type: string type: object type: object ``` At present, for a file system backup, VGDP accepts a string representing the root path of the snapshot to be backed up, the path should be accessible from the process/pod that VGDP is running. In future, VGDP may accept different access parameters. Anyway, the snapshot should be accessible local. Therefore, the first phase for Expose is to expose the snapshot to be locally accessed. This is a snapshot specific operation. For CSI snapshot, the final target is to create below 3 objects in Velero namespace: backupVSC: This is the Volume Snapshot Content object represents the CSI snapshot backupVS: This the Volume Snapshot object for BackupVSC in Velero namespace backupPVC: This is the PVC created from the backupVS in Velero namespace. Specifically, backupPVCs data source points to backupVS backupPod: This is a pod attaching backupPVC in Velero namespace. As Kubernetes restriction, the PV is not provisioned until the PVC is attached to a pod and the pod is scheduled to a node. Therefore, after the backupPod is running, the backupPV which represents the data of the snapshot will be provisioned backupPV: This is the PV provisioned as a result of backupPod schedule, it has the same data of the snapshot Initially, the CSI VS object is created in the source user namespace (we call it sourceVS), after the Expose, all the objects will be in Velero namespace, so all the data upload activities happen in the Velero namespace only. As you can see, we have duplicated some objects (sourceVS and sourceVSC), this is due to Kubernetes restriction the data source reference cannot across namespaces. After the duplication completes, the objects related to the source user namespace will be deleted. 
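As a rough illustration of the first expose phase described above, the backupPVC and backupPod created in the Velero namespace could take shapes like the following; every name and the image are placeholders, not the actual objects Velero creates.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: backup-pvc-xxxxx
  namespace: velero
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard           # taken from the source PVC's storage class
  dataSource:                          # points at the backupVS duplicated into the Velero namespace
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: backup-vs-xxxxx
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: backup-pod-xxxxx
  namespace: velero
spec:
  containers:
  - name: wait                         # the pod exists only so Kubernetes provisions the backupPV
    image: busybox                     # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: backup-volume
      mountPath: /backup-volume
  volumes:
  - name: backup-volume
    persistentVolumeClaim:
      claimName: backup-pvc-xxxxx
```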
Below diagram shows the relationships of the objects: After the first phase, we will see a backupPod attaching a backupPVC/backupPV which data is the same as the snapshot" }, { "data": "Then the second phase could start, this phase is related to the uploader. For file system uploader, the target of this phase is to get a path that is accessible locally by the uploader. There are some alternatives: Get the path in the backupPod, so that VGDP runs inside the backupPod Get the path on the host, so that VGDP runs inside node-agent, this is similar to the existing PodVolumeBackup Each option has their pros and cons, in the current design, we will use the second way because it is simpler in implementation and more controllable in workflow. The Expose operation for DataDownload still takes two phases, The first phase creates below objects: restorePVC: It is a PVC in Velero namespace with the same specification, it is used to provision the restorePV restorePod: It is used to attach the restorePVC so that the restorePV could be provisioned by Kubernetes restorePV: It is provisioned by Kubernetes and bound to restorePVC Data will be downloaded to the restorePV. No object is created in user source namespace and no activity is done there either. The second phase is the same as DataUpload, that is, we still use the host path to access restorePV and run VGDP in node-agent. Some internal objects are created during the expose. Therefore, we need to clean them up to prevent internal objects from rampant growth. The cleanup happens in two cases: When the controller finishes processing the DUCR/DDCR, this includes the cases that the DUCR/DDCR is completed, failed and cancelled. When the DM restarts and the DM doesn't support restart recovery. When the DM comes back, it should detect all the ongoing DUCR/DDCR and clean up the expose. Specifically, VBDM should follow this rule since it doesn't support restart recovery. We will leverage on BIA/RIA V2's Cancel method to implement the cancellation, below are the prototypes from BIA/RIA V2: ``` Cancel(operationID string, backup *api.Backup) error Cancel(operationID string, restore *api.Restore) error ``` At present, Velero doesnt support canceling an ongoing backup, the current version of BIA/RIA V2 framework has some problems to support the end to end cancellation as well. Therefore, the current design doesnt aim to deliver an end-to-end cancellation workflow but to implement the cancellation workflow inside the data movement, in future, when the other two parts are ready for cancellation, the data movement cancellation workflow could be directly used. Additionally, at present, the data movement cancellation will be used in the below scenarios: When a backup is deleted, the backup deletion controller will call DMPs Cancel method, so that the ongoing data movement will not run after the backup is deleted. In the restart case, the ongoing backups will be marked as ```Failed``` when Velero restarts, at this time, DMPs Cancel method will also be called when Velero server comes back because Velero will never process these backups. For data movement implementation, a ```Cancel``` field is included in the DUCR/DDCR. DMP patches the DUCR/DDCR with ```Cancel``` field set to true, then it keeps querying the status of DUCR/DDCR until it comes to Canceled status or timeout, by which time, DMP returns the Cancel call to Velero. Then DM needs to handle the cancel request, e.g., stop the data transition. 
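Conceptually, the DMP's side of the cancellation looks like the sketch below. It assumes the Cancel flag is a boolean under the CR's spec; that placement is an assumption of this illustration rather than part of the CRD spec shown earlier, and the CR name is made up.

```
kubectl -n velero patch dataupload backup-01-pvc-data-xxxxx \
  --type merge -p '{"spec":{"cancel":true}}'

kubectl -n velero get dataupload backup-01-pvc-data-xxxxx -o jsonpath='{.status.phase}'
```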
For VBDM, it sets a signal to the uploader and the uploader will abort in a short time. The cancelled DUCR/DDCR is marked as ```Canceled```. Below diagram shows VBDMs cancel workflow (take backup for example, restore is the" }, { "data": "It is possible that a DM doesnt support cancellation at all or only support in a specific phase (e.g., during InProgress phase), if the cancellation is requested at an unexpected time or to an unexpecting DM, the behavior is decided by the DMP and DM, below are some recommendations: If a DM doesn't support cancellation at all, DMP should be aware of this, so the DMP could return an error and fail early If the cancellation is requested at an unexpected time, DMP is possibly not aware of this, it could still deliver it to the DM, so both Velero and DMP wait there until DM completes the cancel request or timeout VBDM's cancellation exactly follows the above rules. Velero uses BIA/RIA V2 to launch data movement tasks, so from Veleros perspective, the DataUpload/DataDownload CRs from the running backups will be submitted in parallel. Then how these CRs are handled is decided by data movers, in another words, the specific data mover decides whether to handle them sequentially or in parallel, as well what the parallelism is like. Velero makes no restriction to data movers regarding to this. Next, lets focus on the parallelism of VBDM, which could also be a reference of the plugin data movers. VBDM is hosted by Velero node-agent, so there is one data movement controller instance on each Kubernetes node, which also means that these instances could handle the DataUpload/DataDownload CRs in parallel. On the other hand, a volume/volume snapshot may be accessed from only one or multiple nodes varying from its location, the backend storage architecture, etc. Therefore, the first decisive factor of the parallelism is the accessibility of a volume/volume snapshot. Therefore, we have below principles: We should spread the data movement activities equally to all the nodes in the cluster. This requires a load balance design from Velero In one node, it doesnt mean the more concurrency the better, because the data movement activities are high in resource consumption, i.e., CPU, memory, and network throughput. For the same consideration, we should make this configurable because the best number should be set by users according to the bottleneck they detect We will address the two principles step by step. As the first step, VBDMs parallelism is designed as below: We dont create the load balancing mechanism for the first step, we dont detect the accessibility of the volume/volume snapshot explicitly. Instead, we create the backupPod/restorePod under the help of Kubernetes, Kubernetes schedules the backupPod/restorePod to the appropriate node, then the data movement controller on that node will handle the DataUpload/DataDownload CR there, so the resource will be consumed from that node. We expose the configurable concurrency value per node, for details of how the concurrency number constraints various backups and restores which share VGDP, check the . As for the resource consumption, it is related to the data scale of the data movement activity and it is charged to node-agent pods, so users should configure enough resource to node-agent pods. When a DUCR/DDCR is in InProgress phase, users could check the progress. 
In DUCR/DDCRs status, we have fields like ```totalBytes``` and ```doneBytes```, the same values will be displayed as a result of below querires: Call ```kubectl get dataupload -n velero xxx or kubectl get datadownload -n velero xxx```. Call ```velero backup describe" }, { "data": "This is implemented as part of BIA/RIA V2, the above values are transferred to async operation and this command retrieve them from the async operation instead of DUCR/DDCR. See for details DUCR contains the information that is required during restore but as mentioned above, it will not be synced because during restore its information is retrieved dynamically. Therefore, we have no change to Backup Sync. Once a backup is deleted, the data in the backup repository should be deleted as well. On the other hand, the data is created by the specific DM, Velero doesn't know how to delete the data. Therefore, Velero relies on the DM to delete the backup data. As the current workflow, when ```velero backup delete``` CLI is called, a ```deletebackuprequests``` CR is submitted; after the backup delete controller finishes all the work, the ```deletebackuprequests``` CR will be deleted. In order to give an opportunity for the DM to delete the backup data, we remedy the workflow as below: During the backup deletion, the backup delete controller retrieves all the DUCRs belong to the backup The backup delete controller then creates the DUCRs into the cluster Before deleting the ```deletebackuprequests``` CR, the backup delete controller adds a ```velero.io/dm-delete-backup``` finalizer to the CR As a result, the ```deletebackuprequests``` CR will not be deleted until the finalizer is removed The DM needs to watch the ```deletebackuprequests``` CRs with the ```velero.io/dm-delete-backup``` finalizer Once the DM finds one, it collects a list of DUCRs that belong to the backup indicating by the ```deletebackuprequests``` CR's spec If the list is not empty, the DM delete the backup data for each of the DUCRs in the list as well as the DUCRs themselves Finally, when all the items in the list are processed successfully, the DM removes the ```velero.io/dm-delete-backup``` finalizer Otherwise, if any error happens during the processing, the ```deletebackuprequests``` CR will be left there with the ```velero.io/dm-delete-backup``` finalizer, as well as the failed DUCRs DMs may use a periodical manner to retry the failed delete requests If Velero restarts during a data movement activity, the backup/restore will be marked as failed when Velero server comes back, by this time, Velero will request a cancellation to the ongoing data movement. If DM restarts, Velero has no way to detect this, DM is expected to: Either recover from the restart and continue the data movement Or if DM doesnt support recovery, it should cancel the data movement and mark the DUCR/DDCR as failed. DM should also clear any internal objects created during the data movement before and after the restart At present, VBDM doesn't support recovery, so it will follow the second rule. To work with block devices, VGDP will be updated. Today, when Kopia attempts to create a snapshot of the block device, it will error because kopia does not support this file type. Kopia does have a nice set of interfaces that are able to be extended though. Notice The Kopia block mode uploader only supports non-Windows platforms, because the block mode code invokes some system calls that are not present in the Windows platform. 
To achieve the necessary information to determine the type of volume that is being used, we will need to pass in the volume mode in provider interface. ```go type PersistentVolumeMode string const ( // PersistentVolumeBlock means the volume will not be formatted with a filesystem and will remain a raw block device. PersistentVolumeBlock PersistentVolumeMode = \"Block\" // PersistentVolumeFilesystem means the volume will be or is formatted with a" }, { "data": "PersistentVolumeFilesystem PersistentVolumeMode = \"Filesystem\" ) // Provider which is designed for one pod volume to do the backup or restore type Provider interface { // RunBackup which will do backup for one specific volume and return snapshotID, isSnapshotEmpty, error // updater is used for updating backup progress which implement by third-party RunBackup( ctx context.Context, path string, realSource string, tags map[string]string, forceFull bool, parentSnapshot string, volMode uploader.PersistentVolumeMode, uploaderCfg shared.UploaderConfig, updater uploader.ProgressUpdater) (string, bool, error) RunRestore( ctx context.Context, snapshotID string, volumePath string, volMode uploader.PersistentVolumeMode, updater uploader.ProgressUpdater) error ``` In this case, we will extend the default kopia uploader to add the ability, when a given volume is for a block mode and is mapped as a device, we will use the to stream the device and backup to the kopia repository. ```go func getLocalBlockEntry(sourcePath string) (fs.Entry, error) { source, err := resolveSymlink(sourcePath) if err != nil { return nil, errors.Wrap(err, \"resolveSymlink\") } fileInfo, err := os.Lstat(source) if err != nil { return nil, errors.Wrapf(err, \"unable to get the source device information %s\", source) } if (fileInfo.Sys().(*syscall.Statt).Mode & syscall.SIFMT) != syscall.S_IFBLK { return nil, errors.Errorf(\"source path %s is not a block device\", source) } device, err := os.Open(source) if err != nil { if os.IsPermission(err) || err.Error() == ErrNotPermitted { return nil, errors.Wrapf(err, \"no permission to open the source device %s, make sure that node agent is running in privileged mode\", source) } return nil, errors.Wrapf(err, \"unable to open the source device %s\", source) } sf := virtualfs.StreamingFileFromReader(source, device) return virtualfs.NewStaticDirectory(source, []fs.Entry{sf}), nil } ``` In the `pkg/uploader/kopia/snapshot.go` this is used in the Backup call like ```go if volMode == uploader.PersistentVolumeFilesystem { // to be consistent with restic when backup empty dir returns one error for upper logic handle dirs, err := os.ReadDir(source) if err != nil { return nil, false, errors.Wrapf(err, \"Unable to read dir in path %s\", source) } else if len(dirs) == 0 { return nil, true, nil } } source = filepath.Clean(source) ... var sourceEntry fs.Entry if volMode == uploader.PersistentVolumeBlock { sourceEntry, err = getLocalBlockEntry(source) if err != nil { return nil, false, errors.Wrap(err, \"unable to get local block device entry\") } } else { sourceEntry, err = getLocalFSEntry(source) if err != nil { return nil, false, errors.Wrap(err, \"unable to get local filesystem entry\") } } ... snapID, snapshotSize, err := SnapshotSource(kopiaCtx, repoWriter, fsUploader, sourceInfo, sourceEntry, forceFull, parentSnapshot, tags, log, \"Kopia Uploader\") ``` To handle restore, we need to extend the interface and specifically the . We only need to extend two functions the rest will be passed through. 
```go type BlockOutput struct { *restore.FilesystemOutput targetFileName string } var _ restore.Output = &BlockOutput{} const bufferSize = 128 * 1024 func (o *BlockOutput) WriteFile(ctx context.Context, relativePath string, remoteFile fs.File) error { remoteReader, err := remoteFile.Open(ctx) if err != nil { return errors.Wrapf(err, \"failed to open remote file %s\", remoteFile.Name()) } defer remoteReader.Close() targetFile, err := os.Create(o.targetFileName) if err != nil { return errors.Wrapf(err, \"failed to open file %s\", o.targetFileName) } defer targetFile.Close() buffer := make([]byte, bufferSize) readData := true for readData { bytesToWrite, err := remoteReader.Read(buffer) if err != nil { if err != io.EOF { return errors.Wrapf(err, \"failed to read data from remote file %s\", o.targetFileName) } readData = false } if bytesToWrite > 0 { offset := 0 for bytesToWrite > 0 { if bytesWritten, err := targetFile.Write(buffer[offset:bytesToWrite]); err == nil { bytesToWrite -= bytesWritten offset += bytesWritten } else { return" }, { "data": "\"failed to write data to file %s\", o.targetFileName) } } } } return nil } func (o *BlockOutput) BeginDirectory(ctx context.Context, relativePath string, e fs.Directory) error { var err error o.targetFileName, err = filepath.EvalSymlinks(o.TargetPath) if err != nil { return errors.Wrapf(err, \"unable to evaluate symlinks for %s\", o.targetFileName) } fileInfo, err := os.Lstat(o.targetFileName) if err != nil { return errors.Wrapf(err, \"unable to get the target device information for %s\", o.TargetPath) } if (fileInfo.Sys().(*syscall.Statt).Mode & syscall.SIFMT) != syscall.S_IFBLK { return errors.Errorf(\"target file %s is not a block device\", o.TargetPath) } return nil } ``` Additional mount is required in the node-agent specification to resolve symlinks to the block devices from /hostpods/PODID/volumeDevices/kubernetes.io~csi directory. ```yaml mountPath: /var/lib/kubelet/plugins mountPropagation: HostToContainer name: host-plugins .... hostPath: path: /var/lib/kubelet/plugins name: host-plugins ``` Privileged mode is required to access the block devices in /var/lib/kubelet/plugins/kubernetes.io/csi/volumeDevices/publish directory as confirmed by testing on EKS and Minikube. ```yaml SecurityContext: &corev1.SecurityContext{ Privileged: &c.privilegedNodeAgent, }, ``` There should be only one DM to handle a specific DUCR/DDCR in all cases. If more than one DMs process a DUCR/DDCR at the same time, there will be a disaster. Therefore, a DM should check the dataMover field of DUCR/DDCR and process the CRs belong to it only. For example, VBDM reconciles DUCR/DDCR with their ```dataMover``` field set to \"\" or \"velero\", it will skip all others. This means during the installation, users are allowed to install more than one DMs, but the DMs should follow the above rule. When creating a backup, we should allow users to specify the data mover, so a new backup CLI option is required. For restore, we should retrieve the same information from the corresponding backup, so that the data mover selection is consistent. At present, Velero doesn't have the capability to verify the existence of the specified data mover. As a result, if a wrong data mover name is specified for the backup or the specified data mover is not installed, nothing will fail early, DUCR/DDCR is still created and Velero will wait there until timeout. 
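As an illustration of that single-owner rule, a data mover's reconciler could apply a check along the lines of the sketch below (minimal local types, not the actual VBDM code):

```go
package main

import "fmt"

// dataUploadSpec is a local stand-in for the relevant part of a DUCR/DDCR spec.
type dataUploadSpec struct {
	DataMover string
}

// ownsCR reports whether a data mover named dmName should handle the CR.
// The built-in mover ("velero") also claims CRs whose dataMover field is empty.
func ownsCR(dmName string, spec dataUploadSpec) bool {
	if dmName == "velero" {
		return spec.DataMover == "" || spec.DataMover == "velero"
	}
	return spec.DataMover == dmName
}

func main() {
	fmt.Println(ownsCR("velero", dataUploadSpec{DataMover: ""}))       // true: built-in DM takes it
	fmt.Println(ownsCR("velero", dataUploadSpec{DataMover: "xxx-dm"})) // false: skipped by VBDM
	fmt.Println(ownsCR("xxx-dm", dataUploadSpec{DataMover: "xxx-dm"})) // true: plugin DM takes it
}
```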
Plugin DMs may need some private configurations, the plugin DM providers are recommended to create a self-managed configMap to take the information. Velero doesn't maintain the lifecycle of the configMap. Besides, the configMap is recommended to named as the DM's name, in this way, if Velero or DMP recognizes some generic options that varies between DMs, the options could be added into the configMap and visited by Velero or DMP. Conclusively, below are the steps plugin DMs need to do in order to integrate to Velero volume snapshot data movement. Handle and only handle DUCRs with the matching ```dataMover``` value Maintain the phases and progresses of DUCRs correctly If supported, response to the Cancel request of DUCRs Dispose the volume snapshots as well as their related objects after the snapshot data is transferred Handle and only handle DDCRs with the matching ```dataMover``` value Maintain the phases and progresses of DDCRs correctly If supported, response to the Cancel request of DDCRs Create the PV with data restored to it Set PV's ```claimRef``` to the provided PVC and set ```velero.io/dynamic-pv-restore``` label It doesnt mean that once the data movement feature is enabled users must move every snapshot. We will support below two working modes: Dont move snapshots. This is same with the existing CSI snapshot feature, that is, native snapshots are taken and kept Move snapshot data and delete native" }, { "data": "This means that once the data movement completes, the native snapshots will be deleted. For this purpose, we need to add a new option in the backup command as well as the Backup CRD. The same option for restore will be retrieved from the specified backup, so that the working mode is consistent. We add below new fields in the Backup CRD: ``` // SnapshotMoveData specifies whether snapshot data should be moved // +optional // +nullable SnapshotMoveData *bool `json:\"snapshotMoveData,omitempty\"` // DataMover specifies the data mover to be used by the backup. // If DataMover is \"\" or \"velero\", the built-in data mover will be used. // +optional DataMover string `json:\"datamover,omitempty\"` ``` SnapshotMoveData will be used to decide the Working Mode. DataMover will be used to decide the data mover to handle the DUCR. DUCR's DataMover value is derived from this value. As mentioned in the Plugin Data Movers section, the data movement information for a restore should be the same with the backup. Therefore, the working mode for restore should be decided by checking the corresponding Backup CR; when creating a DDCR, the DataMover value should be retrieved from the corresponding Backup Result. The logs during the data movement are categorized as below: Logs generated by Velero Logs generated by DMPs Logs generated by DMs For 1 and 2, the existing plugin mechanism guarantees that the logs could be saved into the Velero server log as well as backup/restore persistent log. For 3, Velero leverage on DMs to decide how to save the log, but they will not go to Velero server log or backup/restore persistent log. For VBDM, the logs are saved in the node-agent server log. DMs need to be configured during installation so that they can be installed. Plugin DMs may have their own configuration, for VGDM, the only requirement is to install Velero node-agent. Moreover, the DMP is also required during the installation. 
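For the per-data-mover configuration mentioned above, a plugin DM's self-managed ConfigMap might look like this sketch; the data keys are made up for illustration, since Velero does not define them.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: xxx-plugin-dm          # recommended to match the data mover's name
  namespace: velero
data:
  # Keys and values are owned entirely by the plugin data mover.
  concurrency: "4"
  logLevel: "info"
```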
From release-1.14, the `github.com/vmware-tanzu/velero-plugin-for-csi` repository, which is the Velero CSI plugin, is merged into the `github.com/vmware-tanzu/velero` repository. The reason to merge the CSI plugin is: The VolumeSnapshot data mover depends on the CSI plugin, it's reasonabe to integrate them. This change reduces the Velero deploying complexity. This makes performance tuning easier in the future. As a result, no need to install Velero CSI plugin anymore. For example, to move CSI snapshot through VBDM, below is the installation script: ``` velero install \\ --provider \\ --image \\ --features=EnableCSI \\ --use-node-agent \\ ``` For VBDM, no new installation option is introduced, so upgrade is not affected. If plugin DMs require new options and so the upgrade is affected, they should explain them in their own documents. As explained in the Working Mode section, we add one more flag ```snapshot-move-data``` to indicate whether the snapshot data should be moved. As explained in the Plugin Data Movers section, we add one more flag ```data-mover``` for users to configure the data mover to move the snapshot data. Example of backup command are as below. Below CLI means to create a backup with volume snapshot data movement enabled and with VBDM as the data mover: ``` velero backup create xxx --include-namespaces --snapshot-move-data ``` Below CLI has the same meaning as the first one: ``` velero backup create xxx --include-namespaces --snapshot-move-data --data-mover velero ``` Below CLI means to create a backup with volume snapshot data movement enabled and with \"xxx-plugin-dm\" as the data mover: ``` velero backup create xxx --include-namespaces --snapshot-move-data --data-mover xxx-plugin-dm ``` Restore command is kept as is." } ]
{ "category": "Runtime", "file_name": "volume-snapshot-data-movement.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English | can be used with Macvlan, SR-IOV, and IPvlan to implement a complete network solution. This article will compare it with the mainstream network CNI plug-ins on the market ( Such as , ) Network `Latency` and `Throughput` in various scenarios This test contains performance benchmark data for various scenarios. All tests were performed between containers running on two different bare metal nodes with 10 Gbit/s network interfaces. Kubernetes: `v1.28.2` container runtime: `containerd 1.6.24` OS: `ubuntu 23.04` kernel: `6.2.0-35-generic` NIC: `Mellanox Technologies MT27800 Family [ConnectX-5]` | Node | Role | CPU | Memory | | -- | | | | | master1 | control-plane, worker | 56C | 125Gi | | worker1 | worker | 56C | 125Gi | This test uses with Spiderpool as the test solution, and selected , For comparison, two common network solutions are as follows. The following is the relevant version and other information: | Test object | illustrate | | - | | | Spiderpool based macvlan datapath | Spiderpool version v0.8.0 | | Calico | Calico version v3.26.1, based on iptables datapath and no tunnels | | Cilium | Cilium version v1.14.3, based on full eBPF acceleration and no tunneling | Sockperf is a network benchmarking tool that can be used to measure network latency. It allows you to evaluate the performance of your network by testing the latency between two endpoints. We can use it to separately test Pod's cross-node access to Pod and Service. When testing access to Service's cluster IP, there are two scenarios: `kube-proxy` or `cilium + kube-proxy replacement`. Cross-node Pod latency testing for Pod IP purposes. Use `sockperf pp --tcp -i <Pod IP> -p 12345 -t 30` to test the latency of cross-node Pod access to the Pod IP. The data is as follows. | Test object | latency | | - | -- | | Calico based on iptables datapath and tunnelless | 51.3 usec | | Cilium based on full eBPF acceleration and no tunneling | 29.1 usec | | Spiderpool Pod on the same subnet based on macvlan | 24.3 usec | | Spiderpool Pod across subnets based on macvlan | 26.2 usec | | node to node | 32.2 usec | Cross-node Pod latency test for cluster IP purpose. Use `sockperf pp --tcp -i <Cluster IP> -p 12345 -t 30` to test the latency of cross-node Pod access to the cluster IP. The data is as follows. | Test object | latency | | -- | -- | | Calico based on iptables datapath and tunnelless | 51.9 usec | | Cilium based on full eBPF acceleration and no tunneling | 30.2 usec | | Spiderpool Pod based on macvlan on the same subnet and kube-proxy | 36.8 usec | | Spiderpool Pod based on macvlan on the same subnet and fully eBPF accelerated | 27.7 usec | | node to node | 32.2 usec | netperf is a widely used network performance testing tool that allows you to measure various aspects of network performance, such as" }, { "data": "We can use netperf to test Pod's cross-node access to Pod and Service respectively. When testing access to Service's cluster IP, there are two scenarios: `kube-proxy` or `cilium + kube-proxy replacement`. Netperf testing of cross-node Pods for Pod IP purposes. Use `netperf -H <Pod IP> -l 10 -c -t TCP_RR -- -r100,100` to test the throughput of cross-node Pod access to Pod IP. The data is as follows. 
| Test object | Throughput (rps) | | -- | -- | | Calico based on iptables datapath and tunnelless | 9985.7 | | Cilium based on full eBPF acceleration and no tunneling | 17571.3 | | Spiderpool Pod on the same subnet based on macvlan | 19793.9 | | Spiderpool Pod across subnets based on macvlan | 19215.2 | | node to node | 47560.5 | Netperf testing across node Pods for cluster IP purposes. Use `netperf -H <cluster IP> -l 10 -c -t TCP_RR -- -r100,100` to test the throughput of cross-node Pods accessing the cluster IP. The data is as follows. | Test object | Throughput (rps) | | | - | | Calico based on iptables datapath and tunnelless | 9782.2 | | Cilium based on full eBPF acceleration and no tunneling | 17236.5 | | Spiderpool Pod based on macvlan on the same subnet and kube-proxy | 16002.3 | | Spiderpool Pod based on macvlan on the same subnet and fully eBPF accelerated | 18992.9 | | node to node | 47560.5 | iperf is a popular network performance testing tool that allows you to measure network bandwidth between two endpoints. It is widely used to evaluate the bandwidth and performance of network connections. In this chapter, we use it to test Pod's cross-node access to Pod and Service. When testing access to Service's cluster IP, there are two scenarios: `kube-proxy` or `cilium + kube-proxy replacement`. iperf testing of cross-node Pods for Pod IP purposes. Use `iperf3 -c <Pod IP> -d -P 1` to test the performance of cross-node Pod access to Pod IP. Use the -P parameter to specify threads 1, 2, and 4 respectively. The data is as follows. | Test object | Number of threads 1 | Number of threads 2 | Number of threads 4 | | -- | -- | -- | -- | | Calico based on iptables datapath and tunnelless | 3.26 Gbits/sec | 4.56 Gbits/sec | 8.05 Gbits/sec | | Cilium based on full eBPF acceleration and no tunneling | 9.35 Gbits/sec | 9.36 Gbits/sec | 9.39 Gbits/sec | | Spiderpool Pod on the same subnet based on macvlan | 9.36 Gbits/sec | 9.37 Gbits/sec | 9.38 Gbits/sec | | Spiderpool Pod across subnets based on macvlan | 9.36 Gbits/sec | 9.37 Gbits/sec | 9.38 Gbits/sec | | node to node | 9.41 Gbits/sec | 9.40 Gbits/sec | 9.42 Gbits/sec | iperf testing of cross-node Pods for cluster IP purposes. Use `iperf3 -c <cluster IP> -d -P 1` to test the performance of cross-node Pod access to cluster IP. Use the -P parameter to specify threads 1, 2, and 4 respectively. The data is as" }, { "data": "| Test object | Number of threads 1 | Number of threads 2 | Number of threads 4 | | -- | -- | -- | | | Calico based on iptables datapath and tunnelless | 3.06 Gbits/sec | 4.63 Gbits/sec | 8.02 Gbits/sec | | Cilium based on full eBPF acceleration and no tunneling | 9.35 Gbits/sec | 9.35 Gbits/sec | 9.38 Gbits/sec | | Spiderpool Pod based on macvlan on the same subnet and kube-proxy | 3.42 Gbits/sec | 6.75 Gbits/sec | 9.24 Gbits/sec | | Spiderpool Pod based on macvlan on the same subnet and fully eBPF accelerated | 9.36 Gbits/sec | 9.38 Gbits/sec | 9.39 Gbits/sec | | node to node | 9.41 Gbits/sec | 9.40 Gbits/sec | 9.42 Gbits/sec | redis-benchmark is designed to measure the performance and throughput of a Redis server by simulating multiple clients and executing various Redis commands. We used redis-benchmark to test Pod's cross-node access to the Pod and Service where the Redis service is deployed. When testing access to Service's cluster IP, there are two scenarios: `kube-proxy` or `cilium + kube-proxy replacement`. Cross-node Pod redis-benchmark testing based on Pod IP. 
Use `redis-benchmark -h <Pod IP> -p 6379 -d 1000 -t get,set` to test the performance of cross-node Pod access to Pod IP. The data is as follows. | Test object | get | set | | | -- | | | Calico based on iptables datapath and tunnelless | 45682.96 rps | 46992.48 rps | | Cilium based on full eBPF acceleration and no tunneling | 59737.16 rps | 59988.00 rps | | Spiderpool Pod on the same subnet based on macvlan | 66357.00 rps | 66800.27 rps | | Spiderpool Pod across subnets based on macvlan | 67444.45 rps | 67783.67 rps | Cross-node Pod redis-benchmark testing for cluster IP purposes. Use `redis-benchmark -h <cluster IP> -p 6379 -d 1000 -t get,set` to test the performance of cross-node Pod access to cluster IP. The data is as follows. | Test object | get | set | | | | -- | | Calico based on iptables datapath and tunnelless | 46082.95 rps | 46728.97 rps | | Cilium based on full eBPF acceleration and no tunneling | 60496.07 rps | 58927.52 rps | | Spiderpool Pod based on macvlan on the same subnet and kube-proxy | 45578.85 rps | 46274.87 rps | | Spiderpool Pod based on macvlan on the same subnet and fully eBPF accelerated | 63211.12 rps | 64061.50 rps | Spiderpool can achieve same-node communication acceleration with the help of the project. Run the service on one node of the cluster and not on the other node. Conduct a performance test through Sockperf between Pods on the same node. The data is as follows. | Test object | latency | | - | -- | | Node enables eBPF acceleration | 7.643 usec | | Node is not enabled for eBPF acceleration | 17.335 usec | When Spiderpool is used as an underlay network solution, its IO performance is ahead of Calico and Cilium in most scenarios." } ]
{ "category": "Runtime", "file_name": "io-performance.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "All notable changes to this project will be documented in this file. This project adheres to . Enhancements: : Add `WithLazy` method to `Logger` which lazily evaluates the structured context. : String encoding is much (~50%) faster now. Thanks to @jquirke, @cdvr1993 for their contributions to this release. This release contains several improvements including performance, API additions, and two new experimental packages whose APIs are unstable and may change in the future. Enhancements: : Add `zap/exp/zapslog` package for integration with slog. : Add `Name` to `Logger` which returns the Logger's name if one is set. : Add `zap/exp/expfield` package which contains helper methods `Str` and `Strs` for constructing String-like zap.Fields. : Reduce stack size on `Any`. Thanks to @knight42, @dzakaammar, @bcspragu, and @rexywork for their contributions to this release. Enhancements: : Add `Level` to both `Logger` and `SugaredLogger` that reports the current minimum enabled log level. : `SugaredLogger` turns errors to zap.Error automatically. Thanks to @Abirdcfly, @craigpastro, @nnnkkk7, and @sashamelentyev for their contributions to this release. Enhancements: : Add a `zapcore.LevelOf` function to determine the level of a `LevelEnabler` or `Core`. : Add `zap.Stringers` field constructor to log arrays of objects that implement `String() string`. Enhancements: : Add `zap.Objects` and `zap.ObjectValues` field constructors to log arrays of objects. With these two constructors, you don't need to implement `zapcore.ArrayMarshaler` for use with `zap.Array` if those objects implement `zapcore.ObjectMarshaler`. : Add `SugaredLogger.WithOptions` to build a copy of an existing `SugaredLogger` with the provided options applied. : Add `ln` variants to `SugaredLogger` for each log level. These functions provide a string joining behavior similar to `fmt.Println`. : Add `zap.WithFatalHook` option to control the behavior of the logger for `Fatal`-level log entries. This defaults to exiting the program. : Add a `zap.Must` function that you can use with `NewProduction` or `NewDevelopment` to panic if the system was unable to build the logger. : Add a `Logger.Log` method that allows specifying the log level for a statement dynamically. Thanks to @cardil, @craigpastro, @sashamelentyev, @shota3506, and @zhupeijun for their contributions to this release. Enhancements: : Add `zapcore.ParseLevel` to parse a `Level` from a string. : Add `zap.ParseAtomicLevel` to parse an `AtomicLevel` from a string. Bugfixes: : Fix panic in JSON encoder when `EncodeLevel` is unset. Other changes: : Improve encoding performance when the `AddCaller` and `AddStacktrace` options are used together. Thanks to @aerosol and @Techassi for their contributions to this release. Enhancements: : Add `EncoderConfig.SkipLineEnding` flag to disable adding newline characters between log statements. : Add `EncoderConfig.NewReflectedEncoder` field to customize JSON encoding of reflected log fields. Bugfixes: : Fix inaccurate precision when encoding complex64 as JSON. , : Close JSON namespaces opened in `MarshalLogObject` methods when the methods return. : Avoid panicking in Sampler core if `thereafter` is zero. Other changes: : Drop support for Go < 1.15. Thanks to @psrajat, @lruggieri, @sammyrnycreal for their contributions to this release. Bugfixes: : JSON: Fix complex number encoding with negative imaginary part. Thanks to @hemantjadon. : JSON: Fix inaccurate precision when encoding float32. 
Enhancements: : Avoid panicking in Sampler core if the level is out of bounds. : Reduce the size of BufferedWriteSyncer by aligning the fields better. Thanks to @lancoLiu and @thockin for their contributions to this release. Bugfixes: : Fix nil dereference in logger constructed by `zap.NewNop`. Enhancements: : Add `zapcore.BufferedWriteSyncer`, a new `WriteSyncer` that buffers messages in-memory and flushes them periodically. : Add `zapio.Writer` to use a Zap logger as an `io.Writer`. : Add" }, { "data": "option to control the source of time via the new `zapcore.Clock` interface. : Avoid panicking in `zap.SugaredLogger` when arguments of `w` methods don't match expectations. : Add support for filtering by level or arbitrary matcher function to `zaptest/observer`. : Comply with `io.StringWriter` and `io.ByteWriter` in Zap's `buffer.Buffer`. Thanks to @atrn0, @ernado, @heyanfu, @hnlq715, @zchee for their contributions to this release. Bugfixes: : Encode `<nil>` for nil `error` instead of a panic. , : Update minimum version constraints to address vulnerabilities in dependencies. Enhancements: : Improve alignment of fields of the Logger struct, reducing its size from 96 to 80 bytes. : Support `grpclog.LoggerV2` in zapgrpc. : Support URL-encoded POST requests to the AtomicLevel HTTP handler with the `application/x-www-form-urlencoded` content type. : Support multi-field encoding with `zap.Inline`. : Speed up SugaredLogger for calls with a single string. : Add support for filtering by field name to `zaptest/observer`. Thanks to @ash2k, @FMLS, @jimmystewpot, @Oncilla, @tsoslow, @tylitianrui, @withshubh, and @wziww for their contributions to this release. Bugfixes: : Fix missing newline in IncreaseLevel error messages. : Fix panic in JSON encoder when encoding times or durations without specifying a time or duration encoder. : Honor CallerSkip when taking stack traces. : Fix the default file permissions to use `0666` and rely on the umask instead. : Encode `<nil>` for nil `Stringer` instead of a panic error log. Enhancements: : Added `zapcore.TimeEncoderOfLayout` to easily create time encoders for custom layouts. : Added support for a configurable delimiter in the console encoder. : Optimize console encoder by pooling the underlying JSON encoder. : Add ability to include the calling function as part of logs. : Add `StackSkip` for including truncated stacks as a field. : Add options to customize Fatal behaviour for better testability. Thanks to @SteelPhase, @tmshn, @lixingwang, @wyxloading, @moul, @segevfiner, @andy-retailnext and @jcorbin for their contributions to this release. Bugfixes: : Fix handling of `Time` values out of `UnixNano` range. : Fix `IncreaseLevel` being reset after a call to `With`. Enhancements: : Add `WithCaller` option to supersede the `AddCaller` option. This allows disabling annotation of log entries with caller information if previously enabled with `AddCaller`. : Deprecate `NewSampler` constructor in favor of `NewSamplerWithOptions` which supports a `SamplerHook` option. This option adds support for monitoring sampling decisions through a hook. Thanks to @danielbprice for their contributions to this release. Bugfixes: : Fix panic on attempting to build a logger with an invalid Config. : Vendoring Zap with `go mod vendor` no longer includes Zap's development-time dependencies. : Fix issue introduced in 1.14.0 that caused invalid JSON output to be generated for arrays of `time.Time` objects when using string-based time formats. 
Thanks to @YashishDua for their contributions to this release. Enhancements: : Optimize calls for disabled log levels. : Add millisecond duration encoder. : Add option to increase the level of a logger. : Optimize time formatters using `Time.AppendFormat` where possible. Thanks to @caibirdme for their contributions to this release. Enhancements: : Add `Intp`, `Stringp`, and other similar `p` field constructors to log pointers to primitives with support for `nil` values. Thanks to @jbizzle for their contributions to this release. Enhancements: : Migrate to Go modules. Enhancements: : Add `zapcore.OmitKey` to omit keys in an `EncoderConfig`. : Add `RFC3339` and `RFC3339Nano` time encoders. Thanks to @juicemia, @uhthomas for their contributions to this release. Bugfixes: : Fix `MapObjectEncoder.AppendByteString` not adding value as a string. : Fix incorrect call depth to determine caller in Go 1.12. Enhancements: : Add" }, { "data": "to wrap `zap.Option` for creating test loggers. : Don't panic when encoding a String field. : Disable HTML escaping for JSON objects encoded using the reflect-based encoder. Thanks to @iaroslav-ciupin, @lelenanam, @joa, @NWilson for their contributions to this release. Bugfixes: : MapObjectEncoder should not ignore empty slices. Enhancements: : Reduce number of allocations when logging with reflection. , : Expose a registry for third-party logging sinks. Thanks to @nfarah86, @AlekSi, @JeanMertz, @philippgille, @etsangsplk, and @dimroc for their contributions to this release. Enhancements: : Make log level configurable when redirecting the standard library's logger. : Add a logger that writes to a `testing.TB`. : Add a top-level alias for `zapcore.Field` to clean up GoDoc. Bugfixes: : Add a missing import comment to `go.uber.org/zap/buffer`. Thanks to @DiSiqueira and @djui for their contributions to this release. Bugfixes: : Store strings when using AddByteString with the map encoder. Enhancements: : Add `NewStdLogAt`, which extends `NewStdLog` by allowing the user to specify the level of the logged messages. Enhancements: : Omit zap stack frames from stacktraces. : Add a `ContextMap` method to observer logs for simpler field validation in tests. Enhancements: and : Support errors produced by `go.uber.org/multierr`. : Support user-supplied encoders for logger names. Bugfixes: : Fix a bug that incorrectly truncated deep stacktraces. Thanks to @richard-tunein and @pavius for their contributions to this release. This release fixes two bugs. Bugfixes: : Support a variety of case conventions when unmarshaling levels. : Fix a panic in the observer. This release adds a few small features and is fully backward-compatible. Enhancements: : Add a `LineEnding` field to `EncoderConfig`, allowing users to override the Unix-style default. : Preserve time zones when logging times. : Make `zap.AtomicLevel` implement `fmt.Stringer`, which makes a variety of operations a bit simpler. This release adds an enhancement to zap's testing helpers as well as the ability to marshal an AtomicLevel. It is fully backward-compatible. Enhancements: : Add a substring-filtering helper to zap's observer. This is particularly useful when testing the `SugaredLogger`. : Make `AtomicLevel` implement `encoding.TextMarshaler`. This release adds a gRPC compatibility wrapper. It is fully backward-compatible. Enhancements: : Add a `zapgrpc` package that wraps zap's Logger and implements `grpclog.Logger`. This release fixes two bugs and adds some enhancements to zap's testing helpers. 
It is fully backward-compatible. Bugfixes: : Fix caller path trimming on Windows. : Fix a panic when attempting to use non-existent directories with zap's configuration struct. Enhancements: : Add filtering helpers to zaptest's observing logger. Thanks to @moitias for contributing to this release. This is zap's first stable release. All exported APIs are now final, and no further breaking changes will be made in the 1.x release series. Anyone using a semver-aware dependency manager should now pin to `^1`. Breaking changes: : Add byte-oriented APIs to encoders to log UTF-8 encoded text without casting from `[]byte` to `string`. : To support buffering outputs, add `Sync` methods to `zapcore.Core`, `zap.Logger`, and `zap.SugaredLogger`. : Rename the `testutils` package to `zaptest`, which is less likely to clash with other testing helpers. Bugfixes: : Make the ISO8601 time formatters fixed-width, which is friendlier for tab-separated console output. : Remove the automatic locks in `zapcore.NewCore`, which allows zap to work with concurrency-safe `WriteSyncer` implementations. : Stop reporting errors when trying to `fsync` standard out on Linux systems. : Report the correct caller from zap's standard library interoperability wrappers. Enhancements: : Add a registry allowing third-party encodings to work with zap's built-in" }, { "data": ": Make the representation of logger callers configurable (like times, levels, and durations). : Allow third-party encoders to use their own buffer pools, which removes the last performance advantage that zap's encoders have over plugins. : Add `CombineWriteSyncers`, a convenience function to tee multiple `WriteSyncer`s and lock the result. : Make zap's stacktraces compatible with mid-stack inlining (coming in Go 1.9). : Export zap's observing logger as `zaptest/observer`. This makes it easier for particularly punctilious users to unit test their application's logging. Thanks to @suyash, @htrendev, @flisky, @Ulexus, and @skipor for their contributions to this release. This is the third release candidate for zap's stable release. There are no breaking changes. Bugfixes: : Byte slices passed to `zap.Any` are now correctly treated as binary blobs rather than `[]uint8`. Enhancements: : Users can opt into colored output for log levels. : In addition to hijacking the output of the standard library's package-global logging functions, users can now construct a zap-backed `log.Logger` instance. : Frames from common runtime functions and some of zap's internal machinery are now omitted from stacktraces. Thanks to @ansel1 and @suyash for their contributions to this release. This is the second release candidate for zap's stable release. It includes two breaking changes. Breaking changes: : Zap's global loggers are now fully concurrency-safe (previously, users had to ensure that `ReplaceGlobals` was called before the loggers were in use). However, they must now be accessed via the `L()` and `S()` functions. Users can update their projects with ``` gofmt -r \"zap.L -> zap.L()\" -w . gofmt -r \"zap.S -> zap.S()\" -w . ``` and : RC1 was mistakenly shipped with invalid JSON and YAML struct tags on all config structs. This release fixes the tags and adds static analysis to prevent similar bugs in the future. Bugfixes: : Redirecting the standard library's `log` output now correctly reports the logger's caller. Enhancements: and : Zap now transparently supports non-standard, rich errors like those produced by `github.com/pkg/errors`. 
: Though `New(nil)` continues to return a no-op logger, `NewNop()` is now preferred. Users can update their projects with `gofmt -r 'zap.New(nil) -> zap.NewNop()' -w .`. : Incorrectly importing zap as `github.com/uber-go/zap` now returns a more informative error. Thanks to @skipor and @chapsuk for their contributions to this release. This is the first release candidate for zap's stable release. There are multiple breaking changes and improvements from the pre-release version. Most notably: Zap's import path is now \"go.uber.org/zap\"* &mdash; all users will need to update their code. User-facing types and functions remain in the `zap` package. Code relevant largely to extension authors is now in the `zapcore` package. The `zapcore.Core` type makes it easy for third-party packages to use zap's internals but provide a different user-facing API. `Logger` is now a concrete type instead of an interface. A less verbose (though slower) logging API is included by default. Package-global loggers `L` and `S` are included. A human-friendly console encoder is included. A declarative config struct allows common logger configurations to be managed as configuration instead of code. Sampling is more accurate, and doesn't depend on the standard library's shared timer heap. This is a minor version, tagged to allow users to pin to the pre-1.0 APIs and upgrade at their leisure. Since this is the first tagged release, there are no backward compatibility concerns and all functionality is new. Early zap adopters should pin to the 0.1.x minor version until they're ready to upgrade to the upcoming stable release." } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Stash by AppsCode", "subcategory": "Cloud Native Storage" }
[ { "data": "(architectures)= Incus can run on just about any architecture that is supported by the Linux kernel and by Go. Some entities in Incus are tied to an architecture, for example, the instances, instance snapshots and images. The following table lists all supported architectures including their unique identifier and the name used to refer to them. The architecture names are typically aligned with the Linux kernel architecture names. ID | Kernel name | Description | Personalities : | : | :- | : 1 | `i686` | 32bit Intel x86 | 2 | `x86_64` | 64bit Intel x86 | `x86` 3 | `armv7l` | 32bit ARMv7 little-endian | 4 | `aarch64` | 64bit ARMv8 little-endian | `armv7l` (optional) 5 | `ppc` | 32bit PowerPC big-endian | 6 | `ppc64` | 64bit PowerPC big-endian | `powerpc` 7 | `ppc64le` | 64bit PowerPC little-endian | 8 | `s390x` | 64bit ESA/390 big-endian | 9 | `mips` | 32bit MIPS | 10 | `mips64` | 64bit MIPS | `mips` 11 | `riscv32` | 32bit RISC-V little-endian | 12 | `riscv64` | 64bit RISC-V little-endian | 13 | `armv6l` | 32bit ARMv6 little-endian | 14 | `armv8l` | 32bit ARMv8 little-endian | 15 | `loongarch64` | 64bit Loongarch | ```{note} Incus cares only about the kernel architecture, not the particular userspace flavor as determined by the toolchain. That means that Incus considers ARMv7 hard-float to be the same as ARMv7 soft-float and refers to both as `armv7l`. If useful to the user, the exact userspace ABI may be set as an image and container property, allowing easy query. ``` Incus only supports running virtual-machines on the following host architectures: `x86_64` `aarch64` `ppc64le` `s390x` The virtual machine guest architecture can usually be the 32bit personality of the host architecture, so long as the virtual machine firmware is capable of booting it." } ]
{ "category": "Runtime", "file_name": "architectures.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "target-version: release-1.9 The goal of this proposal is to allow configuring the Swift API and the Keystone integration of Ceph RGW natively via the Object Store CRD, which allows native integration of the Rook operated Ceph RGW into OpenStack clouds. Both changes are bundled together as one proposal because they will typically deployed together. It is unlikely to encounter a Swift object store API outside an OpenStack cloud and in the context of an OpenStack cloud the Swift API needs to be integrated with the Keystone authentication service. The Keystone integration must support the current [OpenStack Identity API version 3](https://docs.openstack.org/api-ref/identity/v3/). It must be possible to serve S3 and Swift for the same object store pool. It must be possible to [obtain S3 credentials via OpenStack]( https://docs.ceph.com/en/octopus/radosgw/keystone/#keystone-integration-with-the-s3-api). Any changes to the CRD must be future safe and cleanly allow extension to further technologies (such as LDAP authentication). Support for OpenStack Identity API versions below v3. API version v2 has long been deprecated and [has been removed in the \"queens\" version of Keystone](https://docs.openstack.org/keystone/xena/contributor/http-api.html) which was released in 2018 and is now in extended maintenance mode (which means it gets no more points releases and only sporadic bug fixes/security fixes). Authenticating Ceph RGW to Keystone via admin token (a.k.a. shared secret). This is a deliberate choice as [admin tokens should not be used in production environments]( https://docs.openstack.org/keystone/rocky/admin/identity-bootstrap.html#using-a-shared-secret). Support for APIs beside S3 and Swift. Interaction of Swift with OBCs is out of scope of this document. If you need to access a bucket created by OBC via Swift you need to create a separate `cephobjectstoreuser`, configure its access rights to the bucket and use those credentials. Support for authentication technologies other than Keystone (e.g. LDAP) Exposing options that disable security features (e.g. TLS verification) The Object Store CRD will have to be extended to accommodate the new settings. A new optional section `auth.keystone` is added to the Object Store CRD to configure the keystone integration: ```yaml auth: keystone: url: https://keystone:5000/ [1, 2] acceptedRoles: [\"member\", \"service\", \"admin\"] [1, 2] implicitTenants: swift [1] tokenCacheSize: 1000 [1] revocationInterval: 1200 [1] serviceUserSecretName: rgw-service-user [3, 2] ``` Annotations: `[1]` These options map directly to [RGW configuration options](https://docs.ceph.com/en/octopus/radosgw/config-ref/#keystone-settings), the corresponding RGW option is formed by prefixing it with `rgwkeystone` and replacing upper case letters by their lower case letter followed by an underscore. E.g. `tokenCacheSize` maps to `rgwkeystonetokencachesize`. `[2]` These settings are required in the `keystone` section if present. `[1]` The name of the secret containing the credentials for the service user account used by RGW. It has to be in the same namespace as the object store resource. The `rgwkeystoneapi_version` option is not exposed to the user, as only version 3 of the OpenStack Identity API is supported for now. If a newer version of the Openstack Identity should be released at some point, it will be easy to extend the CR to accommodate it. The certificate to verify the Keystone endpoint can't be explicitly configured in Ceph RGW. 
Instead, the system configuration of the pod running RGW is used. You can add to the system certificate store via the `gateway.caBundleRef` setting of the object store resource. The credentials for the Keystone service account used by Ceph RGW are supplied in a Secret that contains a mapping of OpenStack [openrc" }, { "data": "variables](https://docs.openstack.org/python-openstackclient/xena/cli/man/openstack.html#environment-variables). `password` is the only authentication type that is supported, this is a limitation of RGW which does not support other Keystone authentication types. Example: ```yaml apiVersion: v1 kind: Secret metadata: name: rgw-service-user data: OS_PASSWORD: \"horse staples battery correct\" OS_USERNAME: \"ceph-rgw\" OSPROJECTNAME: \"admin\" OSUSERDOMAIN_NAME: \"Default\" OSPROJECTDOMAIN_NAME: \"Default\" OSAUTHTYPE: \"password\" ``` This format is chosen because it is a natural and interoperable way that keystone credentials are represented and for the supported auth type it maps naturally to the Ceph RGW configuration. The following constraints must be fulfilled by the secret: `OSAUTHTYPE` must be `password` or omitted. `OSUSERDOMAINNAME` must equal `OSPROJECTDOMAINNAME`. This is a restriction of Ceph RGW, which does not support configuring separate domains for the user and project. All openrc variables not in the example above are ignored. The API version (`OSIDENTITYAPI_VERSION`) is assumed to be `3` and Keystone endpoint `OSAUTHURL` is taken from the `keystone.apiVersion` configuration in the object store resource. The mapping to is done as follows: `OSUSERNAME` -> `rgwkeystoneadminuser` `OSPROJECTNAME` -> `rgwkeystoneadmin_project` `OSPROJECTDOMAINNAME`, `OSUSERDOMAINNAME` -> `rgwkeystoneadmin_domain` `OSPASSWORD` -> `rgwkeystoneadminpassword` The currently ignored `gateway.type` option is deprecated and from now on explicitly ignored by rook. The other `gateway` settings are kept as they are: They do not directly relate to Swift or S3 but are common configuration of RGW. The Swift API is enabled and configured via a new `protocols` section: ```yaml protocols: swift: [1] accountInUrl: true [4] urlPrefix: /example [4] versioningEnabled: false [4] s3: enabled: false [2] authUseKeystone: true [3] ``` Annotations: `[1]` Swift will be enabled, if `protocols.swift` is present. `[2]` This defaults to `true` (even if `protocols.s3` is not present in the CRD). This maintains backwards compatibility by default S3 is enabled. `[3]` This option maps directly to the `rgws3authusekeystone` option. Enabling it allows generating S3 credentials via an OpenStack API call, see the . If not given, the defaults of the corresponding RGW option apply. `[4]` These options map directly to [RGW configuration options](https://docs.ceph.com/en/octopus/radosgw/config-ref/#swift-settings), the corresponding RGW option is formed by prefixing it with `rgwswift` and replacing upper case letters by their lower case letter followed by an underscore. E.g. `urlPrefix` maps to `rgwswifturl_prefix`. They are optional. If not given, the defaults of the corresponding RGW option apply. The access to the Swift API is granted by creating a subuser of an RGW user. 
While commonly the access is granted via projects mapped from Keystone, explicit creation of subusers is supported by extending the `cephobjectstoreuser` resource with a new optional section `spec.subUsers`: ```yaml apiVersion: ceph.rook.io/v1 kind: CephObjectStoreUser metadata: name: my-user namespace: rook-ceph spec: store: my-store displayName: my-display-name quotas: maxBuckets: 100 maxSize: 10G maxObjects: 10000 capabilities: user: \"*\" bucket: \"*\" subUsers: name: swift [1] access: full [2] ``` Annotations: `[1]` This is the name of the subuser without the `username:` prefix (see below for more explanation). The `name` must be unique within the `subUsers` list. `[2]` The possible values are: `read`, `write`, `readwrite`, `full`. These values take their meanings from the possible values of the `--access-level` option of `radosgw-admin subuser create`, as documented in the [radosgw admin guide]( https://docs.ceph.com/en/octopus/radosgw/admin/#create-a-subuser). `name` and `access` are required for each item in `subUsers`. When changing the subuser-configuration in the CR, this is reflected on the RGW side: Subsers are deleted and created to match the list of subusers in the" }, { "data": "If the access level for an existing user is changed no new credentials are created, but the existing credentials are kept. If a subuser is deleted the corresponding credential secret is deleted as well. Changing only the order of the subuser list does not trigger a reconcile. The subusers are not mapped to a separate CR for the following reasons: The full subuser names are prefixed with the username like `my-user:subusername`, so being unique within the CR guarantees global uniqueness. Unlike `radosgw-admin` the subuser name in the CRD must be provided without the prefix (radosgw-admin allows both, to omit or include the prefix). The subuser abstraction is very simple only a name and an access level can be configured, so a separate resource would not be appropriate complexity wise. Like for the S3 access keys for the users, the swift keys created for the sub-users will be automatically injected into Secret objects. The credentials for the subusers are mapped to separate secrets, in the case of the example the following secret will be created: ```yaml apiVersion: kind: Secret metadata: name: rook-ceph-object-subuser-my-store-my-user-swift [1] namespace: rook-ceph data: SWIFT_USER: my-user:swift [2] SWIFTSECRETKEY: $KEY [3] SWIFTAUTHENDPOINT: https://rgw.example:6000/auth [4] ``` Annotations: `[1]` The name is constructed by joining the following elements with dashes (compare [the corresponding name of the secret for the object store users]( https://github.com/rook/rook/blob/376ca62f8ad07540d9ddffe9dc0ee53f4ac35e29/pkg/operator/ceph/object/user/controller.go#L416)): the literal `rook-ceph-object-subuser` the name of the object store resource the name of the user the name of the subuser (without `username:`-prefix) `[2]` The full name of the subuser (including the `username:`-prefix). `[3]` The generated swift access secret. `. As long as the Object Store CRD changes are well thought out the overall risk is minimal. If Swift is enabled by accident this could lead to an increased and unexpected attack surface, especially since the authentication and authorization for Swift and S3 may differ depending on how Ceph RGW is configured. This is mitigated by requiring explicit configuration to enable Swift. 
A misconfigured Keystone integration may allow users to gain access to objects they should not be authorized to access (e.g. if the `rgw keystone accepted roles` setting is too broad). This will be mitigated by proper documentation to make the operator aware of the risk. Ceph RGW allows to disable TLS verification when querying Keystone, we deliberately choose not to expose this config option to the user. The mapping from store, username and subuser-name to the name of the secret with the credentials is not injective. This means that the subusers of two different users may map to the same secret (e.g. `user:a-b` and `user-a:b`). This is potentially a vector for leaks of credentials to unauthorized entities. A simple workaround is to avoid dashes in the names of users and subuser managed by the CephObjectStoreUser CR. Documenting the problem is deemed sufficient since a similar problem already exists for the secret created for the users (in that case for users from different object stores, e.g. the secret for user `foo` in `my-store` collides with the one for user `store-foo` in `my`). As shown in keystone can be integrated via config override so it is not strictly necessary to support configuring it via the Object Store CRD. Adding it to the CRD complicates things with minor gain. There is no workable alternative to extending the CRD for Swift support. Keystone support could be configured via Rook config override as shown in ." } ]
{ "category": "Runtime", "file_name": "swift-and-keystone-integration.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Kanister can be easily installed and managed with . You will need to configure your `kubectl` CLI tool to target the Kubernetes cluster you want to install Kanister on. Start by adding the Kanister repository to your local setup: ``` bash helm repo add kanister <https://charts.kanister.io/> ``` Use the `helm install` command to install Kanister in the `kanister` namespace: ``` bash helm -n kanister upgrade \\--install kanister \\--create-namespace kanister/kanister-operator ``` Confirm that the Kanister workloads are ready: ``` bash kubectl -n kanister get po ``` You should see the operator pod in the `Running` state: ``` bash NAME READY STATUS RESTARTS AGE kanister-kanister-operator-85c747bfb8-dmqnj 1/1 Running 0 15s ``` ::: tip NOTE Kanister is guaranteed to work with the 3 most recent versions of Kubernetes. For example, if the latest version of Kubernetes is 1.24, Kanister will work with 1.24, 1.23, and 1.22. Support for older versions is provided on a best-effort basis. If you are using an older version of Kubernetes, please consider upgrading to a newer version. ::: Use the `helm show values` command to list the configurable options: ``` bash helm show values kanister/kanister-operator ``` For example, you can use the `image.tag` value to specify the Kanister version to install. The source of the `values.yaml` file can be found on . The default RBAC settings in the Helm chart permit Kanister to manage and auto-update its own custom resource definitions, to ease the user\\'s operation burden. If your setup requires the removal of these settings, you will have to install Kanister with the `--set controller.updateCRDs=false` option: ``` bash helm -n kanister upgade \\--install kanister \\--create-namespace kanister/kanister-operator \\--set controller.updateCRDs=false ``` This option lets Helm manage the CRD resources. Kanister installation also creates a validating admission webhook server that is invoked each time a new Blueprint is created. By default the Helm chart is configured to automatically generate a self-signed certificates for Admission Webhook Server. If your setup requires custom certificates to be configured, you will have to install kanister with `--set bpValidatingWebhook.tls.mode=custom` option along with other certificate details. Create a Secret that stores the TLS key and certificate for webhook admission server: ``` bash kubectl create secret tls my-tls-secret \\--cert /path/to/tls.crt \\--key /path/to/tls.key -n kanister ``` Install Kanister, providing the PEM-encoded CA bundle and the [tls]{.title-ref} secret name like below: ``` bash helm upgrade \\--install kanister kanister/kanister-operator \\--namespace kanister \\--create-namespace \\--set bpValidatingWebhook.tls.mode=custom \\--set bpValidatingWebhook.tls.caBundle=\\$(cat /path/to/ca.pem \\| base64 -w 0) \\--set bpValidatingWebhook.tls.secretName=tls-secret ``` Follow the instructions in the `BUILD.md` file in the [Kanister GitHub repository](https://github.com/kanisterio/kanister/blob/master/BUILD.md) to build Kanister from source code." } ]
{ "category": "Runtime", "file_name": "install.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "This document details the versioning and release plan for containerd. Stability is a top goal for this project and we hope that this document and the processes it entails will help to achieve that. It covers the release process, versioning numbering, backporting, API stability and support horizons. If you rely on containerd, it would be good to spend time understanding the areas of the API that are and are not supported and how they impact your project in the future. This document will be considered a living document. Supported timelines, backport targets and API stability guarantees will be updated here as they change. If there is something that you require or this document leaves out, please reach out by . Releases of containerd will be versioned using dotted triples, similar to . For the purposes of this document, we will refer to the respective components of this triple as `<major>.<minor>.<patch>`. The version number may have additional information, such as alpha, beta and release candidate qualifications. Such releases will be considered \"pre-releases\". Major and minor releases of containerd will be made from master. Releases of containerd will be marked with GPG signed tags and announced at https://github.com/containerd/containerd/releases. The tag will be of the format `v<major>.<minor>.<patch>` and should be made with the command `git tag -s v<major>.<minor>.<patch>`. After a minor release, a branch will be created, with the format `release/<major>.<minor>` from the minor tag. All further patch releases will be done from that branch. For example, once we release `v1.0.0`, a branch `release/1.0` will be created from that tag. All future patch releases will be done against that branch. Pre-releases, such as alphas, betas and release candidates will be conducted from their source branch. For major and minor releases, these releases will be done from master. For patch releases, these pre-releases should be done within the corresponding release branch. While pre-releases are done to assist in the stabilization process, no guarantees are provided. The upgrade path for containerd is such that the 0.0.x patch releases are always backward compatible with its major and minor version. Minor (0.x.0) version will always be compatible with the previous minor release. i.e. 1.2.0 is backwards compatible with 1.1.0 and 1.1.0 is compatible with 1.0.0. There is no compatibility guarantees for upgrades that span multiple, minor releases. For example, 1.0.0 to 1.2.0 is not supported. One should first upgrade to 1.1, then 1.2. There are no compatibility guarantees with upgrades to major versions. For example, upgrading from 1.0.0 to 2.0.0 may require resources to migrated or integrations to change. Each major version will be supported for at least 1 year with bug fixes and security patches. The activity for the next release will be tracked in the . If your issue or PR is not present in a milestone, please reach out to the maintainers to create the milestone or add an issue or PR to an existing milestone. Support horizons will be defined corresponding to a release branch, identified by `<major>.<minor>`. Releases branches will be in one of several states: *Next*: The next planned release branch. *Active*: The release branch is currently supported and accepting patches. *Extended*: The release branch is only accepting security patches. *End of Life*: The release branch is no longer supported and no new patches will be accepted. 
Releases will be supported up to one year after a minor" }, { "data": "This means that we will accept bug reports and backports to release branches until the end of life date. If no new minor release has been made, that release will be considered supported until 6 months after the next minor is released or one year, whichever is longer. Additionally, releases may have an extended security support period after the end of the active period to accept security backports. This timeframe will be decided by maintainers before the end of the active status. The current state is available in the following table: | Release | Status | Start | End of Life | ||-||-| | | End of Life | Dec 4, 2015 | - | | | End of Life | Mar 21, 2016 | - | | | End of Life | Apr 21, 2016 | December 5, 2017 | | | End of Life | December 5, 2017 | December 5, 2018 | | | End of Life | April 23, 2018 | October 23, 2019 | | | End of Life | October 24, 2018 | October 15, 2020 | | | End of Life | September 26, 2019 | March 4, 2021 | | | Active | August 17, 2020 | max(August 17, 2021, release of 1.5.0 + 6 months) | | | Next | TBD | max(TBD+1 year, release of 1.6.0 + 6 months) | Note that branches and release from before 1.0 may not follow these rules. This table should be updated as part of the release preparation process. Backports in containerd are community driven. As maintainers, we'll try to ensure that sensible bugfixes make it into active release, but our main focus will be features for the next minor or major release. For the most part, this process is straightforward and we are here to help make it as smooth as possible. If there are important fixes that need to be backported, please let use know in one of three ways: Open an issue. Open a PR with cherry-picked change from master. Open a PR with a ported fix. If you are reporting a security issue, please reach out discreetly at [email protected]. Remember that backported PRs must follow the versioning guidelines from this document. Any release that is \"active\" can accept backports. Opening a backport PR is fairly straightforward. The steps differ depending on whether you are pulling a fix from master or need to draft a new commit specific to a particular branch. To cherry pick a straightforward commit from master, simply use the cherry pick process: Pick the branch to which you want backported, usually in the format `release/<major>.<minor>`. The following will create a branch you can use to open a PR: ```console $ git checkout -b my-backport-branch release/<major>.<minor>. ``` Find the commit you want backported. Apply it to the release branch: ```console $ git cherry-pick -xsS <commit> ``` Push the branch and open up a PR against the release branch: ``` $ git push -u stevvooe my-backport-branch ``` Make sure to replace `stevvooe` with whatever fork you are using to open the PR. When you open the PR, make sure to switch `master` with whatever release branch you are targeting with the fix. Make sure the PR title has `[<release branch>]` prefixed. e.g.: ``` [release/1.4] Fix foo in bar ``` If there is no existing fix in master, you should first fix the bug in master, or ask us a maintainer or contributor to do it via an" }, { "data": "Once that PR is completed, open a PR using the process above. Only when the bug is not seen in master and must be made for the specific release branch should you open a PR with new code. 
The following table provides an overview of the components covered by containerd versions: | Component | Status | Stabilized Version | Links | ||-|--|| | GRPC API | Stable | 1.0 | | | Metrics API | Stable | 1.0 | - | | Runtime Shim API | Stable | 1.2 | - | | Daemon Config | Stable | 1.0 | - | | Go client API | Unstable | future | | | CRI GRPC API | Unstable | v1alpha2 current | | | `ctr` tool | Unstable | Out of scope | - | From the version stated in the above table, that component must adhere to the stability constraints expected in release versions. Unless explicitly stated here, components that are called out as unstable or not covered may change in a future minor version. Breaking changes to \"unstable\" components will be avoided in patch versions. The primary product of containerd is the GRPC API. As of the 1.0.0 release, the GRPC API will not have any backwards incompatible changes without a major version jump. To ensure compatibility, we have collected the entire GRPC API symbol set into a single file. At each minor release of containerd, we will move the current `next.pb.txt` file to a file named for the minor version, such as `1.0.pb.txt`, enumerating the support services and messages. See for details. Note that new services may be added in minor releases. New service methods and new fields on messages may be added if they are optional. `*.pb.txt` files are generated at each API release. They prevent unintentional changes to the API by having a diff that the CI can run. These files are not intended to be consumed or used by clients. The metrics API that outputs prometheus style metrics will be versioned independently, prefixed with the API version. i.e. `/v1/metrics`, `/v2/metrics`. The metrics API version will be incremented when breaking changes are made to the prometheus output. New metrics can be added to the output in a backwards compatible manner without bumping the API version. containerd is based on a modular design where plugins are implemented to provide the core functionality. Plugins implemented in tree are supported by the containerd community unless explicitly specified as non-stable. Out of tree plugins are not supported by the containerd maintainers. Currently, the Windows runtime and snapshot plugins are not stable and not supported. Please refer to the github milestones for Windows support in a future release. Error codes will not change in a patch release, unless a missing error code causes a blocking bug. Error codes of type \"unknown\" may change to more specific types in the future. Any error code that is not \"unknown\" that is currently returned by a service will not change without a major release or a new version of the service. If you find that an error code that is required by your application is not well-documented in the protobuf service description or tested explicitly, please file and issue and we will clarify. Unless explicitly stated, the formats of certain fields may not be covered by this guarantee and should be treated" }, { "data": "For example, don't rely on the format details of a URL field unless we explicitly say that the field will follow that format. The Go client API, documented in , is currently considered unstable. It is recommended to vendor the necessary components to stabilize your project build. Note that because the Go API interfaces with the GRPC API, clients written against a 1.0 Go API should remain compatible with future 1.x series releases. 
We intend to stabilize the API in a future release when more integrations have been carried out. Any changes to the API should be detectable at compile time, so upgrading will be a matter of fixing compilation errors and moving from there. The CRI (Container Runtime Interface) GRPC API is used by a Kubernetes kubelet to communicate with a container runtime. This interface is used to manage container lifecycles and container images. Currently this API is under development and unstable across Kubernetes releases. Each Kubernetes release only supports a single version of CRI and the CRI plugin only implements a single version of CRI. Each minor release will support one version of CRI and at least one version of Kubernetes. Once this API is stable, a minor will be compatible with any version of Kubernetes which supports that version of CRI. The `ctr` tool provides the ability to introspect and understand the containerd API. It is not considered a primary offering of the project and is unsupported in that sense. While we understand it's value as a debug tool, it may be completely refactored or have breaking changes in minor releases. Targeting `ctr` for feature additions reflects a misunderstanding of the containerd architecture. Feature addition should focus on the client Go API and additions to `ctr` may or may not be accepted at the discretion of the maintainers. We will do our best to not break compatibility in the tool in patch releases. The daemon's configuration file, commonly located in `/etc/containerd/config.toml` is versioned and backwards compatible. The `version` field in the config file specifies the config's version. If no version number is specified inside the config file then it is assumed to be a version 1 config and parsed as such. Please use `version = 2` to enable version 2 config as version 1 has been deprecated. As a general rule, anything not mentioned in this document is not covered by the stability guidelines and may change in any release. Explicitly, this pertains to this non-exhaustive list of components: File System layout Storage formats Snapshot formats Between upgrades of subsequent, minor versions, we may migrate these formats. Any outside processes relying on details of these file system layouts may break in that process. Container root file systems will be maintained on upgrade. We may make exceptions in the interest of security patches. If a break is required, it will be communicated clearly and the solution will be considered against total impact. The deprecated features are shown in the following table: | Component | Deprecation release | Target release for removal | Recommendation | |-||-|-| | Runtime V1 API and implementation (`io.containerd.runtime.v1.linux`) | containerd v1.4 | containerd v2.0 | Use `io.containerd.runc.v2` | | Runc V1 implementation of Runtime V2 (`io.containerd.runc.v1`) | containerd v1.4 | containerd v2.0 | Use `io.containerd.runc.v2` | | config.toml `version = 1` | containerd v1.5 | containerd v2.0 | Use config.toml `version = 2` | | Built-in `aufs` snapshotter | containerd v1.5 | containerd v2.0 | Use `overlayfs` snapshotter |" } ]
{ "category": "Runtime", "file_name": "RELEASES.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 1 sidebar_label: \"LVM Volume\" HwameiStor provides LVM-based data volumes, which offer read and write performance comparable to that of native disks. These data volumes also provide advanced features such as data volume expansion, migration, high availability, and more. The following steps demonstrate how to create a simple non-HA type data volume. Prepare LVM Storage Node Ensure that the storage node has sufficient available capacity. If there is not enough capacity, please refer to . Check for available capacity using the following command: ```console $ kubectl get localstoragenodes k8s-worker-2 -oyaml | grep freeCapacityBytes freeCapacityBytes: 10523508736 ``` Prepare StorageClass Create a StorageClass named `hwameistor-storage-lvm-ssd` using the following command: ```console $ cat << EOF | kubectl apply -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-ssd parameters: convertible: \"false\" csi.storage.k8s.io/fstype: xfs poolClass: SSD poolType: REGULAR replicaNumber: \"1\" striped: \"true\" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true EOF ``` Create Volume Create a PVC named `hwameistor-lvm-volume` using the following command: ```console $ cat << EOF | kubectl apply -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: hwameistor-lvm-volume spec: accessModes: ReadWriteOnce resources: requests: storage: 1Gi storageClassName: hwameistor-storage-lvm-ssd EOF ```" } ]
{ "category": "Runtime", "file_name": "lvm_volumes.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 5 sidebar_label: \"Uninstall\" To ensure data security, it is strongly recommended not to uninstall the HwameiStor system in a production environment. This section introduces two uninstallation scenarios for testing environments. If you want to uninstall the HwameiStor components, but still keep the existing data volumes working with the applications, perform the following steps: ```console $ kubectl get cluster.hwameistor.io NAME AGE cluster-sample 21m $ kubectl delete clusters.hwameistor.io hwameistor-cluster ``` Finally, all the HwameiStor's components (i.e. Pods) will be deleted. Check by: ```bash kubectl -n hwameistor get pod ``` :::danger Before you start to perform actions, make sure you reallly want to delete all your data. ::: If you confirm to delete your data volumes and uninstall HwameiStor, perform the following steps: Clean up stateful applications. Delete stateful applications. Delete PVCs. The relevant PVs, LVs, LVRs, LVGs will also been deleted. Clean up HwameiStor components. Delete HwameiStor components. ```bash kubectl delete clusters.hwameistor.io hwameistor-cluster ``` Delete hwameistor namespace. ```bash kubectl delete ns hwameistor ``` Delete CRD, Hook, and RBAC. ```bash kubectl get crd,mutatingwebhookconfiguration,clusterrolebinding,clusterrole -o name \\ | grep hwameistor \\ | xargs -t kubectl delete ``` Delete StorageClass. ```bash kubectl get sc -o name \\ | grep hwameistor-storage-lvm- \\ | xargs -t kubectl delete ``` Delete hwameistor-operator. ```bash helm uninstall hwameistor-operator -n hwameistor ``` Finally, you still need to clean up the LVM configuration on each node, and also data on the disks by tools like . ```bash wipefs -a /dev/sdx blkid /dev/sdx ```" } ]
{ "category": "Runtime", "file_name": "uninstall.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "A: Each controller should only reconcile one object type. Other affected objects should be mapped to a single type of root object, using the `EnqueueRequestForOwner` or `EnqueueRequestsFromMapFunc` event handlers, and potentially indices. Then, your Reconcile method should attempt to reconcile all state for that given root objects. A: You should not. Reconcile functions should be idempotent, and should always reconcile state by reading all the state it needs, then writing updates. This allows your reconciler to correctly respond to generic events, adjust to skipped or coalesced events, and easily deal with application startup. The controller will enqueue reconcile requests for both old and new objects if a mapping changes, but it's your responsibility to make sure you have enough information to be able clean up state that's no longer referenced. A: There are several different approaches that can be taken, depending on your situation. When you can, take advantage of optimistic locking: use deterministic names for objects you create, so that the Kubernetes API server will warn you if the object already exists. Many controllers in Kubernetes take this approach: the StatefulSet controller appends a specific number to each pod that it creates, while the Deployment controller hashes the pod template spec and appends that. In the few cases when you cannot take advantage of deterministic names (e.g. when using generateName), it may be useful in to track which actions you took, and assume that they need to be repeated if they don't occur after a given time (e.g. using a requeue result). This is what the ReplicaSet controller does. In general, write your controller with the assumption that information will eventually be correct, but may be slightly out of date. Make sure that your reconcile function enforces the entire state of the world each time it runs. If none of this works for you, you can always construct a client that reads directly from the API server, but this is generally considered to be a last resort, and the two approaches above should generally cover most circumstances. A: The fake client , but we generally recommend using to test against a real API server. In our experience, tests using fake clients gradually re-implement poorly-written impressions of a real API server, which leads to hard-to-maintain, complex test code. Use the aforementioned to spin up a real API server instead of trying to mock one out. Structure your tests to check that the state of the world is as you expect it, not that a particular set of API calls were made, when working with Kubernetes APIs. This will allow you to more easily refactor and improve the internals of your controllers without changing your tests. Remember that any time you're interacting with the API server, changes may have some delay between write time and reconcile time. A: You're probably missing a fully-set-up Scheme. Schemes record the mapping between Go types and group-version-kinds in Kubernetes. In general, your application should have its own Scheme containing the types from the API groups that it needs (be they Kubernetes types or your own). See the [scheme builder docs](https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/scheme) for more information." } ]
{ "category": "Runtime", "file_name": "faq.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "title: \"ark plugin\" layout: docs Work with plugins Work with plugins ``` -h, --help help for plugin ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Back up and restore Kubernetes cluster resources. - Add a plugin - Remove a plugin" } ]
{ "category": "Runtime", "file_name": "ark_plugin.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Add for the Antrea Agent RBAC permissions and how to restrict them using Gatekeeper/OPA. (, [@antoninbas]) Clean up stale routes installed by AntreaProxy when ProxyAll is disabled. (, [@hongliangl]) Fix export/import of Services with named ports when using the Antrea Multi-cluster feature. (, [@luolanzone]) Fix handling of the \"reject\" packets generated by the Antrea Agent to avoid infinite looping. (, [@GraysonWu]) Fix DNS resolution error of Antrea Agent on AKS by using `ClusterFirst` dnsPolicy. (, [@tnqn]) Fix tolerations for Pods running on control-plane for Kubernetes >= 1.24. (, [@xliuxu]) Reduce permissions of Antrea Agent ServiceAccount. (, [@xliuxu]) , [@hongliangl]) Fix Antrea wildcard FQDN NetworkPolicies not working when NodeLocal DNSCache is enabled. (, [@hongliangl]) The Egress feature is graduated from Alpha to Beta and is therefore enabled by default. The support for proxying all Service traffic by Antrea Proxy (enabled by `antreaProxy.proxyAll`) is now Beta. Add the following capabilities to the [Antrea IPAM] feature: Support pre-allocating continuous IPs for StatefulSet. (, [@annakhm]) Support specifying VLAN for IPPool. Traffic from Pods whose IPPools are configured with a VLAN ID will be tagged when leaving the Node uplink. (, [@gran-vmv]) Add the following capabilities to the [Antrea Multi-cluster] feature: Support Antrea ClusterNetworkPolicy replication in a ClusterSet. (, [@Dyanngg]) Add `antctl mc` subcommand for Antrea Multi-cluster resources. ( , [@hjiajing] [@bangqipropel]) Add the following capabilities to the [AntreaPolicy] feature: Add Node selector in Antrea-native policies to allow matching traffic originating from specific Nodes or destined to specific Nodes. (, [@wenqiq]) Add ServiceAccount selector in Antrea-native policies to allow selecting Pods by ServiceAccount. (, [@GraysonWu]) Support Pagination for ClusterGroupMembership API. (, [@qiyueyao]) Add Port Number to Audit Logging. (, [@qiyueyao]) [Flow Visibility] Add Grafana Flow Collector as the new visualization tool for flow records. Add Grafana dashboards, Clickhouse data schema, deployment files, and doc. ( , [@heanlan] [@zyiou] [@dreamtalen]) Add support for exporting flow records to ClickHouse from Flow Aggregator. ( , [@wsquan171] [@dreamtalen]) Add ClickHouse monitor to ensure data retention for in-memory ClickHouse deployment. ( , [@yanjunz97]) , [@wenyingd]) , [@XinShuYang]) Add SKIPCNIBINARIES environment variable to support skipping the installation of specified CNI plugins. (, [@jainpulkit22]) Support UBI8-based container image to run Antrea. (, [@ksamoray]) Add the following documentations: Add for ServiceExternalIP feature and Service of type LoadBalancer. (, [@hty690]) Add for deploying Antrea to Minikube cluster. (, [@jainpulkit22]) Add for `antctl` Multi-cluster commands. (, [@bangqipropel]) Add for Multiple-VLAN support. (, [@gran-vmv]) Add for Multi-cluster. (, [@luolanzone]) Add for NetworkpolicyStats docs. (, [@ceclinux]) Remove all legacy (*.antrea.tanzu.vmware.com) APIs. (, [@antoninbas]) Remove Kind-specific manifest and scripts. Antrea now uses OVS kernel datapath for Kind" }, { "data": "(, [@antoninbas]) , [@wenyingd]) Add an agent config parameter \"enableBridgingMode\" for enabling flexible IPAM (bridging mode). ( , [@jianjuns]) Use iptables-wrapper in Antrea container to support distros that runs iptables in \"nft\" mode. (, [@antoninbas]) Install CNI configuration files after installing CNI binaries to support container runtime `cri-o`. 
(, [@tnqn]) Upgrade packaged Whereabouts version to v0.5.1. (, [@antoninbas]) Upgrade to go-ipfix v0.5.12. (, [@yanjunz97]) Upgrade Kustomize from v3.8.8 to v4.4.1 to fix Cronjob patching bugs. (, [@yanjunz97]) Fail in Agent initialization if GRE tunnel type is used with IPv6. (, [@antoninbas]) Refactor the OpenFlow pipeline for future extensibility. (, [@hongliangl]) Validate IP ranges of IPPool for Antrea IPAM. (, [@ksamoray]) Validate protocol in the CRD schema of Antrea-native policies. (, [@KMAnju-2021]) Validate labels in the CRD schema of Antrea-native policies and ClusterGroup. (, [@GraysonWu]) Reduce permissions of Antrea ServiceAccounts. (, [@tnqn]) Remove --k8s-1.15 flag from hack/generate-manifest.sh. (, [@antoninbas]) Remove unnecessary CRDs and RBAC rules from Multi-cluster manifest. (, [@luolanzone]) Update label and image repo of antrea-mc-controller to be consistent with antrea-controller and antrea-agent. ( , [@luolanzone]) Add clusterID annotation to ServiceExport/Import resources. (, [@luolanzone]) Do not log error when Service for Endpoints is not found to avoid log spam. (, [@tnqn]) Ignore Services of type ExternalName for NodePortLocal feature. (, [@antoninbas]) Add powershell command replacement in the Antrea Windows documentation. (, [@GraysonWu]) Add userspace ARP/NDP responders to fix Egress and ServiceExternalIP support for IPv6 clusters. (, [@hty690]) Fix incorrect results by `antctl get networkpolicy` when both Pod and Namespace are specified. (, [@Dyanngg]) Fix IP leak issue when AntreaIPAM is enabled. (, [@gran-vmv]) Fix error when dumping OVS flows for a NetworkPolicy via `antctl get ovsflows`. (, [@jainpulkit22]) Fix IPsec encryption for IPv6 overlays. (, [@antoninbas]) Add ignored interfaces names when getting interface by IP to fix NetworkPolicyOnly mode in AKE. (, [@wenyingd]) Fix duplicate IP case for NetworkPolicy. (, [@tnqn]) Don't delete the routes which are added for the peer IPv6 gateways on Agent startup. ( , [@Jexf] [@xliuxu]) Fix pkt mark conflict between HostLocalSourceMark and SNATIPMark. (, [@tnqn]) Unconditionally sync CA cert for Controller webhooks to fix Egress support when AntreaPolicy is disabled. (, [@antoninbas]) Fix inability to access NodePort in particular cases. (, [@hongliangl]) Fix ipBlocks referenced in nested ClusterGroup not processed correctly. (, [@Dyanngg]) Realize Egress for a Pod as soon as its network is created. (, [@tnqn]) Fix NodePort/LoadBalancer issue when proxyAll is enabled. (, [@hongliangl]) Do not panic when processing a PacketIn message for a denied connection. (, [@antoninbas]) Fix CT mark matching without range in flow exporter. (, [@hongliangl]) , [@hongliangl])" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.6.md", "project_name": "Antrea", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Delete a subscriber from the multicast group. Delete a subscriber from the given multicast group. To delete remote subscriber, following information is required: group: multicast group address from which subscriber is deleted. subscriber-address: subscriber IP address ``` cilium-dbg bpf multicast subscriber delete <group> <subscriber-address> [flags] ``` ``` To delete a remote node 10.100.0.1 from multicast group 229.0.0.1, use the following command: cilium-dbg bpf multicast subscriber delete 229.0.0.1 10.100.0.1 ``` ``` -h, --help help for delete ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage the multicast subscribers." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast_subscriber_delete.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "``` bash curl -v 'http://10.196.59.202:17210/getDentry?pid=100&name=\"aa.txt\"&parentIno=1024' ``` | Parameter | Type | Description | |--||| | pid | integer | meta partition id | | name | string | directory or file name | | parentIno | integer | parent directory inode id | ``` bash curl -v 'http://10.196.59.202:17210/getDirectory?pid=100&parentIno=1024' ``` | Parameter | Type | Description | |--||--| | pid | integer | partition id | | ino | integer | inode id | ``` bash curl -v 'http://10.196.59.202:17210/getAllDentry?pid=100' ``` | Parameter | Type | Description | |--||--| | pid | integer | partition id |" } ]
{ "category": "Runtime", "file_name": "dentry.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "name: Bug report about: Create a report to help us improve title: '' labels: type-bug assignees: '' Alluxio Version: What version of Alluxio are you using? Describe the bug A clear and concise description of what the bug is. To Reproduce Steps to reproduce the behavior (as minimally and precisely as possible) Expected behavior A clear and concise description of what you expected to happen. Urgency Describe the impact and urgency of the bug. Are you planning to fix it Please indicate if you are already working on a PR. Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Disaster Recovery Under extenuating circumstances, steps may be necessary to recover the cluster health. There are several types of recovery addressed in this document. Under extenuating circumstances, the mons may lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon is still healthy. The following steps will remove the unhealthy mons from quorum and allow you to form a quorum again with a single mon, then grow the quorum back to the original size. The has a command `restore-quorum` that will walk you through the mon quorum automated restoration process. If the name of the healthy mon is `c`, you would run the command: ```console kubectl rook-ceph mons restore-quorum c ``` See the for more details. When the Rook CRDs are deleted, the Rook operator will respond to the deletion event to attempt to clean up the cluster resources. If any data appears present in the cluster, Rook will refuse to allow the resources to be deleted since the operator will refuse to remove the finalizer on the CRs until the underlying data is deleted. For more details, see the . While it is good that the CRs will not be deleted and the underlying Ceph data and daemons continue to be available, the CRs will be stuck indefinitely in a `Deleting` state in which the operator will not continue to ensure cluster health. Upgrades will be blocked, further updates to the CRs are prevented, and so on. Since Kubernetes does not allow undeleting resources, the following procedure will allow you to restore the CRs to their prior state without even necessarily suffering cluster downtime. !!! note In the following commands, the affected `CephCluster` resource is called `rook-ceph`. If yours is named differently, the commands will need to be adjusted. Scale down the operator. ```console kubectl -n rook-ceph scale --replicas=0 deploy/rook-ceph-operator ``` Backup all Rook CRs and critical metadata ```console kubectl -n rook-ceph get cephcluster rook-ceph -o yaml > cluster.yaml kubectl -n rook-ceph get secret -o yaml > secrets.yaml kubectl -n rook-ceph get configmap -o yaml > configmaps.yaml ``` Remove the owner references from all critical Rook resources that were referencing the `CephCluster` CR. Programmatically determine all such resources, using this command: ```console ROOK_UID=$(kubectl -n rook-ceph get cephcluster rook-ceph -o 'jsonpath={.metadata.uid}') RESOURCES=$(kubectl -n rook-ceph get secret,configmap,service,deployment,pvc -o jsonpath='{range .items[?(@.metadata.ownerReferences[*].uid==\"'\"$ROOK_UID\"'\")]}{.kind}{\"/\"}{.metadata.name}{\"\\n\"}{end}') kubectl -n rook-ceph get $RESOURCES ``` Verify that all critical resources are shown in the output. 
The critical resources are these: Secrets: `rook-ceph-admin-keyring`, `rook-ceph-config`, `rook-ceph-mon`, `rook-ceph-mons-keyring` ConfigMap: `rook-ceph-mon-endpoints` Services: `rook-ceph-mon-`, `rook-ceph-mgr-` Deployments: `rook-ceph-mon-`, `rook-ceph-osd-`, `rook-ceph-mgr-*` PVCs (if applicable): `rook-ceph-mon-` and the OSD PVCs (named `<deviceset>-`, for example `set1-data-*`) For each listed resource, remove the `ownerReferences` metadata field, in order to unlink it from the deleting `CephCluster`" }, { "data": "To do so programmatically, use the command: ```console for resource in $(kubectl -n rook-ceph get $RESOURCES -o name); do kubectl -n rook-ceph patch $resource -p '{\"metadata\": {\"ownerReferences\":null}}' done ``` For a manual alternative, issue `kubectl edit` on each resource, and remove the block matching: ```yaml ownerReferences: apiVersion: ceph.rook.io/v1 blockOwnerDeletion: true controller: true kind: `CephCluster` name: rook-ceph uid: <uid> ``` Before completing this step, validate these things. Failing to do so could result in data loss. Confirm that `cluster.yaml` contains the `CephCluster` CR. Confirm all critical resources listed above have had the `ownerReference` to the `CephCluster` CR removed. Remove the finalizer from the `CephCluster` resource. This will cause the resource to be immediately deleted by Kubernetes. ```console kubectl -n rook-ceph patch cephcluster/rook-ceph --type json --patch='[ { \"op\": \"remove\", \"path\": \"/metadata/finalizers\" } ]' ``` After the finalizer is removed, the `CephCluster` will be immediately deleted. If all owner references were properly removed, all ceph daemons will continue running and there will be no downtime. Create the `CephCluster` CR with the same settings as previously ```console kubectl create -f cluster.yaml ``` If there are other CRs in terminating state such as CephBlockPools, CephObjectStores, or CephFilesystems, follow the above steps as well for those CRs: Backup the CR Remove the finalizer and confirm the CR is deleted (the underlying Ceph resources will be preserved) Create the CR again Scale up the operator ```console kubectl -n rook-ceph scale --replicas=1 deploy/rook-ceph-operator ``` Watch the operator log to confirm that the reconcile completes successfully. ```console kubectl -n rook-ceph logs -f deployment/rook-ceph-operator ``` Situations this section can help resolve: The Kubernetes environment underlying a running Rook Ceph cluster failed catastrophically, requiring a new Kubernetes environment in which the user wishes to recover the previous Rook Ceph cluster. The user wishes to migrate their existing Rook Ceph cluster to a new Kubernetes environment, and downtime can be tolerated. A working Kubernetes cluster to which we will migrate the previous Rook Ceph cluster. At least one Ceph mon db is in quorum, and sufficient number of Ceph OSD is `up` and `in` before disaster. The previous Rook Ceph cluster is not running. Start a new and clean Rook Ceph cluster, with old `CephCluster` `CephBlockPool` `CephFilesystem` `CephNFS` `CephObjectStore`. Shut the new cluster down when it has been created successfully. Replace ceph-mon data with that of the old cluster. Replace `fsid` in `secrets/rook-ceph-mon` with that of the old one. Fix monmap in ceph-mon db. Fix ceph mon auth key. Disable auth. Start the new cluster, watch it resurrect. Fix admin auth key, and enable auth. Restart cluster for the final time. 
Assuming `dataHostPathData` is `/var/lib/rook`, and the `CephCluster` trying to adopt is named `rook-ceph`. Make sure the old Kubernetes cluster is completely torn down and the new Kubernetes cluster is up and running without Rook Ceph. Backup `/var/lib/rook` in all the Rook Ceph nodes to a different directory. Backups will be used later. Pick a `/var/lib/rook/rook-ceph/rook-ceph.config` from any previous Rook Ceph node and save the old cluster `fsid` from its content. Remove `/var/lib/rook` from all the Rook Ceph nodes. Add identical `CephCluster` descriptor to the new Kubernetes cluster, especially identical `spec.storage.config` and `spec.storage.nodes`, except `mon.count`, which should be set to" }, { "data": "Add identical `CephFilesystem` `CephBlockPool` `CephNFS` `CephObjectStore` descriptors (if any) to the new Kubernetes cluster. Install Rook Ceph in the new Kubernetes cluster. Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled. STATE: Now the cluster will have `rook-ceph-mon-a`, `rook-ceph-mgr-a`, and all the auxiliary pods up and running, and zero (hopefully) `rook-ceph-osd-ID-xxxxxx` running. `ceph -s` output should report 1 mon, 1 mgr running, and all of the OSDs down, all PGs are in `unknown` state. Rook should not start any OSD daemon since all devices belongs to the old cluster (which have a different `fsid`). Run `kubectl -n rook-ceph exec -it rook-ceph-mon-a-xxxxxxxx bash` to enter the `rook-ceph-mon-a` pod, ```console mon-a# cat /etc/ceph/keyring-store/keyring # save this keyring content for later use mon-a# exit ``` Stop the Rook operator by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and set `replicas` to `0`. Stop cluster daemons by running `kubectl -n rook-ceph delete deploy/X` where X is every deployment in namespace `rook-ceph`, except `rook-ceph-operator` and `rook-ceph-tools`. Save the `rook-ceph-mon-a` address with `kubectl -n rook-ceph get cm/rook-ceph-mon-endpoints -o yaml` in the new Kubernetes cluster for later use. SSH to the host where `rook-ceph-mon-a` in the new Kubernetes cluster resides. Remove `/var/lib/rook/mon-a` Pick a healthy `rook-ceph-mon-ID` directory (`/var/lib/rook/mon-ID`) in the previous backup, copy to `/var/lib/rook/mon-a`. `ID` is any healthy mon node ID of the old cluster. Replace `/var/lib/rook/mon-a/keyring` with the saved keyring, preserving only the `[mon.]` section, remove `[client.admin]` section. Run `docker run -it --rm -v /var/lib/rook:/var/lib/rook ceph/ceph:v14.2.1-20190430 bash`. The Docker image tag should match the Ceph version used in the Rook cluster. The `/etc/ceph/ceph.conf` file needs to exist for `ceph-mon` to work. ```console touch /etc/ceph/ceph.conf cd /var/lib/rook ceph-mon --extract-monmap monmap --mon-data ./mon-a/data # Extract monmap from old ceph-mon db and save as monmap monmaptool --print monmap # Print the monmap content, which reflects the old cluster ceph-mon configuration. monmaptool --rm a monmap # Delete `a` from monmap. monmaptool --rm b monmap # Repeat, and delete `b` from monmap. monmaptool --rm c monmap # Repeat this pattern until all the old ceph-mons are removed monmaptool --rm d monmap monmaptool --rm e monmap monmaptool --addv a [v2:10.77.2.216:3300,v1:10.77.2.216:6789] monmap # Replace it with the rook-ceph-mon-a address you got from previous command. ceph-mon --inject-monmap monmap --mon-data ./mon-a/data # Replace monmap in ceph-mon db with our modified version. 
rm monmap exit ``` Tell Rook to run as old cluster by running `kubectl -n rook-ceph edit secret/rook-ceph-mon` and changing `fsid` to the original `fsid`. Note that the `fsid` is base64 encoded and must not contain a trailing carriage return. For example: ```console echo -n a811f99a-d865-46b7-8f2c-f94c064e4356 | base64 # Replace with the fsid from your old cluster. ``` Disable authentication by running `kubectl -n rook-ceph edit cm/rook-config-override` and adding content below: ```yaml data: config: | [global] auth cluster required = none auth service required = none auth client required = none auth supported = none ``` Bring the Rook Ceph operator back online by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and set `replicas` to" }, { "data": "Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled. STATE: Now the new cluster should be up and running with authentication disabled. `ceph -s` should report 1 mon & 1 mgr & all of the OSDs up and running, and all PGs in either `active` or `degraded` state. Run `kubectl -n rook-ceph exec -it rook-ceph-tools-XXXXXXX bash` to enter tools pod: ```console vi key ceph auth import -i key rm key ``` Re-enable authentication by running `kubectl -n rook-ceph edit cm/rook-config-override` and removing auth configuration added in previous steps. Stop the Rook operator by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and set `replicas` to `0`. Shut down entire new cluster by running `kubectl -n rook-ceph delete deploy/X` where X is every deployment in namespace `rook-ceph`, except `rook-ceph-operator` and `rook-ceph-tools`, again. This time OSD daemons are present and should be removed too. Bring the Rook Ceph operator back online by running `kubectl -n rook-ceph edit deploy/rook-ceph-operator` and set `replicas` to `1`. Watch the operator logs with `kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx`, and wait until the orchestration has settled. STATE: Now the new cluster should be up and running with authentication enabled. `ceph -s` output should not change much comparing to previous steps. It is possible to migrate/restore an rook/ceph cluster from an existing Kubernetes cluster to a new one without resorting to SSH access or ceph tooling. This allows doing the migration using standard kubernetes resources only. This guide assumes the following: You have a CephCluster that uses PVCs to persist mon and osd data. Let's call it the \"old cluster\" You can restore the PVCs as-is in the new cluster. Usually this is done by taking regular snapshots of the PVC volumes and using a tool that can re-create PVCs from these snapshots in the underlying cloud provider. is one such tool. You have regular backups of the secrets and configmaps in the rook-ceph namespace. Velero provides this functionality too. Do the following in the new cluster: Stop the rook operator by scaling the deployment `rook-ceph-operator` down to zero: `kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 0` and deleting the other deployments. An example command to do this is `k -n rook-ceph delete deployment -l operator!=rook` Restore the rook PVCs to the new cluster. Copy the keyring and fsid secrets from the old cluster: `rook-ceph-mgr-a-keyring`, `rook-ceph-mon`, `rook-ceph-mons-keyring`, `rook-ceph-osd-0-keyring`, ... Delete mon services and copy them from the old cluster: `rook-ceph-mon-a`, `rook-ceph-mon-b`, ... 
Note that simply re-applying won't work because the goal here is to restore the `clusterIP` in each service and this field is immutable in `Service` resources. Copy the endpoints configmap from the old cluster: `rook-ceph-mon-endpoints` Scale the rook operator up again : `kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas 1` Wait until the reconciliation is over. When the rook-ceph namespace is accidentally deleted, the good news is that the cluster can be restored. With the content in the directory `dataDirHostPath` and the original OSD disks, the ceph cluster could be restored with this" }, { "data": "You need to manually create a ConfigMap and a Secret to make it work. The information required for the ConfigMap and Secret can be found in the `dataDirHostPath` directory. The first resource is the secret named `rook-ceph-mon` as seen in this example below: ```yaml apiVersion: v1 data: ceph-secret: QVFCZ0h6VmorcVNhSGhBQXVtVktNcjcrczNOWW9Oa2psYkErS0E9PQ== ceph-username: Y2xpZW50LmFkbWlu fsid: M2YyNzE4NDEtNjE4OC00N2MxLWIzZmQtOTBmZDRmOTc4Yzc2 mon-secret: QVFCZ0h6VmorcVNhSGhBQXVtVktNcjcrczNOWW9Oa2psYkErS0E9PQ== kind: Secret metadata: finalizers: ceph.rook.io/disaster-protection name: rook-ceph-mon namespace: rook-ceph ownerReferences: null type: kubernetes.io/rook ``` The values for the secret can be found in `$dataDirHostPath/rook-ceph/client.admin.keyring` and `$dataDirHostPath/rook-ceph/rook-ceph.config`. `ceph-secret` and `mon-secret` are to be filled with the `client.admin`'s keyring contents. `ceph-username`: set to the string `client.admin` `fsid`: set to the original ceph cluster id. All the fields in data section need to be encoded in base64. Coding could be done like this: ```console echo -n \"string to code\" | base64 -i - ``` Now save the secret as `rook-ceph-mon.yaml`, to be created later in the restore. The second resource is the configmap named rook-ceph-mon-endpoints as seen in this example below: ```yaml apiVersion: v1 data: csi-cluster-config-json: '[{\"clusterID\":\"rook-ceph\",\"monitors\":[\"169.169.241.153:6789\",\"169.169.82.57:6789\",\"169.169.7.81:6789\"],\"namespace\":\"\"}]' data: k=169.169.241.153:6789,m=169.169.82.57:6789,o=169.169.7.81:6789 mapping: '{\"node\":{\"k\":{\"Name\":\"10.138.55.111\",\"Hostname\":\"10.138.55.111\",\"Address\":\"10.138.55.111\"},\"m\":{\"Name\":\"10.138.55.120\",\"Hostname\":\"10.138.55.120\",\"Address\":\"10.138.55.120\"},\"o\":{\"Name\":\"10.138.55.112\",\"Hostname\":\"10.138.55.112\",\"Address\":\"10.138.55.112\"}}}' maxMonId: \"15\" kind: ConfigMap metadata: finalizers: ceph.rook.io/disaster-protection name: rook-ceph-mon-endpoints namespace: rook-ceph ownerReferences: null ``` The Monitor's service IPs are kept in the monitor data store and you need to create them by original ones. After you create this configmap with the original service IPs, the rook operator will create the correct services for you with IPs matching in the monitor data store. 
Along with monitor ids, their service IPs and mapping relationship of them can be found in dataDirHostPath/rook-ceph/rook-ceph.config, for example: ```console [global] fsid = 3f271841-6188-47c1-b3fd-90fd4f978c76 mon initial members = m o k mon host = [v2:169.169.82.57:3300,v1:169.169.82.57:6789],[v2:169.169.7.81:3300,v1:169.169.7.81:6789],[v2:169.169.241.153:3300,v1:169.169.241.153:6789] ``` `mon initial members` and `mon host` are holding sequences of monitors' id and IP respectively; the sequence are going in the same order among monitors as a result you can tell which monitors have which service IP addresses. Modify your `rook-ceph-mon-endpoints.yaml` on fields `csi-cluster-config-json` and `data` based on the understanding of `rook-ceph.config` above. The field `mapping` tells rook where to schedule monitor's pods. you could search in `dataDirHostPath` in all Ceph cluster hosts for `mon-m,mon-o,mon-k`. If you find `mon-m` in host `10.138.55.120`, you should fill `10.138.55.120` in field `mapping` for `m`. Others are the same. Update the `maxMonId` to be the max numeric ID of the highest monitor ID. For example, 15 is the 0-based ID for mon `o`. Now save this configmap in the file rook-ceph-mon-endpoints.yaml, to be created later in the restore. Now that you have the info for the secret and the configmap, you are ready to restore the running cluster. Deploy Rook Ceph using the YAML files or Helm, with the same settings you had previously. ```console kubectl create -f crds.yaml -f common.yaml -f operator.yaml ``` After the operator is running, create the configmap and secret you have just crafted: ```console kubectl create -f rook-ceph-mon.yaml -f rook-ceph-mon-endpoints.yaml ``` Create your Ceph cluster CR (if possible, with the same settings as existed previously): ```console kubectl create -f cluster.yaml ``` Now your Rook Ceph cluster should be running again." } ]
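After any of the recovery procedures above, it helps to confirm the cluster state from the toolbox. This verification sketch is not part of the original runbook and assumes the rook-ceph-tools deployment is present:

```bash
# Confirm the mons have quorum and the OSDs are up after the restore
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree

# Watch the operator reconcile to completion
kubectl -n rook-ceph logs -f deployment/rook-ceph-operator
```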
{ "category": "Runtime", "file_name": "disaster-recovery.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "title: How to Set Up Metadata Engine sidebar_position: 2 slug: /databasesformetadata description: JuiceFS supports Redis, TiKV, PostgreSQL, MySQL and other databases as metadata engines, and this article describes how to set up and use them. import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem'; :::tip `META_PASSWORD` is supported from JuiceFS v1.0. You should if you're still using older versions. ::: JuiceFS is a decoupled structure that separates data and metadata. Metadata can be stored in any supported database (called Metadata Engine). Many databases are supported and they all comes with different performance and intended scenarios, refer to for comparison. The storage space required for metadata is related to the length of the file name, the type and length of the file, and extended attributes. It is difficult to accurately estimate the metadata storage space requirements of a file system. For simplicity, we can approximate based on the storage space required for a single small file without extended attributes. Key-Value Database (e.g. Redis, TiKV): 300 bytes/file Relational Database (e.g. SQLite, MySQL, PostgreSQL): 600 bytes/file When the average file is larger (over 64MB), or the file is frequently modified and has a lot of fragments, or there are many extended attributes, or the average file name is long (over 50 bytes), more storage space is needed. When you need to migrate between two types of metadata engines, you can use this method to estimate the required storage space. For example, if you want to migrate the metadata engine from a relational database (MySQL) to a key-value database (Redis), and the current usage of MySQL is 30GB, then the target Redis needs to prepare at least 15GB or more of memory. The reverse is also true. JuiceFS requires Redis version 4.0 and above. Redis Cluster is also supported, but in order to avoid transactions across different Redis instances, JuiceFS puts all metadata for one file system on a single Redis instance. To ensure metadata security, JuiceFS requires , otherwise it will try to set it to `noeviction` when starting JuiceFS, and will print a warning log if it fails. Refer to for more. When using Redis as the metadata storage engine, the following format is usually used to access the database: <Tabs> <TabItem value=\"tcp\" label=\"TCP\"> ``` redis[s]://[<username>:<password>@]<host>[:<port>]/<db> ``` </TabItem> <TabItem value=\"unix-socket\" label=\"Unix socket\"> ``` unix://[<username>:<password>@]<socket-file-path>?db=<db> ``` </TabItem> </Tabs> Where `[]` enclosed are optional and the rest are mandatory. If the feature of Redis is enabled, the protocol header needs to use `rediss://`, otherwise use `redis://`. `<username>` is introduced after Redis 6.0 and can be ignored if there is no username, but the `:` colon in front of the password needs to be kept, e.g. `redis://:<password>@<host>:6379/1`. The default port number on which Redis listens is `6379`, which can be left blank if the default port number is not changed, e.g. `redis://:<password>@<host>/1`. Redis supports multiple , please replace `<db>` with the actual database number used. If you need to connect to Redis Sentinel, the format will be slightly different, refer to for" }, { "data": "If username / password contains special characters, use single quote to avoid unexpected shell interpretations, or use the `REDIS_PASSWORD` environment. 
:::tip A Redis instance can, by default, create a total of 16 logical databases, with each of these databases eligible for the creation of a singular JuiceFS file system. Thus, under ordinary circumstances, a single Redis instance may be utilized to form up to 16 JuiceFS file systems. However, it is crucial to note that the logical databases intended for use with JuiceFS must not be shared with other applications, as doing so could lead to data inconsistencies. ::: For example, the following command will create a JuiceFS file system named `pics`, using the database No. `1` in Redis to store metadata: ```shell juicefs format \\ --storage s3 \\ ... \\ \"redis://:[email protected]:6379/1\" \\ pics ``` For security purposes, it is recommended to pass the password using the environment variable `METAPASSWORD` or `REDISPASSWORD`, e.g. ```shell export META_PASSWORD=mypassword ``` Then there is no need to set a password in the metadata URL. ```shell juicefs format \\ --storage s3 \\ ... \\ \"redis://192.168.1.6:6379/1\" \\ pics ``` If you need to share the same file system across multiple nodes, ensure that all nodes has access to the Metadata Engine. ```shell juicefs mount -d \"redis://:[email protected]:6379/1\" /mnt/jfs ``` Passing passwords with the `METAPASSWORD` or `REDISPASSWORD` environment variables is also supported. ```shell export META_PASSWORD=mypassword juicefs mount -d \"redis://192.168.1.6:6379/1\" /mnt/jfs ``` JuiceFS supports both TLS server-side encryption authentication and mTLS mutual encryption authentication connections to Redis. When connecting to Redis via TLS or mTLS, use the `rediss://` protocol header. However, when using TLS server-side encryption authentication, it is not necessary to specify the client certificate and private key. :::note Using Redis mTLS requires JuiceFS version 1.1.0 and above ::: If Redis server has enabled mTLS feature, it is necessary to provide client certificate, private key, and CA certificate that issued the client certificate to connect. In JuiceFS, mTLS can be used in the following way: ```shell juicefs format --storage s3 \\ ... \\ \"rediss://192.168.1.6:6379/1?tls-cert-file=/etc/certs/client.crt&tls-key-file=/etc/certs/client.key&tls-ca-cert-file=/etc/certs/ca.crt\" pics ``` In the code mentioned above, we use the `rediss://` protocol header to enable mTLS functionality, and then use the following options to specify the path of the client certificate: `tls-cert-file=<path>`: The path of the client certificate. `tls-key-file=<path>`: The path of the private key. `tls-ca-cert-file=<path>`: The path of the CA certificate. It is optional. If it is not specified, the system CA certificate will be used. `insecure-skip-verify=true` It can skip verifying the server certificate. When specifying options in a URL, start with the `?` symbol and use the `&` symbol to separate multiple options, for example: `?tls-cert-file=client.crt&tls-key-file=client.key`. In the above example, `/etc/certs` is just a directory name. Replace it with your actual certificate directory when using it, which can be a relative or absolute path. is an open source fork of Redis, developed to stay aligned with the Redis community. KeyDB implements multi-threading support, better memory utilization, and greater throughput on top of Redis, and also supports , i.e., the Active Active feature. :::note Same as Redis, the Active Replication is asynchronous, which may cause consistency" }, { "data": "So use with caution! 
::: When being used as metadata storage engine for Juice, KeyDB is used exactly in the same way as Redis. So please refer to the section for usage. is a powerful open source relational database with a perfect ecosystem and rich application scenarios, and it also works as the metadata engine of JuiceFS. Many cloud computing platforms offer hosted PostgreSQL database services, or you can deploy one yourself by following the . Other PostgreSQL-compatible databases (such as CockroachDB) can also be used as metadata engine. When using PostgreSQL as the metadata storage engine, you need to create a database manually before creating the file system by following the format below: <Tabs> <TabItem value=\"tcp\" label=\"TCP\"> ``` postgres://@<host>[:5432]/<database-name>[?parameters] ``` </TabItem> <TabItem value=\"unix-socket\" label=\"Unix socket\"> ``` postgres://@/<database-name>?host=<socket-directories-path>[&parameters] ``` </TabItem> </Tabs> Where `[]` enclosed are optional and the rest are mandatory. For example: ```shell juicefs format \\ --storage s3 \\ ... \\ \"postgres://user:[email protected]:5432/juicefs\" \\ pics ``` A more secure approach would be to pass the database password through the environment variable `META_PASSWORD`: ```shell export META_PASSWORD=\"mypassword\" juicefs format \\ --storage s3 \\ ... \\ \"postgres://[email protected]:5432/juicefs\" \\ pics ``` :::note JuiceFS uses public by default, if you want to use a `non-public schema`, you need to specify `searchpath` in the connection string parameter. e.g `postgres://user:[email protected]:5432/juicefs?searchpath=pguser1` If the `public schema` is not the first hit in the `searchpath` configured on the PostgreSQL server, the `searchpath` parameter must be explicitly set in the connection string. The `searchpath` connection parameter can be set to multiple schemas natively, but currently JuiceFS only supports setting one. `postgres://user:[email protected]:5432/juicefs?searchpath=pguser1,public` will be considered illegal. Special characters in the password need to be replaced by url encoding. For example, `|` needs to be replaced with `%7C`. ::: ```shell juicefs mount -d \"postgres://user:[email protected]:5432/juicefs\" /mnt/jfs ``` Passing password with the `META_PASSWORD` environment variable is also supported when mounting a file system. ```shell export META_PASSWORD=\"mypassword\" juicefs mount -d \"postgres://[email protected]:5432/juicefs\" /mnt/jfs ``` The JuiceFS client connects to PostgreSQL via SSL encryption by default. If you encountered an error saying `pq: SSL is not enabled on the server`, you need to enable SSL encryption for PostgreSQL according to your own business scenario, or you can disable it by adding a parameter to the metadata URL Validation. ```shell juicefs format \\ --storage s3 \\ ... \\ \"postgres://[email protected]:5432/juicefs?sslmode=disable\" \\ pics ``` Additional parameters can be appended to the metadata URL. More details can be seen . is one of the most popular open source relational databases, and is often preferred for web applications. When using MySQL as the metadata storage engine, you need to create a database manually before create the file system. 
The command with the following format is usually used to access the database: <Tabs> <TabItem value=\"tcp\" label=\"TCP\"> ``` mysql://<username>[:<password>]@(<host>:3306)/<database-name> ``` </TabItem> <TabItem value=\"unix-socket\" label=\"Unix socket\"> ``` mysql://<username>[:<password>]@unix(<socket-file-path>)/<database-name> ``` </TabItem> </Tabs> :::note Don't leave out the `()` brackets on either side of the URL. Special characters in passwords do not require url encoding ::: For example: ```shell juicefs format \\ --storage s3 \\ ... \\ \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs\" \\ pics ``` A more secure approach would be to pass the database password through the environment variable `META_PASSWORD`: ```shell export META_PASSWORD=\"mypassword\" juicefs format \\ --storage s3 \\" }, { "data": "\\ \"mysql://user@(192.168.1.6:3306)/juicefs\" \\ pics ``` To connect to a TLS enabled MySQL server, pass the `tls=true` parameter (or `tls=skip-verify` if using a self-signed certificate): ```shell juicefs format \\ --storage s3 \\ ... \\ \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs?tls=true\" \\ pics ``` ```shell juicefs mount -d \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs\" /mnt/jfs ``` Passing password with the `META_PASSWORD` environment variable is also supported when mounting a file system. ```shell export META_PASSWORD=\"mypassword\" juicefs mount -d \"mysql://user@(192.168.1.6:3306)/juicefs\" /mnt/jfs ``` To connect to a TLS enabled MySQL server, pass the `tls=true` parameter (or `tls=skip-verify` if using a self-signed certificate): ```shell juicefs mount -d \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs?tls=true\" /mnt/jfs ``` For more examples of MySQL database address format, please refer to . is an open source branch of MySQL, maintained by the original developers of MySQL. Because MariaDB is highly compatible with MySQL, there is no difference in usage, the parameters and settings are exactly the same. For example: ```shell juicefs format \\ --storage s3 \\ ... \\ \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs\" \\ pics ``` ```shell juicefs mount -d \"mysql://user:mypassword@(192.168.1.6:3306)/juicefs\" /mnt/jfs ``` Passing passwords through environment variables is also the same: ```shell export META_PASSWORD=\"mypassword\" juicefs format \\ --storage s3 \\ ... \\ \"mysql://user@(192.168.1.6:3306)/juicefs\" \\ pics ``` ```shell juicefs mount -d \"mysql://user@(192.168.1.6:3306)/juicefs\" /mnt/jfs ``` To connect to a TLS enabled MariaDB server, pass the `tls=true` parameter (or `tls=skip-verify` if using a self-signed certificate): ```shell export META_PASSWORD=\"mypassword\" juicefs format \\ --storage s3 \\ ... \\ \"mysql://user@(192.168.1.6:3306)/juicefs?tls=true\" \\ pics ``` ```shell juicefs mount -d \"mysql://user@(192.168.1.6:3306)/juicefs?tls=true\" /mnt/jfs ``` For more examples of MariaDB database address format, please refer to . is a widely used small, fast, single-file, reliable and full-featured SQL database engine. The SQLite database has only one file, which is very flexible to create and use. When using SQLite as the JuiceFS metadata storage engine, there is no need to create a database file in advance, and you can directly create a file system: ```shell juicefs format \\ --storage s3 \\ ... \\ \"sqlite3://my-jfs.db\" \\ pics ``` Executing the above command will automatically create a database file named `my-jfs.db` in the current directory. Please keep this file properly! 
Mount the file system: ```shell juicefs mount -d \"sqlite3://my-jfs.db\" /mnt/jfs/ ``` Please note the location of the database file, if it is not in the current directory, you need to specify the absolute path to the database file, e.g. ```shell juicefs mount -d \"sqlite3:///home/herald/my-jfs.db\" /mnt/jfs/ ``` One can also add driver supported to the connection string like: ```shell \"sqlite3://my-jfs.db?cache=shared&busytimeout=5000\" ``` For more examples of SQLite database address format, please refer to . :::note Since SQLite is a single-file database, usually only the host where the database is located can access it. Therefore, SQLite database is more suitable for standalone use. For multiple servers sharing the same file system, it is recommended to use databases such as Redis or MySQL. ::: is an embedded, persistent, and standalone Key-Value database developed in pure Go. The database files are stored locally in the specified directory. When using BadgerDB as the JuiceFS metadata storage engine, use `badger://` to specify the database" }, { "data": "You only need to create a file system for use, and there is no need to create a BadgerDB database in advance. ```shell juicefs format badger://$HOME/badger-data myjfs ``` This command creates `badger-data` as a database directory in the `home` directory of the current user, which is used as metadata storage for JuiceFS. The database path needs to be specified when mounting the file system. ```shell juicefs mount -d badger://$HOME/badger-data /mnt/jfs ``` :::tip BadgerDB only allows single-process access. If you need to perform operations like `gc`, `fsck`, `dump`, and `load`, you need to unmount the file system first. ::: is a distributed transactional Key-Value database. It is originally developed by PingCAP as the storage layer for their flagship product TiDB. Now TiKV is an independent open source project, and is also a granduated project of CNCF. By using the official tool TiUP, you can easily build a local playground for testing (refer for details). Production environment generally requires at least three hosts to store three data replicas (refer to the for all deployment steps). :::note It's recommended to use dedicated TiKV 5.0+ cluster as the metadata engine for JuiceFS. ::: When using TiKV as the metadata storage engine, parameters needs to be specified as the following format: ```shell tikv://<pdaddr>[,<pdaddr>...]/<prefix> ``` The `prefix` is a user-defined string, which can be used to distinguish multiple file systems or applications when they share the same TiKV cluster. For example: ```shell juicefs format \\ --storage s3 \\ ... \\ \"tikv://192.168.1.6:2379,192.168.1.7:2379,192.168.1.8:2379/jfs\" \\ pics ``` If you need to enable TLS, you can set the TLS configuration item by adding the query parameter after the metadata URL. Currently supported configuration items: | Name | Value | |-|| | `ca` | CA root certificate, used to connect TiKV/PD with TLS | | `cert` | certificate file path, used to connect TiKV/PD with TLS | | `key` | private key file path, used to connect TiKV/PD with TLS | | `verify-cn` | verify component caller's identity, | For example: ```shell juicefs format \\ --storage s3 \\ ... 
\\ \"tikv://192.168.1.6:2379,192.168.1.7:2379,192.168.1.8:2379/jfs?ca=/path/to/ca.pem&cert=/path/to/tikv-server.pem&key=/path/to/tikv-server-key.pem&verify-cn=CN1,CN2\" \\ pics ``` ```shell juicefs mount -d \"tikv://192.168.1.6:2379,192.168.1.7:2379,192.168.1.8:2379/jfs\" /mnt/jfs ``` is a small-scale key-value database with high availability and reliability, which can be used as metadata storage for JuiceFS. When using etcd as the metadata engine, the `Meta-URL` parameter needs to be specified in the following format: ``` etcd://[user:password@]<addr>[,<addr>...]/<prefix> ``` Where `user` and `password` are required when etcd enables user authentication. The `prefix` is a user-defined string. When multiple file systems or applications share an etcd cluster, setting the prefix can avoid confusion and conflict. An example is as follows: ```shell juicefs format etcd://user:[email protected]:2379,192.168.1.7:2379,192.168.1.8:2379/jfs pics ``` If you need to enable TLS, set the TLS configuration item by adding the query parameter after the metadata URL, use absolute path for certificate files to avoid file not found error. | Name | Value | ||--| | `cacert` | CA root certificate | | `cert` | certificate file path | | `key` | private key file path | | `server-name` | name of server | | `insecure-skip-verify` | 1 | For example: ```shell juicefs format \\ --storage s3 \\ ... \\ \"etcd://192.168.1.6:2379,192.168.1.7:2379,192.168.1.8:2379/jfs?cert=/path/to/ca.pem&cacert=/path/to/etcd-server.pem&key=/path/to/etcd-key.pem&server-name=etcd\" \\ pics ``` ```shell juicefs mount -d" }, { "data": "/mnt/jfs ``` :::note When mounting to the background, the path to the certificate needs to use an absolute path. ::: is a distributed database that can hold large-scale structured data on multiple clustered servers. The database system focuses on high performance, high scalability, and good fault tolerance. Using FoundationDB as the metadata engine requires its client library, so by default it is not enabled in the JuiceFS released binaries. If you need to use it, please compile it yourself. First, you need to install the FoundationDB client library (refer to the for more details): <Tabs> <TabItem value=\"debian\" label=\"Debian and derivatives\"> ```shell curl -O https://github.com/apple/foundationdb/releases/download/6.3.25/foundationdb-clients6.3.25-1amd64.deb sudo dpkg -i foundationdb-clients6.3.25-1amd64.deb ``` </TabItem> <TabItem value=\"centos\" label=\"RHEL and derivatives\"> ```shell curl -O https://github.com/apple/foundationdb/releases/download/6.3.25/foundationdb-clients-6.3.25-1.el7.x86_64.rpm sudo rpm -Uvh foundationdb-clients-6.3.25-1.el7.x86_64.rpm ``` </TabItem> </Tabs> Then, compile JuiceFS supporting FoundationDB: ```shell make juicefs.fdb ``` When using FoundationDB as the metadata engine, the `Meta-URL` parameter needs to be specified in the following format: ```uri fdb://[config file address]?prefix=<prefix> ``` The `<clusterfilepath>` is the FoundationDB configuration file path, which is used to connect to the FoundationDB server. The `<prefix>` is a user-defined string, which can be used to distinguish multiple file systems or applications when they share the same FoundationDB cluster. For example: ```shell juicefs.fdb format \\ --storage s3 \\ ... \\ \"fdb:///etc/foundationdb/fdb.cluster?prefix=jfs\" \\ pics ``` If you need to enable TLS, the general steps are as follows. For details, please refer to . 
```shell openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout private.key -out cert.crt cat cert.crt private.key > fdb.pem ``` | Command-line Option | Client Option | Environment Variable | Purpose | ||--|-|-| | `tlscertificatefile` | `TLScertpath` | `FDBTLSCERTIFICATE_FILE` | Path to the file from which the local certificates can be loaded | | `tlskeyfile` | `TLSkeypath` | `FDBTLSKEY_FILE` | Path to the file from which to load the private key | | `tlsverifypeers` | `tlsverifypeers` | `FDBTLSVERIFY_PEERS` | The byte-string for the verification of peer certificates and sessions | | `tlspassword` | `tlspassword` | `FDBTLSPASSWORD` | The byte-string representing the passcode for unencrypting the private key | | `tlscafile` | `TLScapath` | `FDBTLSCA_FILE` | Path to the file containing the CA certificates to trust | The TLS parameters can be configured in `foundationdb.conf` or environment variables, as shown in the following configuration files (emphasis on the `[foundationdb.4500]` configuration). ```ini title=\"foundationdb.conf\" [fdbmonitor] user = foundationdb group = foundationdb [general] restart-delay = 60 cluster-file = /etc/foundationdb/fdb.cluster [fdbserver] command = /usr/sbin/fdbserver datadir = /var/lib/foundationdb/data/$ID logdir = /var/log/foundationdb [fdbserver.4500] Public - address = 127.0.0.1:4500: TLS listen-address = public tlscertificatefile = /etc/foundationdb/fdb.pem tlscafile = /etc/foundationdb/cert.crt tlskeyfile = /etc/foundationdb/private.key tlsverifypeers= Check.Valid=0 [backup_agent] command = /usr/lib/foundationdb/backupagent/backupagent logdir = /var/log/foundationdb [backup_agent.1] ``` In addition, you need to add the suffix `:tls` after the address in `fdb.cluster`, `fdb.cluster` is as follows: ```uri title=\"fdb.cluster\" U6pT9Jhl:[email protected]:4500:tls ``` You need to configure TLS parameters and `fdb.cluster` on the client machine, `fdbcli` is the same. Connected by `fdbcli`: ```shell fdbcli --tlscertificatefile=/etc/foundationdb/fdb.pem \\ --tlscafile=/etc/foundationdb/cert.crt \\ --tlskeyfile=/etc/foundationdb/private.key \\ --tlsverifypeers=Check.Valid=0 ``` Connected by API (`fdbcli` also applies): ```shell export FDBTLSCERTIFICATE_FILE=/etc/foundationdb/fdb.pem \\ export FDBTLSCA_FILE=/etc/foundationdb/cert.crt \\ export FDBTLSKEY_FILE=/etc/foundationdb/private.key \\ export FDBTLSVERIFY_PEERS=Check.Valid=0 ``` ```shell juicefs.fdb mount -d \\ \"fdb:///etc/foundationdb/fdb.cluster?prefix=jfs\" \\ /mnt/jfs ```" } ]
{ "category": "Runtime", "file_name": "how_to_set_up_metadata_engine.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Orphaned replica directory cleanup identifies unmanaged replicas on the disks and provides a list of the orphaned replica directory on each node. Longhorn will not delete the replicas automatically, preventing deletions by mistake. Instead, it allows the user to select and trigger the deletion of the orphaned replica directory manually or deletes the orphaned replica directories automatically. Identify the orphaned replica directories The scanning process should not stuck the reconciliation of the controller Provide user a way to select and trigger the deletion of the orphaned replica directories Support the global auto-deletion of orphaned replica directories Clean up unknown files or directories in disk paths Support the per-node auto-deletion of orphaned replica directories Support the auto-deletion of orphaned replica directories exceeded the TTL Introduce a new CRD `orphan` and controller that represents and tracks the orphaned replica directories. The controller deletes the physical data and the resource if receive a deletion request. The monitor on each node controller is created to periodically collects the on-disk replica directories, compares them with the scheduled replica, and then finds the orphaned replica directories. The reconciliation loop of the node controller gets the latest disk status and orphaned replica directories from the monitor and update the state of the node. Additionally, the `orphan` resources associated with the orphaned replica directories are created. ``` queue ... syncNode() reconcile() syncWithMonitor per-node monitor | collect information ``` When a user introduces a disk into a Longhorn node, it may contain replica directories that are not tracked by the Longhorn system. The untracked replica directories may belong to other Longhorn clusters. Or, the replica CRs associated with the replica directories are removed after the node or the disk is down. When the node or the disk comes back, the corresponding replica data directories are no longer tracked by the Longhorn system. These replica data directories are called orphaned. Longhorn's disk capacity is taken up by the orphaned replica directories. Users need to compare the on-disk replica directories with the replicas tracked by the Longhorn system on each node and then manually delete the orphaned replica directories. The process is tedious and time-consuming for users. After the enhancement, Longhorn automatically finds out the orphaned replica directories on Longhorn nodes. Users can visualize and manage the orphaned replica directories via Longhorn GUI or command line tools. Additionally, Longhorn can deletes the orphaned replica directories automatically if users enable the global auto-deletion option. Via Longhorn GUI Users can check Node and Disk status then see if Longhorn already identifies orphaned replicas. Users can choose the items in the orphaned replica directory list then clean up them. Users can enable the global auto-deletion on setting page. By default, the auto-deletion is disabled. Via `kubectl` Users can list the orphaned replica directories by `kubectl -n longhorn-system get" }, { "data": "Users can delete the orphaned replica directories by `kubectl -n longhorn-system delete orphan <name>`. Users can enable the global auto-deletion by `kubectl -n longhorn-system edit settings orphan-auto-deletion` Settings Add setting `orphan-auto-deletion`. Default value is `false`. Node controller Start the monitor during initialization. 
Sync with the monitor in each reconcile loop. Update the node/disk status. Create `orphan` CRs based on the information collected by the monitor. Delete the `orphan` CRs if the node/disk is requested to be evicted. Delete the `orphan` CRs if the corresponding directories disappear. Delete the `orphan` CRs if the auto-deletion setting is enabled. Node monitor Struct ```go type NodeMonitor struct { logger logrus.FieldLogger ds *datastore.DataStore node longhorn.Node lock sync.RWMutex onDiskReplicaDirectories mapstring syncCallback func(key string) ctx context.Context quit context.CancelFunc } ``` Periodically detect and verify disk Run `stat` Check disk FSID Check disk UUID in the metafile Periodically check and identify orphan directories List on-disk directories in `${disk_path}/replicas` and compare them with the last record stored in `monitor.onDiskDirectoriesInReplicas`. If the two lists are different, iterate all directories in `${disk_path}/replicas` and then get the list of the orphaned replica directories. A valid replica directory has the properties: The directory name format is `<disk path>/replicas/<replica name>-<random string>` `<disk path>/replicas/<replica name>-<random string>/volume.meta` is parsible and follows the `volume.meta`'s format. Compare the list of the orphaned replica directories with the `node.status.diskStatus.scheduledReplica` and find out the list of the orphaned replica directories. Store the list in `monitor.node.status.diskStatus.orphanedReplicaDirectoryNames` Orphan controller Struct: ```go // OrphanSpec defines the desired state of the Longhorn orphaned data type OrphanSpec struct { // The node ID on which the controller is responsible to reconcile this orphan CR. // +optional NodeID string `json:\"nodeID\"` // The type of the orphaned data. // Can be \"replica\". // +optional Type OrphanType `json:\"type\"` // The parameters of the orphaned data // +optional // +nullable Parameters map[string]string `json:\"parameters\"` } // OrphanStatus defines the observed state of the Longhorn orphaned data type OrphanStatus struct { // +optional OwnerID string `json:\"ownerID\"` // +optional // +nullable Conditions []Condition `json:\"conditions\"` } ``` If receive the deletion request, delete the on-disk orphaned replica directory and the `orphan` resource. If the auto-deletion is enabled, node controller will issues the orphans deletion requests. longhorn-ui Allow users to list the orphans on the node page by sending`OrphanList`call to the backend. Allow users to select the orphans to be deleted. The frontend needs to send`OrphanDelete`call to the backend. Integration tests `orphan` CRs will be created correctly in the disk path. And they can be cleaned up with the directories. `orphan` CRs will be created correctly when there are multiple kinds of files/directories in the disk path. And they can be cleaned up with the directories. `orphan` CRs will be removed when the replica directories disappear. `orphan` CRs will be removed when the node/disk is evicted or down. The associated orphaned replica directories should not be cleaned up. Auto-deletion setting." } ]
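A hedged operational sketch combining the `kubectl` interactions described above; the node name `worker-1` is a placeholder:

```bash
# List all orphan CRs, then delete the ones owned by a particular node
kubectl -n longhorn-system get orphan
kubectl -n longhorn-system get orphan \
  -o jsonpath='{range .items[?(@.spec.nodeID=="worker-1")]}{.metadata.name}{"\n"}{end}' \
  | xargs -r -n1 kubectl -n longhorn-system delete orphan   # Longhorn removes the on-disk directory as well

# Or enable global auto-deletion instead of deleting orphans one by one
kubectl -n longhorn-system edit settings orphan-auto-deletion
```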
{ "category": "Runtime", "file_name": "20220324-orphaned-data-cleanup.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 9 sidebar_label: \"LDA Controller\" The LDA controller provides a separate CRD - `LocalDiskAction`, which is used to match the localdisk and execute the specified action. Its yaml definition is as follows: ```yaml apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskAction metadata: name: forbidden-1 spec: action: reserve rule: minCapacity: 1024 devicePath: /dev/rbd* apiVersion: hwameistor.io/v1alpha1 kind: LocalDiskAction metadata: name: forbidden-2 spec: action: reserve rule: maxCapacity: 1048576 devicePath: /dev/sd* ``` The above yaml indicates: Localdisks smaller than 1024 bytes and whose devicePath meets the `/dev/rbd*` matching condition will be reserved Localdisks larger than 1048576 bytes and whose devicePath meets the `/dev/sd*` matching condition will be reserved Note that the currently supported actions are only reserve." } ]
{ "category": "Runtime", "file_name": "lda_controller.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "() This section describes how an operator can migrate a Mantav1 deployment to the new Mantav2 major version. See for a description of mantav2. <!-- START doctoc generated TOC please keep comment here to allow auto update --> <!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE --> - - - - - - - <!-- END doctoc generated TOC please keep comment here to allow auto update --> The procedure to migrate a mantav1 to mantav2 will roughly be the following. Specific steps will be provided later in this document. Convert the manta deployment zone from mantav1 to mantav2. This is the point at which the operator is warned that this migration is not reversible and that mantav2 is backward incompatible. sdcadm self-update --latest sdcadm post-setup manta Snaplink cleanup. Snaplinks must be cleaned from the system, otherwise the new GC and rebalancer systems cannot guarantee data integrity. Deploy the new garbage collector (GCv2) system. Recommended service updates. This is where obsolete mantav1 service instances (marlin, jobpuller, jobsupervisor, marlin-dashboard, and medusa) can be undeployed. As well, any or all remaining Manta services can be updated to their latest \"mantav2-\\*\" images. Optional service updates. The new rebalancer service and the services that make up the new Buckets API can be deployed. Additional clean up. Some orphaned data (related to the removed jobs feature and to the earlier GC system) can be removed. Other than the usual possible brief downtimes for service upgrades, this migration procedure does not make Manta unavailable at any point. The following instructions use bold to indicate the explicit steps that must be run. The first step is to update the Manta deployment tooling (i.e. the manta deployment zone) to a mantav2 image. Run the following on the headnode global zone of every datacenter in the Manta region: ``` sdcadm self-update --latest sdcadm post-setup manta ``` or if using specific image UUIDs: ``` sdcadm_image=... mantadeploymentimage=... sdcadm self-update $sdcadm_image sdcadm post-setup manta -i $mantadeploymentimage ``` The `sdcadm post-setup manta` command will warn that the migration process is not reversible and require an interactive confirmation to proceed. The new manta deployment image provides a `mantav2-migrate` tool that will assist with some of the subsequent steps. The jobs-based GC (and other jobs-based tasks such as \"audit\" and metering) in the Manta \"ops\" zone (aka \"mola\") are obsolete and can/should be disabled if they aren't already. Disable all \"ops\" service jobs via the following: ``` sapiadm update $(sdc-sapi /services?name=ops | json -Ha uuid) metadata.DISABLEALLJOBS=true ``` The old GC system used a delayed-delete mechanism where deleted files were put in a daily \"/manta/tombstone/YYYY-MM-DD\" directory on each storage node. 
Optionally check the disk usage of and remove the obsolete tombstone directories by running the following in every datacenter in this region: ``` manta-oneach -s storage 'du -sh /manta/tombstone' manta-oneach -s storage 'rm -rf /manta/tombstone' ``` For example, the following shows that very little space (~2kB per storage node) is being used by tombstone directories in this datacenter: ``` [root@headnode (mydc-1) ~]# manta-oneach -s storage 'du -sh /manta/tombstone' SERVICE ZONE OUTPUT storage ae0096a5 2.0K /manta/tombstone storage 38b50a82 2.0K /manta/tombstone storage cd798768 2.0K /manta/tombstone storage ab7c6ef3 2.0K /manta/tombstone storage 12042540 2.0K /manta/tombstone storage 85d4b8c4 2.0K /manta/tombstone ``` Some Mantas may have deployed a garbage collection system called \"Accelerated GC\": , , ," }, { "data": "Work through the following steps to determine if you have Accelerated GC and, if so, to flush and disable it: Your Manta has Accelerated GC if you have deployed \"garbage-collector\" instances: ``` [root@headnode (mydc-1a) ~]# manta-adm show -a garbage-collector SERVICE SH DATACENTER ZONENAME garbage-collector 1 mydc-1a 65ad3602-959e-428d-bdee-f7915702c748 garbage-collector 1 mydc-1a 03dae05c-2fbf-47cc-9d39-b57a362c1534 garbage-collector 1 mydc-1a 655fe38c-4ec6-425e-bf0b-28166964308e ... ``` Disable all garbage-collector SMF services to allow inflight instructions to drain: ``` manta-oneach -s garbage-collector 'svcadm disable garbage-collector' ``` Wait 5 minutes and check that all instructions have drained: ``` manta-oneach -s garbage-collector 'du --inodes /var/spool/manta_gc/mako/* | sort -n | tail -3' ``` The file counts should all be 1 (the subdirectory itself). [For Manta deployment using \"feeder\" service only] After 5 minutes, check that the feeder zone also has no inflight instructions: ``` du --inodes /var/spool/manta_gc/mako/* | sort -n | tail -3 ``` The file counts should all be 1 (the subdirectory itself). Before upgrading a storage zone to the v2 image, check that its instruction directory is empty: For Manta deployment using feeder service ``` du --inodes /var/spool/manta_gc/instructions ``` The file count should be exactly 1 (the directory itself). For Manta deployment without feeder service ``` manta-login ops mls /poseidon/stor/mantagc/mako | while read stor; do minfo /poseidon/stor/mantagc/mako/$stor | grep result-set-size; done ``` The result-set-size should be 0 for all storage IDs, e.g.: ``` [root@7df71573 (ops) ~]$ mls /poseidon/stor/mantagc/mako | while read stor; do minfo /poseidon/stor/mantagc/mako/$stor | grep result; done result-set-size: 0 result-set-size: 0 result-set-size: 0 ``` If there are non-zero GC instructions in those results, then run the accelerated GC script manually to hasten up garbage collection: ``` manta-oneach -s storage 'nohup bash /opt/smartdc/mako/bin/mako_gc.sh >>/var/log/mako-gc.log 2>&1 &' ``` Repeat the check above until you get `result-set-size: 0` for all. <a name=\"snaplink-cleanup\" /> Mantav1 supported a feature called \"snaplinks\" where a new object could be quickly created from an existing one by linking to it. These snaplinks must be \"delinked\" -- i.e. changed from being metadata-tier references to a shared storage-tier object, to being fully separate objects -- before the new garbage-collector and rebalancer services in mantav2 can function. This section walks through the process of removing snaplinks. Snaplink cleanup involves a few stages, some of which are manual. 
The `mantav2-migrate snaplink-cleanup` command is used to coordinate the process. (It stores cross-DC progress in the `SNAPLINKCLEANUPPROGRESS` SAPI metadatum.) You will re-run that command multiple times, in each of the DCs that are part of the Manta region, and follow its instructions. Update every \"webapi\" service instance to a mantav2-webapi image. Any image published after 2019-12-09 will do. First set the \"WEBAPIUSEPICKER\" metadatum on the \"webapi\" service to have the new webapi instances not yet use the new \"storinfo\" service. (See for details.) ``` webapisvc=$(sdc-sapi \"/services?name=webapi&includemaster=true\" | json -H 0.uuid) echo '{\"metadata\": {\"WEBAPIUSEPICKER\": true}}' | sapiadm update \"$webapi_svc\" ``` Find and import the latest webapi image: ``` updates-imgadm -C $(sdcadm channel get) list name=mantav2-webapi latestwebapiimage=$(updates-imgadm -C $(sdcadm channel get) list name=mantav2-webapi -H -o uuid --latest) sdc-imgadm import -S https://updates.tritondatacenter.com $latestwebapiimage ``` Update webapis to that new image: ``` manta-adm show -js >/var/tmp/config.json vi /var/tmp/config.json # update webapi instances manta-adm update /var/tmp/config.json ``` Then run `mantav2-migrate snaplink-cleanup` from the headnode global zone of every DC in this Manta region. Until webapis are updated, the command will error out something like this: ``` [root@headnode (mydc) ~]# mantav2-migrate snaplink-cleanup Determining if webapi instances in this DC are at" }, { "data": "Phase 1: Update webapis to V2 Snaplinks cannot be fully cleaned until all webapi instances are are updated to a V2 image that no longer allows new snaplinks to be created. You must upgrade all webapi instances in this DC (mydc) to a recent enough V2 image (after 2019-12-09), and then re-run \"mantav2-migrate snaplink-cleanup\" to update snaplink-cleanup progress. mantav2-migrate snaplink-cleanup: error: webapi upgrades are required before snaplink-cleanup can proceed ``` Select the driver DC. Results from subsequent phases need to be collected in one place. Therefore, if this Manta region has multiple DCs, then you will be asked to choose one of them on which to coordinate. This is called the \"driver DC\". Any of the DCs in the Manta region will suffice. If there is only a single DC in the Manta region, then it will automatically be set as the \"driver DC\". <a name=\"snaplink-discovery\"/> Discover every snaplink. This involves working through each Manta index shard to find all the snaplinked objects. This is done by manually running a \"snaplink-sherlock.sh\" script against the postgres async VM for each shard, and then copying back that script's output file. Until those \"sherlock\" files are obtained, the command will error out something like this: ``` [root@headnode (mydc) ~]# mantav2-migrate snaplink-cleanup Phase 1: All webapi instances are running V2. Driver DC: mydc (this one) In this phase, you must run the \"snaplink-sherlock.sh\" tool against the async postgres for each Manta index shard. That will generate a \"{shard}_sherlock.tsv.gz\" file that must be copied back to \"/var/db/snaplink-cleanup/discovery/\" on this headnode. 
Repeat the following steps for each missing shard: https://github.com/TritonDataCenter/manta/blob/master/docs/operator-guide/mantav2-migration.md#snaplink-discovery Missing \"*_sherlock.tsv.gz\" for the following shards (1 of 1): 1.moray.coalregion.joyent.us mantav2-migrate snaplink-cleanup: error: sherlock files must be generated and copied to \"/var/db/snaplink-cleanup/discovery/\" before snaplink cleanup can proceed ``` You must do the following for each listed shard: Find the postgres VM UUID (`postgres_vm`) that is currently the async for that shard, and the datacenter in which it resides. (If this is a development Manta with no async, then the sync or primary can be used.) The output from the following commands can help find those instances: ``` manta-oneach -s postgres 'manatee-adm show' # find the async manta-adm show -a postgres # find which DC it is in ``` Copy the \"snaplink-sherlock.sh\" script to that server's global zone. ``` ssh $datacenter # the datacenter holding the postgres async VM postgres_vm=\"<the UUID of postgres async VM>\" serveruuid=$(sdc-vmadm get $postgresvm | json server_uuid) manta0vm=$(vmadm lookup -1 tags.smartdcrole=manta) sdc-oneachnode -n \"$server_uuid\" -X -d /var/tmp \\ -g \"/zones/$manta0_vm/root/opt/smartdc/manta-deployment/tools/snaplink-sherlock.sh\" ``` SSH to that server's global zone, and run that script with the postgres VM UUID as an argument. Run this in screen, via nohup, or equivalent because this can take a long time to run. ``` manta-login -G $postgresvm # or 'ssh root@$serverip' cd /var/tmp screen bash ./snaplink-sherlock.sh \"$postgres_vm\" ``` See [this gist](https://gist.github.com/trentm/05611024c0c825cb083a475e3b60aab4) for an example run of snaplink-sherlock.sh. Copy the created \"/var/tmp/{{shard}}_sherlock.tsv.gz\" file back to \"/var/db/snaplink-cleanup/discovery/\" on the driver DC. If this Manta region has multiple DCs, this may be a different DC. Then re-run `mantav2-migrate snaplink-cleanup` on the driver DC to process the sherlock files. At any point you may re-run this command to list the remaining shards to work through. The following commands should automate the tedium of step 3.3 on a larger Manta region. They assume every postgres index shard has one async. Run the following steps on each DC in the region. Print \"error: ...\" messages if the state of this Manta's postgres shards looks like the given commands here won't" }, { "data": "E.g. if a postgres shard has no async, if shard \"1\" is an index shard. 
``` function warnmissingindexshardasyncs { local postgres_insts=$(manta-adm show postgres -Ho shard,zonename | sed 's/^ *//g' | sort) local indexshards=$(sdc-sapi '/applications?name=manta&includemaster=true' | json -H 0.metadata.INDEXMORAYSHARDS | json -e 'this.sh = this.host.split(\".\")[0]' -a sh) for shard in $index_shards; do if [[ \"$shard\" == \"1\" ]]; then echo \"error: shard '1' is in 'INDEXMORAYSHARDS' (the commands below assume shard 1 is NOT an index shard)\" fi local aninst=$(echo \"$postgresinsts\" | grep \"^$shard \" | head -1 | awk '{print $2}') if [[ -z \"$an_inst\" ]]; then echo \"error: no postgres instance found for shard $shard in this DC\" continue fi local asyncinst=$(manta-oneach -J -z \"$aninst\" 'curl -s http://localhost:5433/state | json zkstate.async.-1.zoneId' | json result.stdout) if [[ -z \"$async_inst\" ]]; then echo \"error: postgres shard $shard does not have an async member (the commands below assume there is one)\" continue fi echo \"postgres shard $shard async: $async_inst\" done } warnmissingindexshardasyncs ``` If there is no async for a given postgres shard, the snaplink-sherlock.sh can be run against a sync. The only reason for using an async is to avoid adding some CPU load on the primary or sync databases during the script run. Identify the postgres asyncs in this DC (excluding shard 1, which is assumed to not be an index shard) on which the script will be run: ``` declare instarray=(); readarray -t instarray <<<\"$(manta-oneach -s postgres 'if [[ \"$(curl -s http://localhost:5433/state | json role)\" == \"async\" && $(json -f /opt/smartdc/manatee/etc/sitter.json shardPath | cut -d/ -f3 | cut -d. -f1) != \"1\" ]]; then echo \"target $(hostname)\"; fi' | grep target | awk '{print $4}')\" instcsv=$(echo ${instarray[@]} | tr ' ' ',') echo \"The postgres asyncs in this DC are: '$inst_csv'\" ``` Copy the snaplink-sherlock.sh script to the server global zone hosting each postgres async. ``` manta0vm=$(vmadm lookup -1 tags.smartdcrole=manta) manta-oneach -G -z $inst_csv -X -d /var/tmp \\ -g \"/zones/$manta0_vm/root/opt/smartdc/manta-deployment/tools/snaplink-sherlock.sh\" ``` Start the long-running snaplink-sherlock.sh script for each async: ``` for inst in \"${inst_array[@]}\"; do manta-oneach -z \"$inst\" -G \"cd /var/tmp; nohup bash snaplink-sherlock.sh $inst >/var/tmp/snaplink-sherlock.$(date -u +%Y%m%dT%H%M%S).output.log 2>&1 &\"; done ``` Each execution will create a \"/var/tmp/${shard}_sherlock.tsv.gz\" file on completion. 
Poll for completion of the sherlock scripts via: ``` manta-oneach -z \"$inst_csv\" -G \"grep SnapLinks: /var/tmp/snaplink-sherlock.*.output.log\" manta-oneach -z \"$instcsv\" -G \"ls -l /var/tmp/*sherlock.tsv.gz\" ``` For example: ``` [root@headnode (coal) /var/tmp]# manta-oneach -z \"$inst_csv\" -G \"grep SnapLinks: /var/tmp/snaplink-sherlock.*.output.log\" HOSTNAME OUTPUT headnode Lines: 1226, SnapLinks: 42, Objects: 234 [root@headnode (coal) /var/tmp]# manta-oneach -z \"$instcsv\" -G \"ls -l /var/tmp/*sherlock.tsv.gz\" HOSTNAME OUTPUT headnode -rw-r--r-- 1 root staff 1008 Apr 14 18:26 /var/tmp/1.moray.coalregion.joyent.us_sherlock.tsv.gz ``` Copy the `*_sherlock.tsv.gz` files back to the headnode: ``` function copysherlockfilestoheadnode { local basedir=/var/tmp/sherlock-files mkdir -p $basedir/tmp for inst in \"${inst_array[@]}\"; do sherlockfiles=$(manta-oneach -G -z \"$inst\" -J 'ls /var/tmp/*sherlock.tsv.gz' | json -ga result.stdout | grep sherlock) for f in $sherlock_files; do echo \"Copy '$f' (postgres async $inst) to $basedir\" manta-oneach -G -z \"$inst\" -X -d $basedir/tmp -p \"$f\" mv $basedir/tmp/* $basedir/$(basename $f) done done rm -rf $basedir/tmp echo \"\" echo \"$basedir:\" ls -l1 $basedir } copysherlockfilestoheadnode ``` Copy these files to \"/var/db/snaplink-cleanup/discovery/\" on the driver DC (i.e. this might be in a different DC). ``` ssh DRIVER_DC rsync -av OTHER_DC:/var/tmp/sherlock-files/ /var/db/snaplink-cleanup/discovery/ ``` After the previous stage, the `mantav2-migrate snaplink-cleanup` command will generate a number of \"delinking\" scripts that must be manually run on the appropriate manta service instances. Example output: [root@headnode" }, { "data": "~]# mantav2-migrate snaplink-cleanup Phase 1: All webapi instances are running V2. Phase 2: Driver DC is \"nightly-1\" (this one) Phase 3: Have snaplink listings for all (1) Manta index shards. Created delink scripts in /var/db/snaplink-cleanup/delink/ stordelink scripts: 3.stor.nightly.joyent.us_stordelink.sh 2.stor.nightly.joyent.us_stordelink.sh 1.stor.nightly.joyent.us_stordelink.sh moraydelink scripts: 1.moray.nightly.joyent.us_moraydelink.sh \"Delink\" scripts have been generated from the snaplink listings from the previous phase. In this phase, you must: Copy each of the following \"*_stordelink.sh\" scripts to the appropriate storage node and run it there: ls /var/db/snaplink-cleanup/delink/*_stordelink.sh Use the following to help locate each storage node: manta-adm show -a -o service,storageid,datacenter,zonename,gzhost,gzadminip storage Then, copy each of the following \"*_moraydelink.sh\" scripts to a moray zone for the appropriate shard and run it there: ls /var/db/snaplink-cleanup/delink/*_moraydelink.sh Use the following to help locate a moray for each shard: manta-adm show -o service,shard,zonename,gzhost,gzadmin_ip moray When you are sure you have run all these scripts, then answer the following to proceed. WARNING Be sure you have run all these scripts successfully. If not, any lingering object that has multiple links will have the underlying files removed when the first link is deleted, which is data loss for the remaining links. Enter \"delinked\" when all delink scripts have been successfully run: Aborting. Re-run this command when all delink scripts have been run. 
mantav2-migrate snaplink-cleanup: error: delink scripts must be run before snaplink cleanup can proceed There are two sets of delink scripts: (a) \"stordelink\" scripts to be run on most/all storage instances; and (b) \"moraydelink\" scripts to be run on a \"moray\" instance in each index shard. The \"stordelink\" scripts must be handled first. You must run each \"/var/db/snaplink-cleanup/delink/\\*\\_stordelink.sh\" script on the appropriate Manta storage node, i.e. in the mako zone for that the `storage_id` in the script filename. There will be zero or one \"stordelink\" scripts for each storage node. Each script is idempotent, so can be run again if necessary. Each script will also error out if an attempt is made to run on the wrong storage node: ``` [root@94b3a1ce (storage) /var/tmp]$ bash 1.stor.coalregion.joyent.us_stordelink.sh Writing xtrace output to: /var/tmp/stordelink.20200320T213234.xtrace.log 1.stor.coalregion.joyent.us_stordelink.sh: fatal error: this stordelink script must run on '1.stor.coalregion.joyent.us': this is '3.stor.coalregion.joyent.us' ``` A successful run looks like this: ``` [root@94b3a1ce (storage) /var/tmp]$ bash 3.stor.coalregion.joyent.us_stordelink.sh Writing xtrace output to: /var/tmp/stordelink.20200320T213250.xtrace.log Completed stordelink successfully. ``` Please report any errors in running these scripts. You can use the following steps to mostly automate running all the stordelink scripts. Run these on every DC in the Manta region: Copy the delink scripts to a working \"/var/tmp/delink/\" dir on each DC. On the driver DC that is: ```bash rsync -av /var/db/snaplink-cleanup/delink/ /var/tmp/delink/ ``` On non-driver DCs, run something like this from the driver DC (depending on SSH access between the DCs): ```bash rsync -av /var/db/snaplink-cleanup/delink/ myregion-2:/var/tmp/delink/ ``` Copy the stordelink scripts to the appropriate storage nodes in this DC: ```bash manta-adm show storage -Ho zonename,storageid | while read zonename storageid; do delinkscript=/var/tmp/delink/${storageid}_stordelink.sh if [[ ! -f \"$delink_script\" ]]; then echo \"$storage_id: no stordelink script, skipping\" continue fi echo \"\" manta-oneach -z $zonename -X -d /var/tmp -g $delink_script echo \"$storageid: copied script to '/var/tmp/${storageid}_stordelink.sh' on zone '$zonename'\" done ``` Start the stordelink scripts on each storage node. ```bash manta-oneach -s storage 'storageid=$(json -f /opt/smartdc/mako/etc/gcconfig.json mantastorageid); nohup bash /var/tmp/${storageid}stordelink.sh &' ``` Check that each stordelink script ran successfully. The delink scripts generate a \"$name.success\" file on successful completion. We use that to check for success. ```bash manta-oneach -s storage 'storageid=$(json -f" }, { "data": "mantastorageid); if [[ -f /var/tmp/${storageid}stordelink.sh ]]; then cat /var/tmp/${storageid}stordelink.success; else echo \"(no stordelink script for ${storage_id})\"; fi' ``` For example: ``` [root@headnode (nightly-1) ~]# manta-oneach -s storage 'storageid=$(json -f /opt/smartdc/mako/etc/gcconfig.json mantastorageid); if [[ -f /var/tmp/${storageid}stordelink.sh ]]; then cat /var/tmp/${storageid}stordelink.success; else echo \"(no stordelink script for ${storage_id})\"; fi' SERVICE ZONE OUTPUT storage 81df545a [20200406T192654Z] Completed stordelink successfully. storage a811b282 [20200406T192654Z] Completed stordelink successfully. 
storage f7aeb86d (no stordelink script for 2.stor.nightly.joyent.us) ``` If a \".success\" file is not found for a given storage node, then one of the following is why: The stordelink script is still running. The stordelink script failed. Look at the \"/var/tmp/stordelink.$timestamp.xtrace.log\" file on the storage node for details. There is no stordelink script for this storage node -- possible if no snaplinked file ever landed on that storage node. Note: It is important to successfully run all \"stordelink\" scripts before running the \"moraydelink\" scripts, otherwise metadata will be updated to point to object ids for which no storage file exists. For this step you must run each \"/var/db/snaplink-cleanup/delink/\\*\\_moraydelink.sh\" script on a Manta moray instance for the appropriate shard. The shard is included in the filename. There will be one \"moraydelink\" script for each index moray shard (`INDEXMORAYSHARDS` in Manta metadata). Each script is idempotent, so can be run again if necessary. Each script will also error out if an attempt is made to run on the wrong shard node: ``` [root@01f043b4 (moray) /var/tmp]$ bash 1.moray.coalregion.joyent.us_moraydelink.sh Writing xtrace output to: /var/tmp/moraydelink.20200320T211337.xtrace.log 1.moray.coalregion.joyent.us_moraydelink.sh: fatal error: this moraydelink script must run on a moray for shard '1.moray.coalregion.joyent.us': this is '1.moray.coalregion.joyent.us' ``` A successful run looks like this: ``` [root@01f043b4 (moray) /var/tmp]$ bash 1.moray.coalregion.joyent.us_moraydelink.sh Writing xtrace output to: /var/tmp/moraydelink.20200320T214010.xtrace.log Completed moraydelink successfully. ``` Please report any errors in running these scripts. You can use the following steps to mostly automate running all the moraydelink scripts. In a typical Manta deployment every DC will have a \"moray\" instance for every shard. This means that all the \"moraydelink\" can be run in the driver DC. The steps below assume that. Copy the moraydelink scripts to a moray instance for the appropriate shard. ```bash regionname=$(bash /lib/sdc/config.sh -json | json regionname) dnsdomain=$(bash /lib/sdc/config.sh -json | json dnsdomain) moray_insts=$(manta-adm show moray -Ho shard,zonename | sed 's/^ *//g' | sort) indexshards=$(sdc-sapi '/applications?name=manta&includemaster=true' | json -H 0.metadata.INDEXMORAYSHARDS | json -e 'this.sh = this.host.split(\".\")[0]' -a sh) morayselectedinsts=\"\" for shard in $index_shards; do shardhost=$shard.moray.$regionname.$dns_domain delinkscript=/var/db/snaplink-cleanup/delink/${shardhost}_moraydelink.sh if [[ ! -f \"$delink_script\" ]]; then echo \"error: $shardhost: moraydelink script missing: $delinkscript\" continue fi echo \"\" zonename=$(echo \"$moray_insts\" | awk \"/^$shard /{print \\$2}\" | head -1) if [[ -z \"$zonename\" ]]; then echo \"error: $shard_host: could not find a moray instance for shard $shard in this DC\" continue fi morayselectedinsts=\"$morayselectedinsts,$zonename\" manta-oneach -z $zonename -X -d /var/tmp -g $delink_script echo \"$shardhost: copied script to '/var/tmp/${shardhost}_moraydelink.sh' on zone '$zonename'\" done ``` Start the moraydelink scripts on each shard. The following will run them all in parallel: ```bash manta-oneach -z \"$morayselectedinsts\" 'servicename=$(json -f /opt/smartdc/moray/etc/config.json servicename); nohup bash /var/tmp/${servicename}moraydelink.sh &' ``` Check that each moraydelink script ran successfully. 
The delink scripts generate a \"$name.success\" file on successful completion. We use that to check for success. ```bash manta-oneach -z \"$morayselectedinsts\" 'cat /var/tmp/*_moraydelink.success' ``` For example: ``` [root@headnode (nightly-1) ~]# manta-oneach -z \"$morayselectedinsts\" 'cat /var/tmp/*_moraydelink.success' SERVICE ZONE OUTPUT moray 97a6655c [20200406T194318Z] Completed moraydelink successfully. ``` If a" }, { "data": "file is not found for a given moray instance, then one of the following is why: The moraydelink script is still running. The moraydelink script failed. Look at the \"/var/tmp/moraydelink.$timestamp.xtrace.log\" file on the moray instance for details. Re-run `mantav2-migrate snaplink-cleanup` and confirm the scripts have successfully been run by entering \"delinked\". After confirming, `mantav2-migrate snaplink-cleanup` will remove the `SNAPLINKCLEANUPREQUIRED` metadatum to indicate that all snaplinks have been removed! ``` [root@headnode (mydc) ~]# mantav2-migrate snaplink-cleanup ... Enter \"delinked\" when all delink scripts have been successfully run: delinked Removing \"SNAPLINKCLEANUPREQUIRED\" metadatum. All snaplinks have been removed! ``` However, there are a few more steps. Now that snaplinks have been removed, the old `ACCOUNTSSNAPLINKSDISABLED` metadata is obsolete. Print the current value (for record keeping) and remove it from the SAPI metadata: ``` mantaapp=$(sdc-sapi '/applications?name=manta&includemaster=true' | json -H 0.uuid) sapiadm get \"$mantaapp\" | json metadata.ACCOUNTSSNAPLINKS_DISABLED echo '{\"action\": \"delete\", \"metadata\": {\"ACCOUNTSSNAPLINKSDISABLED\": null}}' | sapiadm update \"$manta_app\" ``` Now that the `SNAPLINKCLEANUPREQUIRED` config var has been removed, all webapi instances need to be poked to get this new config. You must ensure every webapi instance restarts with updated config. A blunt process to do this is to run the following in every Manta DC in the region: ``` manta-oneach -s webapi 'svcadm disable -s config-agent && svcadm enable -s config-agent && svcadm restart svc:/manta/application/muskie:muskie-*' ``` However, in a larger Manta with many webapi instances, you may want to space those out. Back in step 3.3, the \"snaplink-sherlock.sh\" script runs left some data (VMs and snapshots) that should be cleaned up. 
Run the following on the headnode global zone of every DC in this Manta region to remove the \"snaplink-sherlock-*\" VMs: ``` sdc-oneachnode -a 'vmadm list alias=~^snaplink-sherlock- owner_uuid=00000000-0000-0000-0000-000000000000' sdc-oneachnode -a 'vmadm lookup alias=~^snaplink-sherlock- owner_uuid=00000000-0000-0000-0000-000000000000 | while read uuid; do echo \"removing snaplink-sherlock VM $uuid\"; vmadm delete $uuid; done' ``` Run the following on the headnode global zone of every DC in this Manta region to remove the \"manatee@sherlock-*\" ZFS snapshots: ``` sdc-oneachnode -a \"zfs list -t snapshot | grep manatee@sherlock- | awk '{print \\$1}'\" sdc-oneachnode -a \"zfs list -t snapshot | grep manatee@sherlock- | awk '{print \\$1}' | xargs -n1 zfs destroy -v\" ``` An example run looks like this: ``` [root@headnode (mydc) /var/tmp]# sdc-oneachnode -a 'vmadm lookup alias=~^snaplink-sherlock- owner_uuid=00000000-0000-0000-0000-000000000000 | while read uuid; do echo \"removing snaplink-sherlock VM $uuid\"; vmadm delete $uuid; done' === Output from 564d4042-6b0c-8ab9-ae54-c445386f951c (headnode): removing snaplink-sherlock VM 19f12a13-d124-4255-9258-1f2f51138f0c removing snaplink-sherlock VM 3a104161-d2cc-43e6-aeb4-462154aa7406 removing snaplink-sherlock VM 5e9e3a2a-efe6-4dbd-8c21-1bbdbf5c72d2 removing snaplink-sherlock VM 61a455fd-68a1-4b21-9676-38b191efca86 removing snaplink-sherlock VM 0364e94d-e831-430e-9393-96f85bd36702 [root@headnode (mydc) /var/tmp]# sdc-oneachnode -a \"zfs list -t snapshot | grep manatee@sherlock- | awk '{print \\$1}' | xargs -n1 zfs destroy -v\" === Output from 564d4042-6b0c-8ab9-ae54-c445386f951c (headnode): will destroy zones/f8bd09a5-769e-4dd4-b53d-ddc3a56c8ae6/data/manatee@sherlock-24879 will reclaim 267K will destroy zones/f8bd09a5-769e-4dd4-b53d-ddc3a56c8ae6/data/manatee@sherlock-39245 will reclaim 248K will destroy zones/f8bd09a5-769e-4dd4-b53d-ddc3a56c8ae6/data/manatee@sherlock-40255 will reclaim 252K will destroy zones/f8bd09a5-769e-4dd4-b53d-ddc3a56c8ae6/data/manatee@sherlock-41606 will reclaim 256K will destroy zones/f8bd09a5-769e-4dd4-b53d-ddc3a56c8ae6/data/manatee@sherlock-42555 will reclaim 257K ``` It is probably a good idea to archive the snaplink-cleanup files for record keeping. For example, run this on the driver DC: ``` (cd /var/db && tar czf /var/tmp/snaplink-cleanup-$(bash /lib/sdc/config.sh -json | json region_name).tgz snaplink-cleanup) ls -l /var/tmp/snaplink-cleanup*.tgz ``` And then attach or archive that tarball somewhere (perhaps attaching it to your process ticket tracking snaplink removal, if small enough). The new garbage-collector system should be deployed. As a prerequisite, update all \"moray\" service instances to a \"mantav2-moray\" image after 20200413 (to include the fix for MANTA-5155). As a prerequisite, update all \"electric-moray\" service instances to a \"mantav2-electric-moray\" image after 20200130 (to include the MANTA-4992" }, { "data": "Update all \"storage\" service instances to a recent (2020-03-19 or later) \"mantav2-storage\" image. A direct way to do this is as follows. A production Manta operator may prefer to space out these storage node updates. 
``` updates-imgadm -C $(sdcadm channel get) list name=mantav2-storage lateststorageimage=$(updates-imgadm -C $(sdcadm channel get) list name=mantav2-storage -H -o uuid --latest) sdc-imgadm import -S https://updates.tritondatacenter.com $lateststorageimage manta-adm show -js >/var/tmp/config.json vi /var/tmp/config.json # update storage instances manta-adm update /var/tmp/config.json ``` Follow [the GC deployment steps](https://github.com/TritonDataCenter/manta-garbage-collector/tree/master/docs#deploying-the-garbage-collector). There are a number of Manta services that are obsoleted by mantav2 and can (and should) be removed at this time. (Note: Removal of these services also works before any of the above mantav2 migration steps, if that is easier.) The services (and their instances) to remove are: jobpuller jobsupervisor marlin-dashboard medusa marlin marlin-agent (This is an agent on each server, rather than a SAPI service.) (Internal Joyent ops should look at the appropriate [change-mgmt template](https://github.com/TritonDataCenter/change-mgmt/blob/master/change-plan-templates/mantav2/JPC/0-jobtier-remove.md) for this procedure.) A simplified procedure is as follows: Run the following in each Manta DC to remove all service instances: ```bash function delete_insts { local svc=$1 if [[ -z \"$svc\" ]]; then echo \"delete_insts error: 'svc' is empty\" >&2 return 1 fi echo \"\" echo \"# delete service '$svc' instances\" manta-adm show -Ho zonename \"$svc\" | xargs -n1 -I% sdc-sapi /instances/% -X DELETE } if [[ ! -f /var/tmp/manta-config-before-jobs-infra-cleanup.json ]]; then manta-adm show -js >/var/tmp/manta-config-before-jobs-infra-cleanup.json fi JOBS_SERVICES=\"jobpuller jobsupervisor marlin-dashboard medusa marlin\" for svc in $JOBS_SERVICES; do delete_insts \"$svc\" done ``` Run the following in each Manta DC to remove the marlin agent on every server: ``` sdc-oneachnode -a \"apm uninstall marlin\" ``` Notes: This can be re-run to catch missing servers. If the marlin agent is already removed, this will still run successfully on a server. Until is complete the marlin agent is still a part of the Triton \"agentsshar\", and hence will be re-installed when Triton agents are updated. This causes no harm. The above command can be re-run to re-remove the marlin agents. Run the following in one Manta DC to clear out SAPI service entries: ```bash function delete_svc { local svc=$1 if [[ -z \"$svc\" ]]; then echo \"delete_svc error: 'svc' is empty\" >&2 return 1 fi echo \"\" echo \"# delete manta SAPI service '$svc'\" local mantaapp=$(sdc-sapi \"/applications?name=manta&includemaster=true\" | json -H 0.uuid) if [[ -z \"$manta_app\" ]]; then echo \"delete_svc error: could not find 'manta' app\" >&2 return 1 fi local serviceuuid=$(sdc-sapi \"/services?name=$svc&applicationuuid=$mantaapp&includemaster=true\" | json -H 0.uuid) if [[ -z \"$service_uuid\" ]]; then echo \"delete_svc error: could not find manta service '$svc'\" >&2 return 1 fi sdc-sapi \"/services/$service_uuid\" -X DELETE } JOBS_SERVICES=\"jobpuller jobsupervisor marlin-dashboard medusa marlin\" for svc in $JOBS_SERVICES; do delete_svc \"$svc\" done ``` It is recommended that all current services be updated to the latest \"mantav2-\\*\" image (the \"webapi\" and \"storage\" services have already been done above). The \"storinfo\" service should be deployed. (TODO: The operator guide, or here, should provide details for sizing/scaling the storinfo service.) 
At this point the services for the new \"Rebalancer\" system and the \"Buckets API\" can be deployed. (TODO: The operator guide and/or here should provide details for deploying those.) There remain a few things that can be cleaned out of the system. They are: Clean out the old GC-related `mantadeletelog` (`MANTADELETELOGCLEANUPREQUIRED`). Clean out obsolete reports files under `/:login/reports/` (`REPORTSCLEANUPREQUIRED`). Clean out archived jobs files under `/:login/jobs/` (`ARCHIVEDJOBSCLEANUP_REQUIRED`). However, details on how to clean those up are not yet ready (TODO). None of this data causes any harm to the operation of mantav2." } ]
{ "category": "Runtime", "file_name": "mantav2-migration.md", "project_name": "Triton Object Storage", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Velero plugin system\" layout: docs Velero uses storage provider plugins to integrate with a variety of storage systems to support backup and snapshot operations. For server installation, Velero requires that at least one plugin is added (with the `--plugins` flag). The plugin will be either of the type object store or volume snapshotter, or a plugin that contains both. An exception to this is that when the user is not configuring a backup storage location or a snapshot storage location at the time of install, this flag is optional. Any plugin can be added after Velero has been installed by using the command `velero plugin add <registry/image:version>`. Example with a dockerhub image: `velero plugin add velero/velero-plugin-for-aws:v1.0.0`. In the same way, any plugin can be removed by using the command `velero plugin remove <registry/image:version>`. Anyone can add integrations for any platform to provide additional backup and volume storage without modifying the Velero codebase. To write a plugin for a new backup or volume storage platform, take a look at our and at our documentation for . After you publish your plugin on your own repository, open a PR that adds a link to it under the appropriate list of page in our documentation. You can also add the to your repo, and it will be shown under the aggregated list of repositories automatically." } ]
{ "category": "Runtime", "file_name": "overview-plugins.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "The load balancer allocator controller looks for services with the type LoadBalancer and tries to allocate addresses for it if needed. The controller doesn't enable any announcement of the addresses by default, so `--advertise-loadbalancer-ip` should be set to true and BGP peers configured. By default the controller allocates addresses for all LoadBalancer services with the where `loadBalancerClass` is empty or set to one of \"default\" or \"kube-router\". If `--loadbalancer-default-class` is set to false, the controller will only handle services with the class set to \"kube-router\". The controller needs some extra permissions to get, create and update leases for leader election and to update services with allocated addresses. Example permissions: ```yaml kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: kube-router namespace: kube-system rules: apiGroups: \"coordination.k8s.io\" resources: leases verbs: get create update apiGroups: \"\" resources: services/status verbs: update ``` The controller uses the environment variable `POD_NAME` as the identify for the lease used for leader election. Using the kubernetes downward api to set `POD_NAME` to the pod name the lease identify will match the current leader. ```yaml apiVersion: apps/v1 kind: DaemonSet metadata: labels: k8s-app: kube-router tier: node name: kube-router namespace: kube-system spec: ... template: metadata: .... spec: ... env: name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name ... ``` The environment variable `POD_NAMESPACE` can also be specified to set the namespace used for the lease. By default the namespace is looked up from within the pod using `/var/run/secrets/kubernetes.io/serviceaccount/namespace`. When running the controller outside a pod, both `PODNAME` and `PODNAMESPACE` must set for the controller to work. `POD_NAME` should be unique per instance, so using for example the hostname of the machine might be a good idea. `POD_NAMESPACE` must be the same across all instances running in the same cluster. It's not possible to specify the addresses for the load balancer services. A externalIP service can be used instead." } ]
{ "category": "Runtime", "file_name": "load-balancer-allocator.md", "project_name": "Kube-router", "subcategory": "Cloud Native Network" }
[ { "data": "title: Rook Upgrades This guide will walk through the steps to upgrade the software in a Rook cluster from one version to the next. This guide focuses on updating the Rook version for the management layer, while the guide focuses on updating the data layer. Upgrades for both the operator and for Ceph are entirely automated except where Rook's permissions need to be explicitly updated by an admin or when incompatibilities need to be addressed manually due to customizations. We welcome feedback and opening issues! This guide is for upgrading from Rook v1.13.x to Rook v1.14.x. Please refer to the upgrade guides from previous releases for supported upgrade paths. Rook upgrades are only supported between official releases. For a guide to upgrade previous versions of Rook, please refer to the version of documentation for those releases. !!! important Rook releases from master are expressly unsupported. It is strongly recommended to use of Rook. Unreleased versions from the master branch are subject to changes and incompatibilities that will not be supported in the official releases. Builds from the master branch can have functionality changed or removed at any time without compatibility support and without prior notice. The minimum supported version of Kubernetes is v1.25. Upgrade to Kubernetes v1.25 or higher before upgrading Rook. The Rook operator config `CSIENABLEREAD_AFFINITY` was removed. v1.13 clusters that have modified this value to be `\"true\"` must set the option as desired in each CephCluster as documented before upgrading to v1.14. Rook is beginning the process of deprecating CSI network \"holder\" pods. If there are pods named `csi-plugin-holder-` in the Rook operator namespace, see the to disable them. This is optional for v1.14, but will be required in a future release. In the operator helm chart, the images for the CSI driver are now specified with separate `repository` and `tag` values. If the CSI images have been customized, convert them from the `image` value to the separated `repository` and `tag` values. With this upgrade guide, there are a few notes to consider: WARNING*: Upgrading a Rook cluster is not without risk. There may be unexpected issues or obstacles that damage the integrity and health the storage cluster, including data loss. The Rook cluster's storage may be unavailable for short periods during the upgrade process for both Rook operator updates and for Ceph version updates. Read this document in full before undertaking a Rook cluster" }, { "data": "Unless otherwise noted due to extenuating requirements, upgrades from one patch release of Rook to another are as simple as updating the common resources and the image of the Rook operator. For example, when Rook v1.14.1 is released, the process of updating from v1.14.0 is as simple as running the following: ```console git clone --single-branch --depth=1 --branch v1.14.1 https://github.com/rook/rook.git cd rook/deploy/examples ``` If the Rook Operator or CephCluster are deployed into a different namespace than `rook-ceph`, see the section for instructions on how to change the default namespaces in `common.yaml`. Then, apply the latest changes from v1.14, and update the Rook Operator image. ```console kubectl apply -f common.yaml -f crds.yaml kubectl -n rook-ceph set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:v1.14.1 ``` As exemplified above, it is a good practice to update Rook common resources from the example manifests before any update. 
The common resources and CRDs might not be updated with every release, but Kubernetes will only apply updates to the ones that changed. Also update optional resources like Prometheus monitoring noted more fully in the . If Rook is installed via the Helm chart, Helm will handle some details of the upgrade itself. The upgrade steps in this guide will clarify what Helm handles automatically. !!! important If there are pods named `csi-plugin-holder-` in the Rook operator namespace, set the new config `csi.disableHolderPods: false` in the values.yaml before upgrading to v1.14. The `rook-ceph` helm chart upgrade performs the Rook upgrade. The `rook-ceph-cluster` helm chart upgrade performs a if the Ceph image is updated. The `rook-ceph` chart should be upgraded before `rook-ceph-cluster`, so the latest operator has the opportunity to update custom resources as necessary. !!! note Be sure to update to a In order to successfully upgrade a Rook cluster, the following prerequisites must be met: The cluster should be in a healthy state with full functionality. Review the in order to verify a CephCluster is in a good starting state. All pods consuming Rook storage should be created, running, and in a steady state. The examples given in this guide upgrade a live Rook cluster running `v1.13.7` to the version `v1.14.0`. This upgrade should work from any official patch release of Rook v1.13 to any official patch release of v1.14. Let's get started! These instructions will work for as long the environment is parameterized correctly. Set the following environment variables, which will be used throughout this document. ```console export ROOKOPERATORNAMESPACE=rook-ceph export ROOKCLUSTERNAMESPACE=rook-ceph ``` !!! hint Common resources and CRDs are automatically updated when using Helm charts. First, apply updates to Rook common resources. This includes modified privileges (RBAC) needed by the Operator. Also update the Custom Resource Definitions (CRDs). Get the latest common resources manifests that contain the latest" }, { "data": "```console git clone --single-branch --depth=1 --branch master https://github.com/rook/rook.git cd rook/deploy/examples ``` If the Rook Operator or CephCluster are deployed into a different namespace than `rook-ceph`, update the common resource manifests to use your `ROOKOPERATORNAMESPACE` and `ROOKCLUSTERNAMESPACE` using `sed`. ```console sed -i.bak \\ -e \"s/\\(.\\):.# namespace:operator/\\1: $ROOKOPERATORNAMESPACE # namespace:operator/g\" \\ -e \"s/\\(.\\):.# namespace:cluster/\\1: $ROOKCLUSTERNAMESPACE # namespace:cluster/g\" \\ common.yaml ``` Apply the resources. ```console kubectl apply -f common.yaml -f crds.yaml ``` If is enabled, follow this step to upgrade the Prometheus RBAC resources as well. ```console kubectl apply -f deploy/examples/monitoring/rbac.yaml ``` !!! hint The operator is automatically updated when using Helm charts. The largest portion of the upgrade is triggered when the operator's image is updated to `v1.14.x`. When the operator is updated, it will proceed to update all of the Ceph daemons. ```console kubectl -n $ROOKOPERATORNAMESPACE set image deploy/rook-ceph-operator rook-ceph-operator=rook/ceph:master ``` !!! hint This is automatically updated if custom CSI image versions are not set. !!! important The minimum supported version of Ceph-CSI is v3.8.0. Update to the latest Ceph-CSI drivers if custom CSI images are specified. See the documentation. !!! note If using snapshots, refer to the . 
Watch now in amazement as the Ceph mons, mgrs, OSDs, rbd-mirrors, MDSes and RGWs are terminated and replaced with updated versions in sequence. The cluster may be unresponsive very briefly as mons update, and the Ceph Filesystem may fall offline a few times while the MDSes are upgrading. This is normal. The versions of the components can be viewed as they are updated: ```console watch --exec kubectl -n $ROOKCLUSTERNAMESPACE get deployments -l rookcluster=$ROOKCLUSTER_NAMESPACE -o jsonpath='{range .items[*]}{.metadata.name}{\" \\treq/upd/avl: \"}{.spec.replicas}{\"/\"}{.status.updatedReplicas}{\"/\"}{.status.readyReplicas}{\" \\trook-version=\"}{.metadata.labels.rook-version}{\"\\n\"}{end}' ``` As an example, this cluster is midway through updating the OSDs. When all deployments report `1/1/1` availability and `rook-version=v1.14.0`, the Ceph cluster's core components are fully updated. ```console Every 2.0s: kubectl -n rook-ceph get deployment -o j... rook-ceph-mgr-a req/upd/avl: 1/1/1 rook-version=v1.14.0 rook-ceph-mon-a req/upd/avl: 1/1/1 rook-version=v1.14.0 rook-ceph-mon-b req/upd/avl: 1/1/1 rook-version=v1.14.0 rook-ceph-mon-c req/upd/avl: 1/1/1 rook-version=v1.14.0 rook-ceph-osd-0 req/upd/avl: 1// rook-version=v1.14.0 rook-ceph-osd-1 req/upd/avl: 1/1/1 rook-version=v1.13.7 rook-ceph-osd-2 req/upd/avl: 1/1/1 rook-version=v1.13.7 ``` An easy check to see if the upgrade is totally finished is to check that there is only one `rook-version` reported across the cluster. ```console This cluster is not yet finished: rook-version=v1.13.7 rook-version=v1.14.0 This cluster is finished: rook-version=v1.14.0 ``` At this point, the Rook operator should be running version `rook/ceph:v1.14.0`. Verify the CephCluster health using the . Rook is beginning the process of deprecating CSI network \"holder\" pods. If there are pods named `csi-plugin-holder-` in the Rook operator namespace, see the to disable them. This is optional for v1.14, but will be required in a future release." } ]
{ "category": "Runtime", "file_name": "rook-upgrade.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "| Author | | | | | | Date | 2022-12-07 | | Email | [email protected] | 1.25 https://github.com/kubernetes/cri-api Currently ignored for pods sharing the host networking namespace list of additional ips (not inclusive of PodSandboxNetworkStatus.Ip) of the PodSandBoxNetworkStatus WindowsPodSandboxConfig overhead. Optional overhead represents the overheads associated with this sandbox resources. Optional resources represents the sum of container resources for this sandbox seccomp. Seccomp profile for the sandbox apparmor. AppArmor profile for the sandbox seccompprofilepath userns_options. UsernsOptions for this pod sandbox pid. Support from (POD, CONTAINER, NODE) to (POD, CONTAINER, NODE, TARGET) ```proto // UserNamespace describes the intended user namespace configuration for a pod sandbox. message UserNamespace { // Mode is the NamespaceMode for this UserNamespace. // Note: NamespaceMode for UserNamespace currently supports only POD and NODE, not CONTAINER OR TARGET. NamespaceMode mode = 1; // Uids specifies the UID mappings for the user namespace. repeated IDMapping uids = 2; // Gids specifies the GID mappings for the user namespace. repeated IDMapping gids = 3; } ``` ```proto // IDMapping describes host to container ID mappings for a pod sandbox. message IDMapping { // HostId is the id on the host. uint32 host_id = 1; // ContainerId is the id in the container. uint32 container_id = 2; // Length is the size of the range to map. uint32 length = 3; } ``` seccomp. Seccomp profile for the container apparmor. AppArmor profile for the container apparmor_profile seccompprofilepath addambientcapabilities. List of ambient capabilities to add Dropping a capability will drop it from all sets add/drop capabilities // If a capability is added to only the add_capabilities list then it gets added to permitted, // inheritable, effective and bounding sets, i.e. all sets except the ambient set. // If a capability is added to only the addambientcapabilities list then it gets added to all sets, i.e permitted // inheritable, effective, bounding and ambient sets. // If a capability is added to addcapabilities and addambient_capabilities lists then it gets added to all sets, i.e. // permitted, inheritable, effective, bounding and ambient sets. WindowsContainerResources. annotations. Unstructured key-value map holding arbitrary additional information for container resources updating unified. Unified resources for cgroup v2 memoryswaplimitinbytes. Memory swap limit in bytes resources. Resource limits configuration of the container WindowsContainerResources ```proto // ContainerResources holds resource limits configuration for a container. message ContainerResources { // Resource limits configuration specific to Linux container. LinuxContainerResources linux = 1; // Resource limits configuration specific to Windows container. WindowsContainerResources windows = 2; } ``` pinned. Recommendation on whether this image should be exempt from garbage collection" }, { "data": "Total CPU usage (sum of all cores) averaged over the sample window available_bytes. Available memory for use usage_bytes. Total memory in use rss_bytes. The amount of anonymous and swap cache memory (includes transparent hugepages) page_faults. Cumulative number of minor page faults majorpagefaults. Cumulative number of major page faults PodSandboxStats returns stats of the pod sandbox ```proto message PodSandboxStatsRequest { // ID of the pod sandbox for which to retrieve stats. 
string podsandboxid = 1; } message PodSandboxStatsResponse { PodSandboxStats stats = 1; } ``` WindowsPodSandboxStats ```proto // PodSandboxStats provides the resource usage statistics for a pod. // The linux or windows field will be populated depending on the platform. message PodSandboxStats { // Information of the pod. PodSandboxAttributes attributes = 1; // Stats from linux. LinuxPodSandboxStats linux = 2; // Stats from windows. WindowsPodSandboxStats windows = 3; } ``` ```proto // PodSandboxAttributes provides basic information of the pod sandbox. message PodSandboxAttributes { // ID of the pod sandbox. string id = 1; // Metadata of the pod sandbox. PodSandboxMetadata metadata = 2; // Key-value pairs that may be used to scope and select individual resources. map<string,string> labels = 3; // Unstructured key-value map holding arbitrary metadata. // Annotations MUST NOT be altered by the runtime; the value of this field // MUST be identical to that of the corresponding PodSandboxStatus used to // instantiate the PodSandbox this status represents. map<string,string> annotations = 4; } ``` ```proto // LinuxPodSandboxStats provides the resource usage statistics for a pod sandbox on linux. message LinuxPodSandboxStats { // CPU usage gathered for the pod sandbox. CpuUsage cpu = 1; // Memory usage gathered for the pod sandbox. MemoryUsage memory = 2; // Network usage gathered for the pod sandbox NetworkUsage network = 3; // Stats pertaining to processes in the pod sandbox. ProcessUsage process = 4; // Stats of containers in the measured pod sandbox. repeated ContainerStats containers = 5; } ``` ```proto // NetworkUsage contains data about network resources. message NetworkUsage { // The time at which these stats were updated. int64 timestamp = 1; // Stats for the default network interface. NetworkInterfaceUsage default_interface = 2; // Stats for all found network interfaces, excluding the default. repeated NetworkInterfaceUsage interfaces = 3; } ``` ```proto // NetworkInterfaceUsage contains resource value data about a network interface. message NetworkInterfaceUsage { // The name of the network interface. string name = 1; // Cumulative count of bytes received. UInt64Value rx_bytes = 2; // Cumulative count of receive errors encountered. UInt64Value rx_errors = 3; // Cumulative count of bytes transmitted. UInt64Value tx_bytes = 4; // Cumulative count of transmit errors" }, { "data": "UInt64Value tx_errors = 5; } ``` ListPodSandboxStats returns stats of the pod sandboxes matching a filter ```proto message ListPodSandboxStatsRequest { // Filter for the list request. PodSandboxStatsFilter filter = 1; } message ListPodSandboxStatsResponse { // Stats of the pod sandbox. repeated PodSandboxStats stats = 1; } ``` ```proto // PodSandboxStatsFilter is used to filter the list of pod sandboxes to retrieve stats for. // All those fields are combined with 'AND'. message PodSandboxStatsFilter { // ID of the pod sandbox. string id = 1; // LabelSelector to select matches. // Only api.MatchLabels is supported for now and the requirements // are ANDed. MatchExpressions is not supported yet. map<string, string> label_selector = 2; } ``` ```proto // ProcessUsage are stats pertaining to processes. message ProcessUsage { // The time at which these stats were updated. int64 timestamp = 1; // Number of processes. UInt64Value process_count = 2; } ``` CheckpointContainer checkpoints a container ```proto message CheckpointContainerRequest { // ID of the container to be checkpointed. 
string container_id = 1; // Location of the checkpoint archive used for export string location = 2; // Timeout in seconds for the checkpoint to complete. // Timeout of zero means to use the CRI default. // Timeout > 0 means to use the user specified timeout. int64 timeout = 3; } message CheckpointContainerResponse {} ``` GetContainerEvents gets container events from the CRI runtime ```proto message GetEventsRequest {} message ContainerEventResponse { // ID of the container string container_id = 1; // Type of the container event ContainerEventType containereventtype = 2; // Creation timestamp of this event int64 created_at = 3; // ID of the sandbox container PodSandboxMetadata podsandboxmetadata = 4; } ``` ```proto enum ContainerEventType { // Container created CONTAINERCREATEDEVENT = 0; // Container started CONTAINERSTARTEDEVENT = 1; // Container stopped CONTAINERSTOPPEDEVENT = 2; // Container deleted CONTAINERDELETEDEVENT = 3; } ``` ```proto // A security profile which can be used for sandboxes and containers. message SecurityProfile { // Available profile types. enum ProfileType { // The container runtime default profile should be used. RuntimeDefault = 0; // Disable the feature for the sandbox or the container. Unconfined = 1; // A pre-defined profile on the node should be used. Localhost = 2; } // Indicator which `ProfileType` should be applied. ProfileType profile_type = 1; // Indicates that a pre-defined profile on the node should be used. // Must only be set if `ProfileType` is `Localhost`. // For seccomp, it must be an absolute path to the seccomp profile. // For AppArmor, this field is the AppArmor `<profile name>/` string localhost_ref = 2; } ``` WindowsPodSandboxConfig WindowsPodSandboxStats WindowsSandboxSecurityContext WindowsContainerSecurityContext WindowsContainerResources" } ]
{ "category": "Runtime", "file_name": "CRI_1.25_interface_change.md", "project_name": "iSulad", "subcategory": "Container Runtime" }
[ { "data": "(instances-configure)= You can configure instances by setting {ref}`instance-properties`, {ref}`instance-options`, or by adding and configuring {ref}`devices`. See the following sections for instructions. ```{note} To store and reuse different instance configurations, use {ref}`profiles <profiles>`. ``` (instances-configure-options)= You can specify instance options when you {ref}`create an instance <instances-create>`. Alternatively, you can update the instance options after the instance is created. ````{tabs} ```{group-tab} CLI Use the command to update instance options. Specify the instance name and the key and value of the instance option: incus config set <instancename> <optionkey>=<optionvalue> <optionkey>=<option_value> ... ``` ```{group-tab} API Send a PATCH request to the instance to update instance options. Specify the instance name and the key and value of the instance option: incus query --request PATCH /1.0/instances/<instancename> --data '{\"config\": {\"<optionkey>\":\"<optionvalue>\",\"<optionkey>\":\"<option_value>\"}}' See for more information. ``` ```` See {ref}`instance-options` for a list of available options and information about which options are available for which instance type. For example, change the memory limit for your container: ````{tabs} ```{group-tab} CLI To set the memory limit to 8 GiB, enter the following command: incus config set my-container limits.memory=8GiB ``` ```{group-tab} API To set the memory limit to 8 GiB, send the following request: incus query --request PATCH /1.0/instances/my-container --data '{\"config\": {\"limits.memory\":\"8GiB\"}}' ``` ```` ```{note} Some of the instance options are updated immediately while the instance is running. Others are updated only when the instance is restarted. See the \"Live update\" information in the {ref}`instance-options` reference for information about which options are applied immediately while the instance is running. ``` (instances-configure-properties)= ````{tabs} ```{group-tab} CLI To update instance properties after the instance is created, use the command with the `--property` flag. Specify the instance name and the key and value of the instance property: incus config set <instancename> <propertykey>=<propertyvalue> <propertykey>=<property_value> ... --property Using the same flag, you can also unset a property just like you would unset a configuration option: incus config unset <instancename> <propertykey> --property You can also retrieve a specific property value with: incus config get <instancename> <propertykey> --property ``` ```{group-tab} API To update instance properties through the API, use the same mechanism as for configuring instance options. The only difference is that properties are on the root level of the configuration, while options are under the `config` field. Therefore, to set an instance property, send a PATCH request to the instance: incus query --request PATCH /1.0/instances/<instancename> --data '{\"<propertykey>\":\"<propertyvalue>\",\"<propertykey>\":\"property_value>\"}}' To unset an instance property, send a PUT request that contains the full instance configuration that you want except for the property that you want to unset. See and for more information. ``` ```` (instances-configure-devices)= Generally, devices can be added or removed for a container while it is running. VMs support hotplugging for some device types, but not all. See {ref}`devices` for a list of available device types and their options. 
```{note} Every device entry is identified by a name unique to the" }, { "data": "Devices from profiles are applied to the instance in the order in which the profiles are assigned to the instance. Devices defined directly in the instance configuration are applied last. At each stage, if a device with the same name already exists from an earlier stage, the whole device entry is overridden by the latest definition. Device names are limited to a maximum of 64 characters. ``` `````{tabs} ````{group-tab} CLI To add and configure an instance device for your instance, use the command. Specify the instance name, a device name, the device type and maybe device options (depending on the {ref}`device type <devices>`): incus config device add <instancename> <devicename> <devicetype> <deviceoptionkey>=<deviceoptionvalue> <deviceoptionkey>=<deviceoption_value> ... For example, to add the storage at `/share/c1` on the host system to your instance at path `/opt`, enter the following command: incus config device add my-container disk-storage-device disk source=/share/c1 path=/opt To configure instance device options for a device that you have added earlier, use the command: incus config device set <instancename> <devicename> <deviceoptionkey>=<deviceoptionvalue> <deviceoptionkey>=<deviceoptionvalue> ... ```{note} You can also specify device options by using the `--device` flag when {ref}`creating an instance <instances-create>`. This is useful if you want to override device options for a device that is provided through a {ref}`profile <profiles>`. ``` To remove a device, use the command. See for a full list of available commands. ```` ````{group-tab} API To add and configure an instance device for your instance, use the same mechanism of patching the instance configuration. The device configuration is located under the `devices` field of the configuration. Specify the instance name, a device name, the device type and maybe device options (depending on the {ref}`device type <devices>`): incus query --request PATCH /1.0/instances/<instancename> --data '{\"devices\": {\"<devicename>\": {\"type\":\"<devicetype>\",\"<deviceoptionkey>\":\"<deviceoptionvalue>\",\"<deviceoptionkey>\":\"deviceoption_value>\"}}}' For example, to add the storage at `/share/c1` on the host system to your instance at path `/opt`, enter the following command: incus query --request PATCH /1.0/instances/my-container --data '{\"devices\": {\"disk-storage-device\": {\"type\":\"disk\",\"source\":\"/share/c1\",\"path\":\"/opt\"}}}' See for more information. ```` ````` ````{tabs} ```{group-tab} CLI To display the current configuration of your instance, including writable instance properties, instance options, devices and device options, enter the following command: incus config show <instance_name> --expanded ``` ```{group-tab} API To retrieve the current configuration of your instance, including writable instance properties, instance options, devices and device options, send a GET request to the instance: incus query /1.0/instances/<instance_name> See for more information. ``` ```` (instances-configure-edit)= `````{tabs} ````{group-tab} CLI To edit the full instance configuration, including writable instance properties, instance options, devices and device options, enter the following command: incus config edit <instance_name> ```{note} For convenience, the command displays the full configuration including read-only instance properties. However, you cannot edit those properties. Any changes are ignored. 
``` ```` ````{group-tab} API To update the full instance configuration, including writable instance properties, instance options, devices and device options, send a PUT request to the instance: incus query --request PUT /1.0/instances/<instance_name> --data '<instance_configuration>' See for more information. ```{note} If you include changes to any read-only instance properties in the configuration you provide, they are ignored. ``` ```` `````" } ]
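The `incus query` examples above are thin wrappers around the REST API served on the local Unix socket, so the same updates can be made from any HTTP client. Below is a minimal Go sketch (an illustration, not official client code) under two assumptions: the default socket path `/var/lib/incus/unix.socket` and an existing instance named `my-container`. It performs the same PATCH shown earlier to set `limits.memory`.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Talk to the Incus REST API over its local Unix socket.
	// The socket path is an assumption; adjust it for your installation.
	const socket = "/var/lib/incus/unix.socket"

	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}

	// Equivalent of:
	//   incus query --request PATCH /1.0/instances/my-container \
	//     --data '{"config": {"limits.memory":"8GiB"}}'
	body := bytes.NewBufferString(`{"config": {"limits.memory": "8GiB"}}`)
	req, err := http.NewRequest(http.MethodPatch, "http://unix/1.0/instances/my-container", body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```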
{ "category": "Runtime", "file_name": "instances_configure.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "Below is a list of adopters of Rook in production environments that have publicly shared their usage as well as the benefits provided by Rook that their business relies on. If you are an adopter of Rook and not yet on this list, we encourage you to add your organization here as well! The goal for this list is give inspiration to others who are starting their Rook journey. Thank you to all adopters and contributors of the Rook project! To add your organization to this list, choose one of the following options: to directly update this list with the information Send an email to with the information For the latter two options, a maintainer will submit a PR to update the table. Please reach out to us on with any questions. If submitting a description, the addition will be at the end of first part of the list. If a description is not provided, the addition will be at the end of the full list. [Calit2 (California Institute for Telecommunications and Information Technology)](http://www.calit2.net/) is one of 4 institutes formed by a joint partnership of University of California and the state of California with the goal of *\"inventing the university research environment of the future\"*. They operate one of the largest known Rook clusters in production and they are using Rook to provide cheap, reliable, and fast storage to scientific users. is the current Norwegian public welfare agency, responsible for 1/3 of the state budget of Norway. They find a massive simplification of management and maintenance for their Ceph clusters by adopting Rook. delivers \"SaaS On-Prem\"* and are the creators of open-source : a custom Kubernetes distro creator that software vendors use to package and distribute production-grade Kubernetes infrastructure. Rook is a default add-on in kURL, so all installations include Rook to manage highly available storage that the software vendor can depend" }, { "data": "is building the largest and most comprehensive music database and marketplace in the world and services millions of users all across the globe. Rook enables them to save both time and money in the long term and allows their IT operations team to function with fewer dedicated staff. offers a full range of leading fintech solutions to financial institutions across Europe. Rook has been running flawlessly for them across many versions and upgrades, and delivers the performance and resilience they require for their most critical business applications. is on a mission to accelerate the growth of the Canadian Information and Communications Technology (ICT) sector. The Rook Ceph operator is key to the Kubernetes clusters frequently set up for projects by small-medium enterprises in the CENGN labs. develops and operates software for organizations like the Dutch Notary Association. They have survived multiple disaster scenarios already with Rook and it has made their cloud native journey in the private cloud so much easier by providing a powerful tool that lets them take advantage of a mature storage product with ease and peace of mind. Provides geospatial services and Geographical Information Systems (GIS). The latest versions of Rook have amazed them, especially compared to many of the other storage options they have attempted in the cloud native ecosystem. uses Ceph with Rook to provide a redundant and stable S3-compatible storage infrastructure for their services in order to provide the world's most advanced digital everyday assistant to their users. 
utilizes the flexibility of Rook's orchestration of Ceph for their users, taking advantage of the fast I/O with block storage as well as multiple readers and writers of shared filesystem storage. believes in strong community projects and are therefore putting their bets on Rook. They were able to seamlessly migrate VMs between host nodes with zero downtime thanks to Rook. uses Rook to power their website and GitLab for their CI/CD pipeline, because of the truly cloud-native experience, like *\"a little drop of the Google magic in our own server rack\"*." } ]
{ "category": "Runtime", "file_name": "ADOPTERS.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 9 Upgrade methods vary with different JuiceFS clients. The JuiceFS client only has one binary file. So to upgrade the new version, you only need to replace the old one with the new one. Use pre-compiled client: Refer to for details. Manually compile client: You can pull the latest source code and recompile it to overwrite the old version of the client. Please refer to for details. :::caution For the file system that has been mounted using the old version of JuiceFS client, you need to , and then re-mount it with the new version of JuiceFS client. When unmounting the file system, make sure that no application is accessing it. Otherwise the unmount will fail. Do not forcibly unmount the file system, as it may cause the application unable to continue to access it as expected. ::: Please refer to to learn how to upgrade JuiceFS CSI Driver. Like , upgrading S3 Gateway is to replace the old version with the new version. If it is , you need to upgrade according to the specific deployment method, which is described in detail below. Download and modify the `juicedata/juicefs-csi-driver` image tag in S3 Gateway to the version you want to upgrade (see for a detailed description of all versions), and then run the following command: ```shell kubectl apply -f ./juicefs-s3-gateway.yaml ``` Please run the following commands in sequence to upgrade the S3 Gateway: ```shell helm repo update helm upgrade juicefs-s3-gateway juicefs-s3-gateway/juicefs-s3-gateway -n kube-system -f ./values.yaml ``` Please refer to to learn how to install the new version of the Hadoop Java SDK, and then follow steps in to redeploy the new version of the client to complete the upgrade. :::note Some components must be restarted to use the new version of the Hadoop Java SDK. Please refer to the for details. :::" } ]
{ "category": "Runtime", "file_name": "upgrade.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "sysinfo [<options>] The sysinfo tool allows you to gather several pieces of information about a SmartOS host in one place. With no arguments system info is written to stdout as a JSON object. As some of the data can take some time to gather and most does not change after the system is up, the data is cached. This allows sysinfo to return quickly after the cache has been created. Any scripts which modify data that is included in the sysinfo output should run 'sysinfo -u' to update the cache after making changes. -f force an update of cache and then output the updated data -p output parseable key=value format instead of JSON (JSON is the default) -u update the cache only, do not output anything -? Print help and exit. The fields of the JSON output from sysinfo are listed below along with a brief description. They're also marked as one of: GZ only -- only available in sysinfo output from the global zone NGZ only -- only available in sysinfo output from a non-global zone GZ and NGZ -- available in sysinfo output from global or non-global GZ SDC only -- same as GZ only but also only available in Joyent's SDC \"Boot Parameters\" An object containing key/value pairs that were passed on the kernel command line. GZ only \"Bhyve Capable\" A boolean indicating whether the current hardware has the required features in order to boot bhyve VMs. GZ only \"Bhyve Max Vcpus\" A number indicating the maximum \"vcpus\" value that bhyve supports on this platform. When no support is available the value will be 0. GZ only \"Boot Time\" Timestamp (seconds since 1970-01-01 00:00:00 UTC) at which the system was booted. For a non-global zone, this is the time the zone was booted. GZ and NGZ \"CPU Socket Count\" Number of physical CPU sockets in this host. GZ and NGZ \"CPU Core Count\" Number of physical CPU cores in this host. GZ and NGZ \"CPU Online Count\" Number of online CPUs in this host. GZ and NGZ \"CPU Count\" Total number of CPUs, including offline and disabled CPUs, in this host. GZ and NGZ \"CPU Physical Cores\" Number of physical CPU sockets (not cores) in this host. Legacy value, set for compatibility only. Prefer \"CPU Socket Count\". GZ and NGZ \"CPU Total Cores\" Total number of CPUs, including offline and disabled CPUs, in this host. Not the number of cores. Legacy value, set for compatibility only. Prefer \"CPU Count\" or \"CPU Online Count\". GZ and NGZ \"CPU Type\" The model of CPU in this host. Eg. \"Intel(R) Xeon(R) CPU E5530 @ 2.40GHz\". GZ only \"CPU Virtualization\" Which CPU Virtualization features this host supports. Will be one of: 'none', 'vmx', 'svm' GZ only \"Datacenter Name\" This indicates the name of the datacenter in which this node is" }, { "data": "GZ SDC Only \"Disks\" This is an object containing information about disks on the system. Each disk is represented by another object (keyed on disk name) which includes the size in GB of that disk. GZ only \"Hostname\" The results of `hostname`. GZ and NGZ \"Link Aggregations\" An object with a member for each link aggregation configured on the machine. Entries include the LACP mode and names of the interfaces in the aggregation. GZ only \"Live Image\" This is the build stamp of the current platform. GZ and NGZ \"Manufacturer\" This is the name of the Hardware Manufacturer as set in the SMBIOS. Eg. \"Dell\". GZ only \"MiB of Memory\" The amount of DRAM (in MiB) available to processes. For the GZ this is the amount available to the system. For a non-global zone, this is the cap on memory for this zone. 
GZ and NGZ \"Network Interfaces\" An object with a member for each physical NIC attached to the machine. Entries include the MAC, ip4addr, Link Status and NIC Names (tags) for each interface. GZ only \"Product\" This is the name of the Product as set by your hardware vendor in the SMBIOS Eg. \"PowerEdge R710\". GZ only \"SDC Agents\" In SDC this is an array of installed agents and their versions. GZ SDC only \"SDC Version\" The version of SDC this platform belongs to. GZ SDC Only \"Serial Number\" Manufacturers serial number as set in SMBIOS. GZ only \"Setup\" Used to indicate whether the machine has been setup and is ready for provisioning. GZ SDC only \"System Type\" This is the output of 'uname -s'. GZ and NGZ \"UUID\" Universally unique identifier for this machine. In the GZ this will be the UUID from the SMBIOS info. In a zone this will be the UUID of the zone. GZ and NGZ \"Virtual Network Interfaces\" An object with a member for each virtual NIC attached to the machine. Entries include the MAC Address, ip4addr, Link Status and VLAN for each interface. In the GZ you also can see the \"Host Interface\" a vnic is attached to. GZ and NGZ \"VM Capable\" This is set to 'true' when this host can start KVM brand VMs. Note: This does not necessarily mean that the KVM driver will load. GZ only \"ZFS Quota\" In a non-global zone, this will give you the quota on your zone's zoneroot. NGZ only \"Zpool\" The name of the system zpool as set by smartdc-init. (usually: 'zones') GZ only \"Zpool Disks\" This is a comma separated list of disks that are part of the zpool. GZ only \"Zpool Profile\" This displays the proile of the zpool's disks. Will be one of: mirror, raidz3, raidz2, raidz, striped GZ only \"Zpool Size in GiB\" The total size of the zpool in GiB. GZ only dladm(8), hostname(1), ifconfig(8), prtconf(8), psrinfo(8), smbios(8), uname(1), zfs(8), zpool(8)" } ]
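Because the default output is a single JSON object, scripts and programs can consume it directly. The Go sketch below is only an illustration: it shells out to `sysinfo`, decodes the object generically (value types vary by field, so no fixed schema is assumed), and prints a few of the keys documented above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Run `sysinfo` with no arguments; it prints one JSON object on stdout.
	out, err := exec.Command("sysinfo").Output()
	if err != nil {
		log.Fatalf("running sysinfo: %v", err)
	}

	// Value types vary by field (strings, numbers, nested objects),
	// so decode into a generic map and pick out documented keys.
	var info map[string]any
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("parsing sysinfo output: %v", err)
	}

	for _, key := range []string{"Hostname", "UUID", "Live Image", "Boot Time", "MiB of Memory", "Zpool"} {
		fmt.Printf("%-15s %v\n", key, info[key])
	}
}
```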
{ "category": "Runtime", "file_name": "sysinfo.8.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "Kanister uses structured logging to ensure that its logs can be easily categorized, indexed and searched by downstream log aggregation software. By default, Kanister logs are output to the controller\\'s `stderr` in JSON format. Generally, these logs can be categorized into system logs and datapath logs. System logs are logs emitted by the Kanister to track important controller events like interactions with the Kubernetes APIs, CRUD operations on blueprints and actionsets etc. Datapath logs, on the other hand, are logs emitted by task pods created by Kanister. These logs are streamed to the Kanister controller before the task pods are terminated to ensure they are not lost inadvertently. Datapath log lines usually include the `LogKind` field, with its value set to `datapath`. The rest of this documentation provides instructions on how to segregate Kanister\\'s system logs from datapath logs using and . To run the provided commands, access to a Kubernetes cluster using the `kubectl` and `helm` command-line tools is required. Follow the instructions in the page to deploy Kanister on the cluster. The commands and screenshots in this documentation are tested with the following software versions: Loki 2.5.0 Grafana 8.5.3 Promtail 2.5.0 Let\\'s begin by installing Loki. Loki is a datastore optimized for holding log data. It indexes log data via streams made up of logs, where each stream is associated with a unique set of labels. ``` bash helm repo add grafana https://grafana.github.io/helm-charts helm repo update helm -n loki install --create-namespace loki grafana/loki \\ --set image.tag=2.5.0 ``` Confirm that the Loki StatefulSet is successfully rolled out: ``` bash kubectl -n loki rollout status sts/loki ``` ::: tip NOTE The Loki configuration used in this installation is meant for demonstration purposes only. The Helm chart deploys a non-HA single instance of Loki, managed by a StatefulSet workload. See the [Loki installation documentation](https://grafana.com/docs/loki/latest/installation/) for other installation methods that may be more suitable for your requirements. ::: Use Helm to install Grafana with a pre-configured Loki data source: ``` bash svc_url=$(kubectl -n loki get svc loki -ojsonpath='{.metadata.name}.{.metadata.namespace}:{.spec.ports[?(@.name==\"http-metrics\")].port}') cat <<EOF | helm -n grafana install --create-namespace grafana grafana/grafana -f - datasources: datasources.yaml: apiVersion: 1 datasources: name: Loki type: loki url: http://$svc_url access: proxy isDefault: true EOF ``` Confirm that the Grafana Deployment is successfully rolled out: ``` bash kubectl -n grafana rollout status deploy/grafana ``` Set up port-forward to access the Grafana UI: ``` bash kubectl -n grafana port-forward svc/grafana 3000:80 ``` Use a web browser to navigate to `localhost:3000`: The default login username is" }, { "data": "The login password can be retrieved using the following command: ``` bash kubectl -n grafana get secret grafana -o jsonpath=\"{.data.admin-password}\" | base64 --decode ; echo ``` Navigate to the data sources configuration under `Configuration` \\> `Data Sources` using the left-hand panel. Confirm that the `Loki` data source has already been added as part of the Grafana installation: Access the `Loki` data source configuration page. Use the `Test` button near the bottom of the page to test the connectivity between Grafana and Loki: The final step in the setup involves installing Promtail. 
Promtail is an agent that can be used to discover log targets and stream their logs to Loki: ``` bash svc_url=$(kubectl -n loki get svc loki -ojsonpath='{.metadata.name}.{.metadata.namespace}:{.spec.ports[?(@.name==\"http-metrics\")].port}') helm -n loki upgrade --install --create-namespace promtail grafana/promtail \\ --set image.tag=2.5.0 \\ --set \"config.clients[0].url=http://${svc_url}/loki/api/v1/push\" ``` Confirm that the Promtail DaemonSet is successfully rolled out: ``` bash kubectl -n loki rollout status ds/promtail ``` To simulate a steady stream of log lines, the next step defines a blueprint that uses to generate Apache common and error logs: ``` bash cat<<EOF | kubectl apply -f - apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: name: stream-apache-logs namespace: kanister actions: flogTask: phases: func: KubeTask name: taskApacheLogs args: namespace: \"{{ .Namespace.Name }}\" image: mingrammer/flog:0.4.3 command: flog -f apache_combined -n \"120\" -s 0.5s EOF ``` Create the following actionset to invoke the `flogTask` action in the blueprint: ``` bash cat<<EOF | kubectl create -f - apiVersion: cr.kanister.io/v1alpha1 kind: ActionSet metadata: generateName: stream-apache-logs-task- namespace: kanister spec: actions: name: flogTask blueprint: stream-apache-logs object: kind: Namespace name: default EOF ``` Head over to the Explore pane in the Grafana UI. Ensure that the `Loki` data source is selected. Enter the following query in the Log Browser input box to retrieve all Kanister logs: ``` bash {namespace=\"kanister\"} ``` The log outputs should look similar to this: Use the next query to select only the datapath logs, replacing `${actionset}` with the name of the recently created actionset: ``` bash {namespace=\"kanister\"} | json | LogKind=\"datapath\",ActionSet=\"${actionset}\" ``` The Logs pane should only display Apache log lines generated by flog: LogQL is a very expressive language inspired by PromQL. There is so much more one can do with it. Be sure to check out its for other use cases that involve more advanced line and label filtering, formatting and parsing. As seen in this documentation, Kanister\\'s consistent structured log lines allow one to easily integrate Kanister with more advanced log aggregation solutions to improve ensure better observability within the data protection workflows. To remove Loki, Grafana and Promtail, use the following `helm` commands: ``` bash helm -n grafana uninstall grafana helm -n loki uninstall promtail helm -n loki uninstall loki ```" } ]
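Beyond the Grafana Explore pane, the same LogQL queries can be issued programmatically against Loki's `query_range` HTTP endpoint, which is useful for automating checks on Kanister datapath logs. The Go sketch below is an illustration under a few assumptions: Loki is reachable on `localhost:3100` (for example via `kubectl -n loki port-forward svc/loki 3100:3100`), and the filter is the generic datapath query shown earlier rather than a specific actionset name.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"net/url"
	"time"
)

// Minimal subset of Loki's query_range response.
type lokiResponse struct {
	Data struct {
		Result []struct {
			Stream map[string]string `json:"stream"`
			Values [][2]string       `json:"values"` // [timestamp, log line]
		} `json:"result"`
	} `json:"data"`
}

func main() {
	// Assumes Loki is reachable locally, e.g. via:
	//   kubectl -n loki port-forward svc/loki 3100:3100
	base := "http://localhost:3100/loki/api/v1/query_range"

	// Same LogQL as used in the Explore pane: only Kanister datapath logs.
	query := `{namespace="kanister"} | json | LogKind="datapath"`

	params := url.Values{}
	params.Set("query", query)
	params.Set("start", fmt.Sprint(time.Now().Add(-1*time.Hour).UnixNano()))
	params.Set("end", fmt.Sprint(time.Now().UnixNano()))
	params.Set("limit", "100")

	resp, err := http.Get(base + "?" + params.Encode())
	if err != nil {
		log.Fatalf("querying Loki: %v", err)
	}
	defer resp.Body.Close()

	var out lokiResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatalf("decoding response: %v", err)
	}

	for _, stream := range out.Data.Result {
		for _, v := range stream.Values {
			fmt.Println(v[1]) // the raw JSON log line emitted by the task pod
		}
	}
}
```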
{ "category": "Runtime", "file_name": "logs.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "Breaking changes: [Version]: Bump the version of the WasmEdge shared library. Due to the breaking change of API, bump the `SOVERSION` to `0.1.0`. Due to the breaking change of API, bump the plug-in `API_VERSION` to `3`. [C API]: Changes for applying Typed Function References Proposal. New `WasmEdgeValType` structure for replacing `enum WasmEdgeValType`. Merge the `enum WasmEdgeValType` and `enum WasmEdgeRefType` into the `enum WasmEdge_TypeCode`. Refactored the error code. The error code number may different from previous versions. Extend the error code to 2 bytes. Updated the related APIs for using `enum WasmEdge_ValType` as parameters. `WasmEdge_FunctionTypeCreate()` `WasmEdge_FunctionTypeGetParameters()` `WasmEdge_FunctionTypeGetReturns()` `WasmEdge_TableTypeCreate()` `WasmEdge_TableTypeGetRefType()` `WasmEdge_GlobalTypeCreate()` `WasmEdge_GlobalTypeGetValType()` Removed `WasmEdge_ValueGenNullRef()` API. Due to non-defaultable values after this proposal, the following APIs return the result instead of void. `WasmEdge_GlobalInstanceSetValue()` Introduced the `WasmEdge_Bytes` structure. This structure is for packaging the `uint8_t` buffers. The old `FromBuffer` related APIs will be replaced by the corresponding APIs in the future versions. `WasmEdgeCompilerCompileFromBytes()` API has the same function as `WasmEdgeCompilerCompileFromBuffer()` and will replace it in the future. `WasmEdgeLoaderParseFromBytes()` API has the same function as `WasmEdgeLoaderParseFromBuffer()` and will replace it in the future. `WasmEdgeVMRegisterModuleFromBytes()` API has the same function as `WasmEdgeVMRegisterModuleFromBuffer()` and will replace it in the future. `WasmEdgeVMRunWasmFromBytes()` API has the same function as `WasmEdgeVMRunWasmFromBuffer()` and will replace it in the future. `WasmEdgeVMAsyncRunWasmFromBytes()` API has the same function as `WasmEdgeVMAsyncRunWasmFromBuffer()` and will replace it in the future. `WasmEdgeVMLoadWasmFromBytes()` API has the same function as `WasmEdgeVMLoadWasmFromBuffer()` and will replace it in the future. New APIs for WASM Exception-Handling proposal. Added the `WasmEdge_TagTypeContext` struct. Added the `WasmEdge_TagInstanceContext` struct. Added the `WasmEdge_TagTypeGetFunctionType()` API for retrieving the function type from a tag type. Added the `WasmEdge_ImportTypeGetTagType()` API for retrieving the tag type from an import type. Added the `WasmEdge_ExportTypeGetTagType()` API for retrieving the tag type from an export type. Added the `WasmEdge_ModuleInstanceFindTag()` API for finding an exported tag instance from a module instance. Added the `WasmEdgeModuleInstanceListTagLength()` and `WasmEdgeModuleInstanceListTag()` APIs for listing the exported tag instances of a module instance. Refactored the `OpCode` mechanism for speeding up and supporting WASM multi-bytes instruction OpCodes. Features: Bumpped `spdlog` to `v1.13.0`. Bumpped `simdjson` to `v3.9.1`. [Proposal]: Apply new propoals. Supported WASM Typed Function References proposal. Added the `WasmEdgeProposalFunctionReferences` for the configuration in WasmEdge C API. Users can use the `--enable-function-reference` to enable the proposal in `wasmedge` and `wasmedgec` tools. Supported WASM GC proposal (interpreter only). Added the `WasmEdgeProposalGC` for the configuration in WasmEdge C API. Users can use the `--enable-gc` to enable the proposal in `wasmedge` and `wasmedgec` tools. Supported WASM Exception-Handling proposal (interpreter only). 
Added the `WasmEdgeProposalExceptionHandling` for the configuration in WasmEdge C API. Users can use the `--enable-exception-handling` to enable the proposal in `wasmedge` and `wasmedgec` tools. This proposal supports old deprecated `try`, `catch`, and `catch_all` instructions, and will remove them in the future version. Component Model proposal (experimental, loader phase only). Added the `WasmEdgeProposalComponent` for the configuration in WasmEdge C API. Users can use the `--enable-component` to enable the proposal in `wasmedge` tool. [JIT]: Support LLVM JIT. [C API]: New C API for supporting the new proposals. `WasmEdge_ValType` related APIs can help developers to generate or compare value types. `WasmEdgeValTypeGenI32()` (replacing `WasmEdgeValType_I32`) `WasmEdgeValTypeGenI64()` (replacing `WasmEdgeValType_I64`) `WasmEdgeValTypeGenF32()` (replacing `WasmEdgeValType_F32`) `WasmEdgeValTypeGenF64()` (replacing `WasmEdgeValType_F64`) `WasmEdgeValTypeGenV128()` (replacing `WasmEdgeValType_V128`) `WasmEdgeValTypeGenFuncRef()` (replacing `WasmEdgeValType_FuncRef`) `WasmEdgeValTypeGenExternRef()` (replacing `WasmEdgeValType_ExternRef`) `WasmEdge_ValTypeIsEqual()` `WasmEdge_ValTypeIsI32()` `WasmEdge_ValTypeIsI64()` `WasmEdge_ValTypeIsF32()` `WasmEdge_ValTypeIsF64()` `WasmEdge_ValTypeIsV128()` `WasmEdge_ValTypeIsFuncRef()` `WasmEdge_ValTypeIsExternRef()` `WasmEdge_ValTypeIsRef()` `WasmEdge_ValTypeIsRefNull()` `WasmEdge_Bytes` related APIs can help developers to control the buffers. `WasmEdge_BytesCreate()` `WasmEdge_BytesWrap()` `WasmEdge_BytesDelete()` `WasmEdge_TableInstanceCreateWithInit()` to create a table instance with non-defaultable elements with assigning the initial value. [Serializer]: Supported WASM module serialization (experimental). This is the API-level feature. Developers can use the `WasmEdge_LoaderSerializeASTModule()` API to serialize a loaded WASM module into" }, { "data": "[Tools]: Print the plug-in versions when using the `--version` option. [Installer]: Enabled `ggml-blas` and `rustls` plugin supporting (#3032) (#3108). [WASI-NN] ggml backend: Bump llama.cpp to b2781. Support llama.cpp options: `threads`: the thread number for inference. `temp`: set temperature for inference. `repeat-penalty`: set repeat penalty for inference. `top-p`: set top-p for inference. `grammar`: set grammar syntax for inference. `main-gpu`: set the main GPU for inference. `tensor-split`: set the tensor split for inference. Add `enable-debug-log` option to show more debug information. Default enable Metal on macOS. Introduce `loadbynamewithconfig()` to load model with metadata. Introduce single token inference by `computesingle`, `getoutputsingle`, and `finisingle` Introduce `unload()` function to release the model. Add some llama errors to WASI-NN. `EndOfSequence`: returned when encounter `<EOS>` token on single token inferece. `ContextFull`: returned when the context is full. `PromptTooLong`: returned when the input size is too large. `ModelNotFound`: returned when the model is not found. Support Llava and Gemma inference. Add `mmproj` option to set the projection model. Add `image` option to set the image. Improve logging mechanism. Show the version of `llama.cpp` in the metadata. Support Phi-3-Mini model. Support embedding generation. Support Windows build. [Plugin] Initial support for `wasmedge_ffmpeg` plug-in. Fixed issues: Fixed some API document in the API header. [Executor]: Minor fixes. Fixed integer overflow on `memGrow` boundary check. Refined the slice copy in table instances. 
Cleaned the unused bits of WASM return values to avoid security issues. [WASI]: Minor fixes. Fixed the function signature matching for WASI imports when backwarding supporting older version. (#3073) Fixed large timestamp causing overflow (#3106). Handle HUP only events. Checking same file descriptor for `fd_renumber` (#3040). Fixed `pathunlinkfile` for trailing slash path. Fixed `path_readlink` for not following symbolic link issue. Fixed `pathopen` for checking `OTRUNC` rights. Fixed `path_open` for removing path relative rights on file. Checking `path_symlink` for creating a symlink to an absolute path. Checking `fdprestatdir_name` buffer size. Checking `filestatsettimes` for invalid flags. Checking validation of file descriptor in `socket_accept` (#3041). Fixed duplicated loading of the same plug-in. Fixed option toggle for `wasmedge_process` plug-in. Tests: Updated the WASM spec tests to the date 2024/02/17. Updated the spec tests for the Exception Handling proposal. Added the spec tests for the Typed Function Reference proposal. Added the spec tests for the GC proposal. Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. Thank all the contributors who made this release possible! Abhinandan Udupa, Akihiro Suda, Charlie chan, Dhruv Jain, Draco, Harry Chiang, Hrushikesh, Ikko Eltociear Ashimine, Khagan (Khan) Karimov, LFsWang, LO, CHIN-HAO, Little Willy, Lm Ts-thun, Meenu Yadav, Omkar Acharekar, Saiyam Pathak, Sarrah Bastawala, Shen-Ta Hsieh, Shreyas Atre, Yage Hu, Yi Huang, Yi-Ying He, alabulei1, am009, dm4, hetvishastri, hugo-syn, hydai, redismongo, richzw, tannal, vincent, zhumeme If you want to build from source, please use WasmEdge-0.14.0-rc.5-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: [Component] share loading entry for component and module (#2945) Initial support for the component model proposal. This PR allows WasmEdge to recognize the component and module format. [WASI-NN] ggml backend: Provide options for enabling OpenBLAS, Metal, and cuBLAS. Bump llama.cpp to b1383 Build thirdparty/ggml only when the ggml backend is enabled. Enable the ggml plugin on the macOS platform. Introduce `AUTO` detection. Wasm application will no longer need to specify the hardware spec (e.g., CPU or GPU). It will auto-detect by the" }, { "data": "Unified the preload options with case-insensitive matching Introduce `metadata` for setting the ggml options. The following options are supported: `enable-log`: `true` to enable logging. (default: `false`) `stream-stdout`: `true` to print the inferred tokens in the streaming mode to standard output. (default: `false`) `ctx-size`: Set the context size the same as the `--ctx-size` parameter in llama.cpp. (default: `512`) `n-predict`: Set the number of tokens to predict, the same as the `--n-predict` parameter in llama.cpp. (default: `512`) `n-gpu-layers`: Set the number of layers to store in VRAM, the same as the `--n-gpu-layers` parameter in llama.cpp. (default: `0`) `reverse-prompt`: Set the token pattern at which you want to halt the generation. Similar to the `--reverse-prompt` parameter in llama.cpp. 
(default: `\"\"`) `batch-size`: Set the number of batch sizes for prompt processing, the same as the `--batch-size` parameter in llama.cpp. (default: `512`) Notice: Because of the limitation of the WASI-NN proposal, there is no way to set the metadata during the loading process. The current workaround will re-load the model when `ngpulayers` is set to a non-zero value. Installer: Support WASI-NN ggml plugin on both macOS Intel model (CPU only) and macOS Apple Silicon model. (#2882) [Java Bindings] provide platform-specific jni and jar for Java bindings (#2980) [C API]: Provide getData API for FunctionInstance (#2937) Add the API to set WASI-NN preloads. (#2827) [Plugin]: [zlib]: initial support of the zlib plugin (#2562) With a simple building guide and basic working examples [MSVC] Support MSVC for building WasmEdge [AOT] Support LLVM 17 Fixed issues: [Installer]: Double quote the strings to prevent splitting in env file (#2994) [AOT]: Validate AOT section header fields Add invariant attribute for memory and global pointer [C API]: Fix the wrong logic of getting types from exports. [Example] Fix get-string with the latest C++ internal getSpan API. Fixes #2887 (#2929) [CI] install llvm@16 to fix macOS build (#2878) Misc: [Example] Update wit-bindgen version from 0.7.0 to 0.11.0 (#2770) Thank all the contributors who made this release possible! dm4, hydai, Lm Ts-thun, Meenu Yadav, michael1017, proohit, Saikat Dey, Shen-Ta Hsieh, Shreyas Atre, Wang Jikai, Wck-iipi, YiYing He If you want to build from source, please use WasmEdge-0.13.5-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: [C API] Provide API for registering the Pre- and Post- host functions Pre host function will be triggered before calling every host function Post host function will be triggered after calling every host function [CI] Update llvm-windows from 13.0.3 to 16.0.6 WasmEdge supports multiple LLVM version, users can choose whatever they want. This change is for CI. [CI] build alpine static libraries (#2699) This provides pre-built static libraries using musl-libc on alpine. [Plugin] add wasmedge\\rustls\\plugin (#2762) [Plugin] implement opencvmini `rectangle` and `cvtColor` (#2705) [Test] Migrating spec test from RapidJSON to SIMDJSON (#2659) [WASI Socket] AF\\_UNIX Support (#2216) This is disable by default. How to enable this feature: CLI: Use `--allow-af-unix`. C API: Use `WasmEdge\\_ConfigureSetAllowAFUNIX`. [WASI-NN] Add ggml backend for llama (#2763) Integrate llama.cpp as a new WASI-NN backend. [WASI-NN] Add load\\by\\name implementation into wasi-nn plugin (#2742) Support named\\_model feature. [WASI-NN] Added support for Tuple Type Output Tensors in Pytorch Backend (#2564) Fixed issues: [AOT] Fix fallback case of `compileVectorExtAddPairwise`. (#2736) [AOT] Fix the neontbl1 codegen error on macOS (#2738) [Runtime] fix memory.init oob. issue #2743 (#2758) [Runtime] fix table.init oob. issue #2744 (#2756) [System] Remove \"inline\" from Fault::emitFault (#2695) (#2720) [Test] Use std::filesystem::u8path instead of a `const char` Path (#2706) [Utils] Installer: Fix checking of shell paths (#2752) [Utils] Installer: Formatting and Better source message (#2721) [WASI] Avoid undefined function `FindHolderBase::reset` [WASI] itimerspec with 0 timeout will disarm timer, +1 to" }, { "data": "(#2730) Thank all the contributors that made this release possible! 
Adithya Krishna, Divyanshu Gupta, Faidon Liambotis, Jorge Prendes, LFsWang, Lev Veyde, Lm Ts-thun, Sarrah Bastawala, Shen-Ta Hsieh, Shreyas Atre, Vedant R. Nimje, Yi-Ying He, alabulei1, am009, dm4, erxiaozhou, hydai, vincent, zzz If you want to build from source, please use WasmEdge-0.13.4-src.tar.gz instead of the zip or tarball provided by GitHub directly. This is a bugfix release. Features: [CMake] Add a flag to disable libtinfo (#2676) [Plugin] Implement OpenCV-mini (#2648) [CI] Build wasmedge on Nix (#2674) Fixed issues: WASI Socket: Remove unused fds before closing them. (#2675), part of #2662 Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Thank all the contributors that made this release possible! Lm Ts-thun, Tricster, Tyler Rockwood If you want to build from source, please use WasmEdge-0.13.3-src.tar.gz instead of the zip or tarball provided by GitHub directly. This is a bugfix release. Features: Provide static library on `x86_64` and `aarch64` Linux (#2666) Provide `wasm_bpf` plugins in the release assets (#2610) WASI-NN: Updating install script for OpenVino 2023.0.0 version (#2636) Installer: Add new tags support for wasmedge-tensorflow (#2608) Fuss: Use own implement of `BoyerMooreHorspoolSearcher` (#2657) Fixed issues: WASI Socket: Fix blocking when multiple requests have the same fds. (#2662) Utils: devtoolset-11 is not available on manylinux2014 aarch64, downgrade to devtoolset-10 (#2663) Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Thank all the contributors that made this release possible! Divyanshu Gupta, Faidon Liambotis, hydai, Jorge Prendes, Officeyutong, Shen-Ta Hsieh, Shreyas Atre, Tricster, YiYing He If you want to build from source, please use WasmEdge-0.13.2-src.tar.gz instead of the zip or tarball provided by GitHub directly. This is a bugfix release. Fixed issues: Rollback the WasmEdge WASI Socket behavior of V1 functions. Related functions: `getlocaladdr`, and `getpeeraddr` Reason: The address type should be INET4(0) and INET6(1). This regrasion is introduced in . However, the original values of the previous version (< 0.13.0): INET4(4) and INET6(6). To avoid this incompatible behavior, we choose to keep the old behavior. Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. 
Thank all the contributors that made this release possible! If you want to build from source, please use WasmEdge-0.13.1-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: Updated the WasmEdge shared library. Due to the breaking change of API, bump the `SOVERSION` to `0.0.3`. Unified the `wasmedge` CLI" }, { "data": "Supported the subcommand `run` and `compile` for the `wasmedge` CLI. Users now can use the command `wasmedge run [ARGS]` to drive the original `wasmedge` tool. Users now can use the command `wasmedge compile [ARGS]` to drive the original `wasmedgec` AOT compiler tool. Made WasmEdge on `armv7l` great again. Bumpped `spdlog` to `v1.11.0`. Refactored the logs to use the `fmt` for formatting. Bumpped `blake3` to `1.3.3`. Added the CMake option `WASMEDGEENABLEUB_SANITIZER` to enable the undefined behavior sanitizer. Deprecated the `wasmedge_httpsreq` plug-in. Migrated the WasmEdge extensions into plug-ins. Migrated the into the `wasmedge_image` plug-in. Migrated the into the `wasmedgetensorflow` and `wasmedgetensorflowlite` plug-ins. Supported `manylinux2014x8664`, `manylinux2014aarch64`, `darwinx8664`, and `darwinarm64` platforms for the above plug-ins. Introduced the `wasi_logging` plug-in. Added GPU support for WASI-NN PyTorch backend. New APIs for containing data into module instances when in creation. Added the `WasmEdge_ModuleInstanceCreateWithData()` API for creating a module instance with data and its finalizer callback function pointer. Added the `WasmEdge_ModuleInstanceGetHostData()` API for accessing the host data set into the module instance. Supported the async invocation with executor. Added the `WasmEdge_ExecutorAsyncInvoke()` API for invoking a WASM function asynchronously. Added helper functions for Windows CLI. Added the `WasmEdgeDriverArgvCreate()` and `WasmEdgeDriverArgvDelete()` APIs to convert UTF-16 arguments to UTF-8. Added the `WasmEdgeDriverSetConsoleOutputCPtoUTF8()` API to set the output code page to UTF-8. Added the unifed tool API. Added the `WasmEdgeDriverUniTool()` API to trigger the WasmEdge CLI tool with command line arguments. Fixed issues: Fixed the WasmEdge C API static library linking command for `llvm-ar-14`. Fixed the undefined behavior issues in Loader and Validator. Fixed the WASI issues. Denied the absolute path accessing. Opened directories with `WASIOFLAGSDIRECTORY` flag. Don't use `OPATH` unless flag is exactly `WASIOFLAGS_DIRECTORY`. Removed seeking rights on directories. Fixed checking wrong rights in `path_open`. Allowed renumbering and closing preopened `fd`. Disallowed accessing parent directory through `..`. Don't write null pointer at end of args/envs pointer array. Don't write first entry when buffer size is zero. Removed unused VFS objects. Fixed the `fd_readdir`. Corrected the readonly inheriting right. Fixed plug-in issues. Fixed the error enumeration in WASI-NN. Fixed the error messages of tensor type in WASI-NN Tensorflow-Lite backend. Handled the model data ownership in WASI-NN Tensorflow-Lite backend. Returned error with the following cases in WASI-Crypto, because OpenSSL 3.0 didn't implement context duplication for `aes-gcm` and `chacha20`. Refactor: Moved the Windows API definitions to `include/system/winapi.h`. Dropped the `boost` dependency. Replaced the `boost` endian detection by the macros. Used the `std::boyermoorehorspool_searcher` instead. Refactored the functions for accessing slides on memory instances. 
Moved the `WasmEdge::VM::Async` class to the `include/common` for supporting async invocation in executor. Refactored the WASI host functions. Removed duplicate codes on `poll_oneoff` with `edge-trigger` configuration. Refactored Poller interface for reusing the same objects. Supported absolute time flags for `poll_oneoff` on MacOS. Used static vector to speedup CI. Refactored the internal APIs of wasi-socket. Refactored the WASI-NN plug-in source. Refined the WASI-NN dependency linking in CMake. Separated the source files for different backends. Documentations: Moved and published the WasmEdge document to <https://wasmedge.org/docs/>. Removed all WASM binary files in the source tree. Tests: Updated the WASM spec tests to the date 2023/05/11. Added the plug-in unit tests and CI for Linux and MacOS platforms. Added new test cases of `cxx20::expected`. Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10" }, { "data": "Please refer to if the `msvcp140.dll is missing` occurs. Thank all the contributors that made this release possible! Adithya Krishna, Chris O'Hara, Edward Chen, Louis Tu, Lm Ts-thun, Maurizio Pillitu, Officeyutong, Shen-Ta Hsieh, Shreyas Atre, Tricster, Tyler Rockwood, Xin Liu, YiYing He, Yu Xingzi, alabulei1, hydai, michael1017, vincent, yanghaku If you want to build from source, please use WasmEdge-0.13.0-src.tar.gz instead of the zip or tarball provided by GitHub directly. This is a hotfix release. Fixed issues: WASI: fix rights of pre-open fd cannot write and fix read-only flag parse (#2458) WASI Socket: Workaround: reduce the address family size for the old API fix sock opt & add BINDTODEVICE (#2454) MacOS Use OpenSSL 3.0 on MacOS when building the plugins. Update the visibility of plugin functions. Fix AOT Error on MacOS; fix #2427 Change enumerate attributes value to zero Change import helper function to private linkage to hide symbols Detect OS version Fix building with statically linked LLVM-15 on MacOS. cmake: quote WASMEDGELLVMLINKLIBSNAME variable in order to fix arm64-osx AOT build (#2443) Windows: Fix missing msvcp140.dll issue (#2455) Revert #2455 temporarily. Use `CMAKEMSVCRUNTIMELIBRARY` instead of `MSVCRUNTIME_LIBRARY`. Rust Binding: Introduce `fiber-for-wasmedge` (#2468). The Rust binding relies on fiber for some features. Because the runwasi project supports both wasmtime and wasmedge, the wasmtime-fiber with different versions will make the compilation complex. To avoid this, we forked wasmtime-fiber as fiber-for-wasmedge. Add a second phase mechanism to load plugins after the VM has already been built. (#2469) Documents: Fix the naming of the AOT wasm file. Add wasmedgec use cases for a slim container. Add the Kwasm document. Fix HostFunction with data example (#2441) Known issues: Universal WASM format failed on macOS platforms. In the current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause a bus error during execution. We are trying to fix this issue. For a working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. 
Please refer to if the `msvcp140.dll is missing` occurs. Thank all the contributors that made this release possible! Leonid Pospelov, Shen-Ta Hsieh, Tyler Rockwood, Xin Liu, YiYing He, dm4, hydai, vincent, yanghaku, zzz If you want to build from source, please use WasmEdge-0.12.1-src.tar.gz instead of the zip or tarball provided by GitHub directly. Breaking changes: Updated the WasmEdge shared library. Due to the breaking change of API, bump the `SOVERSION` to `0.0.2`. WasmEdge C API changes. Removed the `WasmEdge_HostRegistration` members and the corresponding module creation APIs to standardize the plug-in module creation. Please refer to the for how to upgrade. Removed the `WasmEdgeHostRegistrationWasiNN` enum and the `WasmEdge_ModuleInstanceCreateWasiNN()` API. Removed the `WasmEdgeHostRegistrationWasiCryptoCommon` enum and the `WasmEdgeModuleInstanceCreateWasiCryptoCommon()` API. Removed the `WasmEdgeHostRegistrationWasiCryptoAsymmetricCommon` enum and the `WasmEdgeModuleInstanceCreateWasiCryptoAsymmetricCommon()` API. Removed the `WasmEdgeHostRegistrationWasiCryptoKx` enum and the `WasmEdgeModuleInstanceCreateWasiCryptoKx()` API. Removed the `WasmEdgeHostRegistrationWasiCryptoSignatures` enum and the `WasmEdgeModuleInstanceCreateWasiCryptoSignatures()` API. Removed the `WasmEdgeHostRegistrationWasiCryptoSymmetric` enum and the `WasmEdgeModuleInstanceCreateWasiCryptoSymmetric()` API. Removed the `WasmEdgeHostRegistrationWasmEdgeProcess` enum and the `WasmEdgeModuleInstanceCreateWasmEdgeProcess()` API. Changed the `WasmEdge_VMCleanup()` behavior. After calling this API, the registered modules except the WASI and plug-ins will all be cleaned. Standaloned the `WasmEdge-Process` plug-in. After this version, users should use the installer to install the `WasmEdge-Process` plug-in. Features: Introduced the `Plugin` context and related APIs. Added the `WasmEdge_PluginContext` struct. Added the `WasmEdge_PluginLoadFromPath()` API for loading a plug-in from a specific path. Added the `WasmEdgePluginListPluginsLength()` and `WasmEdgePluginListPlugins()` APIs for getting the loaded plug-in names. Added the `WasmEdge_PluginFind()` API for retrieving a loaded plug-in by its name. Added the `WasmEdge_PluginGetPluginName()` API for retrieving the plug-in" }, { "data": "Added the `WasmEdgePluginListModuleLength()` and `WasmEdgePluginListModule()` APIs for listing the module names of a plug-in. Added the `WasmEdge_PluginCreateModule()` API for creating the specific module instance in a plug-in by its name. Introduced the multiple WASI socket API implementation. The `sock_accept()` is compatible with the WASI spec. The V2 socket implementation is using a larger socket address data structures. With this, we can start to supporting `AF_UINX` Added the `VM` APIs. Added the `WasmEdge_VMGetRegisteredModule()` API for retrieving a registered module by its name. Added the `WasmEdgeVMListRegisteredModuleLength()` and `WasmEdgeVMListRegisteredModule()` APIs for listing the registered module names. Introduced the python version WasmEdge installer. Added the `wasm_bpf` plug-in. Enabled the read-only WASI filesystem. Users can add the `--dir guestpath:hostpath:readonly` option in WasmEdge CLI to assign the read-only configuration. Updated the ABI of the `wasiephemeralsock`. Added the output port of the `sockrecvfrom`. Updated the API of `sock_getlocaladdr`. Unified the socket address size to 128-bit. Allowed the multiple VM instances. Supported using `libtool` to archive the WasmEdge static library. 
Supported LLVM 15.0.7. Fixed issues: Fixed WASI issues. Fixed the leaking information about the host STDIN, STDOUT, and STDERR after getting the `filestat`. Fixed the lookup of symbolic link at `pathfilestatset_times`. Fixed `open` for the wchar path issue on windows. Fixed the rights of `path_open`. Fixed WASI-NN issues. Fixed the definition of `wasi_nn::TensorType` to prevent from comparing with dirty data. Fixed WASI-Crypto issues. Fixed the `keypair_generate` for rsa-pss. Fixed the `keypair_import` read pem as pkcs8. Fixed WASI-Socket issues. Fixed the buffer size of `sock_getpeeraddr`. Fixed the lost intrinsics table in AOT mode when using the WasmEdge C API. Fixed the registration failed of WasmEdge plug-in through the C API. Fixed the implementation in `threads` proposal. Fixed the error in `atomic.notify` and `atomic.wait` instructions. Fixed the decoding of `atomic.fence` instruction. Corrected the error message of waiting on unshared memory. Handle canonical and arithmetical `NaN` in `runMaxOp()` and `runMinOp()`. Refactor: Refactored the implementation of number loading in the file manager. Supported `s33` and `sn` loading and decoding. Refactored the `WasmEdge::ValType`. Removed the `WasmEdge::ValType::None`. Used the flag in `WasmEdge::BlockType` for supporting the type index. Removed the `WasmEdge::Validator::VType` and used the `WasmEdge::ValType` instead. Known issues: Universal WASM format failed on MacOS platforms. In current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause bus error when execution. We are trying to fix this issue. For working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Plug-in linking on MacOS platforms. The plug-in on MacOS platforms will cause symbol not found when dynamic linking. We are trying to fix this issue. For working around, please implement the host modules instead of plug-ins. Documentations: Fixed various typos. Updated the C API documents. Added the . Updated the . Added the . Added the . Tests: Updated the WASM spec tests to the date 2022/12/15. Added the plug-in unit tests on Linux platforms. Thank all the contributors that made this release possible! Abhinandan Udupa, Achille, Afshan Ahmed Khan, Daniel Golding, DarumaDocker, Draco, Harry Chiang, Justin Echternach, Kenvi Zhu, LFsWang, Leonid Pospelov, Lm Ts-thun, MediosZ, O3Ol, Officeyutong, Puelloc, Rafael Fernndez Lpez, Shen-Ta Hsieh, Shreyas Atre, Sylveon, Tatsuyuki Kobayashi, Vishv Salvi, Xin Liu, Xiongsheng Wang, YiYing He, alabulei1, dm4, hydai, jeongkyu, little-willy, michael1017, shun murakami, xxchan, If you want to build from source, please use WasmEdge-0.12.0-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: Added the new WasmEdge C API. Added the `WasmEdge_ConfigureSetForceInterpreter()` API to set the force interpreter" }, { "data": "Added the `WasmEdge_ConfigureIsForceInterpreter()` API to check the force interpreter mode in configurations. Added the `WasmEdge_LogOff()` API to turn off the logging. Due to introducing the new APIs, bump the `SOVERSION` to `0.0.1`. Added the additional hint messages if import not found when in instantiation. Added the forcibly interpreter execution mode in WasmEdge CLI. 
Users can use the `--force-interpreter` option in the `wasmedge` tool to forcibly execute WASM files (includes the AOT compiled WASM files) in interpreter mode. Supported WASI-NN plug-in with TensorFlow-Lite backend on Ubuntu 20.04 x86_64. Users can refer to the for the information. For building with enabling WASI-NN with TensorFlow-Lite backend, please add the `-DWASMEDGEPLUGINWASINNBACKEND=\"TensorFlowLite\"` in `cmake`. Bump the `fmt` format of logging to `9.0.0`. Added the new experimental edge-triggered epoll API `epollOneoff` in the WASI component. Fixed issues: Detected the valid `_start` function of the WasmEdge CLI command mode. For the invalid `_start` function, the WasmEdge CLI will execute that function in the reactor mode. Fixed the non-English WasmEdge CLI arguments error on Windows. Fixed the AOT compiler issues. Fixed the operand of `frintn` on `arm64` platforms. Corrected the `unreachable` status to record on every control stacks. Refined the Loader performance. Capped the maximum local counts to 67108864 (2^26). Rejected wrong data when loading the universal WASM. Rejected the unreasonable long vector sizes. Fixed the lost `std` namespace in the `experimental::expected`. Fixed the repeatedly compilation of universal WASM format. If users use the `wasmedgec` tool to compile the universal WASM file, the AOT compiled WASM data will be appended into the output. In the cases of duplicated AOT compiled universal WASM file which has more than 1 section of AOT compiled WASM data, the WasmEdge runtime will use the latest appended one when execution. Hidden the local symbols of the WasmEdge shared library. Loaded the default plug-in path from the path related to the WasmEdge shared library. This only fixed on the MacOS and Linux platforms now. Updated the minimum CMake required version on Android. Known issues: Universal WASM format failed on MacOS platforms. In current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause bus error when execution. We are trying to fix this issue. For working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Plug-in linking on MacOS platforms. The plug-in on MacOS platforms will cause symbol not found when dynamic linking. We are trying to fix this issue. For working around, please implement the host modules instead of plug-ins. Documentations: Updated the to `v0.11.0`. Tests: Added the WASI-NN TensorFlow-Lite backend unit test. Added the new C API unit tests. Applied more fuzz tests for WasmEdge CLI. Thank all the contributors that made this release possible! Abhinandan Udupa, Gustavo Ye, HangedFish, Harry Chiang, Hiroaki Nakamura, Kenvi Zhu, LFsWang, MediosZ, Shen-Ta Hsieh, Shreyas Atre, Xin Liu, YiYing He, abhinandanudupa, dm4, he11c, hydai, vincent, yyy1000, zhlhahaha If you want to build from source, please use WasmEdge-0.11.2-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: Supported WASI-NN plug-in with PyTorch backend on Ubuntu 20.04 x86_64. Users can refer to the for the information. For building with enabling WASI-NN with PyTorch backend, please add the `-DWASMEDGEPLUGINWASINNBACKEND=\"PyTorch\"` in `cmake`. Updated the WASI-Crypto proposal and supported OpenSSL 3.0. Supported LLVM 15. Added the plug-in C API. Extended WasmEdge CLI. 
Allow the optimization level assignment in `wasmedgec` tool. Supported the `v128` value type printing in `wasmedge` tool. Released Ubuntu 20.04 version with statically linked" }, { "data": "Fixed issues: Fixed the `private` members into the `protected` in the module instance class. Fixed the type mismatch for IntrinsicsTable initialization statement in the AOT compiler. Known issues: Universal WASM format failed on MacOS platforms. In current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause bus error when execution. We are trying to fix this issue. For working around, please use the `--optimize=0` to set the compiler optimization level to `O0` in `wasmedgec` CLI. WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Plug-in linking on MacOS platforms. The plug-in on MacOS platforms will cause symbol not found when dynamic linking. We are trying to fix this issue. For working around, please implement the host modules instead of plug-ins. Documentations: Refactored the . Tests: Added the WASI-NN PyTorch backend unit test. Added fuzzing tests for WasmEdge CLI. Thank all the contributors that made this release possible! DarumaDocker, Faidon Liambotis, Gustavo Ye, LFsWang, MediosZ, Michael Yuan, Shen-Ta Hsieh, Tricster, Xin Liu, Yeongju Kang, YiYing He, Zhou Zhou, hydai, jeeeerrrpop, sonder-joker, vincent If you want to build from source, please use WasmEdge-0.11.1-src.tar.gz instead of the zip or tarball provided by GitHub directly. Breaking changes: WasmEdge C API changes. Refactored the host function definition to export the calling frame. The first parameter of `WasmEdge_HostFunc_t` is replaced by `const WasmEdge_CallingFrameContext `. The first parameter of `WasmEdge_WrapFunc_t` is replaced by `const WasmEdge_CallingFrameContext `. Extended the content of `WasmEdge_Result`. Added the const qualifier of some APIs. Added the const qualifier of the first parameter of `WasmEdge_StoreFindModule()`. Added the const qualifier of the first parameter of `WasmEdge_AsyncWait()`. Added the const qualifier of the first parameter of `WasmEdge_AsyncWaitFor()`. Added the const qualifier of the first parameter of `WasmEdge_AsyncGetReturnsLength()`. Added the const qualifier of the first parameter of `WasmEdge_AsyncGet()`. Added the const qualifier of the first parameter of `WasmEdge_VMGetFunctionType()`. Added the const qualifier of the first parameter of `WasmEdge_VMGetFunctionTypeRegistered()`. Added the const qualifier of the first parameter of `WasmEdge_VMGetFunctionListLength()`. Added the const qualifier of the first parameter of `WasmEdge_VMGetFunctionList()`. Added the const qualifier of the first parameter of `WasmEdge_VMGetImportModuleContext()`. Renamed the plugin API. Renamed `WasmEdgePluginloadWithDefaultPluginPaths()` to `WasmEdge_PluginLoadWithDefaultPaths()`. Dropped the manylinux1 and manylinux2010 support. Please refer to the . Standardize the SONAME and SOVERSION for WasmEdge C API The name of the library is changed to `libwasmedge.so`, `libwasmedge.dyld`, and `wasmedge.dll`. Users should change the linker flag from `lwasmedge_c` to `lwasmedge`. The initialized SONAME is set to `libwasmedge.so.0`. The initialized SOVERSION is set to `libwasmedge.so.0.0.0`. Features: Updated CMake options of WasmEdge project. Added `WASMEDGELINKLLVM_STATIC` option to link the LLVM statically into WasmEdge shared library or tools. 
Removed the `WASMEDGEBUILDSTATICTOOLS` option and replaced by the `WASMEDGELINKTOOLSSTATIC` option. For details, please refer to the . After this version, our releases on MacOS platforms will link the LLVM library statically to reduce the installation of LLVM from Homebrew for the users. Supported the user-defined error code for host functions. The 24-bit size user-defined error code is supported (smaller than 16777216). Developers can use the `WasmEdge_ResultGen()` API to generate the result and return. Exported the `CallingFrame` instead of the memory instance in host functions. New `WasmEdge_CallingFrameContext` struct. Developers can use `WasmEdge_CallingFrameGetModuleInstance()` API to get the module instance of current top frame in calling stack in host function body. Developers can use `WasmEdge_CallingFrameGetMemoryInstance()` API to get the memory instance by index in host function body. To quickly upgrade from the previous WasmEdge versions, developer can use the `WasmEdge_CallingFrameGetMemoryInstance(Context, 0)` to get the same memory instance of the previous host function" }, { "data": "Developers can use `WasmEdge_CallingFrameGetExecutor()` API to get the executor context in host function body. Extended the `WasmEdge_Result` struct to support user defined error codes of host functions. Added `WasmEdgeResultGen()` API to generate the `WasmEdgeResult` struct of user defined error code. Added `WasmEdge_ResultGetCategory()` API to get the error code category. Added a new API for looking up the native handler from a given WASI mapped Fd/Handler. Added `WasmEdge_ModuleInstanceWASIGetNativeHandler` to get the native handler. Added a new API for compiling a given WASM byte array. Added `WasmEdge_CompilerCompileFromBuffer` to compile from buffer. Added `httpsreq` plugin on Linux platforms. Fixed issues: Fixed the binary format loading. Fixed the error of immediate loading of const instructions in debug mode. Updated the `memarg` of memory instructions for the multiple memories proposal changes. Fixed the AOT issues. Fixed the missed mask of shift operands. Fixed the fallback case of vector instructions if the `SSE4.1` is not supported on the x86_64 platforms or the `NEON` is not supported on the aarch64 platforms. Fixed the `sdk_version` of `lld` warning on MacOS with LLVM 14. Fixed the unexpected error message when execution. Refined the terminated case to prevent from printing the unexpected error message. Refined the symbols of output WasmEdge shared libraries. Removed the weak symbol of WasmEdge plugins. Hide the `lld` symbols of WasmEdge shared library. Fixed the release packaging. Fixed the lost of statically linking LLVM into WasmEdge shared library. Fixed the lost of files when packaging on Windows. Refactor: Reorganized the CI workflows to reuse the similar jobs. Refactored the enum related headers. Separated the C and C++ enum definition headers. Not to package the C++ related headers. Updated the WASI and plugin host functions for the API change. Known issues: Universal WASM format failed on MacOS platforms. In current status, the universal WASM format output of the AOT compiler with the `O1` or upper optimizations on MacOS platforms will cause bus error when execution. We are trying to fix this issue. For working around, please use the shared library format output of the AOT mode, or set the compiler optimization level to `O0` in WasmEdge C API. Developers can specify the extension name as `.dylib` on MacOS for the shared library format output when using `wasmedgec` tool. 
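To make the calling-frame change above concrete, here is a minimal sketch of a host function written against the new shape. The `HostReadByte` name, its i32-offset-in and i32-byte-out contract, and the `0x12` user-level error code are illustrative assumptions; the calling-frame and `WasmEdge_ResultGen()` calls follow the API names listed above, with signatures as found in recent headers.

```c
#include <wasmedge/wasmedge.h>

/* A host function under the new interface: the second parameter is the
   calling frame instead of a memory instance. The error code 0x12 is an
   arbitrary example and must stay below 0x01000000 (24 bits). */
WasmEdge_Result HostReadByte(void *Data,
                             const WasmEdge_CallingFrameContext *CallFrame,
                             const WasmEdge_Value *In, WasmEdge_Value *Out) {
  /* Equivalent of the old "memory instance" parameter: memory index 0 of
     the caller's module. */
  WasmEdge_MemoryInstanceContext *Mem =
      WasmEdge_CallingFrameGetMemoryInstance(CallFrame, 0);
  if (Mem == NULL) {
    return WasmEdge_ResultGen(WasmEdge_ErrCategory_UserLevelError, 0x12);
  }
  uint32_t Offset = (uint32_t)WasmEdge_ValueGetI32(In[0]);
  uint8_t Byte = 0;
  WasmEdge_Result Res = WasmEdge_MemoryInstanceGetData(Mem, &Byte, Offset, 1);
  if (!WasmEdge_ResultOK(Res)) {
    return Res;
  }
  Out[0] = WasmEdge_ValueGenI32((int32_t)Byte);
  return WasmEdge_Result_Success;
}
```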
WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Plug-in linking on MacOS platforms. The plug-in on MacOS platforms will cause symbol not found when dynamic linking. We are trying to fix this issue. For working around, please implement the host modules instead of plug-ins. Documentations: Updated the . Updated the for the breaking change. For upgrading from `0.10.1` to `0.11.0`, please refer to . For the old API of `0.10.1`, please refer to . Tests: Updated the spec tests to the date `20220712`. Updated the test suite of the multiple memories proposal. Updated the plugin tests for the host function API breaking change. Thank all the contributors that made this release possible! Cheng-En Lee, Chih-Hsuan Yen, Galden, GreyBalloonYU, HeZean, Michael Yuan, Shen-Ta Hsieh, Xin Liu, Yi Huang, Yi-Ying He, Zhenghao Lu, Zhou Zhou, dm4, hydai If you want to build from source, please use WasmEdge-0.11.0-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: Supported WASI-NN plug-in with OpenVINO backend on Ubuntu 20.04 x86_64. Users can refer to the for the information. For building with enabling WASI-NN with OpenVINO backend, please add the `-DWASMEDGEPLUGINWASINNBACKEND=\"OpenVINO\"` in `cmake`. Supported WASI-crypto plug-in on Ubuntu 20.04 x8664, manylinux2014 x8664, and manylinux2014 aarch64. Users can refer to the for the information. For building with enabling WASI-crypto with OpenSSL 1.1, please add the `-DWASMEDGEPLUGINWASI_CRYPTO=ON` in `cmake`. Added the static tool building" }, { "data": "By default, WasmEdge tools will depend on the WasmEdge shared library. Developers can add the `-DWASMEDGEBUILDSTATICLIB=On` and `-DWASMEDGEBUILDSTATICTOOLS=On` to build the stand-alone WasmEdge CLI tools. Exported the components of `WasmEdge_VMContext` in WasmEdge C API. Added the `WasmEdgeVMGetLoaderContext` API for retrieving the `WasmEdgeLoaderContext` in VM. Added the `WasmEdgeVMGetValidatorContext` API for retrieving the `WasmEdgeValidatorContext` in VM. Added the `WasmEdgeVMGetExecutorContext` API for retrieving the `WasmEdgeExecutorContext` in VM. Added the API for CLI tools. Developers can use the `WasmEdgeDriverCompiler` API to trigger the WasmEdge AOT compiler tool. Developers can use the `WasmEdgeDriverTool` API to trigger the WasmEdge runtime tool. Supported the WASM `threads` proposal. Added the `WasmEdgeProposalThreads` for the configuration in WasmEdge C API. Users can use the `--enable-threads` to enable the proposal in `wasmedge` and `wasmedgec` tools. Supported LLVM 14 on MacOS. Used the new `macho` in lld on LLVM-14 envronment. Bumpped IWYU to 0.18 to be compatible with LLVM 14 on MacOS. Bumpped the MacOS x86_64 build to MacOS 11. Fixed issues: Fixed the universal WASM format failed on MacOS platforms. Developers can specify the extension name as `.wasm` on MacOS as the universal WASM format output of the AOT compiler to enable the AOT mode. Fixed the WasmEdge C API static library on MacOS with LLVM 14. The WasmEdge C API static library is in experimental and not guaranteed. The shared library is recommended. Reduced the branch miss when instantiating AOT-compiled WASM. Refactor: Moved the code of WasmEdge CLI tools into `WasmEdge::Driver`. Moved the plugin tests into the `test/plugins` folder. Known issues: WasmEdge CLI failed on Windows 10 issue. Please refer to if the `msvcp140.dll is missing` occurs. Plug-in linking on MacOS platforms. 
The plug-in on MacOS platforms will cause symbol not found when dynamic linking. We are trying to fix this issue. For working around, please implement the host modules instead of plug-ins. Documentations: Added the . Tests: Added the spec tests for the `threads` proposal. Added the WASI-NN unit tests. Thank all the contributors that made this release possible! Abhinandan Udupa, Chris Ho, Faidon Liambotis, Frank Lin, Jianbai Ye, Kevin O'Neal, LFsWang, Lokesh Mandvekar, Michael Yuan, O3Ol, RichardAH, Shen-Ta Hsieh, Shreyas Atre, Sylveon, Tricster, William Wen, , Xin Liu, Yi Huang, Yi-Ying He, Yixing Jia, Yukang, abhinandanudupa, alabulei1, dm4, eat4toast, eee4017, hydai, sonder-joker, spacewander, swartz-k, yale If you want to build from source, please use WasmEdge-0.10.1-src.tar.gz instead of the zip or tarball provided by GitHub directly. Breaking changes: WasmEdge C API changes. Merged the `WasmEdgeImportObjectContext` into the `WasmEdgeModuleInstanceContext`. `WasmEdgeImportObjectCreate()` is changed to `WasmEdgeModuleInstanceCreate()`. `WasmEdgeImportObjectDelete()` is changed to `WasmEdgeModuleInstanceDelete()`. `WasmEdgeImportObjectAddFunction()` is changed to `WasmEdgeModuleInstanceAddFunction()`. `WasmEdgeImportObjectAddTable()` is changed to `WasmEdgeModuleInstanceAddTable()`. `WasmEdgeImportObjectAddMemory()` is changed to `WasmEdgeModuleInstanceAddMemory()`. `WasmEdgeImportObjectAddGlobal()` is changed to `WasmEdgeModuleInstanceAddGlobal()`. `WasmEdgeImportObjectCreateWASI()` is changed to `WasmEdgeModuleInstanceCreateWASI()`. `WasmEdgeImportObjectCreateWasmEdgeProcess()` is changed to `WasmEdgeModuleInstanceCreateWasmEdgeProcess()`. `WasmEdgeImportObjectInitWASI()` is changed to `WasmEdgeModuleInstanceInitWASI()`. `WasmEdgeImportObjectInitWasmEdgeProcess()` is changed to `WasmEdgeModuleInstanceInitWasmEdgeProcess()`. Used the pointer to `WasmEdge_FunctionInstanceContext` instead of the index in the `FuncRef` value type. `WasmEdge_ValueGenFuncRef()` is changed to use the `const WasmEdge_FunctionInstanceContext ` as it's argument. `WasmEdge_ValueGetFuncRef()` is changed to return the `const WasmEdge_FunctionInstanceContext `. Moved the functions of `WasmEdgeStoreContext` to the `WasmEdgeModuleInstanceContext`. `WasmEdgeStoreListFunctionLength()` and `WasmEdgeStoreListFunctionRegisteredLength()` is replaced by `WasmEdge_ModuleInstanceListFunctionLength()`. `WasmEdgeStoreListTableLength()` and `WasmEdgeStoreListTableRegisteredLength()` is replaced by `WasmEdge_ModuleInstanceListTableLength()`. `WasmEdgeStoreListMemoryLength()` and `WasmEdgeStoreListMemoryRegisteredLength()` is replaced by `WasmEdge_ModuleInstanceListMemoryLength()`. `WasmEdgeStoreListGlobalLength()` and `WasmEdgeStoreListGlobalRegisteredLength()` is replaced by `WasmEdge_ModuleInstanceListGlobalLength()`. `WasmEdgeStoreListFunction()` and `WasmEdgeStoreListFunctionRegistered()` is replaced by `WasmEdge_ModuleInstanceListFunction()`. `WasmEdgeStoreListTable()` and `WasmEdgeStoreListTableRegistered()` is replaced by `WasmEdge_ModuleInstanceListTable()`. `WasmEdgeStoreListMemory()` and `WasmEdgeStoreListMemoryRegistered()` is replaced by `WasmEdge_ModuleInstanceListMemory()`. `WasmEdgeStoreListGlobal()` and `WasmEdgeStoreListGlobalRegistered()` is replaced by `WasmEdge_ModuleInstanceListGlobal()`. `WasmEdgeStoreFindFunction()` and `WasmEdgeStoreFindFunctionRegistered()` is replaced by `WasmEdge_ModuleInstanceFindFunction()`. 
`WasmEdgeStoreFindTable()` and `WasmEdgeStoreFindTableRegistered()` is replaced by `WasmEdge_ModuleInstanceFindTable()`. `WasmEdgeStoreFindMemory()` and `WasmEdgeStoreFindMemoryRegistered()` is replaced by `WasmEdge_ModuleInstanceFindMemory()`. `WasmEdgeStoreFindGlobal()` and `WasmEdgeStoreFindGlobalRegistered()` is replaced by `WasmEdge_ModuleInstanceFindGlobal()`. Updated the `WasmEdge_VMContext` APIs. Added the `WasmEdge_VMGetActiveModule()`. `WasmEdge_VMGetImportModuleContext()` is changed to return the `WasmEdge_FunctionInstanceContext" }, { "data": "`WasmEdge_VMRegisterModuleFromImport()` is changed to use the `const WasmEdge_ModuleInstanceContext ` as it's argument. For upgrading from `0.9.1` to `0.10.0`, please refer to . Features: Supported LLVM 14. Supported the WASM `tail-call` proposal. Added the `WasmEdgeProposalTailCall` for the configuration in WasmEdge C API. Users can use the `--enable-tail-call` to enable the proposal in `wasmedge` and `wasmedgec` tools. Supported the WASM `extended-const` proposal. Added the `WasmEdgeProposalExtendedConst` for the configuration in WasmEdge C API. Users can use the `--enable-extended-const` to enable the proposal in `wasmedge` and `wasmedgec` tools. Supported thread-safe in `WasmEdgeVMContext`, `WasmEdgeConfigureContext`, `WasmEdgeModuleInstanceContext`, and `WasmEdgeStoreContext` APIs. Supported the gas limit in AOT mode. New supporting of the wasi-socket proposal. Supported `send_to`. Supported `resv_from`. Plugin support Add loadable plugin support. Move `wasmedge_process` to a loadable plugin. Fixed issues: Fixed wasi-socket proposal issues. Fixed wasi-socket on MacOS. Fixed error when calling `poll_oneoff` with the same `fd` twice. Fixed error when calling `fd_close` on socket. Forged zero-terminated string for `::getaddrinfo`. Checked the socket options enumeration for valid value. Fixed the statistics enable/disable routine. Fixed the output format by the file extension name detection on multiple platforms. Known issues: Universal WASM format failed on MacOS platforms. In current status, the universal WASM format output of the AOT compiler on MacOS platforms will cause bus error when execution. We are trying to fix this issue. For working around, please use the shared library format output of the AOT mode. Developers can specify the extension name as `.dylib` on MacOS, `.so` on Linux, and `.dll` on Windows for the shared library format output of the AOT compiler. Refactor: Supported multi-thread execution. Changed the `StackManager` in `Executor` as thread local to support the multi-thread. Used atomic operations for cost measuring. Supported multi-thread timer. Refactored the enumerations. Replaced the `std::unordered_map` of the enumeration strings with `DenseMap` and `SpareMap`. Merged the both C and C++ enumeration definitions into the `enum.inc` file. Updated the `ErrCode` enumeration for the newest spec tests. Refactored the code architecture for supporting `tail-call` proposal. Split the `call_indirect` execution routine in compiler into AOT and interpreter path. Updated the pop frame mechanism in the `StackManager`. Updated the enter function mechanism. Refined the file manager in `Loader`. Supported the offset seeking in file and buffer. Skipped the instructions parsing in AOT mode for better loading performance. Refined the branch mechanism in the `StackManager` for better performance in the interpreter mode. Pre-calculated the stack offset for branch in the validation phase. 
Removed the label stack in the `StackManager` and used the pre-calculated data for branch. Removed the dummy frame mechanism in the `StackManager`. Supplied the pointer-based retrieving mechanism in the `StoreManager` and `ModuleInstance`. Removed the address mechanism for instances in the `StoreManager`. Added the unsafe getter functions for the instances. Refactored the `StoreManager`, `ModuleInstance`, and `Executor`. Used the `ModuleInstance`-based resource management instead of `StoreManager`-based. Moved the ownership of instances from the `StoreManager` into the `ModuleInstance`. Merged the `ImportObject` into the `ModuleInstance`. Invoking functions by `FunctionInstance` rather than the function name in `Executor`. Documentations: Updated the for the breaking change. For upgrading from `0.9.1` to `0.10.0`, please refer to . For the old API of `0.9.1`, please refer to . Updated the for the breaking change. For upgrading from `v0.9.2` to `v0.10.0`, please refer to . For the old API of `v0.9.2`, please refer to . Tests: Updated the spec tests to the date `20220504`. Added the spec tests for the `tail-call` proposal. Added the spec tests for the `extended-const` proposal. Added the mixed invocation tests between interpreter mode and AOT mode WASM functions. Added the thread-safe and multi-thread execution tests. Added wasi-socket tests for `polloneoff`, `sendto`, and" }, { "data": "Thank all the contributors that made this release possible! , Abhinandan Udupa, Ang Lee, Binbin Zhang, Chin Zhi Wei, DarumaDocker, Elon Cheng, FlyingOnion, Hanged Fish, Herschel Wang, JIAN ZHONG, JcJinChen, Jeremy, JessesChou, JieDing, Kodalien, Kunshuai Zhu, LFsWang, LaingKe, MediosZ, Michael Yuan, Nicholas Zhan, , O3Ol, Rui Li, Shen-Ta Hsieh, Shreyas Atre, Sylveon, TheLightRunner, Vaniot, Vinson, , Xin Liu, Yi Huang, YiYing He, YoungLH, abhinandanudupa, border1px, dm4, eat4toast, hydai, jerbmarx, luckyJ-nj, meoww-bot, mydreamer4134, situ2001, tpmccallum, treeplus, wangyuan249, yale, If you want to build from source, please use WasmEdge-0.10.0-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: WASI Added the `sockgetsockopt`, `socksetsockopt`, `sockgetlocaladdr`, `sockgetpeeraddr`, and `sock_getaddrinfo` host functions for the WASI socket proposal. Supported the interruptible execution. Added the `WasmEdge_Async` struct in WasmEdge C API for the asynchronous execution. Added the `WasmEdge_AsyncWait` API for waiting an asynchronous execution. Added the `WasmEdge_AsyncWaitFor` API for waiting an asynchronous execution with timeout. Added the `WasmEdge_AsyncCancel` API for canceling an asynchronous execution. Added the `WasmEdge_AsyncGetReturnsLength` API for waiting and getting the return value length of asynchronous execution. Added the `WasmEdge_AsyncGet` API for waiting and getting the asynchronous execution results. Added the `WasmEdgeAsyncDelete` API for destroying the `WasmEdgeAsync` object. Added the asynchronous mode execution APIs. Added the `WasmEdge_VMAsyncRunWasmFromFile` API for executing WASM from a file asynchronously. Added the `WasmEdge_VMAsyncRunWasmFromBuffer` API for executing WASM from a buffer asynchronously. Added the `WasmEdgeVMAsyncRunWasmFromASTModule` API for executing WASM from an `WasmEdgeASTModuleContext` asynchronously. Added the `WasmEdge_VMAsyncExecute` API for invoking a WASM function asynchronously. Added the `WasmEdge_VMAsyncExecuteRegistered` API for invoking a registered WASM function asynchronously. 
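A minimal sketch of how the asynchronous VM APIs above compose, assuming a `fib.wasm` module that exports `fib(i32) -> i32` (both names are placeholders). The wait timeout is expressed in milliseconds here; confirm the unit and exact signatures against the header of the version you build against.

```c
#include <stdio.h>
#include <wasmedge/wasmedge.h>

int main(void) {
  WasmEdge_VMContext *VM = WasmEdge_VMCreate(NULL, NULL);
  WasmEdge_String Func = WasmEdge_StringCreateByCString("fib");
  WasmEdge_Value Params[1] = {WasmEdge_ValueGenI32(30)};

  /* Kick off the execution asynchronously. */
  WasmEdge_Async *Async =
      WasmEdge_VMAsyncRunWasmFromFile(VM, "fib.wasm", Func, Params, 1);

  /* Wait with a timeout, then collect the results. */
  if (WasmEdge_AsyncWaitFor(Async, 1000 /* milliseconds */)) {
    WasmEdge_Value Returns[1];
    WasmEdge_Result Res = WasmEdge_AsyncGet(Async, Returns, 1);
    if (WasmEdge_ResultOK(Res)) {
      printf("fib(30) = %d\n", WasmEdge_ValueGetI32(Returns[0]));
    }
  } else {
    /* Still running after the timeout: cancel it. */
    WasmEdge_AsyncCancel(Async);
  }

  WasmEdge_AsyncDelete(Async);
  WasmEdge_StringDelete(Func);
  WasmEdge_VMDelete(VM);
  return 0;
}
```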
Added the option for timeout settings of the AOT compiler. Added the `WasmEdge_ConfigureCompilerSetInterruptible` API for setting the interruptibility of the AOT compiler. Added the `WasmEdge_ConfigureCompilerIsInterruptible` API for getting the interruptibility of the AOT compiler. Supported the WASM `multi-memories` proposal. Added the `WasmEdgeProposalMultiMemories` for the configuration in WasmEdge C API. Users can use the `--enable-multi-memory` to enable the proposal in `wasmedge` and `wasmedgec` tools. Enabled the gas limitation of the `wasmedge` CLI. Users can use the `--gas-limit` to assign the limitation of costs. Beautified and colorized the WasmEdge CLI help information. Fixed issues: Fixed the memory leak in function instances. Reduced the memory usage of the instruction class. Fixed the return value of the `fread` and `fwrite` WASI functions on Windows. Refactor: Used `assumingUnreachable` instead of `builtin_unreachable` to help the compiler to generate better codes. Updated the order of the members in the proposal enumeration. Refactored the instruction class for reducing the memory usage. Refactored the `WasmEdge::BlockType` into a struct. Categorized the members of the instruction class into a union. Documentations: Added the documentation. Added the . Updated the . Tests: Handled the tests for the 32-bit platforms. Added the spec tests for the `multi-memories` proposal. Added the test cases for `getaddrinfo` host function. Added the interruptible execution tests. Added the unit tests of async APIs. Misc: Updated the `blake3` library to `1.2.0`. Added the copyright text. Fixed the coding style of the comments. Added the Windows installer release CI. Added the aarch64 Android support based on r23b. Added the Android example for WasmEdge C API. Thank all the contributors that made this release possible! 2021, Antonio Yang, AvengerMoJo, Hanged Fish, Harinath Nampally, KernelErr, Michael Yuan, MileyFu, O3Ol, Saksham Sharma, Shen-Ta Hsieh(BestSteve), Shreyas Atre, SonOfMagic, Stephan Renatus, Sven Pfennig, Vaughn Dice, Xin Liu, Yi, Yi-Ying He, Yukang Chen, ZefengYu, ZhengX, alabulei1, alittlehorse, baiyutang, , hydai, javadoors, majinghe, meoww-bot, pasico, peterbi, villanel, wangshishuo, wangyuan249, wby, wolfishLamb, If you want to build from source, please use WasmEdge-0.9.1-src.tar.gz instead of the zip or tarball provided by GitHub directly. Breaking changes: Turned on the `SIMD` proposal by" }, { "data": "The `WasmEdge_ConfigureContext` will turn on the `SIMD` proposal automatically. Users can use the `--disable-simd` to disable the `SIMD` proposal in `wasmedge` and `wasmedgec`. For better performance, the Statistics module is disabled by default. To enable instruction counting, please use `--enable-instruction-count`. To enable gas measuring, please use `--enable-gas-measuring`. To enable time measuring, please use `--enable-time-measuring`. For the convenience, use `--enable-all-statistics` will enable all available statistics options. `wasmedgec` AOT compiler tool behavior changes. For the output file name with extension `.so`, `wasmedgec` will output the AOT compiled WASM in shared library format. For the output file name with extension `.wasm` or other cases, `wasmedgec` will output the WASM file with adding the AOT compiled binary in custom sections. `wasmedge` runtime will run in AOT mode when it executes the output WASM file. Modulized the API Headers. Moved the API header into the `wasmedge` folder. 
Developers should include the `wasmedge/wasmedge.h` for using the WasmEdge shared library after installation. Moved the enumeration definitions into `enumerrcode.h`, `enumtypes.h`, and `enum_configure.h` in the `wasmedge` folder. Added the `201402L` C++ standard checking if developer includes the headers with a C++ compiler. Adjusted the error code names. Please refer to the definition. Renamed the `Interpreter` into `Executor`. Renamed the `Interpreter` namespace into `Executor`. Moved the headers and sources in the `Interpreter` folder into `Executor` folder. Renamed the `Interpreter` APIs and listed below. WasmEdge C API changes. Updated the host function related APIs. Deleted the data object column in the creation function of `ImportObject` context. Merged the `HostFunctionContext` into `FunctionInstanceContext`. Deleted the `WasmEdgeHostFunctionContext` object. Please use the `WasmEdgeFunctionInstanceContext` object instead. Deleted the `WasmEdgeHostFunctionCreate` function. Please use the `WasmEdgeFunctionInstanceCreate` function instead. Deleted the `WasmEdgeHostFunctionCreateBinding` function. Please use the `WasmEdgeFunctionInstanceCreateBinding` function instead. Deleted the `WasmEdgeHostFunctionDelete` function. Please use the `WasmEdgeFunctionInstanceDelete` function instead. Deleted the `WasmEdgeImportObjectAddHostFunction` function. Please use the `WasmEdgeImportObjectAddFunction` function instead. Added the data object column in the creation function of `FunctionInstance` context. Instead of the unified data object of the host functions in the same import object before, the data objects are independent in every host function now. Added the WASM types contexts. Added the `WasmEdge_TableTypeContext`, which is used for table instances creation. Added the `WasmEdge_MemoryTypeContext`, which is used for memory instances creation. Added the `WasmEdge_GlobalTypeContext`, which is used for global instances creation. Added the member getter functions of the above contexts. Updated the instances creation APIs. Used `WasmEdge_TableTypeContext` for table instances creation. Removed `WasmEdge_TableInstanceGetRefType` API. Developers can use the `WasmEdge_TableInstanceGetTableType` API to get the table type instead. Used `WasmEdge_MemoryTypeContext` for memory instances creation. Added `WasmEdge_MemoryInstanceGetMemoryType` API. Used `WasmEdge_GlobalTypeContext` for global instances creation. Removed `WasmEdgeGlobalInstanceGetValType` and `WasmEdgeGlobalInstanceGetMutability` API. Developers can use the `WasmEdge_GlobalInstanceGetGlobalType` API to get the global type instead. Refactored for the objects' life cycle to reduce copying. Developers should NOT destroy the `WasmEdgeFunctionTypeContext` objects returned from `WasmEdgeVMGetFunctionList`, `WasmEdgeVMGetFunctionType`, and `WasmEdgeVMGetFunctionTypeRegistered` functions. Developers should NOT destroy the `WasmEdgeString` objects returned from `WasmEdgeStoreListFunction`, `WasmEdgeStoreListFunctionRegistered`, `WasmEdgeStoreListTable`, `WasmEdgeStoreListTableRegistered`, `WasmEdgeStoreListMemory`, `WasmEdgeStoreListMemoryRegistered`, `WasmEdgeStoreListGlobal`, `WasmEdgeStoreListGlobalRegistered`, `WasmEdgeStoreListModule`, and `WasmEdge_VMGetFunctionList` functions. Renamed the `Interpreter` related APIs. Replaced `WasmEdgeInterpreterContext` struct with `WasmEdgeExecutorContext` struct. Replaced `WasmEdgeInterpreterCreate` function with `WasmEdgeExecutorCreate` function. 
Replaced `WasmEdgeInterpreterInstantiate` function with `WasmEdgeExecutorInstantiate` function. Replaced `WasmEdgeInterpreterRegisterImport` function with `WasmEdgeExecutorRegisterImport` function. Replaced `WasmEdgeInterpreterRegisterModule` function with `WasmEdgeExecutorRegisterModule` function. Replaced `WasmEdgeInterpreterInvoke` function with `WasmEdgeExecutorInvoke` function. Replaced `WasmEdgeInterpreterInvokeRegistered` function with `WasmEdgeExecutorInvokeRegistered` function. Replaced `WasmEdgeInterpreterDelete` function with `WasmEdgeExecutorDelete` function. Refactored for statistics options Renamed `WasmEdgeConfigureCompilerSetInstructionCounting` to `WasmEdgeConfigureStatisticsSetInstructionCounting`. Renamed `WasmEdgeConfigureCompilerSetCostMeasuring` to `WasmEdgeConfigureStatisticsSetCostMeasuring`. Renamed `WasmEdgeConfigureCompilerSetTimeMeasuring` to `WasmEdgeConfigureStatisticsSetTimeMeasuring`. Renamed `WasmEdgeConfigureCompilerGetInstructionCounting` to `WasmEdgeConfigureStatisticsGetInstructionCounting`. Renamed `WasmEdgeConfigureCompilerGetCostMeasuring` to `WasmEdgeConfigureStatisticsGetCostMeasuring`. Renamed `WasmEdgeConfigureCompilerGetTimeMeasuring` to `WasmEdgeConfigureStatisticsGetTimeMeasuring`. Simplified the WASI creation and initialization APIs. Removed the `Dirs` and `DirLen` parameters in the `WasmEdge_ImportObjectCreateWASI`. Removed the `Dirs` and `DirLen` parameters in the `WasmEdge_ImportObjectInitWASI`. Features: Applied the old WebAssembly proposals options (All turned on by" }, { "data": "Developers can use the `disable-import-export-mut-globals` to disable the Import/Export mutable globals proposal in `wasmedge` and `wasmedgec`. Developers can use the `disable-non-trap-float-to-int` to disable the Non-trapping float-to-int conversions proposal in `wasmedge` and `wasmedgec`. Developers can use the `disable-sign-extension-operators` to disable the Sign-extension operators proposal in `wasmedge` and `wasmedgec`. Developers can use the `disable-multi-value` to disable the Multi-value proposal in `wasmedge` and `wasmedgec`. New WasmEdge C API for listing imports and exports from AST module contexts. Developers can query the `ImportTypeContext` and `ExportTypeContext` from the `ASTModuleContext`. New object `WasmEdge_ImportTypeContext`. New object `WasmEdge_ExportTypeContext`. New AST module context functions to query the import and export types. `WasmEdge_ASTModuleListImportsLength` function can query the imports list length from an AST module context. `WasmEdge_ASTModuleListExportsLength` function can query the exports list length from an AST module context. `WasmEdge_ASTModuleListImports` function can list all import types of an AST module context. `WasmEdge_ASTModuleListExports` function can list all export types of an AST module context. New import type context functions to query data. `WasmEdge_ImportTypeGetExternalType` function can get the external type of an import type context. `WasmEdge_ImportTypeGetModuleName` function can get the import module name. `WasmEdge_ImportTypeGetExternalName` function can get the import external name. `WasmEdge_ImportTypeGetFunctionType` function can get the function type of an import type context. `WasmEdge_ImportTypeGetTableType` function can get the table type of an import type context. `WasmEdge_ImportTypeGetMemoryType` function can get the memory type of an import type context. `WasmEdge_ImportTypeGetGlobalType` function can get the global type of an import type context. 
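A short sketch of the import-listing APIs described above (the export-type functions that follow work the same way). The `app.wasm` path and the fixed-size buffer are assumptions for illustration; the AST module comes from the loader context, and the returned name strings reference the module data, so they are printed directly rather than copied or destroyed here.

```c
#include <stdio.h>
#include <wasmedge/wasmedge.h>

int main(void) {
  WasmEdge_LoaderContext *Loader = WasmEdge_LoaderCreate(NULL);
  WasmEdge_ASTModuleContext *Ast = NULL;

  /* "app.wasm" is a placeholder path. */
  WasmEdge_Result Res = WasmEdge_LoaderParseFromFile(Loader, &Ast, "app.wasm");
  if (WasmEdge_ResultOK(Res)) {
    uint32_t Len = WasmEdge_ASTModuleListImportsLength(Ast);
    const WasmEdge_ImportTypeContext *Imports[32];
    uint32_t Got = WasmEdge_ASTModuleListImports(Ast, Imports, 32);
    for (uint32_t I = 0; I < Got && I < 32; ++I) {
      WasmEdge_String Mod = WasmEdge_ImportTypeGetModuleName(Imports[I]);
      WasmEdge_String Ext = WasmEdge_ImportTypeGetExternalName(Imports[I]);
      printf("import %u/%u: %.*s :: %.*s\n", I + 1, Len,
             (int)Mod.Length, Mod.Buf, (int)Ext.Length, Ext.Buf);
    }
    WasmEdge_ASTModuleDelete(Ast);
  }

  WasmEdge_LoaderDelete(Loader);
  return 0;
}
```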
New export type context functions to query data. `WasmEdge_ExportTypeGetExternalType` function can get the external type of an export type context. `WasmEdge_ExportTypeGetExternalName` function can get the export external name. `WasmEdge_ExportTypeGetFunctionType` function can get the function type of an export type context. `WasmEdge_ExportTypeGetTableType` function can get the table type of an export type context. `WasmEdge_ExportTypeGetMemoryType` function can get the memory type of an export type context. `WasmEdge_ExportTypeGetGlobalType` function can get the global type of an export type context. For more details of the usages of imports and exports, please refer to the . Exported the WasmEdge C API for getting exit code from WASI. `WasmEdge_ImportObjectWASIGetExitCode` function can get the exit code from WASI after execution. Exported the WasmEdge C API for AOT compiler related configurations. `WasmEdge_ConfigureCompilerSetOutputFormat` function can set the AOT compiler output format. `WasmEdge_ConfigureCompilerGetOutputFormat` function can get the AOT compiler output format. `WasmEdge_ConfigureCompilerSetGenericBinary` function can set the option of AOT compiler generic binary output. `WasmEdge_ConfigureCompilerIsGenericBinary` function can get the option of AOT compiler generic binary output. Provided install and uninstall script for installing/uninstalling WasmEdge on linux(amd64 and aarch64) and macos(amd64 and arm64). Supported compiling WebAssembly into a new WebAssembly file with a packed binary section. Supported the automatically pre-open mapping with the path name in WASI. Fixed issues: Refined the WasmEdge C API behaviors. Handle the edge cases of `WasmEdge_String` creation. Fixed the instruction iteration exception in interpreter mode. Forcely added the capacity of instruction vector to prevent from connection of instruction vectors in different function instances. Fixed the loader of AOT mode WASM. Checked the file header instead of file name extension when loading from file. Showed the error message when loading AOT compiled WASM from buffer. For AOT mode, please use the universal WASM binary. Fixed the zero address used in AOT mode in load manager. Fixed the loading failed for the AOT compiled WASM without intrinsics table. Fixed the `VM` creation issue. Added the loss of intrinsics table setting when creating a VM instance. Fixed wasi-socket issues. Support wasi-socket on MacOS. Remove the port parameter from `sock_accept`. Refactor: Refined headers inclusion in all files. Refactor the common headers. Removed the unnecessary `genNullRef()`. Merged the building environment-related definitions into" }, { "data": "Merged the `common/values.h` into `common/types.h`. Separated all enumeration definitions. Refactored the AST nodes. Simplified the AST nodes definitions into header-only classes. Moved the binary loading functions into `loader`. Updated the `validator`, `executor`, `runtime`, `api`, and `vm` for the AST node changes. Refactored the runtime objects. Used `AST::FunctionType`, `AST::TableType`, `AST::MemoryType`, and `AST::GlobalType` for instance creation and member handling. Removed `Runtime::Instance::FType` and used `AST::FunctionType` instead. Added routines to push function instances into import objects. Removed the exported map getter in `StoreManager`. Used the getter from `ModuleInstance` instead. Added the module name mapping in `StoreManager`. Refactored the VM class. 
Returned the reference to function type instead of copying when getting the function list. Returned the vector of return value and value type pair when execution. Updated the include path for rust binding due to the API headers refactoring. Documentations: Updated the `wasmedge` commands in the and Updated the examples in the . Updated the examples in the . Updated the examples in the . Bindings: Move rust crate from root path to `bindings/rust`. Tests: Updated the core test suite to the newest WASM spec. Updated and fixed the value comparison in core tests. Added `ErrInfo` unit tests. Added instruction tests for turning on/off the old proposals. Moved and updated the `AST` unit tests into `loader`. Moved and updated the `Interpreter` tests into `Executor` folder. Added the unit tests for new APIs. Applied the WasmEdge C API in the `ExternRef` tests. Misc: Enabled GitHub CodeSpaces Added `assuming` for `assert` checking to help compiler to generate better codes. Thank all the contributors that made this release possible! 2021, actly, alabulei1, Alex, Antonio Yang, Ashutosh Sharma, Avinal Kumar, blackanger, Chojan Shang, dm4, eee4017, fossabot, hydai, Jayita Pramanik, Kenvi Zhu, luishsu, LuisHsu, MaazKhan711635, Michael Yuan, MileyFu, Nick Hynes, O3Ol, Peter Chang, robnanarivo, Shen-Ta Hsieh, Shreyas Atre, slidoooor, Sylveon, Timothy McCallum, Vikas S Shetty, vincent, Xin Liu, Yi Huang, yiying, YiYing He, Yona, Yukang, If you want to build from source, please use WasmEdge-0.9.0-src.tar.gz instead of the zip or tarball provided by GitHub directly. Features: WASI: Supported WASI on macOS(Intel & M1). Supported WASI on Windows 10. Supported WASI Socket functions on Linux. C API: Supported 32-bit environment. Added the static library target `libwasmedge_c.a` (`OFF` by default). Added the `ErrCode` to C declarations. Added the API about converting `WasmEdge_String` to C string. Added the API to get data pointer from the `WasmEdge_MemoryInstanceContext`. AOT: Added `--generic-binary` to generate generic binaries and disable using host features. Multi platforms: Enabled Ubuntu 20.04 x86\\_64 build. Enabled Ubuntu 21.04 x86\\_64 build. Enabled manylinux2014 aarch64 build. Enabled Ubuntu 21.04 arm32 build. Rust supports: Added the `wasmedge-sys` and `wasmedge-rs` crates. Added the wrapper types to rust. Removed binfmt support. Fixed issues: Ensured every platform defines is defined. Disabled blake3 AVX512 support on old platforms. Avoided vector ternary operator in AOT, which is unsupported by clang on mac. The preopen should be `--dir guestpath:hostpath`. Fixed usused variables error in API libraries when AOT build is disabled. Fixed the WASI function signature error. `wasisnapshotpreview1::pathreadlink` Fixed the signature error with the lost read size output. Added the `Out` comments for parameters with receiving outputs. `wasisnapshotpreview1::pathfilestatset_times` Corrected the time signature to the `u64`. Misc: Changed all CMake global properties to target specified properties. Added namespace to all cmake options. Added the CMake option `WASMEDGEFORCEDISABLE_LTO` to forcibly disable link time optimization (`OFF` by default). WasmEdge project enables LTO by default in Release/RelWithDeb build. If you would like to disable the LTO forcibly, please turn on the `WASMEDGEFORCEDISABLE_LTO` option. Installed `dpkg-dev` in docker images to enable `dpkg-shlibdeps` when creating the deb release. Refactor: Refactored the WASI VFS" }, { "data": "Simplified the memory indexing in validator. 
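A brief sketch of the two C API additions called out in this release's C API list: copying a `WasmEdge_String` into a C buffer and borrowing a raw data pointer from a memory instance. The `DumpGreeting` helper and the way `MemCxt` is obtained are assumptions for illustration; only the two API calls themselves come from the list above.

```c
#include <stdio.h>
#include <wasmedge/wasmedge.h>

/* `MemCxt` would come from a module instance or a host function in real code. */
void DumpGreeting(WasmEdge_MemoryInstanceContext *MemCxt, WasmEdge_String Name) {
  /* WasmEdge_String is not null-terminated, so copy it out first. */
  char NameBuf[64];
  uint32_t Copied = WasmEdge_StringCopy(Name, NameBuf, sizeof(NameBuf) - 1);
  NameBuf[Copied] = '\0';

  /* Borrow a direct pointer to 16 bytes of linear memory at offset 0.
     The pointer is only valid while the memory instance stays alive and
     is not grown. */
  uint8_t *Data = WasmEdge_MemoryInstanceGetPointer(MemCxt, 0, 16);
  if (Data != NULL) {
    printf("%s: first byte of memory is 0x%02x\n", NameBuf, Data[0]);
  }
}
```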
Renamed the file names in interpreter. Replaced the instances when registering host instances with existing names. Documentations: Added the document. Added the document. Fixed the wrong `printf` type in the C API document. Tests: Added wasi-test for testing basic WASI interface Added C API unit tests. Added the `WasmEdge_String` copy tests. Added the `WasmEdge_MemoryInstanceContext` get data pointer tests. Removed unnecessary Wagon and Ethereum tests. Features: Exported new functions in C API to import the `wasmedge_process` module. `WasmEdgeImportObjectCreateWasmEdgeProcess()` can create and initialize the `wasmedgeprocess` import object. `WasmEdgeImportObjectInitWasmEdgeProcess()` can initialize the given `wasmedgeprocess` import object. Exported new AOT compiler configuration setting C APIs. Users can set the options about AOT optimization level, dump IR, and instruction counting and cost measuring in execution after compilation to the AOT compiler through C APIs. Updated error codes according to the . Applied the correct error message when trapping in the loading phase. Implemented the UTF-8 decoding in file manager. Implemented the basic name section parsing in custom sections. Added memory-mapped file helper, `MMap` for Linux. Used `mmap` with `MAP_NORESERVE` for overcommited allocation. Used `MMap` for file loading. Merged `FileMgr` variants into one class. Fixed issues: Applied the UTF-8 decoding. Check the UTF-8 validation in custom sections, export sections, and import sections. Detected the redundant sections in modules. Fixed this issue hence the sections rather than the custom section should be unique. Corrected the logging of data offset in the file while trap occurred in the loading phase. Updated to the correct offset according to the refactored file manager. Refactor: Updated manylinux\\ dockerfiles. Upgraded gcc to `11.1.0`. Upgraded llvm to `11.1.0`. Upgraded boost to `1.76`. Moved environment variables to Dockerfile. Used helper scripts to build. Moved the options of the AOT compiler into the `Configure` class. Refactor the file manager for supporting the `Unexpected end` loading malformed test cases. Added the `setSectionSize` function to specify the reading boundary before the end of the file. Adjusted build scripts. Set job pools for ninja generator. Checked for newer compilers in `std::filesystem`. Adjusted library dependency. Documentations: Updated the document. Renamed the `SSVM` related projects into `WasmEdge`. Tools: Updated the `wasmedgec` AOT compiler tool for API changes of the `Configure`. Tests: Turn on the `assert_malformed` tests for WASM binary in spec tests. Apply the interpreter tests. Apply the AOT tests. Apply the API tests. Updated the API unit tests for the new `Configure` APIs. Updated the AST and loader unit tests. Added test cases of file manager to raise the coverage. Added test cases of every AST node to raise the coverage. Breaking changes: Renamed this project to `WasmEdge` (formerly `ssvm`). The tool `wasmedge` is the WebAssembly runtime (formerly `ssvm`). The tool `wasmedgec` is the WebAssembly AOT compiler (formerly `ssvmc`). Renamed the CMake options. Option `BUILDAOTRUNTIME` (formerly `SSVMDISABLEAOT_RUNTIME` and `OFF` by default), which is `ON` by default, is for enabling the compilation of the ahead-of-Time compiler. Turned on the `reference-types` and `bulk-memory-operations` proposals by default in tools. Users can use the `disable-bulk-memory` to disable the `bulk-memory-operations` proposal in `wasmedge` and `wasmedgec`. 
Users can use the `disable-reference-types` to disable the `reference-types` proposal in `wasmedge` and `wasmedgec`. Features: Added `WasmEdge` C API and shared library. Developers can include the `wasmedge.h` and link the `libwasmedge_c.so` for compiling and running `WASM`. Add CMake option `BUILDSHAREDLIB` to enable compiling the shared library (`ON` by default). The APIs about the ahead-of-time compiler will always return failed if the CMake option `BUILDAOTRUNTIME` is set as `OFF`. Added `common/version.h`: define the package version from `cmake`. Updated `Configure`. Turned on the `reference-types` and `bulk-memory-operations` proposals by" }, { "data": "Supports memory page limitation for limiting the largest available pages in memory instances. Added a function in `Log` to enable the debug logging level. Added global options with subcommands into `PO`. Added an API into `StoreManager` to list the registered module names. Added an API into `TableInstance` to grow table with `ref.null`. Updated `SIMD` implementation with the newest . Supported `AOT` compile cache. Added `blake3` hash calculator to calculate hash for caching files. Added an API into `VM` for loading `WASM` module from `AST::Module`. Fixed issues: Adjusted and fixed cmake issues. Used `CMAKECURRENTSOURCE_DIR` in this project for supporting to be as a submodule. Assigned a default version number (`0.0.0-unreleased`) when getting the version from git describe failed. Fixed `boost` include variable names. Fixed `WASI` `poll_oneoff`. Allow `SIGINT` and `SIGTERM` while waiting for the file descriptor and check `SIGTERM` after `epoll`. Rearranged variables for CPU feature detection in `AOT` compiler. Fixed `Validator` errors. Fixed the error in `br_table` for pushing wrong types into validation stack. Fixed the error in `global_set` for iterating illegal indices. Fixed `Interpreter` errors. Fixed the failed case that not returned the errors except `ErrCode::ExecutionFailed` when invoking the host functions. Not to return success when the `ErrCode::Terminated` occurs. Fixed the unmapping size in the destructor of `MemoryInstance`. Refactor: Merged the `CostTable` class into `Statistics`. Simplified the API for getting and setting cost table. Initialized the costs for every instruction as `1` by default. Merged the `Proposal` and `HostRegistration` configurations into `Configure`. Adjusted the `Proposal` order. Applied the copy of `Configure` in `Loader`, `Validator`, `Interpreter`, and `VM` instead of passing by reference. Refactored the functions in the `StoreManager`. Updated the templates of functions to register instances. Forwarded the parameters to reduce moving. Refactored and used the `std::variant` to save space in `FunctionInstance`. Applied function parameter type checking when invoking a wasm function in `Interpreter`. Set the module instantiation as the anonymous active module in `Interpreter`. Added the `const` quantifier in `get` and `load` data functions of `MemoryInstance`. Documentations: Added document. Added document. Added document. Added document. Updated document for the VM API changes. Updated the document. Added scripts to generate witx documents. Cherry-pick `wasiephemeralsock` APIs from `wasisnapshotpreview1`. Tools: `wasmedge`: WebAssembly runtime (formerly `ssvm`) Turned on the `bulk-memory-operations` and `reference-types` proposals by default. Users can use the `disable-bulk-memory` to disable the `bulk-memory-operations` proposal. 
Users can use the `disable-reference-types` to disable the `reference-types` proposal. Updated for the `vm` API changes. Return the exit code in command mode in forced terminated occurs in `WASI`. `wasmedgec`: WebAssembly AOT compiler (formerly `ssvmc`) Turned on the `bulk-memory-operations` and `reference-types` proposals by default. Users can use the `disable-bulk-memory` to disable the `bulk-memory-operations` proposal when compiling. Users can use the `disable-reference-types` to disable the `reference-types` proposal when compiling. Tests: Added AOT cache tests. Added memory page size limit tests. Updated the WASM spec tests. Updated and check out the newest test suites. Updated the `SIMD` test data. For the `WasmEdge 0.8.0`, we use the `wasm-dev-0.8.0` tag for the core tests and the `SIMD` proposal tests. Adjusted the code architecture for core testing. Combined the duplicated functions into the `SpecTest` class. Split out the `spectest` host function definitions for importing repeatedly. Added `WasmEdge` C API tests. Added unit tests for APIs in the `WasmEdge` shared library. Applied WASM core tests for the `WasmEdge` shared library in both using `Interpreter` APIs and `VM` APIs. Features: Updated the `easylogging++` to v9.97.0. Disabled the file logging. Initial supported the `WASI` host functions for old system (CentOS 6). Updated the `WASI` subscription insterface. Used `pipe` for old `GLIBC`. Added supporting of subcommand in `PO`. Provided options to toggle white lists of `ssvm_process` in `ssvm`" }, { "data": "`--allow-command COMMAND` to add a command into white list in `ssvm_process` host functions. `--allow-command-all` to allow all commands in `ssvm_process` host functions. Added the documentation of . Fixed issues: Fixed the loading issues in `file manager`. Refined performance and added error handling in `readBytes`. Fixed `LEB128` and `ULEB128` decoding and error codes. Fixed security issues of executing commands in `ssvm_process` host functions. Managed a white list for command execution. Refactor: Used vector of instance instead of `std::unique_ptr` in AST nodes. Merged all instruction node classes. Added `OpCode::Else` instruction. Serialized the instruction sequences. Move out the block body of `If`, `Block`, and `Loop` instructions. Applied the proposal configuration checking in the loader phase. Moved the `OpCode` and value type validation of proposal configuration checking to loader phase. Fixed the logging message. Added helper functions to clean codes of logging. Refactored the validator for instruction serialization. Removed the duplicated proposal configuration checking done at the loader phase. Serialized the instruction iterating when validating. Refactored the `Label` in `stack manager`. `Label` will record the `from` instruction iterator that causes entering this label. Removed the `next` instruction getter in `stack manager`. Refactored the instruction iterating mechanism in `interpreter`. Used the `program counter` to iterate and execute the instructions. Merged all switch cases of `OpCode`. Moved out `AOT` related proxy codes and helper functions in `interpreter` to dependent files. Tools: Added `binfmt` supporting for `WASM` interpreter. Please use the tool `tools/ssvm/ssvm-static` with the same arguments as `ssvm`. 
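For readers unfamiliar with the `LEB128`/`ULEB128` decoding mentioned in the fixes above, the sketch below shows the general shape of unsigned LEB128 decoding with basic overflow and truncation checks. It is a generic illustration of the encoding used throughout the WASM binary format, not WasmEdge's file manager implementation.

```c
#include <stdint.h>
#include <stddef.h>

/* Decode an unsigned LEB128 value. Returns the number of bytes consumed,
   or 0 on malformed or oversized input. */
size_t DecodeULEB128(const uint8_t *Buf, size_t Len, uint64_t *Out) {
  uint64_t Value = 0;
  unsigned Shift = 0;
  for (size_t I = 0; I < Len; ++I) {
    uint8_t Byte = Buf[I];
    if (Shift >= 64 || (Shift == 63 && (Byte & 0x7E) != 0)) {
      return 0; /* would overflow a 64-bit result */
    }
    Value |= (uint64_t)(Byte & 0x7F) << Shift;
    if ((Byte & 0x80) == 0) {
      *Out = Value;
      return I + 1; /* last byte has the continuation bit cleared */
    }
    Shift += 7;
  }
  return 0; /* ran out of input before the terminating byte */
}
```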
Provided `manylinux` support for legacy operatoring systems `manylinux1` is based on CentOS 5.9 `manylinux2010` is based on CentOS 6 `manylinux2014` is based on CentOS 7 Tests: Updated file manager tests for `LEB128` and `ULEB128` decoding. Updated AST tests for refactored AST nodes. Updated instruction tests for refactored instruction nodes. Added `PO` tests. Added `ssvm_process` tests. Features: Added a cmake option to toggle the compilation of `ssvm` and `ssvmr` tools. This option is `ON` in default. `cmake -DBUILD_TOOLS=Off` to disable the compilation of `tools/ssvm` folder when building. Applied the proposal. Please refer to the for more details. Provided options to toggle proposals for the compiler and runtime. `--enable-bulk-memory` to enable bulk-memory operations proposal. `--enable-reference-types` to enable reference types proposal. `--enable-simd` to enable SIMD proposal. `--enable-all` to enable all supported proposals. Supported `roundeven` intrinsic in LLVM 11. Fixed issues: Used `std::filesystem::path` for all paths. Interpreter Fixed `call_indirect` table index checking in the validation phase. Removed redundant `reinterpret_cast` in interpreter. AOT compiler Forced unalignment in load and store instructions in AOT. Not to report error in `terminated` case. WASI Updated size of `linkcount` to `u64`. Refactor: Added `uint128_t` into `SSVM::ValVariant`. Added number type `v128`. Added `SSVM::RefVariant` for 64bit-width reference variant. Refactor AOT for better performance. Added code attribute in AOT to speed up normal execution. Rewrote element-wise boolean operators. Used vector type in stack and function for better code generation. Rewrite `trunc` instructions for readability. Tools: Deprecated `ssvmr` tool, since the functionalities are the same as `ssvm` tool. Please use the tool `tools/ssvm/ssvm` with the same arguments. Combined the tools folder. All tools in `tools/ssvm-aot` are moved into `tools/ssvm` now. Tests: Added Wasi test cases. Added test cases for `args` functions. Added test cases for `environ` functions. Added test cases for `clock` functions. Added test cases for `procexit` and `randomget`. Updated test suites and categorized them into proposals. Added SIMD proposal test suite. * Features: Applied the proposal for AOT. Support LLVM 11. Refactor: Refactor symbols in AOT. Removed the symbols in instances. Added intrinsics table for dynamic linking when running a compiled wasm. Merged the program counter into `stack manager`. Added back the `OpCode::End` instruction. Refactored the validator workflow of checking" }, { "data": "Used `std::bitset` for VM configuration. Used `std::array` for cost table storage. Combined `include/support` into `include/common`. Merged `support/castng.h` into `common/types.h`. Merged `Measurement` into `Statistics`. Renamed `support/time.h` into `common/timer.h`. Used standard steady clock instead. Renamed `common/ast.h` into `common/astdef.h`. Moved `common/ast/` to `ast/`. Removed the `SSVM::Support` namespace. Tests: Applied new test suite of the reference types and bulk memory operation proposal for AOT. Features: Applied the proposal. Added the definition of reference types. Added helper functions for function index to `funcref` conversions. Added helper functions for reference to `externref` conversions. Added the following new instructions. 
Reference instructions: ref.null ref.is_null ref.func Table instructions: table.get table.set table.init elem.drop table.copy table.grow table.size table.fill Memory instructions: memory.init data.drop memory.copy memory.fill Parametric instructions: select t Updated implementation of the following instructions. call_indirect select Applied the new definition of `data count section`, `data segment`, and `element segment`. Applied validation for `data segment` and `element segment`. Added the `data instance` and `element instance`. Applied the new instantiation flow. Refactor: Completed the enumeration value checking in the loading phase. Updated the value type definition. `ValType` is updated to include `NumType` and `RefType`. `NumType` is updated to include `i32`, `i64`, `f32`, and `f64`. `RefType` is updated to include `funcref` and `externref`, which replaced the `ElemType`. Updated error codes according to the test suite for the reference types proposal. Extended validation context for recording `datas`, `elements`, and `refs`. Updated runtime structures. Fixed minimum pages definition in `memory instance`. Applied new definitions of `table instance`. Extended `module instance` for placing `data instance` and `element instance`. Extended `store` for owning `data instance` and `element instance`. Updated template typename aliasing in `interpreter`. Tests: Applied new test suite for the proposal. * Supported `funcref` and `externref` types parameters in core tests. Added `externref` tests for testing object binding and samples. Please see the for detail. Features: Added gas and instruction count measurement in AOT. Features: Supported loop parameters in AOT. Added optimization level settings in the AOT compiler. Refactor: Applied page based allocation in `memory instance`, instead of preserving 4G at once. Fixed Issues: Fixed error marking stdin, stdout, and stderr file descriptor as pre-opened when initializing WASI environment. Fixed `ssvm_process` error handling when execution commands. Print error message when command not found or permission denied. Fixed casting of return codes. Tests: Split the core test to helper class for supporting AOT core tests in the future. This is a bug-fix release for the ssvm_process component. Fixed Issues: Handle the large size writing to pipe in `ssvm_process`. Features: Add option for dumping LLVM IR in `ssvmc`. Add `SSVM_Process` configuration. VM with this option will import `ssvm_process` host modules. `ssvm_process` host functions are SSVM extension for executing commands. This host module is to support wasm compiled from rust with . Turn on `SSVM_Process` configuration in both `ssvmr` and `ssvm`. Refactor: Apply `mprotect` memory boundary checking in `memory instance`. Fixed Issues: Prevent undefined behavior on shift operations in interpreter and file manager. Features: Support WebAssembly reactor mode in both `ssvmr` and `ssvm`. Refactor: Use `vector` instead of `deque` in `Validator`. Fixed Issues: Fixed cost table to support 2-byte instructions. Resolved warnings about signed and unsigned comparing. Fixed printing error about hex strings in error messages. Corrected memory boundary logging in error messages. Ignore `SIGINT` when `ssvm` is forced interrupted. Tests: Add ssvm-aot tests. Tools: Updated `ssvm` interpreter. `ssvm` provides interpreter mode of executing wasm. The usage of `ssvm` is the same as `ssvmr`. Added `STATIC_BUILD` mode for linking std::filesystem statically. This is a bug-fix release for the warnings. 
Fixed Issues: Resolved warnings with compilation flag `-Wall`. Add `-Wall` flag in CMakeFile. Refactor: Refactored instruction classes for supporting 2-byte" }, { "data": "Refined corresponding switch cases in validator, interpreter, and AOT. This is a bug-fix release for the wasi component. Fixed Issues: Change the fd number remap mechanism from static offset to dynamic map. Features: New target support: Add aarch64 target support for both ssvm-interpreter and ssvm-aot tools. Wasm spec 1.1 support: Implement `multi-value return` proposal. Implement `signed extension` and `saturated convert` instructions. i32.extend8_s i32.extend16_s i64.extend8_s i64.extend16_s i64.extend32_s i32.truncsatf32_s i32.truncsatf32_u i32.truncsatf64_s i32.truncsatf64_u i64.truncsatf32_s i64.truncsatf32_u i64.truncsatf64_s i64.truncsatf64_u Wasm spec test suites support: Add toolkit for integrating wasm spec test suites. Enable `assert_invalid` tests Wasi support: Enable environ variables support: add `--env` option for environment variables. allow developers to append more environment variables from a given env list, e.g. `PATH=/usr/bin`, `SHELL=ZSH`. Enable preopens support: add `--dir` option for preopens directories. allow developers to append more preopens directories from a given dir list, e.g. `/sandbox:/real/path`, `/sandbox2:/real/path2`. New Statistics API: With statistics class, developers can get the following information after each execution: Total execution time in `us`. (= `Wasm instruction execution time` + `Host function execution time`) Wasm instruction execution time in `us`. Host function execution time in `us`. A host function can be a evmc function like `evmc::storageget`, a wasi function like `randomget`, or any customized host function. Instruction count. (Total executed instructions in the previous round.) Total gas cost. (Execution cost by applying ethereum-flavored wasm cost table.) Instruction per second. Validator: Support Wasm 1.1 instructions validation. Support blocktype check which is used in multi-value return proposal. Logging system: Support 2-byte instructions. Refactor: Remove redundant std::move in return statements. Fixed Issues: Fix std::filesystem link issue in ssvm-aot tool. Fix `-Wreorder` warnings in errinfo.h Fix several implementation errors in wasi functions. Tools: CI: Update base image from Ubuntu 18.04 to Ubuntu 20.04 Features: Error Logging System Add information structures to print information when an error occurs. Apply error logging in every phase. Refactor: Internal tuple span mechanism Apply C++20 `span` features instead of `std::vector &`. Internal string passing mechanism Apply C++17 `std::string_view` for passing strings. Move enumeration definitions Add string mapping of types, instructions, and AST nodes. Move enumerations to SSVM top scope. Memory instance passing in host functions Pass pointer instead of reference of memory instance to allow `nullptr`. Fixed Issues: Instantiation Phase Fixed boundary checking bugs when initializing data sections. Function invocation Add dummy frame when invoking function from VM. Features: Building System Add CMake option `SSVMDISABLEAOT_RUNTIME` to disable building ahead of time compilation mode. Wasm AST Add support of multiple partitions of sections in wasm module. AOT Add SSVM-AOT tools. Tools: SSVM-AOT Enable to compile and run separately. Enable to run compiled module and normal module with the interpreter. Refactor: Internal tuple span mechanism Apply C++20 `span` features in host functions. 
Internal error handling mechanism Apply non-exception version of `expected`. Refine CMake files Update file copying macro in `CMakeFile` to support recursively copying. Refine include paths and dependencies in every static library. Modularize static libraries to be included as submodules easier. Interpreter Use function address in `Store` for invoking instead of the exported function name. Support invocation of a host function. Host functions Return `Expect` instead of `ErrCode` in host functions. Return function return values in `Expect` class rather than in function parameter. New VM APIs Add routine to invoke a function of registered and named module in `Store`. Removed old `executor` and use `interpreter` instead. Renamed `ExpVM` to `VM` and removed the old one. Apply new `VM` to all tools. AOT Integrated into new VM API and HostFunctions Generate minimum machine code for `nearestint` instructions. Fixed Issues: Loader Add checking Wasm header and version when" }, { "data": "Validation Fix `export section` checking to support `\"\"` function name. Fix type transforming when function invocation and return. Runtime Data Structure Fix the wrong table resizing when initialization in `table instance`. Interpreter Instantiation Fix instantiation steps of `data` and `element sections`. Check `memory` and `table instances` boundary according to Wasm spec. Not to replace data in `memory` and `table instances` until all checkings were done. Engine Fix wrong arity assignment in `loop` instruction. Fix wrong answer issue in `trunc` and `clz` instructions. Fix logic of `div` instruction in both integer and floating-point inputs. Fix wrong handling of `NaN` operand in `min` and `max` instructions. Add dummy frame before function invocation according to Wasm spec. Add memory boundary checking when loading value in `memory` instructions. AOT Fix wrong handling of the minimum operand in `mod` instructions. Fix wrong handling of `NaN` operand in `min` and `max` instructions. Tests: Remove `ssvm-evmc` tests. (Experimental) Add unit tests for C++ `span` feature. Deprecated: SSVM-Proxy is removed. SSVM-EVMC is removed. is separated from this project as an independent repository. SSVM 0.5.1 is a bug-fix release from 0.5.0. Issues: Set correct reset timing of the interpreter. Fix data copying in table instance in the instantiation phase. Fix label popping in stack manager. Features: Ethereum environment interface Implemented all EEI functions. For more details, please refer to Validation Completed validations for wasm sections. Completed checkings in const expressions. Runtime Wasm module registering WASM modules can be registered into `Store` for importing. Host modules, which may contain host functions and `global`s, can be registered into `Store`. (Experimental) New VM APIs New VM is refactoring from legacys VM and provides a rapidly running process for WASM. Export `Store` for external access. Node.js addon Integrate SSVM with Node.js Addon API. is separated from this project as an independent repository. Refactor: Code structure layout Create `common` namespace for cross-component data structures and type definitions. Extract AST structures from ast to `common`. Extract duplicate enumerations to `common`. Collects all error code classes into `common`. Internal error handling mechanism Apply C++ p0323r9 `expected` features Add several helper functions for wrapping return values with error code. Wasm loader Simplify workflow. Take a wasm input and return an `AST` object directly. 
Wasm validator Simplify workflow. Take an `AST` object and return the results. Rename `validator/vm` to `formchecker`. Refine runtime data structure Extract `instance`s, `host function`s, `stack manager`, and `store manager` classes to `runtime` folder. Extract `frame`, `label`, and `value` entry classes into `stack manager`. Delete redundant checks in `stack manager`. All of these checks are verified in the validation stage. Add `ImportObj` class for handling the host modules registration. Interpreter Create `interpreter` namespace. Extract `executor` class to `interpreter`. Add instantiation methods for registering host modules. Host functions Create `host` namespace. Extract `EEI`, `Wasi-core`, and `ONNC` host functions to `host`. Make host functions construction in host modules. Extract `host environment`s from `environment manager` to respective `host module`s. Refactoring from legacy VM. Simplify workflow. Provide two approaches for invoking a wasm function. All-in-one way: Calling `runWasmFile` can instantiate and invoke a wasm function directly. Step-by-step way: Calling `loadWasm`, `validate`, `instantiate`, `execute` sequentially can make developers control the workflow manually. External access APIs Access `export`ed wasm functions. Export `Store`. Export measurement data class including instruction counter, timer, and cost meter. Provide registration API for wasm modules and host modules. Extract `host environment`s of `EEI` and `Wasi-core` into respective `host module`s. Apply experimental VM to `ssvm-proxy` and `ssvm-evmc` tools. Tools: Remove unused ssvm-evm `ssvm-evm` is replaced by `ssvm-evmc`. (Experimental) Add sub-project `ssvm-aot` `ssvm-aot` provides ahead-of-time(AOT) compilation mechanism for general wasm applications. Tests: Remove redundant `ssvm-evm` tests. (Experimental) Add integration tests for" }, { "data": "(Experimental) Add unit tests for C++ `expected` feature. Move `AST` tests to the test top folder. Fixed issues: Ethereum Environment Interface Fix function signatures. Return `fail` instead of `revert` when the execution state is `out of gas`. Handle memory edge case when loading and storing from memory instance. Add missing check for evmc flags. Set running code to evmc environment. Complete import matching when instantiation in the interpreter. Fix lost of validation when importing `global`s. Features: Ethereum environment interface implementation Add EVMC library. * Update gas costs of Ewasm functions. Refactor: Host functions: Use the template to generate wasm function type of host function body. Move function module name and function name to host function class. Tools: Sub-project EVM with evmc SSVM-EVMC integrates EVMC and Ethereum Environment Interface(EEI). SSVM-EVMC is a shared library for EVMC-compatible clients. Tests: ERC20 contracts for SSVM-EVMC Create an example VM for testing. Test the following functionalities of ERC20 contracts: Deploy ERC20 contract Check balance Check total supply Transfer Approve Check allowance Fixed issues: Handle empty length of memory in `vm_snapshot`. Correct error message when execution failed in SSVM proxy mode. Fixed issues: Change the naming style of JSON format in SSVM proxy mode Use snake case for the keys of JSON files instead Change the arguments and return value formats. Add `argumenttypes` and `returntypes` in input JSON format. Expand home directory path Accept ~ in the file path Features: WebAssembly Validation Implement the Wasm Validation mechanism. SSVM will validate wasm modules before execution. 
Snapshot and restore execution state SSVM provides restore mechanism from the previous execution state. SSVM provides a snapshot mechanism to dump the current execution state. * Initialize and set up SSVM via input JSON format. Retrieve execution results via output JSON format. Tools: Sub-project RPC service proxy mode SSVM-PROXY is a component of . SSVM-PROXY can archive current execution states and serialize these data into output JSON format. SSVM-PROXY can restore previous program states from input JSON format. Features: Native Cost Metering SSVM provides CostTab for each instruction including Wasm, Wasi, Ewasm. With this feature, users can set the cost limit for measuring the execution cost. Built-in performance timer TimeRecord collects execution time for the performance analysis. TimeRecord supports multiple timers. SSVM also provides Wasi timer API for developers to customize TimeRecord. Multiple Virtual Machine Environment Wasm mode: Support general Wasm program. Wasi mode: In addition to Wasm mode, this mode contains basic Wasi functions like print. QITC mode: In addition to Wasi mode, this mode is designed for ONNC runtime to execute AI models by leveraging Qualcomm Hexagon SDK. Ewasm mode: In addition to Wasm mode, this mode is designed for Ethereum flavor WebAssembly. Start functions enhancement Support start function assignment. This makes users invoke an exported function with a given function name. Support start function arguments and return value. This makes users can insert arguments and retrieve result after execution. Simple statistics output Dump total execution time and instruction per second for benchmarking. Print used gas costs for Ewasm mode. Print storage and return values. Tools: Sub-project Qualcomm Innovate in Taiwan Challenge(a.k.a QITC) 2019 SSVM-QITC enables AI model execution by integrating runtime and Qualcomm Hexagon SDK. With this tool, users can run AI model inference within a WebAssembly Virtual Machine. Sub-project Ethereum SSVM-EVM integrates the Ethereum Environment Interface(EEI) as a WebAssembly extension. With this tool, users can run blockchain applications, which are compiled into Ewasm bytecodes. Sub-project General Wasi Support SSVM tool provides basic Wasi functions support, such as print function. Features: Lexer: Support full wasm bytecode format AST: Be able to load a wasm module Instantiate: Support wasm module" } ]
{ "category": "Runtime", "file_name": "Changelog.md", "project_name": "WasmEdge Runtime", "subcategory": "Container Runtime" }
[ { "data": "As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We are committed to making participation in this project a harassment-free experience for everyone, regardless of level of experience, gender, gender identity and expression, sexual orientation, disability, personal appearance, body size, race, ethnicity, age, religion, or nationality. Examples of unacceptable behavior by participants include: The use of sexualized language or imagery Personal attacks Trolling or insulting/derogatory comments Public or private harassment Publishing others' private information, such as physical or electronic addresses, without explicit permission Other unethical or unprofessional conduct. Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct. By adopting this Code of Conduct, project maintainers commit themselves to fairly and consistently applying these principles to every aspect of managing this project. Project maintainers who do not follow or enforce the Code of Conduct may be permanently removed from the project team. This code of conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting a project maintainer listed in the file. This Code of Conduct is adapted from the Contributor Covenant (http://contributor-covenant.org), version 1.2.0, available at http://contributor-covenant.org/version/1/2/0/" } ]
{ "category": "Runtime", "file_name": "CODE_OF_CONDUCT.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "(cluster-manage)= After your cluster is formed, use to see a list of its members and their status: ```{terminal} :input: incus cluster list :scroll: ++-++--+-+-+--+-+ | NAME | URL | ROLES | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE | MESSAGE | ++-++--+-+-+--+-+ | server1 | https://192.0.2.101:8443 | database-leader | x86_64 | default | | ONLINE | Fully operational | | | | database | | | | | | ++-++--+-+-+--+-+ | server2 | https://192.0.2.102:8443 | database-standby | aarch64 | default | | ONLINE | Fully operational | ++-++--+-+-+--+-+ | server3 | https://192.0.2.103:8443 | database-standby | aarch64 | default | | ONLINE | Fully operational | ++-++--+-+-+--+-+ ``` To see more detailed information about an individual cluster member, run the following command: incus cluster show <member_name> To see state and usage information for a cluster member, run the following command: incus cluster info <member_name> To configure your cluster, use . For example: incus config set cluster.max_voters 5 Keep in mind that some {ref}`server configuration options <server>` are global and others are local. You can configure the global options on any cluster member, and the changes are propagated to the other cluster members through the distributed database. The local options are set only on the server where you configure them (or alternatively on the server that you target with `--target`). In addition to the server configuration, there are a few cluster configurations that are specific to each cluster member. See {ref}`cluster-member-config` for all available configurations. To set these configuration options, use or . For example: incus cluster set server1 scheduler.instance manual To add or remove a {ref}`member role <clustering-member-roles>` for a cluster member, use the command. For example: incus cluster role add server1 event-hub ```{note} You can add or remove only those roles that are not assigned automatically by Incus. ``` To edit all properties of a cluster member, including the member-specific configuration, the member roles, the failure domain and the cluster groups, use the command. (cluster-evacuate)= There are scenarios where you might need to empty a given cluster member of all its instances (for example, for routine maintenance like applying system updates that require a reboot, or to perform hardware changes). To do so, use the command. This command migrates all instances on the given server, moving them to other cluster members. The evacuated cluster member is then transitioned to an \"evacuated\" state, which prevents the creation of any instances on it. You can control how each instance is moved through the {config:option}`instance-miscellaneous:cluster.evacuate` instance configuration key. Instances are shut down cleanly, respecting the `boot.hostshutdowntimeout` configuration key. When the evacuated server is available again, use the command to move the server back into a normal running state. This command also moves the evacuated instances back from the servers that were temporarily holding them. (cluster-automatic-evacuation)= If you set the" }, { "data": "configuration to a non-zero value, instances are automatically evacuated if a cluster member goes offline. When the evacuated server is available again, you must manually restore it. 
(cluster-manage-delete-members)= To cleanly delete a member from the cluster, use the following command: incus cluster remove <member_name> You can only cleanly delete members that are online and that don't have any instances located on them. If a cluster member goes permanently offline, you can force-remove it from the cluster. Make sure to do so as soon as you discover that you cannot recover the member. If you keep an offline member in your cluster, you might encounter issues when upgrading your cluster to a newer version. To force-remove a cluster member, enter the following command on one of the cluster members that is still online: incus cluster remove --force <member_name> ```{caution} Force-removing a cluster member will leave the member's database in an inconsistent state (for example, the storage pool on the member will not be removed). As a result, it will not be possible to re-initialize Incus later, and the server must be fully reinstalled. ``` To upgrade a cluster, you must upgrade all of its members. All members must be upgraded to the same version of Incus. ```{caution} Do not attempt to upgrade your cluster if any of its members are offline. Offline members cannot be upgraded, and your cluster will end up in a blocked state. ``` To upgrade a single member, simply upgrade the Incus package on the host and restart the Incus daemon. If the new version of the daemon has database schema or API changes, the upgraded member might transition into a \"blocked\" state. In this case, the member does not serve any Incus API requests (which means that `incus` commands don't work on that member anymore), but any running instances will continue to run. This happens if there are other cluster members that have not been upgraded and are therefore running an older version. Run on a cluster member that is not blocked to see if any members are blocked. As you proceed upgrading the rest of the cluster members, they will all transition to the \"blocked\" state. When you upgrade the last member, the blocked members will notice that all servers are now up-to-date, and the blocked members become operational again. In an Incus cluster, the API on all servers responds with the same shared certificate, which is usually a standard self-signed certificate with an expiry set to ten years. The certificate is stored at `/var/lib/incus/cluster.crt` and is the same on all cluster members. You can replace the standard certificate with another one, for example, a valid certificate obtained through ACME services (see {ref}`authentication-server-certificate` for more information). To do so, use the command. This command replaces the certificate on all servers in your cluster." } ]
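For example, rotating the shared certificate could look like the following. This is only a sketch: the exact `update-certificate` arguments are an assumption based on the description above, so check `incus cluster --help` on your version before relying on it.

```
# Assumes the replacement certificate and key have been written to these files
incus cluster update-certificate cluster.crt cluster.key
```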
{ "category": "Runtime", "file_name": "cluster_manage.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "MMDS consists of three major logical components: the backend, the data store, and the minimalist HTTP/TCP/IPv4 stack (named Dumbo). They all exist within the Firecracker process, and outside the KVM boundary; the first is a part of the API server, the data store is a global entity for a single microVM, and the last is a part of the device model. Users can add/update the MMDS contents via the backend, which is accessible through the Firecracker API. Setting the initial contents involves a `PUT` request to the `/mmds` API resource, with a JSON body that describes the desired data store structure and contents. Here's a JSON example: ```json { \"latest\": { \"meta-data\": { \"ami-id\": \"ami-12345678\", \"reservation-id\": \"r-fea54097\", \"local-hostname\": \"ip-10-251-50-12.ec2.internal\", \"public-hostname\": \"ec2-203-0-113-25.compute-1.amazonaws.com\", \"network\": { \"interfaces\": { \"macs\": { \"02:29:96:8f:6a:2d\": { \"device-number\": \"13345342\", \"local-hostname\": \"localhost\", \"subnet-id\": \"subnet-be9b61d\" } } } } } } } ``` The MMDS contents can be updated either via a subsequent `PUT` (that replaces them entirely), or using `PATCH` requests, which feed the JSON body into the JSON Merge Patch functionality, based on . MMDS related API requests come from the host, which is considered a trusted environment, so there are no checks beside the kind of validation done by HTTP server and `serde-json` (the crate used to de/serialize JSON). The size limit for the stored metadata is configurable and defaults to 51200 bytes. When increasing this limit, one must take into consideration that storing and retrieving large amount of data may induce bottlenecks for the HTTP REST API processing, which is based on `micro-http` crate. MMDS contents can be retrieved using the Firecracker API, via a `GET` request to the `/mmds` resource. This is a global data structure, currently referenced using a global variable, that represents the strongly-typed version of JSON-based user input describing the MMDS contents. It leverages the recursive type exposed by `serde-json`. It can only be accessed from thread-safe contexts. MMDS data store supports at the moment storing and retrieving JSON values. Data store contents can be retrieved using the Firecracker API server from host and using the embedded MMDS HTTP/TCP/IPv4 network stack from guest. MMDS data store is upper bounded to the value of the `--mmds-size-limit` command line parameter. If left unconfigured, it will default to the value of `--http-api-max-payload-size`, which is 51200 bytes by default. The Dumbo HTTP/TCP/IPv4 network stack handles guest HTTP requests heading towards the configured MMDS IPv4 address. Before going into Dumbo specifics, it's worth going through a brief description of the Firecracker network device model. Firecracker only offers Virtio-net paravirtualized devices to guests. Drivers running in the guest OS use ring buffers in a shared memory area to communicate with the device model when sending or receiving frames. The device model associates each guest network device with a TAP device on the host. Frames sent by the guest are written to the TAP fd, and frames read from the TAP fd are handed over to the guest. The Dumbo stack can be instantiated once for every network device, and is disabled by default. It can be enabled through the API request body used to configure MMDS by specifying the ID of the network interface inside the `network_interfaces` list. 
In order for the API call to succeed, the network device must be attached beforehand, otherwise an error is" }, { "data": "Once enabled, the stack taps into the aforementioned data path. Each frame coming from the guest is examined to determine whether it should be processed by Dumbo instead of being written to the TAP fd. Also, every time there is room in the ring buffer to hand over frames to the guest, the device model first checks whether Dumbo has anything to send; if not, it resumes getting frames from the TAP fd (when available). We chose to implement our own solution, instead of leveraging existing libraries/implementations, because responding to guest MMDS queries in the context of Firecracker is amenable to a wide swath of simplifications. First of all, we only need to handle `GET` and `PUT` requests, which require a bare-bones HTTP 1.1 server, without support for most headers and more advanced features like chunking. Also, we get to choose what subset of HTTP is used when building responses. Moving lower in the stack, we are dealing with TCP connections over what is essentially a point-to-point link, that seldom loses packets and does not reorder them. This means we can do away with congestion control (we only use flow control), complex reception logic, and support for most TCP options/features. At this point, the layers below (Ethernet and IPv4) don't involve much more than sanity checks of frame/packet contents. Dumbo is built using both general purpose components (which we plan to offer as part of one or more libraries), and Firecracker MMDS specific code. The former category consists of various helper modules used to process streams of bytes as protocol data units (Ethernet & ARP frames, IPv4 packets, and TCP segments), a TCP handler which listens for connections while demultiplexing incoming segments, a minimalist TCP connection endpoint implementation, and a greatly simplified HTTP 1.1 server. The Firecracker MMDS specific code is found in the logic which taps into the device model, and the component that parses an HTTP request, builds a response based on MMDS contents, and finally sends back a reply. Somewhat confusingly, this is the name of the component which taps the device model. It has a user-configured IPv4 address (see ) and MAC (`06:01:23:45:67:01`) addresses. The latter is also used to respond to ARP requests. For every frame coming from the guest, the following steps take place: Apply a heuristic to determine whether the frame may contain an ARP request for the MMDS IP address, or an IPv4 packet heading towards the same address. There can be no false negatives. Frames that fail both checks are rejected (deferred to the device model for regular processing). Reject invalid Ethernet frames. Reject valid frames if their EtherType is neither ARP, nor IPv4. (if EtherType == ARP) Reject invalid ARP frames. Reject the frame if its target protocol address field is different from the MMDS IP address. Otherwise, record that an ARP request has been received (the stack only remembers the most recent request). (if EtherType == IPv4) Reject invalid packets. Reject packets if their destination address differs from the MMDS IP address. Drop (stop processing without deferring to the device model) packets that do not carry TCP segments (by looking at the protocol number field). Send the rest to the inner TCP handler. 
The current implementation does not support Ethernet 802.1Q tags, and does not handle IP" }, { "data": "Tagged Ethernet frames are most likely going to be deferred to the device model for processing, because the heuristics do not take the presence of the tag into account. Moreover, their EtherType will not appear to be of interest. Fragmented IP packets do not get reassembled; they are treated as independent packets. Whenever the guest is able to receive a frame, the device model first requests one from the MMDS network stack associated with the current network device. If an ARP request has been previously recorded, send an ARP reply and forget about the request. If the inner TCP handler has any packets to transmit, wrap the next one into a frame and send it. There are no MMDS related frames to send, so tell the device model to read from the TAP fd instead. Handles received packets that appear to carry TCP segments. Its operation is described in the `dumbo` crate documentation. Each connection is associated with an MMDS endpoint. This component gets the byte stream from an inner TCP connection object, identifies the boundaries of the next HTTP request, and parses it using an HttpRequest object. For each valid `GET` request, the URI is used to identify a key from the metadata store (like in the previous example), and a response is built using the Firecracker implementation of HttpResponse logic, based on the associated value, and sent back to the guest over the same connection. Each endpoint has a fixed size receive buffer, and a variable length response buffer (depending on the size of each response). TCP receive window semantics are used to ensure the guest does not overrun the receive buffer during normal operation (the connection has to drop segments otherwise). There can be at most one response pending at any given time. Here are more details describing what happens when a segment is received by an MMDS endpoint (previously created when a SYN segment arrived at the TCP handler): Invoke the receive functionality of the inner connection object, and append any new data to the receive buffer. If no response is currently pending, attempt to identify the end of the first request in the receive buffer. If no such boundary can be found, and the buffer is full, reset the inner connection (which also causes the endpoint itself to be subsequently removed) because the guest exceeded the maximum allowed request size. If no response is pending, and we can identify a request in the receive buffer, parse it, free up the associated buffer space (also update the connection receive window), and build an HTTP response, which becomes the current pending response. If a FIN segment was received, and there's no pending response, call `close` on the inner connection. If a valid RST is received at any time, mark the endpoint for removal. When the TCP handler asks an MMDS endpoint for any segments to send, the transmission logic of the inner connection is invoked, specifying the pending response (when present) as the payload source. All packets coming from MMDS have the TTL value set to 1 by default. Connection objects are minimalist implementation of the TCP protocol. They are used to reassemble the byte stream which carries guest HTTP requests, and to send back segments which contain parts of the response. More details are available in the `dumbo` crate documentation." } ]
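To tie the pieces above together, here is a rough end-to-end sketch of configuring and querying MMDS. The socket path and the `eth0` interface ID are placeholders, and the example assumes MMDS version 1 semantics (plain `GET`s with no session token):

```bash
# Host side: attach MMDS to a guest network interface and set the IPv4 address
curl --unix-socket /tmp/firecracker.socket -X PUT "http://localhost/mmds/config" \
     -H "Content-Type: application/json" \
     -d '{"network_interfaces": ["eth0"], "ipv4_address": "169.254.169.254"}'

# Host side: seed the data store (the backend described earlier)
curl --unix-socket /tmp/firecracker.socket -X PUT "http://localhost/mmds" \
     -H "Content-Type: application/json" \
     -d '{"latest": {"meta-data": {"ami-id": "ami-12345678"}}}'

# Guest side: this is the kind of request the Dumbo stack intercepts and answers
curl -s "http://169.254.169.254/latest/meta-data/ami-id"
```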
{ "category": "Runtime", "file_name": "mmds-design.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "title: \"Velero 1.5: Auto volume backup with restic, DeleteItemAction plugins, Restore Hooks, and much more!\" excerpt: The community high tide raises Velero to its 1.5 release. Announcing support for backing up pod volumes using restic without annotating every pod, DeleteItemAction- a new plugin kind, Restore Hooks, and much more. It is with that pride and excitement we ship Velero 1.5. author_name: Ashish Amarnath slug: Velero-1.5-For-And-By-Community categories: ['velero','release'] image: /img/posts/post-1.5.jpg tags: ['Velero Team', 'Ashish Amarnath', 'Velero Release'] Velero continues to evolve and gain adoption in the Kubernetes community. It is with pride and excitement that we announce the release of Velero 1.5. With new features and functionalities, like using restic to backup pod volumes without adding annotations to pods, introducing the new DeleteItemAction plugin type, supporting hooks to customize restore operations, the broad theme for this release is operational ease. Before diving into the features in this release, we would like to state explicitly that we stand in solidarity against racism and, to that effect, we have made the following changes to the Velero project: We have removed insensitive language from all of our documentation including those on our website . All of our repositories, including the Velero and plugins repositories, now use `main` as their default branch. The development images pushed to on every PR merge now have the `main` tag. For example, the latest development image for `velero` is `velero/velero:main`. If we have missed addressing insensitive verbiage anywhere, please let us know on the or by and we will address it immediately. Prior to this release, Velero only supported opt-in behavior to backup pod volumes using restic that required users to individually annotate their pod spec with the volumes to backup up using restic. This was frequently raised in the community as a pain point and many users built automation to add the necessary annotations. With the release of 1.5, Velero now has the ability to backup all pod volumes using restic, without having to individually annotate every pod. This feature allows users to backup all pod volumes, by default, using restic, except for: Volumes mounting the default service account token Hostpath volumes Volumes mounting Kubernetes Secrets and ConfigMaps. You can enable this feature on a per backup basis or as a default setting for all Velero backups. Read more about this feature on our page on our documentation website. Plugins in Velero are independent binaries that you can use to customize Veleros behavior. In this release, we introduce a new plugin type, DeleteItemAction plugin, that offers yet another extension point to customize Veleros functionality. This is in addition to the already existing ObjectStore plugin, VolumeSnapshotter plugin, BackupItemAction, and RestoreItemAction plugin types. The introduced a new pattern for backing up and restoring volume snapshots using BackupItemAction and RestoreItemAction plugins. To allow the community to adopt a similar pattern for their custom resources, Velero had to provide an extension point to clean up both in-cluster and external resources, created by their BackupItemAction plugins. This is now possible with DeleteItemAction" }, { "data": "The interface for this new plugin type is similar to that of BackupItemAction and RestoreItemAction plugins. You can read more about the design for this plugin in the . 
Velero has been helping its users with disaster recovery for their Kubernetes clusters since its first release in August 2017. Over the past three years, there have been major improvements in the ecosystem, including new frameworks that make it easier to develop solutions for Kubernetes. This release marks the first steps in our journey to modernize the Velero codebase and take advantage of newer frameworks as we begin the adoption of , the most popular framework to build custom Kubernetes APIs and their respective controllers. As this effort continues, we would like to invite more folks to be a part of our growing contributor base. Staying on the theme of using tools that are current and growing our contributor base: thanks to our community member and first-time contributor , Velero now uses to build multi-arch images for Velero. You can read more about this on and . Using functionality, users can quiesce and un-quiesce applications before and after backup operations. This allows users to take application consistent backups of volume data. However, similar functionality to perform custom actions during or after a restore operation was unavailable. This led our users to build custom extensions outside of Velero as a workaround. Driven wholly by the Velero community, we have a design for the missing Restore Hooks functionality. Thank you to our community members and for driving the design proposal, and to everyone who participated in the design discussions and reviews. In the design, there are two kinds of Restore Hooks: InitContainer Restore Hooks: These will add init containers into restored pods to perform any necessary setup before the application containers of the restored pod can start. Exec Restore Hooks: These can be used to execute custom commands or scripts in containers of a restored Kubernetes pod. You can find more details about the design in the . Velero can specify a custom order in which resources can be backed up. Thanks to our community member for driving this functionality from design to implementation. This is going to serve as a building block to support backup and restore of certain stateful applications. Here is the for the enhancement. These were just some highlights of the release. You can always find more information about the release in the . See the to start planning your upgrade today. Velero is better because of our contributors and maintainers. It is because of you that we can bring great software to the community. Please join us during our online community meetings every Tuesday and catch up with past meetings on YouTube on the . You can always find the latest project information at velero.io. Look for issues on GitHub marked Good first issue or Help wanted if you want to roll up your sleeves and write some code with us. You can chat with us on and follow us on Twitter at ." } ]
{ "category": "Runtime", "file_name": "2020-09-16-Velero-1.5-For-And-By-Community.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "English | is an underlay networking solution that provides rich IPAM and CNI integration capabilities, this article will compare it with the mainstream IPAM CNI plug-ins (e.g. , ) and the widely used IPAM CNI plug-ins that are running in underlay scenarios. This article will compare it with the mainstream IPAM CNI plug-ins running in underlay scenarios (e.g., , ) and the widely-used overlay IPAM CNI plugins in `1000 Pod` scenarios. Why do we need to do performance testing on the underlay IPAM CNI plugin? The speed at which IPAM allocates IP addresses largely determines the speed of application publishing. Underlay IPAM often becomes a performance bottleneck when a large-scale Kubernetes cluster recovers from failures. Under underlay networks, private IPv4 addresses are limited. Within a limited range of IP addresses, concurrent creation of Pods can involve IP address preemption and conflict, and it is challenging to quickly adjust the limited IP address resources. Kubernetes: `v1.26.7` container runtime: `containerd v1.7.2` OS: `Ubuntu 22.04 LTS` kernel: `5.15.0-33-generic` | Node | Role | CPU | Memory | | -- | | | | | master1 | control-plane, worker | 3C | 8Gi | | master2 | control-plane, worker | 3C | 8Gi | | master3 | control-plane, worker | 3C | 8Gi | | worker4 | worker | 3C | 8Gi | | worker5 | worker | 3C | 8Gi | | worker6 | worker | 3C | 8Gi | | worker7 | worker | 3C | 8Gi | | worker8 | worker | 3C | 8Gi | | worker9 | worker | 3C | 8Gi | | worker10 | worker | 3C | 8Gi | This test is based on the `0.3.1` version of , with and Spiderpool as the test object, and selected several other common network solutions in the open source community as a comparison: | test object | version | | -- | | | Spiderpool based on macvlan | `v0.8.0` | | Whereabouts based on macvlan | `v0.6.2` | | Kube-OVN | `v1.12.2` | | Cilium | `v1.14.3` | | Calico | `v3.26.3` | | The test ideas are mainly: Underlay IP resources are limited, IP leakage and duplication of IP allocation can easily cause interference, so the accuracy of IP allocation is very important. When a large number of Pods start up and compete for IP allocation, the IPAM allocation algorithm should be efficient in order to ensure that the Pods are released quickly and successfully. Therefore, we designed a limit test with the same number of IP resources and Pod resources, and timed the time from Pod creation to Running to test the accuracy and robustness of IPAM in" }, { "data": "The test conditions are as follows: IPv4 single-stack and IPv4/IPv6 dual-stack scenarios. Create 100 Deployments, each with 10 replicas. The following shows the results of the IPAM performance test, which includes two scenarios, `The number of IPs is equal to the number of Pods` and `IP sufficient`, to test each CNI, whereas Calico and Cilium, for example, are based on the IP block pre-allocation mechanism to allocate IPs, and therefore can't perform the `The number of IPs is equal to the number of Pods` test in a relatively `fair` way, and only perform the `IP sufficient` scenario. We can only test `unlimited IPs` scenarios. | test object | Limit IP to Pod Equivalents | IP sufficient | | | | | | Spiderpool based on macvlan | 207s | 182 | | Whereabouts based on macvlan | failure | 2529s | | Kube-OVN | 405s | 343s | | Cilium | NA | 215s | | Calico | NA | 322s | Spiderpool allocate IP addresses from the same CIDR range to all Pods in the whole cluster. 
Consequently, IP allocation and release face intense competition, presenting greater challenges for IP allocation performance. By comparison, Whereabouts, Calico, and Cilium adopt an IPAM allocation principle where each node has a small IP address pool. This reduces the competition for IP allocation and mitigates the associated performance challenges. However, experimental data shows that despite Spiderpool's \"lossy\" IPAM principle, its IP allocation performance is actually quite good. During testing, the following phenomenon was encountered: Whereabouts based on macvlan: we tested the combination of macvlan and Whereabouts in a scenario where the available number of IP addresses matches the number of Pods in a 1:1 ratio. Within 300 seconds, 261 Pods reached the \"Running\" state at a relatively steady pace. By the 1080-second mark, 768 IP addresses were allocated. Afterward, the growth rate of Pods slowed down significantly, reaching 845 Pods by 2280 seconds. Subsequently, Whereabouts essentially stopped working, so further allocations effectively never completed. In our testing scenario, where the number of IP addresses matches the number of Pods in a 1:1 ratio, if the IPAM component fails to properly reclaim IP addresses, new Pods will fail to start due to a lack of available IP resources. We also observed errors like the following in the Pods that failed to start: ```bash name \"whereabout-9-5c658db57b-tdlmsdefaulte1525b95-f433-4dbe-81d9-6c85fd02fa70_1\" is reserved for \"38e7139658f37e40fa7479c461f84ec2777e29c9c685f6add6235fd0dba6e175\" ``` Although Spiderpool is primarily designed for underlay networks, it provides powerful IPAM capabilities. Its IP allocation and reclamation features face more intricate challenges, including IP address contention and conflicts, compared to the popular overlay CNI IPAM plugins. However, Spiderpool's performance is ahead of the latter." } ]
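For reference, the scale test above amounts to creating 100 Deployments shaped roughly like the sketch below (10 replicas each). The Multus network annotation and the `macvlan-conf` attachment name are assumptions for illustration, not part of the published test harness:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-test-001            # repeated as scale-test-001 ... scale-test-100
spec:
  replicas: 10
  selector:
    matchLabels:
      app: scale-test-001
  template:
    metadata:
      labels:
        app: scale-test-001
      annotations:
        # Assumed Multus annotation selecting the macvlan + Spiderpool network
        k8s.v1.cni.cncf.io/networks: macvlan-conf
    spec:
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
```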
{ "category": "Runtime", "file_name": "ipam-performance.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Maintainers are the default assignee for related tracker issues and pull requests. | Component | Name | ||| | auth, STS | Pritha Srivastava | | bucket index, resharding | J. Eric Ivancich | | bucket notifications | Yuval Lifshitz | | data caching | Mark Kogan | | garbage collection | Pritha Srivastava | | http frontends | Casey Bodley | | lifecycle | Matt Benjamin | | lua scripting | Yuval Lifshitz | | multisite | Casey Bodley | | object i/o | Casey Bodley | | rgw orchestration, admin APIs | Ali Maredia | | radosgw-admin | Daniel Gryniewicz | | rest ops | Daniel Gryniewicz | | rgw-nfs | Matt Benjamin | | performance | Mark Kogan | | s3 select | Gal Salomon | | storage abstraction layer | Daniel Gryniewicz | security (crypto, SSE, CVEs) swift api" } ]
{ "category": "Runtime", "file_name": "MAINTAINERS.md", "project_name": "Ceph", "subcategory": "Cloud Native Storage" }
[ { "data": "``` shell wget https://busybox.net/downloads/busybox-1.36.1.tar.bz2 tar -xjf busybox-1.36.1.tar.bz2 ``` ``` shell make defconfig make menuconfig ``` NoticeCheck Build static binary, can build binary without dependence library. ```text Settings > [*] Build static binary (no shared libs) ``` Install the compiled BusyBox to default path: `_install`. ``` shell make install ``` ```shell cd _install mkdir proc sys dev etc etc/init.d touch etc/init.d/rcS cat >etc/init.d/rcS<<EOF mount -t proc none /proc mount -t sysfs none /sys /sbin/mdev -s EOF chmod +x etc/init.d/rcS ``` ```shell cd _install find . | cpio -o --format=newc > /tmp/StratoVirt-initrd ``` ```shell $ ./stratovirt \\ -machine microvm \\ -kernel /path/to/kernel \\ -append \"console=ttyS0 reboot=k panic=1 root=/dev/ram rdinit=/bin/sh\" \\ -initrd /tmp/StratoVirt-initrd \\ -qmp unix:/path/to/socket,server,nowait \\ -serial stdio ```" } ]
{ "category": "Runtime", "file_name": "mk_initrd.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Connection tracking tables ``` -h, --help help for ct ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Direct access to local BPF maps - Flush all connection tracking entries - List connection tracking entries" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_ct.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Name | Type | Description | Notes | - | - | - NumPciSegments | Pointer to int32 | | [optional] IommuSegments | Pointer to []int32 | | [optional] SerialNumber | Pointer to string | | [optional] Uuid | Pointer to string | | [optional] OemStrings | Pointer to []string | | [optional] Tdx | Pointer to bool | | [optional] [default to false] `func NewPlatformConfig() *PlatformConfig` NewPlatformConfig instantiates a new PlatformConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewPlatformConfigWithDefaults() *PlatformConfig` NewPlatformConfigWithDefaults instantiates a new PlatformConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *PlatformConfig) GetNumPciSegments() int32` GetNumPciSegments returns the NumPciSegments field if non-nil, zero value otherwise. `func (o PlatformConfig) GetNumPciSegmentsOk() (int32, bool)` GetNumPciSegmentsOk returns a tuple with the NumPciSegments field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PlatformConfig) SetNumPciSegments(v int32)` SetNumPciSegments sets NumPciSegments field to given value. `func (o *PlatformConfig) HasNumPciSegments() bool` HasNumPciSegments returns a boolean if a field has been set. `func (o *PlatformConfig) GetIommuSegments() []int32` GetIommuSegments returns the IommuSegments field if non-nil, zero value otherwise. `func (o PlatformConfig) GetIommuSegmentsOk() ([]int32, bool)` GetIommuSegmentsOk returns a tuple with the IommuSegments field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PlatformConfig) SetIommuSegments(v []int32)` SetIommuSegments sets IommuSegments field to given value. `func (o *PlatformConfig) HasIommuSegments() bool` HasIommuSegments returns a boolean if a field has been set. `func (o *PlatformConfig) GetSerialNumber() string` GetSerialNumber returns the SerialNumber field if non-nil, zero value otherwise. `func (o PlatformConfig) GetSerialNumberOk() (string, bool)` GetSerialNumberOk returns a tuple with the SerialNumber field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PlatformConfig) SetSerialNumber(v string)` SetSerialNumber sets SerialNumber field to given value. `func (o *PlatformConfig) HasSerialNumber() bool` HasSerialNumber returns a boolean if a field has been set. `func (o *PlatformConfig) GetUuid() string` GetUuid returns the Uuid field if non-nil, zero value otherwise. `func (o PlatformConfig) GetUuidOk() (string, bool)` GetUuidOk returns a tuple with the Uuid field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PlatformConfig) SetUuid(v string)` SetUuid sets Uuid field to given value. `func (o *PlatformConfig) HasUuid() bool` HasUuid returns a boolean if a field has been set. `func (o *PlatformConfig) GetOemStrings() []string` GetOemStrings returns the OemStrings field if non-nil, zero value otherwise. `func (o PlatformConfig) GetOemStringsOk() ([]string, bool)` GetOemStringsOk returns a tuple with the OemStrings field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. 
`func (o *PlatformConfig) SetOemStrings(v []string)` SetOemStrings sets OemStrings field to given value. `func (o *PlatformConfig) HasOemStrings() bool` HasOemStrings returns a boolean if a field has been set. `func (o *PlatformConfig) GetTdx() bool` GetTdx returns the Tdx field if non-nil, zero value otherwise. `func (o PlatformConfig) GetTdxOk() (bool, bool)` GetTdxOk returns a tuple with the Tdx field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *PlatformConfig) SetTdx(v bool)` SetTdx sets Tdx field to given value. `func (o *PlatformConfig) HasTdx() bool` HasTdx returns a boolean if a field has been set." } ]
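Putting the constructors and setters documented above together, a minimal usage sketch might look like the following. The import path for the generated client package is a placeholder, and the field values are arbitrary:

```go
package main

import (
	"encoding/json"
	"fmt"

	openapi "example.com/client" // placeholder import path for the generated client package
)

func main() {
	// Every PlatformConfig field is optional; set only what you need.
	cfg := openapi.NewPlatformConfig()
	cfg.SetNumPciSegments(2)
	cfg.SetSerialNumber("SN-0001")
	cfg.SetOemStrings([]string{"vendor=example"})

	// Optional fields report whether they were set via the Has* helpers.
	if !cfg.HasTdx() {
		cfg.SetTdx(false)
	}

	out, _ := json.Marshal(cfg)
	fmt.Println(string(out))
}
```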
{ "category": "Runtime", "file_name": "PlatformConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "This document covers the installation and configuration of and . The containerd provides not only the `ctr` command line tool, but also the interface for and other CRI clients. This document is primarily written for Kata Containers v1.5.0-rc2 or above, and containerd v1.2.0 or above. Previous versions are addressed here, but we suggest users upgrade to the newer versions for better support. is a Kubernetes feature first introduced in Kubernetes 1.12 as alpha. It is the feature for selecting the container runtime configuration to use to run a pods containers. This feature is supported in `containerd` since . Before the `RuntimeClass` was introduced, Kubernetes was not aware of the difference of runtimes on the node. `kubelet` creates Pod sandboxes and containers through CRI implementations, and treats all the Pods equally. However, there are requirements to run trusted Pods (i.e. Kubernetes plugin) in a native container like runc, and to run untrusted workloads with isolated sandboxes (i.e. Kata Containers). As a result, the CRI implementations extended their semantics for the requirements: At the beginning, checks the network configuration of a Pod, and treat Pod with `host` network as trusted, while others are treated as untrusted. The containerd introduced an annotation for untrusted Pods since : ```yaml annotations: io.kubernetes.cri.untrusted-workload: \"true\" ``` Similarly, CRI-O introduced the annotation `io.kubernetes.cri-o.TrustedSandbox` for untrusted Pods. To eliminate the complexity of user configuration introduced by the non-standardized annotations and provide extensibility, `RuntimeClass` was introduced. This gives users the ability to affect the runtime behavior through `RuntimeClass` without the knowledge of the CRI daemons. We suggest that users with multiple runtimes use `RuntimeClass` instead of the deprecated annotations. The implements the for Kata. With `shimv2`, Kubernetes can launch Pod and OCI-compatible containers with one shim per Pod. Prior to `shimv2`, `2N+1` shims (i.e. a `containerd-shim` and a `kata-shim` for each container and the Pod sandbox itself) and no standalone `kata-proxy` process were used, even with VSOCK not available. The shim v2 is introduced in containerd and Kata `shimv2` is implemented in Kata Containers v1.5.0. Follow the instructions to . Note: `cri` is a native plugin of containerd 1.1 and above. It is built into containerd and enabled by default. You do not need to install `cri` if you have containerd 1.1 or above. Just remove the `cri` plugin from the list of `disabled_plugins` in the containerd configuration file (`/etc/containerd/config.toml`). Follow the instructions from the . Then, check if `containerd` is now available: ```bash $ command -v containerd ``` If you have installed Kubernetes with `kubeadm`, you might have already installed the CNI" }, { "data": "You can manually install CNI plugins as follows: ```bash $ git clone https://github.com/containernetworking/plugins.git $ pushd plugins $ ./build_linux.sh $ sudo mkdir /opt/cni $ sudo cp -r bin /opt/cni/ $ popd ``` Note: `cri-tools` is a set of tools for CRI used for development and testing. Users who only want to use containerd with Kubernetes can skip the `cri-tools`. 
You can install the `cri-tools` from source code: ```bash $ git clone https://github.com/kubernetes-sigs/cri-tools.git $ pushd cri-tools $ make $ sudo -E make install $ popd ``` By default, the configuration of containerd is located at `/etc/containerd/config.toml`, and the `cri` plugins are placed in the following section: ```toml [plugins] [plugins.cri] [plugins.cri.containerd] [plugins.cri.containerd.default_runtime] [plugins.cri.cni] conf_dir = \"/etc/cni/net.d\" ``` The following sections outline how to add Kata Containers to the configurations. For Kata Containers v1.5.0 or above (including `1.5.0-rc`) Containerd v1.2.0 or above Kubernetes v1.12.0 or above The `RuntimeClass` is suggested. The following configuration includes two runtime classes: `plugins.cri.containerd.runtimes.runc`: the runc, and it is the default runtime. `plugins.cri.containerd.runtimes.kata`: The function in containerd (reference ) where the dot-connected string `io.containerd.kata.v2` is translated to `containerd-shim-kata-v2` (i.e. the binary name of the Kata implementation of ). ```toml [plugins.cri.containerd] no_pivot = false [plugins.cri.containerd.runtimes] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc] privilegedwithouthost_devices = false runtime_type = \"io.containerd.runc.v2\" [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options] BinaryName = \"\" CriuImagePath = \"\" CriuPath = \"\" CriuWorkPath = \"\" IoGid = 0 [plugins.cri.containerd.runtimes.kata] runtime_type = \"io.containerd.kata.v2\" privilegedwithouthost_devices = true pod_annotations = [\"io.katacontainers.*\"] container_annotations = [\"io.katacontainers.*\"] [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.kata.options] ConfigPath = \"/opt/kata/share/defaults/kata-containers/configuration.toml\" ``` `privilegedwithouthost_devices` tells containerd that a privileged Kata container should not have direct access to all host devices. If unset, containerd will pass all host devices to Kata container, which may cause security issues. `pod_annotations` is the list of pod annotations passed to both the pod sandbox as well as container through the OCI config. `container_annotations` is the list of container annotations passed through to the OCI config of the containers. This `ConfigPath` option is optional. If you do not specify it, shimv2 first tries to get the configuration file from the environment variable `KATACONFFILE`. If neither are set, shimv2 will use the default Kata configuration file paths (`/etc/kata-containers/configuration.toml` and `/usr/share/defaults/kata-containers/configuration.toml`). For cases without `RuntimeClass` support, we can use the legacy annotation method to support using Kata Containers for an untrusted workload. With the following configuration, you can run trusted workloads with a runtime such as `runc` and then, run an untrusted workload with Kata Containers: ```toml [plugins.cri.containerd] [plugins.cri.containerd.default_runtime] runtime_type = \"io.containerd.runtime.v1.linux\" [plugins.cri.containerd.untrustedworkloadruntime] runtime_type =" }, { "data": "``` You can find more information on the If you want to set Kata Containers as the only runtime in the deployment, you can simply configure as follows: ```toml [plugins.cri.containerd] [plugins.cri.containerd.default_runtime] runtime_type = \"io.containerd.kata.v2\" ``` Note: If you skipped the section, you can skip this section too. First, add the CNI configuration in the containerd configuration. 
The following is the configuration if you installed CNI as the section outlined. Put the CNI configuration as `/etc/cni/net.d/10-mynet.conf`: ```json { \"cniVersion\": \"0.2.0\", \"name\": \"mynet\", \"type\": \"bridge\", \"bridge\": \"cni0\", \"isGateway\": true, \"ipMasq\": true, \"ipam\": { \"type\": \"host-local\", \"subnet\": \"172.19.0.0/24\", \"routes\": [ { \"dst\": \"0.0.0.0/0\" } ] } } ``` Next, reference the configuration directory through containerd `config.toml`: ```toml [plugins.cri.cni] conf_dir = \"/etc/cni/net.d\" ``` The configuration file of `crictl` command line tool in `cri-tools` locates at `/etc/crictl.yaml`: ```yaml runtime-endpoint: unix:///var/run/containerd/containerd.sock image-endpoint: unix:///var/run/containerd/containerd.sock timeout: 10 debug: true ``` To run a container with Kata Containers through the containerd command line, you can run the following: ```bash $ sudo ctr image pull docker.io/library/busybox:latest $ sudo ctr run --cni --runtime io.containerd.run.kata.v2 -t --rm docker.io/library/busybox:latest hello sh ``` This launches a BusyBox container named `hello`, and it will be removed by `--rm` after it quits. The `--cni` flag enables CNI networking for the container. Without this flag, a container with just a loopback interface is created. Use the script to create rootfs ```bash ctr i pull quay.io/prometheus/busybox:latest ctr i export rootfs.tar quay.io/prometheus/busybox:latest rootfs_tar=rootfs.tar bundle_dir=\"./bundle\" mkdir -p \"${bundle_dir}\" rootfsdir=\"${bundledir}/rootfs\" mkdir -p \"${rootfs_dir}\" layers_dir=\"$(mktemp -d)\" tar -C \"${layersdir}\" -pxf \"${rootfstar}\" for ((i=0;i<$(cat ${layers_dir}/manifest.json | jq -r \".[].Layers | length\");i++)); do tar -C ${rootfsdir} -xf ${layersdir}/$(cat ${layers_dir}/manifest.json | jq -r \".[].Layers[${i}]\") done ``` Use runc spec to generate `config.json` ```bash cd ./bundle/rootfs runc spec mv config.json ../ ``` Change the root `path` in `config.json` to the absolute path of rootfs ```JSON \"root\":{ \"path\":\"/root/test/bundle/rootfs\", \"readonly\": false }, ``` ```bash sudo ctr run -d --runtime io.containerd.run.kata.v2 --config bundle/config.json hello sudo ctr t exec --exec-id ${ID} -t hello sh ``` With the `crictl` command line of `cri-tools`, you can specify runtime class with `-r` or `--runtime` flag. Use the following to launch Pod with `kata` runtime class with the pod in of `cri-tools`: ```bash $ sudo crictl runp -r kata podsandbox-config.yaml 36e23521e8f89fabd9044924c9aeb34890c60e85e1748e8daca7e2e673f8653e ``` You can add container to the launched Pod with the following: ```bash $ sudo crictl create 36e23521e8f89 container-config.yaml podsandbox-config.yaml 1aab7585530e62c446734f12f6899f095ce53422dafcf5a80055ba11b95f2da7 ``` Now, start it with the following: ```bash $ sudo crictl start 1aab7585530e6 1aab7585530e6 ``` In Kubernetes, you need to create a `RuntimeClass` resource and add the `RuntimeClass` field in the Pod Spec (see this for more information). If `RuntimeClass` is not supported, you can use the following annotation in a Kubernetes pod to identify as an untrusted workload: ```yaml annotations: io.kubernetes.cri.untrusted-workload: \"true\" ```" } ]
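For reference, a minimal `RuntimeClass` resource and Pod spec wired to the `kata` handler configured earlier might look like the sketch below. This is shown with the current `node.k8s.io/v1` API (older clusters used the beta/alpha versions of the resource), and the `handler` value must match the runtime name in your containerd configuration; the pod name and image are illustrative:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox-kata-example
spec:
  runtimeClassName: kata
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
```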
{ "category": "Runtime", "file_name": "containerd-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available via the `/proc/sys/` virtual process file system. The parameters include the following subsystems among others: `fs` (file systems) `kernel` (kernel) `net` (networking) `vm` (virtual memory) To get a complete list of kernel parameters, run: ``` $ sudo sysctl -a ``` Kubernetes provide mechanisms for setting namespaced sysctls. Namespaced sysctls can be set per pod in the case of Kubernetes. The following sysctls are known to be namespaced and can be set with Kubernetes: `kernel.shm*` `kernel.msg*` `kernel.sem` `fs.mqueue.*` `net.*` Kata Containers supports setting namespaced sysctls with Kubernetes. All namespaced sysctls can be set in the same way as regular Linux based containers, the difference being, in the case of Kata they are set inside the guest. Kubernetes considers certain sysctls as safe and others as unsafe. For detailed information about what sysctls are considered unsafe, please refer to the . For using unsafe sysctls, the cluster admin would need to allow these as: ``` $ kubelet --allowed-unsafe-sysctls 'kernel.msg*,net.ipv4.route.min_pmtu' ... ``` or using the declarative approach as: ``` $ cat kubeadm.yaml apiVersion: kubeadm.k8s.io/v1alpha3 kind: InitConfiguration nodeRegistration: kubeletExtraArgs: allowed-unsafe-sysctls: \"kernel.msg,kernel.shm.,net.*\" ... ``` The above YAML can then be passed to `kubeadm init` as: ``` $ sudo -E kubeadm init --config=kubeadm.yaml ``` Both safe and unsafe sysctls can be enabled in the same way in the Pod YAML: ``` apiVersion: v1 kind: Pod metadata: name: sysctl-example spec: securityContext: sysctls: name: kernel.shmrmidforced value: \"0\" name: net.ipv4.route.min_pmtu value: \"1024\" ``` Kubernetes disallow sysctls without a namespace. The recommendation is to set them directly on the host or use a privileged container in the case of Kubernetes. In the case of Kata, the approach of setting sysctls on the host does not work since the host sysctls have no effect on a Kata Container running inside a guest. Kata gives you the ability to set non-namespaced sysctls using a privileged container. This has the advantage that the non-namespaced sysctls are set inside the guest without having any effect on the `/proc/sys` values of any other pod or the host itself. The recommended approach to do this would be to set the sysctl value in a privileged init container. In this way, the application containers do not need any elevated privileges. ``` apiVersion: v1 kind: Pod metadata: name: busybox-kata spec: runtimeClassName: kata-qemu securityContext: sysctls: name: kernel.shmrmidforced value: \"0\" containers: name: busybox-container securityContext: privileged: true image: debian command: sleep \"3000\" initContainers: name: init-sys securityContext: privileged: true image: busybox command: ['sh', '-c', 'echo \"64000\" > /proc/sys/vm/maxmapcount'] ```" } ]
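To confirm that the value written by the privileged init container is what the workload actually sees inside the guest, a quick check against the running pod above can help; it should print the `64000` written by the init container (the path corresponds to the `vm.max_map_count` sysctl):

```bash
$ kubectl exec busybox-kata -c busybox-container -- cat /proc/sys/vm/max_map_count
```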
{ "category": "Runtime", "file_name": "how-to-use-sysctls-with-kata.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "The operation audit trail of the mount point is stored in the specified directory on the client, making it easy to access third-party log collection platforms. You can enable or disable local audit log function in the client configuration . You can send commands to the client through HTTP to actively enable or disable the log function without remounting. The client audit logs are recorded locally. When the log file exceeds 200MB, it will be rolled over, and the stale logs after rolling over will be deleted after 7 days. That is, the audit logs are kept for 7 days by default, and stale log files are scanned and deleted every hour. ```text [Cluster Name (Master Domain Name or IP), Volume Name, Subdir, Mount Point, Timestamp, Client IP, Client Hostname, Operation Type (Create, Delete, Rename), Source Path, Target Path, Error Message, Operation Time, Source File Inode, Destination File Inode] ``` The currently supported audit operations are as follows: Create, create a file Mkdir, create a directory Remove, delete a file or directory Rename, mv operation Audit logs are asynchronously written to disk and do not block client operations synchronously. ```bash curl -v \"http://192.168.0.2:17410/auditlog/enable?path=/cfs/log&prefix=client2&logsize=1024\" ``` ::: tip Note `192.168.0.2` is the IP address of the mounted client, and the same below. ::: | Parameter | Type | Description | |--|--|--| | path | string | Audit log directory | | prefix | string | Specify the prefix directory of the audit log, which can be a module name or \"audit\" to distinguish the directory of flow logs and audit logs | | logsize | uint32 | Used to set the size threshold for log rolling. If not set, the default is 200MB | ```bash curl -v \"http://192.168.0.2:17410/auditlog/disable\" ``` MetaNode, DataNode, and Master all have two types of logs: service running logs and raft" }, { "data": "Since the client does not use raft, there is only a process service log. The log paths of each service log and raft log can be configured in the startup configuration file as follows: ``` { \"logDir\": \"/cfs/log\", \"raftDir\": \"/cfs/log\", ...... } ``` ObjectNode has one type of log, which is the service running log. The various modules of the erasure coding subsystem have two types of logs: service running logs and audit logs. The audit logs are disabled by default. If you want to enable them, please refer to . If you are a developer or tester and want to debug, you can set the log level to Debug or Info. If it is a production environment, you can set the log level to Warn or Error, which will greatly reduce the amount of logs. The supported log levels are Debug, Info, Warn, Error, Fatal, and Critical (the erasure coding subsystem does not support the Critical level). There are two ways to set the log: Set in the configuration file, as follows: ``` \"logLevel\": \"debug\" ``` You can dynamically modify it through the command. The command is as follows: ``` http://127.0.0.1:{profPort}/loglevel/set?level={log-level} ``` ::: tip Note The log settings for the erasure coding subsystem are slightly different. ::: Set in the configuration file, please refer to . Modify through the command, please refer to . The log format is as follows: ```text For example: 2023/03/08 18:38:06.628192 [ERROR] partition.go:664: action[LaunchRepair] partition(113300) err(no valid master). ``` ::: tip Note The format of the erasure coding system is slightly different. Here, the running log and audit log are introduced separately. 
::: The format of the service running log is as follows: ```test [Detailed Information] 2023/03/15 18:59:10.350557 [DEBUG] scheduler/blob_deleter.go:540 [tBICACl6si0FREwX:522f47d329a9961d] delete shards: location[{Vuid:94489280515 Host:http://127.0.0.1:8889 DiskID:297}] ``` The format of the audit log is as follows: ```text REQ SCHEDULER 16793641137770897 POST /inspect/complete {\"Accept-Encoding\":\"gzip\",\"Content-Length\":\"90\",\"Content-Type\":\"application/json\",\"User-Agent\":\"blobnode/cm1.2.0/5616eb3c957a01d189765cf004cd2df50bc618a8 (linux/amd64; go1.16.13)} {\"taskid\":\"inspect-45800-cgch04ehrnv40jlcqio0\",\"inspecterrstr\":\"\",\"missed_shards\":null} 200 {\"Blobstore-Tracer-Traceid\":\"0c5ebc85d3dba21b\",\"Content-Length\":\"0\",\"Trace-Log\":[\"SCHEDULER\"],\"Trace-Tags\":[\"span.kind:server\"]} 0 68 ```" } ]
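As a usage illustration for the dynamic log-level endpoint mentioned earlier, a request like the one below switches a module to the Warn level. The port `17010` and the lowercase level string are placeholders; use the profPort and level spelling that apply to your deployment:

```bash
curl -v "http://127.0.0.1:17010/loglevel/set?level=warn"
```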
{ "category": "Runtime", "file_name": "log.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, users need to upgrade the engine image manually in the UI after upgrading Longhorn. We should provide an option. When it is enabled, automatically upgrade engine images for volumes when it is applicable. https://github.com/longhorn/longhorn/issues/2152 Reduce the amount of manual work users have to do when upgrading Longhorn by automatically upgrade engine images for volumes when they are ok to upgrade. E.g. when we can do engine live upgrade or when volume is in detaching state. Add a new boolean setting, `Concurrent Automatic Engine Upgrade Per Node Limit`, so users can control how Longhorn upgrade engines. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to default version. After user upgrading Longhorn to a new version, there will be a new default engine image and possibly a new default instance manager image. The proposal has 2 parts: In which component we will do the upgrade. Identify when is it ok to upgrade engine for volume to the new default engine image. For part 1, we will do the upgrade inside `engine image controller`. The controller will constantly watch and upgrade the engine images for volumes when it is ok to upgrade. For part 2, we upgrade engine image for a volume when the following conditions are met: The new default engine image is ready Volume is not upgrading engine image The volume condition is one of the following: Volume is in detached state. Volume is in attached state (live upgrade). And volume is healthy. And The current volume's engine image is compatible with the new default engine image. And If volume is not a DR volume. And Volume is not expanding. Before this enhancement, users have to manually upgrade engine images for volume after upgrading Longhorn system to a newer version. If there are thousands of volumes in the system, this is a significant manual work. After this enhancement users either have to do nothing (in case live upgrade is possible) or they only have to scale down/up the workload (in case there is a new default IM image) User upgrade Longhorn to a newer version. The new Longhorn version is compatible with the volume's current engine image. Longhorn automatically do live engine image upgrade for volumes User upgrade Longhorn to a newer version. The new Longhorn version is not compatible with the volume's current engine image. Users only have to scale the workload down and up. This experience is similar to restart Google Chrome to use a new version. Note that users need to disable this feature if they want to update the engine image to a specific version for the volumes. If `Concurrent Automatic Engine Upgrade Per Node Limit` setting is bigger than 0, Longhorn will not allow user to manually upgrade engine to a version other than the default version. No API change is needed. Inside `engine image controller` sync function, get the value of the setting `Concurrent Automatic Engine Upgrade Per Node Limit` and assign it to concurrentAutomaticEngineUpgradePerNodeLimit variable. If concurrentAutomaticEngineUpgradePerNodeLimit <= 0, we skip upgrading. Find the new default engine image. Check if the new default engine image is ready. If it is not we skip the upgrade. List all volumes in Longhorn system. Select a set of volume candidate for upgrading. 
We select candidates that has the condition is one of the following case: Volume is in detached" }, { "data": "Volume is in attached state (live upgrade). And volume is healthy. And Volume is not upgrading engine image. And The current volume's engine image is compatible with the new default engine image. And the volume is not a DR volume. And volume is not expanding. Make sure not to upgrade too many volumes on the same node at the same time. Filter the upgrading candidate set so that total number of upgrading volumes and candidates per node is not over `concurrentAutomaticEngineUpgradePerNodeLimit`. For each volume candidate, set `v.Spec.EngineImage = new default engine image` to update the engine for the volume. If the engine upgrade failed to complete (e.g. the v.Spec.EngineImage != v.Status.CurrentImage), we just consider it is the same as volume is in upgrading process and skip it. Volume controller will handle the reconciliation when it is possible. Integration test plan. Preparation: set up a backup store Deploy a compatible new engine image Case 1: Concurrent engine upgrade Create 10 volumes each of 1Gb. Attach 5 volumes vol-0 to vol-4. Write data to it Upgrade all volumes to the new engine image Wait until the upgrades are completed (volumes' engine image changed, replicas' mode change to RW for attached volumes, reference count of the new engine image changed, all engine and replicas' engine image changed) Set concurrent-automatic-engine-upgrade-per-node-limit setting to 3 In a retry loop, verify that the number of volumes who is upgrading engine is always smaller or equal to 3 Wait until the upgrades are completed (volumes' engine image changed, replica mode change to RW for attached volumes, reference count of the new engine image changed, all engine and replicas' engine image changed, etc ...) verify the volumes' data Case 2: Dr volume Create a backup for vol-0. Create a DR volume from the backup Try to upgrade the DR volume engine's image to the new engine image Verify that the Longhorn API returns error. Upgrade fails. Set concurrent-automatic-engine-upgrade-per-node-limit setting to 0 Try to upgrade the DR volume engine's image to the new engine image Wait until the upgrade are completed (volumes' engine image changed, replicas' mode change to RW, reference count of the new engine image changed, engine and replicas' engine image changed) Wait for the DR volume to finish restoring Set concurrent-automatic-engine-upgrade-per-node-limit setting to 3 In a 2-min retry loop, verify that Longhorn doesn't automatically upgrade engine image for DR volume. 
Case 3: Expanding volume set concurrent-automatic-engine-upgrade-per-node-limit setting to 0 Upgrade vol-0 to the new engine image Wait until the upgrade are completed (volumes' engine image changed, replicas' mode change to RW, reference count of the new engine image changed, engine and replicas' engine image changed) Detach vol-0 Expand the vol-0 from 1Gb to 5GB Wait for the vol-0 to start expanding Set concurrent-automatic-engine-upgrade-per-node-limit setting to 3 While vol-0 is expanding, verify that its engine is not upgraded to the default engine image Wait for the expansion to finish and vol-0 is detached Verify that Longhorn upgrades vol-0's engine to the default version Case 4: Degraded volume set concurrent-automatic-engine-upgrade-per-node-limit setting to 0 Upgrade vol-1 (an healthy attached volume) to the new engine image Wait until the upgrade are completed (volumes' engine image changed, replicas' mode change to RW, reference count of the new engine image changed, engine and replicas' engine image changed) Increase number of replica count to 4 to make the volume degraded Set concurrent-automatic-engine-upgrade-per-node-limit setting to 3 In a 2-min retry loop, verify that Longhorn doesn't automatically upgrade engine image for vol-1. Cleaning up: Clean up volumes Reset automatically-upgrade-engine-to-default-version setting in the client fixture No upgrade strategy is needed. None" } ]
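To make the candidate-selection and per-node limit rules above concrete, here is a rough Go sketch of the filtering step. The type and field names are illustrative only and do not correspond to the actual Longhorn source:

```go
package enginecontroller

// Volume is a simplified stand-in for Longhorn's volume object (illustrative only).
type Volume struct {
	Name        string
	OwnerNodeID string // node currently responsible for the volume's engine
}

// filterUpgradeCandidates applies the "Concurrent Automatic Engine Upgrade Per Node
// Limit" rule: a limit of 0 disables automatic upgrades, and at most `limit`
// engines (already upgrading plus newly selected) are allowed per node at a time.
// upgradingPerNode holds the count of engines currently upgrading on each node.
func filterUpgradeCandidates(candidates []*Volume, upgradingPerNode map[string]int, limit int) []*Volume {
	if limit <= 0 {
		return nil
	}
	var selected []*Volume
	for _, v := range candidates {
		if upgradingPerNode[v.OwnerNodeID] >= limit {
			continue // this node is already at its concurrency limit
		}
		upgradingPerNode[v.OwnerNodeID]++
		selected = append(selected, v)
	}
	return selected
}
```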
{ "category": "Runtime", "file_name": "20210111-upgrade-engine-automatically.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "The sys/unix package provides access to the raw system call interface of the underlying operating system. See: https://godoc.org/golang.org/x/sys/unix Porting Go to a new architecture/OS combination or adding syscalls, types, or constants to an existing architecture/OS pair requires some manual effort; however, there are tools that automate much of the process. There are currently two ways we generate the necessary files. We are currently migrating the build system to use containers so the builds are reproducible. This is being done on an OS-by-OS basis. Please update this documentation as components of the build system change. The old build system generates the Go files based on the C header files present on your system. This means that files for a given GOOS/GOARCH pair must be generated on a system with that OS and architecture. This also means that the generated code can differ from system to system, based on differences in the header files. To avoid this, if you are using the old build system, only generate the Go files on an installation with unmodified header files. It is also important to keep track of which version of the OS the files were generated from (ex. Darwin 14 vs Darwin 15). This makes it easier to track the progress of changes and have each OS upgrade correspond to a single change. To build the files for your current OS and architecture, make sure GOOS and GOARCH are set correctly and run `mkall.sh`. This will generate the files for your specific system. Running `mkall.sh -n` shows the commands that will be run. Requirements: bash, go The new build system uses a Docker container to generate the go files directly from source checkouts of the kernel and various system libraries. This means that on any platform that supports Docker, all the files using the new build system can be generated at once, and generated files will not change based on what the person running the scripts has installed on their computer. The OS specific files for the new build system are located in the `${GOOS}` directory, and the build is coordinated by the `${GOOS}/mkall.go` program. When the kernel or system library updates, modify the Dockerfile at `${GOOS}/Dockerfile` to checkout the new release of the source. To build all the files under the new build system, you must be on an amd64/Linux system and have your GOOS and GOARCH set accordingly. Running `mkall.sh` will then generate all of the files for all of the GOOS/GOARCH pairs in the new build system. Running `mkall.sh -n` shows the commands that will be" }, { "data": "Requirements: bash, go, docker This section describes the various files used in the code generation process. It also contains instructions on how to modify these files to add a new architecture/OS or to add additional syscalls, types, or constants. Note that if you are using the new build system, the scripts/programs cannot be called normally. They must be called from within the docker container. The hand-written assembly file at `asm${GOOS}${GOARCH}.s` implements system call dispatch. There are three entry points: ``` func Syscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) func Syscall6(trap, a1, a2, a3, a4, a5, a6 uintptr) (r1, r2, err uintptr) func RawSyscall(trap, a1, a2, a3 uintptr) (r1, r2, err uintptr) ``` The first and second are the standard ones; they differ only in how many arguments can be passed to the kernel. The third is for low-level use by the ForkExec wrapper. 
Unlike the first two, it does not call into the scheduler to let it know that a system call is running. When porting Go to a new architecture/OS, this file must be implemented for each GOOS/GOARCH pair. Mksysnum is a Go program located at `${GOOS}/mksysnum.go` (or `mksysnum_${GOOS}.go` for the old system). This program takes in a list of header files containing the syscall number declarations and parses them to produce the corresponding list of Go numeric constants. See `zsysnum${GOOS}${GOARCH}.go` for the generated constants. Adding new syscall numbers is mostly done by running the build on a sufficiently new installation of the target OS (or updating the source checkouts for the new build system). However, depending on the OS, you may need to update the parsing in mksysnum. The `syscall.go`, `syscall${GOOS}.go`, `syscall${GOOS}_${GOARCH}.go` are hand-written Go files which implement system calls (for unix, the specific OS, or the specific OS/Architecture pair respectively) that need special handling and list `//sys` comments giving prototypes for ones that can be generated. The mksyscall.go program takes the `//sys` and `//sysnb` comments and converts them into syscalls. This requires the name of the prototype in the comment to match a syscall number in the `zsysnum${GOOS}${GOARCH}.go` file. The function prototype can be exported (capitalized) or not. Adding a new syscall often just requires adding a new `//sys` function prototype with the desired arguments and a capitalized name so it is exported. However, if you want the interface to the syscall to be different, often one will make an unexported `//sys` prototype, and then write a custom wrapper in `syscall_${GOOS}.go`. For each OS, there is a hand-written Go file at `${GOOS}/types.go` (or `types_${GOOS}.go` on the old system). This file includes standard C headers and creates Go type aliases to the corresponding C" }, { "data": "The file is then fed through godef to get the Go compatible definitions. Finally, the generated code is fed though mkpost.go to format the code correctly and remove any hidden or private identifiers. This cleaned-up code is written to `ztypes${GOOS}${GOARCH}.go`. The hardest part about preparing this file is figuring out which headers to include and which symbols need to be `#define`d to get the actual data structures that pass through to the kernel system calls. Some C libraries preset alternate versions for binary compatibility and translate them on the way in and out of system calls, but there is almost always a `#define` that can get the real ones. See `types_darwin.go` and `linux/types.go` for examples. To add a new type, add in the necessary include statement at the top of the file (if it is not already there) and add in a type alias line. Note that if your type is significantly different on different architectures, you may need some `#if/#elif` macros in your include statements. This script is used to generate the system's various constants. This doesn't just include the error numbers and error strings, but also the signal numbers and a wide variety of miscellaneous constants. The constants come from the list of include files in the `includes_${uname}` variable. A regex then picks out the desired `#define` statements, and generates the corresponding Go constants. The error numbers and strings are generated from `#include <errno.h>`, and the signal numbers and strings are generated from `#include <signal.h>`. 
All of these constants are written to `zerrors${GOOS}${GOARCH}.go` via a C program, `_errors.c`, which prints out all the constants. To add a constant, add the header that includes it to the appropriate variable. Then, edit the regex (if necessary) to match the desired constant. Avoid making the regex too broad to avoid matching unintended constants. This program is used to extract duplicate const, func, and type declarations from the generated architecture-specific files listed below, and merge these into a common file for each OS. The merge is performed in the following steps: Construct the set of common code that is idential in all architecture-specific files. Write this common code to the merged file. Remove the common code from all architecture-specific files. A file containing all of the system's generated error numbers, error strings, signal numbers, and constants. Generated by `mkerrors.sh` (see above). A file containing all the generated syscalls for a specific GOOS and GOARCH. Generated by `mksyscall.go` (see above). A list of numeric constants for all the syscall number of the specific GOOS and GOARCH. Generated by mksysnum (see above). A file containing Go types for passing into (or returning from) syscalls. Generated by godefs and the types file (see above)." } ]
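As an illustration of the `//sys` and `//sysnb` prototypes described in the mksyscall section above, entries of roughly this shape appear in `syscall_linux.go` (shown here only to make the comment format concrete, not as additions to the real file):

```go
//sys	Unlinkat(dirfd int, path string, flags int) (err error)
//sysnb	Getpid() (pid int)
```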
{ "category": "Runtime", "file_name": "README.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "title: FilesystemSubVolumeGroup CRD !!! info This guide assumes you have created a Rook cluster as explained in the main Rook allows creation of Ceph Filesystem through the custom resource definitions (CRDs). Filesystem subvolume groups are an abstraction for a directory level higher than Filesystem subvolumes to effect policies (e.g., File layouts) across a set of subvolumes. For more information about CephFS volume, subvolumegroup and subvolume refer to the . To get you started, here is a simple example of a CRD to create a subvolumegroup on the CephFilesystem \"myfs\". ```yaml apiVersion: ceph.rook.io/v1 kind: CephFilesystemSubVolumeGroup metadata: name: group-a namespace: rook-ceph # namespace:cluster spec: name: csi filesystemName: myfs pinning: distributed: 1 # distributed=<0, 1> (disabled=0) ``` If any setting is unspecified, a suitable default will be used automatically. `name`: The name that will be used for the Ceph Filesystem subvolume group. `name`: The spec name that will be used for the Ceph Filesystem subvolume group if not set metadata name will be used. `filesystemName`: The metadata name of the CephFilesystem CR where the subvolume group will be created. `quota`: Quota size of the Ceph Filesystem subvolume group. `dataPoolName`: The data pool name for the subvolume group layout instead of the default data pool. `pinning`: To distribute load across MDS ranks in predictable and stable ways. See the Ceph doc for . `distributed`: Range: <0, 1>, for disabling it set to 0 `export`: Range: <0-256>, for disabling it set to -1 `random`: Range: [0.0, 1.0], for disabling it set to 0.0 !!! note Only one out of (export, distributed, random) can be set at a time. By default pinning is set with value: `distributed=1`." } ]
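Once the resource has been reconciled, one way to confirm the group on the Ceph side is to list subvolume groups from the toolbox. This assumes the standard `rook-ceph-tools` deployment is installed; adjust the names and namespace to your cluster:

```console
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs subvolumegroup ls myfs
```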
{ "category": "Runtime", "file_name": "ceph-fs-subvolumegroup-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "In the SPDK directory, run: ```shell cp dpdk/build/lib/*.a build/lib/ cp isa-l/.libs/*.a build/lib/ cp isa-l-crypto/.libs/*.a build/lib/ cd build/lib/ rm libspdk_ut_mock.a cc -shared -o libspdk_fat.so -Wl,--whole-archive *.a -Wl,--no-whole-archive sudo cp libspdk_fat.so /usr/local/lib ```" } ]
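After copying `libspdk_fat.so` into `/usr/local/lib`, refreshing the dynamic linker cache is usually needed so the library resolves at run time (a standard step, not specific to this repository):

```shell
sudo ldconfig
ldconfig -p | grep libspdk_fat
```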
{ "category": "Runtime", "file_name": "README.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "Active Help is a framework provided by Cobra which allows a program to define messages (hints, warnings, etc) that will be printed during program usage. It aims to make it easier for your users to learn how to use your program. If configured by the program, Active Help is printed when the user triggers shell completion. For example, ``` bash-5.1$ helm repo add [tab] You must choose a name for the repo you are adding. bash-5.1$ bin/helm package [tab] Please specify the path to the chart to package bash-5.1$ bin/helm package bin/ internal/ scripts/ pkg/ testdata/ ``` Hint: A good place to use Active Help messages is when the normal completion system does not provide any suggestions. In such cases, Active Help nicely supplements the normal shell completions to guide the user in knowing what is expected by the program. Active Help is currently only supported for the following shells: Bash (using only). Note that bash 4.4 or higher is required for the prompt to appear when an Active Help message is printed. Zsh As Active Help uses the shell completion system, the implementation of Active Help messages is done by enhancing custom dynamic completions. If you are not familiar with dynamic completions, please refer to . Adding Active Help is done through the use of the `cobra.AppendActiveHelp(...)` function, where the program repeatedly adds Active Help messages to the list of completions. Keep reading for details. Adding Active Help when completing a noun is done within the `ValidArgsFunction(...)` of a command. Please notice the use of `cobra.AppendActiveHelp(...)` in the following example: ```go cmd := &cobra.Command{ Use: \"add [NAME] [URL]\", Short: \"add a chart repository\", Args: require.ExactArgs(2), RunE: func(cmd *cobra.Command, args []string) error { return addRepo(args) }, ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { var comps []string if len(args) == 0 { comps = cobra.AppendActiveHelp(comps, \"You must choose a name for the repo you are adding\") } else if len(args) == 1 { comps = cobra.AppendActiveHelp(comps, \"You must specify the URL for the repo you are adding\") } else { comps = cobra.AppendActiveHelp(comps, \"This command does not take any more arguments\") } return comps, cobra.ShellCompDirectiveNoFileComp }, } ``` The example above defines the completions (none, in this specific example) as well as the Active Help messages for the `helm repo add` command. It yields the following behavior: ``` bash-5.1$ helm repo add [tab] You must choose a name for the repo you are adding bash-5.1$ helm repo add grafana [tab] You must specify the URL for the repo you are adding bash-5.1$ helm repo add grafana https://grafana.github.io/helm-charts [tab] This command does not take any more arguments ``` Hint: As can be seen in the above example, a good place to use Active Help messages is when the normal completion system does not provide any suggestions. In such cases, Active Help nicely supplements the normal shell completions. Providing Active Help for flags is done in the same fashion as for nouns, but using the completion function registered for the flag. 
For example: ```go _ = cmd.RegisterFlagCompletionFunc(\"version\", func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { if len(args) != 2 { return cobra.AppendActiveHelp(nil, \"You must first specify the chart to install before the --version flag can be completed\"), cobra.ShellCompDirectiveNoFileComp } return compVersionFlag(args[1], toComplete) }) ``` The example above prints an Active Help message when not enough information was given by the user to complete the `--version` flag. ``` bash-5.1$ bin/helm install myrelease --version 2.0.[tab] You must first specify the chart to install before the --version flag can be completed" }, { "data": "bin/helm install myrelease bitnami/solr --version 2.0. 2.0.1 2.0.2 2.0.3 ``` You may want to allow your users to disable Active Help or choose between different levels of Active Help. It is entirely up to the program to define the type of configurability of Active Help that it wants to offer, if any. Allowing to configure Active Help is entirely optional; you can use Active Help in your program without doing anything about Active Help configuration. The way to configure Active Help is to use the program's Active Help environment variable. That variable is named `<PROGRAM>ACTIVEHELP` where `<PROGRAM>` is the name of your program in uppercase with any `-` replaced by an `_`. The variable should be set by the user to whatever Active Help configuration values are supported by the program. For example, say `helm` has chosen to support three levels for Active Help: `on`, `off`, `local`. Then a user would set the desired behavior to `local` by doing `export HELMACTIVEHELP=local` in their shell. For simplicity, when in `cmd.ValidArgsFunction(...)` or a flag's completion function, the program should read the Active Help configuration using the `cobra.GetActiveHelpConfig(cmd)` function and select what Active Help messages should or should not be added (instead of reading the environment variable directly). For example: ```go ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { activeHelpLevel := cobra.GetActiveHelpConfig(cmd) var comps []string if len(args) == 0 { if activeHelpLevel != \"off\" { comps = cobra.AppendActiveHelp(comps, \"You must choose a name for the repo you are adding\") } } else if len(args) == 1 { if activeHelpLevel != \"off\" { comps = cobra.AppendActiveHelp(comps, \"You must specify the URL for the repo you are adding\") } } else { if activeHelpLevel == \"local\" { comps = cobra.AppendActiveHelp(comps, \"This command does not take any more arguments\") } } return comps, cobra.ShellCompDirectiveNoFileComp }, ``` Note 1: If the `<PROGRAM>ACTIVEHELP` environment variable is set to the string \"0\", Cobra will automatically disable all Active Help output (even if some output was specified by the program using the `cobra.AppendActiveHelp(...)` function). Using \"0\" can simplify your code in situations where you want to blindly disable Active Help without having to call `cobra.GetActiveHelpConfig(cmd)` explicitly. Note 2: If a user wants to disable Active Help for every single program based on Cobra, she can set the environment variable `COBRAACTIVEHELP` to \"0\". In this case `cobra.GetActiveHelpConfig(cmd)` will return \"0\" no matter what the variable `<PROGRAM>ACTIVEHELP` is set to. 
Note 3: If the user does not set `<PROGRAM>ACTIVEHELP` or `COBRAACTIVEHELP` (which will be a common case), the default value for the Active Help configuration returned by `cobra.GetActiveHelpConfig(cmd)` will be the empty string. Cobra provides a default `completion` command for programs that wish to use it. When using the default `completion` command, Active Help is configurable in the same fashion as described above using environment variables. You may wish to document this in more details for your users. Debugging your Active Help code is done in the same way as debugging your dynamic completion code, which is with Cobra's hidden `complete` command. Please refer to for details. When debugging with the `complete` command, if you want to specify different Active Help configurations, you should use the active help environment variable. That variable is named `<PROGRAM>ACTIVEHELP` where any `-` is replaced by an `_`. For example, we can test deactivating some Active Help as shown below: ``` $ HELMACTIVEHELP=1 bin/helm complete install wordpress bitnami/h<ENTER> bitnami/haproxy bitnami/harbor activeHelp WARNING: cannot re-use a name that is still in use :0 Completion ended with directive: ShellCompDirectiveDefault $ HELMACTIVEHELP=0 bin/helm complete install wordpress bitnami/h<ENTER> bitnami/haproxy bitnami/harbor :0 Completion ended with directive: ShellCompDirectiveDefault ```" } ]
{ "category": "Runtime", "file_name": "active_help.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Ceph CSI Drivers There are three CSI drivers integrated with Rook that are used in different scenarios: RBD: This block storage driver is optimized for RWO pod access where only one pod may access the storage. . CephFS: This file storage driver allows for RWX with one or more pods accessing the same storage. . NFS (experimental): This file storage driver allows creating NFS exports that can be mounted on pods, or directly via an NFS client from inside or outside the Kubernetes cluster. The Ceph Filesystem (CephFS) and RADOS Block Device (RBD) drivers are enabled automatically by the Rook operator. The NFS driver is disabled by default. All drivers will be started in the same namespace as the operator when the first CephCluster CR is created. The two most recent Ceph CSI version are supported with Rook. Refer to ceph csi for more information. The RBD and CephFS drivers support the creation of static PVs and static PVCs from an existing RBD image or CephFS volume/subvolume. Refer to the documentation for more information. If you've deployed the Rook operator in a namespace other than `rook-ceph`, change the prefix in the provisioner to match the namespace you used. For example, if the Rook operator is running in the namespace `my-namespace` the provisioner value should be `my-namespace.rbd.csi.ceph.com`. The same provisioner name must be set in both the storageclass and snapshotclass. To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: `# csi-provisioner-name` To use a custom prefix for the CSI drivers instead of the namespace prefix, set the `CSIDRIVERNAME_PREFIX` environment variable in the operator configmap. For instance, to use the prefix `my-prefix` for the CSI drivers, set the following in the operator configmap: ```console kubectl patch cm rook-ceph-operator-config -n rook-ceph -p $'data:\\n \"CSIDRIVERNAME_PREFIX\": \"my-prefix\"' ``` Once the configmap is updated, the CSI drivers will be deployed with the `my-prefix` prefix. The same prefix must be set in both the storageclass and snapshotclass. For example, to use the prefix `my-prefix` for the CSI drivers, update the provisioner in the storageclass: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-block-sc provisioner: my-prefix.rbd.csi.ceph.com ... ``` The same prefix must be set in the volumesnapshotclass as well: ```yaml apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: rook-ceph-block-vsc driver: my-prefix.rbd.csi.ceph.com ... ``` When the prefix is set, the driver names will be: RBD: `my-prefix.rbd.csi.ceph.com` CephFS: `my-prefix.cephfs.csi.ceph.com` NFS: `my-prefix.nfs.csi.ceph.com` !!! note Please be careful when setting the `CSIDRIVERNAME_PREFIX` environment" }, { "data": "It should be done only in fresh deployments because changing the prefix in an existing cluster will result in unexpected behavior. To find the provisioner name in the example storageclasses and volumesnapshotclass, search for: `# csi-provisioner-name` All CSI pods are deployed with a sidecar container that provides a Prometheus metric for tracking whether the CSI plugin is alive and running. Check the to see how to integrate CSI liveness and GRPC metrics into Ceph monitoring. To expand the PVC the controlling StorageClass must have `allowVolumeExpansion` set to `true`. `csi.storage.k8s.io/controller-expand-secret-name` and `csi.storage.k8s.io/controller-expand-secret-namespace` values set in the storageclass. 
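For example, the expansion-related parts of an RBD StorageClass might look like the sketch below. The provisioner prefix follows the operator namespace as explained above, the secret name/namespace shown are the defaults from Rook's example manifests, and the other required provisioning parameters are omitted:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
allowVolumeExpansion: true
parameters:
  # ...pool/clusterID and other provisioning parameters omitted...
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
```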
Now expand the PVC by editing the PVC's `pvc.spec.resource.requests.storage` to a higher values than the current size. Once the PVC is expanded on the back end and the new size is reflected on the application mountpoint, the status capacity `pvc.status.capacity.storage` of the PVC will be updated to the new size. To support RBD Mirroring, the will be started in the RBD provisioner pod. CSI-Addons support the `VolumeReplication` operation. The volume replication controller provides common and reusable APIs for storage disaster recovery. It is based on the specification. It follows the controller pattern and provides extended APIs for storage disaster recovery. The extended APIs are provided via Custom Resource Definitions (CRDs). To enable the CSIAddons sidecar and deploy the controller, follow the steps The generic ephemeral volume feature adds support for specifying PVCs in the `volumes` field to create a Volume as part of the pod spec. This feature requires the `GenericEphemeralVolume` feature gate to be enabled. For example: ```yaml kind: Pod apiVersion: v1 ... volumes: name: mypvc ephemeral: volumeClaimTemplate: spec: accessModes: [\"ReadWriteOnce\"] storageClassName: \"rook-ceph-block\" resources: requests: storage: 1Gi ``` A volume claim template is defined inside the pod spec, and defines a volume to be provisioned and used by the pod within its lifecycle. Volumes are provisioned when a pod is spawned and destroyed when the pod is deleted. Refer to the for more info. See example manifests for an and a . The CSI-Addons Controller handles requests from users. Users create a CR that the controller inspects and forwards to one or more CSI-Addons sidecars for execution. Deploy the controller by running the following commands: ```console kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/crds.yaml kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/rbac.yaml kubectl create -f https://raw.githubusercontent.com/csi-addons/kubernetes-csi-addons/v0.8.0/deploy/controller/setup-controller.yaml ``` This creates the required CRDs and configures permissions. To use the features provided by the CSI-Addons, the `csi-addons` containers need to be deployed in the RBD provisioner and nodeplugin pods, which are not enabled by" }, { "data": "Execute the following to enable the CSI-Addons sidecars: Update the `rook-ceph-operator-config` configmap and patch the following configuration: ```console kubectl patch cm rook-ceph-operator-config -nrook-ceph -p $'data:\\n \"CSIENABLECSIADDONS\": \"true\"' ``` After enabling `CSIENABLECSIADDONS` in the configmap, a new sidecar container named `csi-addons` will start automatically in the RBD CSI provisioner and nodeplugin pods. CSI-Addons supports the following operations: Reclaim Space * * Network Fencing Volume Replication * Ceph-CSI supports encrypting individual RBD PersistentVolumeClaims with LUKS. More details can be found including a full list of supported encryption configurations. A sample configmap can be found . !!! note Rook also supports OSD-level encryption (see `encryptedDevice` option ). Using both RBD PVC encryption and OSD encryption at the same time will lead to double encryption and may reduce read/write performance. Existing Ceph clusters can also enable Ceph-CSI RBD PVC encryption support and multiple kinds of encryption KMS can be used on the same Ceph cluster using different storageclasses. 
The following steps demonstrate how to enable support for encryption: Create the `rook-ceph-csi-kms-config` configmap with required encryption configuration in the same namespace where the Rook operator is deployed. An example is shown below: ```yaml apiVersion: v1 kind: ConfigMap metadata: name: rook-ceph-csi-kms-config namespace: rook-ceph data: config.json: |- { \"user-secret-metadata\": { \"encryptionKMSType\": \"metadata\", \"secretName\": \"storage-encryption-secret\" } } ``` Update the `rook-ceph-operator-config` configmap and patch the following configurations ```console kubectl patch cm rook-ceph-operator-config -nrook-ceph -p $'data:\\n \"CSIENABLEENCRYPTION\": \"true\"' ``` Create the resources (secrets, configmaps etc) required by the encryption type. In this case, create `storage-encryption-secret` secret in the namespace of the PVC as follows: ```yaml apiVersion: v1 kind: Secret metadata: name: storage-encryption-secret namespace: rook-ceph stringData: encryptionPassphrase: test-encryption ``` Create a new with additional parameters `encrypted: \"true\"` and `encryptionKMSID: \"<key used in configmap>\"`. An example is show below: ```yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: rook-ceph-block-encrypted parameters: encrypted: \"true\" encryptionKMSID: \"user-secret-metadata\" ``` PVCs created using the new storageclass will be encrypted. Ceph CSI supports mapping RBD volumes with KRBD options and mounting CephFS volumes with ceph mount options to allow serving reads from an OSD closest to the client, according to OSD locations defined in the CRUSH map and topology labels on nodes. Refer to the for more details. Execute the following step to enable read affinity for a specific ceph cluster: Patch the ceph cluster CR to enable read affinity: ```console kubectl patch cephclusters.ceph.rook.io <cluster-name> -n rook-ceph -p '{\"spec\":{\"csi\":{\"readAffinity\":{\"enabled\": true}}}}' ``` ```yaml csi: readAffinity: enabled: true ``` Add topology labels to the Kubernetes nodes. The same labels may be used as mentioned in the topic. Ceph CSI will extract the CRUSH location from the topology labels found on the node and pass it though krbd options during mapping RBD volumes. !!! note This requires Linux kernel version 5.8 or higher." } ]
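As an illustration of the topology-label step above, labels could be applied to a node like this (the node name and label values are placeholders):

```console
kubectl label node worker-1 topology.kubernetes.io/region=region-1 topology.kubernetes.io/zone=zone-a
```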
{ "category": "Runtime", "file_name": "ceph-csi-drivers.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "(installing)= The easiest way to install Incus is to {ref}`install one of the available packages <installing-from-package>`, but you can also {ref}`install Incus from the sources <installingfromsource>`. After installing Incus, make sure you have an `incus-admin` group on your system. Users in this group can interact with Incus. See {ref}`installing-manage-access` for instructions. % Include content from ```{include} support.md :start-after: <!-- Include start release --> :end-before: <!-- Include end release --> ``` LTS releases are recommended for production environments, because they benefit from regular bugfix and security updates. However, there are no new features added to an LTS release, nor any kind of behavioral change. To get all the latest features and monthly updates to Incus, use the feature release branch instead. (installing-from-package)= The Incus daemon only works on Linux. The client tool () is available on most platforms. Packages are available for a number of Linux distributions, either in their main repository or through third party repositories. ````{tabs} ```{group-tab} Alpine Incus and all of its dependencies are available in Alpine Linux's community repository as `incus`. Install Incus with: apk add incus incus-client Then enable and start the service: rc-update add incusd rc-service incusd start Please report packaging issues . ``` ```{group-tab} Arch Linux Incus and all of its dependencies are available in Arch Linux's main repository as `incus`. Install Incus with: pacman -S incus Please report packaging issues . ``` ```{group-tab} Debian There are three options currently available to Debian users. Native `incus` package A native `incus` package is currently available in the Debian testing and unstable repositories. This package will be featured in the upcoming Debian 13 (`trixie`) release. On such systems, just running `apt install incus` will get Incus installed. Native `incus` backported package A native `incus` backported package is currently available for Debian 12 (`bookworm`) users. On such systems, just running `apt install incus/bookworm-backports` will get Incus installed. NOTE: Users of backported packages should not file bugs in the Debian Bug Tracker, instead please reach out or directly to the Debian packager. Zabbly package repository provides up to date and supported Incus packages for Debian stable releases (11 and 12). Those packages contain everything needed to use all Incus features. Up to date installation instructions may be found here: ``` ```{group-tab} Docker Docker/Podman images of Incus, based on the Zabbly package repository, are available with instructions here: ``` ```{group-tab} Fedora RPM packages of Incus and its dependencies are not yet available via official Fedora repositories but via the Community Project (COPR) repository. Install the COPR plugin for `dnf` and then enable the COPR repository: dnf install 'dnf-command(copr)' dnf copr enable ganto/lxc4 Install Incus with: dnf install incus For the additional setup steps see . Note that this is not an official project of Incus nor Fedora. Please report packaging issues . ``` ```{group-tab} Gentoo Incus and all of its dependencies are available in Gentoo's main repository as . Install Incus with: emerge -av app-containers/incus Note: Installing LTS vs. feature-release will be explained later, when Incus upstream and Gentoo's repository has those releases available. 
There will be two newly created groups associated to Incus: `incus` to allow basic user access (launch containers), and `incus-admin` for `incus admin` controls. Add your regular users to either, or both, depending on your setup and use" }, { "data": "After installation, you may want to configure Incus. This is optional though, as the defaults should also just work. `openrc`: Edit `/etc/conf.d/incus` `systemd`: `systemctl edit --full incus.service` Set up `/etc/subuid` and `/etc/subgid`: echo \"root:1000000:1000000000\" | tee -a /etc/subuid /etc/subgid For more information: {ref}`Idmaps for user namespace <userns-idmap>` Start the daemon: `openrc`: `rc-service incus start` `systemd`: `systemctl start incus` Continue in the . ``` ```{group-tab} NixOS Incus and its dependencies are packaged in NixOS and are configurable through NixOS options. See for a complete set of available options. The service can be enabled and started by adding the following to your NixOS configuration. virtualisation.incus.enable = true; Incus initialization can be done manually using `incus admin init`, or through the preseed option in your NixOS configuration. See the NixOS documentation for an example preseed. virtualisation.incus.preseed = {}; Finally, you can add users to the `incus-admin` group to provide non-root access to the Incus socket. In your NixOS configuration: users.users.YOUR_USERNAME.extraGroups = [\"incus-admin\"]; For any NixOS specific issues, please in the package repository. ``` ```{group-tab} Ubuntu There are two options currently available to Ubuntu users. Native `incus` package A native `incus` package is currently available in the Ubuntu development repository. This package will be featured in the upcoming Ubuntu 24.04 (noble) release. On such systems, just running `apt install incus` will get Incus installed. Zabbly package repository provides up to date and supported Incus packages for Ubuntu LTS releases (20.04 and 22.04). Those packages contain everything needed to use all Incus features. Up to date installation instructions may be found here: ``` ```{group-tab} Void Linux Incus and all of its dependencies are available in Void Linux's repository as `incus`. Install Incus with: xbps-install incus incus-client Then enable and start the services with: ln -s /etc/sv/incus /var/service ln -s /etc/sv/incus-user /var/service sv up incus sv up incus-user Please report packaging issues . ``` ```` ```{important} The builds for other operating systems include only the client, not the server. ``` ````{tabs} ```{group-tab} macOS Incus publishes builds of the Incus client for macOS through . To install the feature branch of Incus, run: brew install incus ``` ```{group-tab} Windows The Incus client on Windows is provided as a and package. To install it using Chocolatey or Winget, follow the instructions below: Chocolatey Install Chocolatey by following the . Install the Incus client: choco install incus Winget Install Winget by following the Install the Incus client: winget install LinuxContainers.Incus ``` ```` You can also find native builds of the Incus client on : Incus client for Linux: , Incus client for Windows: , Incus client for macOS: , (installingfromsource)= Follow these instructions if you want to build and install Incus from the source code. We recommend having the latest versions of `liblxc` (>= 5.0.0 required) available for Incus development. Additionally, Incus requires a modern Golang (see {ref}`requirements-go`) version to work. 
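Before starting a source build, it can help to confirm the Go toolchain is recent enough (a quick, purely illustrative check; the exact minimum version is given in the requirements reference above):

```bash
go version
```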
````{tabs} ```{group-tab} Alpine Linux You can get the development resources required to build Incus on your Alpine Linux via the following command: apk add acl-dev autoconf automake eudev-dev gettext-dev go intltool libcap-dev libtool libuv-dev linux-headers lz4-dev tcl-dev sqlite-dev lxc-dev make xz To take advantage of all the necessary features of Incus, you must install additional packages. You can reference the list of packages you need to use specific functions from" }, { "data": "<!-- wokeignore:rule=master --> Also you can find the package you need with the binary name from . Install the main dependencies: apk add acl attr ca-certificates cgmanager dbus dnsmasq lxc libintl iproute2 iptables netcat-openbsd rsync squashfs-tools shadow-uidmap tar xz Install the extra dependencies for running virtual machines: apk add qemu-system-x86_64 qemu-chardev-spice qemu-hw-usb-redirect qemu-hw-display-virtio-vga qemu-img qemu-ui-spice-core ovmf sgdisk util-linux-misc virtiofsd After preparing the source from a release tarball or git repository, you need follow the below steps to avoid known issues during build time: NOTE: Some build errors may occur if `/usr/local/include` doesn't exist on the system. Also, due to a , you may need to set those additional environment variables: export CGOLDFLAGS=\"$CGOLDFLAGS -L/usr/lib -lintl\" export CGO_CPPFLAGS=\"-I/usr/include\" ``` ```{group-tab} Debian and Ubuntu Install the build and required runtime dependencies with: sudo apt update sudo apt install acl attr autoconf automake dnsmasq-base git golang-go libacl1-dev libcap-dev liblxc1 liblxc-dev libsqlite3-dev libtool libudev-dev liblz4-dev libuv1-dev make pkg-config rsync squashfs-tools tar tcl xz-utils ebtables There are a few storage drivers for Incus besides the default `dir` driver. Installing these tools adds a bit to initramfs and may slow down your host boot, but are needed if you'd like to use a particular driver: sudo apt install btrfs-progs sudo apt install ceph-common sudo apt install lvm2 thin-provisioning-tools sudo apt install zfsutils-linux To run the test suite, you'll also need: sudo apt install busybox-static curl gettext jq sqlite3 socat bind9-dnsutils NOTE: If you use the `liblxc-dev` package and get compile time errors when building the `go-lxc` module, ensure that the value for `LXC_DEVEL` is `0` for your `liblxc` build. To check that, look at `/usr/include/lxc/version.h`. If the `LXC_DEVEL` value is `1`, replace it with `0` to work around the problem. It's a packaging bug, and we are aware of it for Ubuntu 22.04/22.10. Ubuntu 23.04/23.10 does not have this problem. 
``` ```{group-tab} OpenSUSE You can get the development resources required to build Incus on your OpenSUSE Tumbleweed system via the following command: sudo zypper install autoconf automake git go libacl-devel libcap-devel liblxc1 liblxc-devel sqlite3-devel libtool libudev-devel liblz4-devel libuv-devel make pkg-config tcl In addition, for normal operation, you'll also likely need sudo zypper install dnsmasq squashfs xz rsync tar attr acl qemu qemu-img qemu-spice qemu-hw-display-virtio-gpu-pci iptables ebtables nftables As OpenSUSE stores QEMU firmware files using an unusual filename and location, you will need to create some symlinks for them: sudo mkdir /usr/share/OVMF sudo ln -s /usr/share/qemu/ovmf-x8664-4m-code.bin /usr/share/OVMF/OVMFCODE.4MB.fd sudo ln -s /usr/share/qemu/ovmf-x8664-4m-vars.bin /usr/share/OVMF/OVMFVARS.4MB.fd sudo ln -s /usr/share/qemu/ovmf-x8664-ms-4m-vars.bin /usr/share/OVMF/OVMFVARS.4MB.ms.fd sudo ln -s /usr/share/qemu/ovmf-x8664-ms-4m-code.bin /usr/share/OVMF/OVMFCODE.4MB.ms.fd ``` ```` These instructions for building from source are suitable for individual developers who want to build the latest version of Incus, or build a specific release of Incus which may not be offered by their Linux distribution. Source builds for integration into Linux distributions are not covered here and may be covered in detail in a separate document in the future. ```bash git clone https://github.com/lxc/incus cd incus ``` This will download the current development tree of Incus and place you in the source tree. Then proceed to the instructions below to actually build and install Incus. The Incus release tarballs bundle a complete dependency tree as well as a local copy of `libraft` and `libcowsql` for Incus' database setup. ```bash tar zxvf incus-0.1.tar.gz cd incus-0.1 ``` This will unpack the release tarball and place you inside of the source" }, { "data": "Then proceed to the instructions below to actually build and install Incus. The actual building is done by two separate invocations of the Makefile: `make deps` -- which builds libraries required by Incus -- and `make`, which builds Incus itself. At the end of `make deps`, a message will be displayed which will specify environment variables that should be set prior to invoking `make`. As new versions of Incus are released, these environment variable settings may change, so be sure to use the ones displayed at the end of the `make deps` process, as the ones below (shown for example purposes) may not exactly match what your version of Incus requires: We recommend having at least 2GiB of RAM to allow the build to complete. ```{terminal} :input: make deps ... make[1]: Leaving directory '/root/go/deps/cowsql' Please set the following in your environment (possibly ~/.bashrc) :input: make ``` Once the build completes, you simply keep the source tree, add the directory referenced by `$(go env GOPATH)/bin` to your shell path, and set the `LDLIBRARYPATH` variable printed by `make deps` to your environment. This might look something like this for a `~/.bashrc` file: ```bash export PATH=\"${PATH}:$(go env GOPATH)/bin\" export LDLIBRARYPATH=\"$(go env GOPATH)/deps/cowsql/.libs/:$(go env GOPATH)/deps/raft/.libs/:${LDLIBRARYPATH}\" ``` Now, the `incusd` and `incus` binaries will be available to you and can be used to set up Incus. The binaries will automatically find and use the dependencies built in `$(go env GOPATH)/deps` thanks to the `LDLIBRARYPATH` environment variable. 
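A quick sanity check that the shell now resolves the freshly built binaries (illustrative only):

```bash
command -v incusd incus
```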
You'll need sub{u,g}ids for root, so that Incus can create the unprivileged containers: ```bash echo \"root:1000000:1000000000\" | sudo tee -a /etc/subuid /etc/subgid ``` Now you can run the daemon (the `--group sudo` bit allows everyone in the `sudo` group to talk to Incus; you can create your own group if you want): ```bash sudo -E PATH=${PATH} LDLIBRARYPATH=${LDLIBRARYPATH} $(go env GOPATH)/bin/incusd --group sudo ``` ```{note} If `newuidmap/newgidmap` tools are present on your system and `/etc/subuid`, `etc/subgid` exist, they must be configured to allow the root user a contiguous range of at least 10M UID/GID. ``` (installing-manage-access)= Access control for Incus is based on group membership. The root user and all members of the `incus-admin` group can interact with the local daemon. See {ref}`security-daemon-access` for more information. If the `incus-admin` group is missing on your system, create it and restart the Incus daemon. You can then add trusted users to the group. Anyone added to this group will have full control over Incus. Because group membership is normally only applied at login, you might need to either re-open your user session or use the `newgrp incus-admin` command in the shell you're using to talk to Incus. ````{important} % Include content from ```{include} ../README.md :start-after: <!-- Include start security note --> :end-before: <!-- Include end security note --> ``` ```` (installing-upgrade)= After upgrading Incus to a newer version, Incus might need to update its database to a new schema. This update happens automatically when the daemon starts up after an Incus upgrade. A backup of the database before the update is stored in the same location as the active database (at `/var/lib/incus/database`). ```{important} After a schema update, older versions of Incus might regard the database as invalid. That means that downgrading Incus might render your Incus installation unusable. In that case, if you need to downgrade, restore the database backup before starting the downgrade. ```" } ]
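For the access-control setup described above, the following is a minimal sketch of creating the `incus-admin` group and adding a trusted user to it. The user name `someuser` is a placeholder; replace it with a real account, and keep in mind that anyone in this group gets full control over Incus:

```bash
# Create the admin group if it does not exist yet.
getent group incus-admin >/dev/null || sudo groupadd incus-admin

# Add a trusted user to the group (placeholder user name).
sudo usermod -aG incus-admin someuser

# Group membership is applied at login, so either log in again or
# start a shell with the new group active before talking to Incus.
newgrp incus-admin
```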
{ "category": "Runtime", "file_name": "installing.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "This document describes Firecracker release planning, API support, and the Firecracker release lifetime. Firecracker provides this Release Policy to help customers effectively plan their Firecracker based operations. Firecracker uses for all releases. By definition, the API version implemented by a Firecracker binary is equivalent to that binarys version. Semantic versions are comprised of three fields in the form: `vMAJOR.MINOR.PATCH`. Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format. For example: v0.20.0, v0.22.0-beta5, and v99.123.77+foo.bar.baz.5. Firecracker publishes major, minor and patch releases: Patch release - The `PATCH` field is incremented whenever critical bugs and/or security issues are found in a supported release. The fixes in a PATCH release do not change existing behavior or the user interface. Upgrade is recommended. Minor release - When the `MINOR` field is incremented, the new release adds new features, bug fixes, or both without changing the existing user interface or user facing functionality. Adding new APIs can be done in a `MINOR` Firecracker release as long as it doesnt change the functionality of the APIs available in the previous release. Minor releases are shipped when features are ready for production. Multiple features may be bundled in the same release. Major release - When the `MAJOR` field is incremented, the new release adds new features and/or bug fixes, changing the existing user interface or user facing functionality. This may make the new release it incompatible with previous ones. A major release will likely require changes from other components interacting with Firecracker, e.g. API request, commands, or guest components. The changes will be detailed in the release notes. Major releases are published whenever features or bug fixes that changes the existing user interface, or user facing functionality, are ready for production. The Firecracker maintainers will only provide support for Firecracker releases under our . The Firecracker maintainers will provide patch releases for critical bugs and security issues when they are found, for: the last two Firecracker `vMAJOR.MINOR` releases for up to 1 year from release date; any Firecracker `vMAJOR.MINOR` release for at least 6 months from release date; for each `vMAJOR`, the latest `MINOR` for 1 year since release date; Starting with release v1.0, for each major and minor release, we will also be specifying the supported kernel versions. Considering an example where the last Firecracker releases are: v2.10.0 released on 2022-05-01 v2.11.0 released on 2022-07-10 v2.12.0 released on 2022-09-11 In case of an event occurring in 2022-10-03, all three releases will be patched since less than 6 months elapsed from their MINOR release time. Considering an example where the last Firecracker releases are: v2.10.0 released on 2022-05-01 v2.11.0 released on 2022-07-10 v2.12.0 released on 2022-09-11 In case of of an event occurring in 2023-05-04, v2.11 and" }, { "data": "will be patched since those were the last 2 Firecracker major releases and less than an year passed since their release time. 
Considering an example where the last Firecracker releases are: v2.14.0 released on 2022-05-01 v3.0.0 released on 2022-07-10 v3.1.0 released on 2022-09-11 In case of of an event occurring in 2023-01-13, v2.14 will be patched since is the last minor of v2 and has less than one year since release while v3.0 and v3.1 will be patched since were the last two Firecracker releases and less than 6 months have passed since release time. | Release | Release Date | Latest Patch | Min. end of support | Official end of Support | | : | --: | --: | : | : | | v1.7 | 2024-03-18 | v1.7.0 | 2024-09-18 | Supported | | v1.6 | 2023-12-20 | v1.6.0 | 2024-06-20 | Supported | | v1.5 | 2023-10-09 | v1.5.1 | 2024-04-09 | 2024-04-09 (end of 6mo support) | | v1.4 | 2023-07-20 | v1.4.1 | 2024-01-20 | 2024-01-20 (end of 6mo support) | | v1.3 | 2023-03-02 | v1.3.3 | 2023-09-02 | 2023-10-09 (v1.5 released) | | v1.2 | 2022-11-30 | v1.2.1 | 2023-05-30 | 2023-07-20 (v1.4 released) | | v1.1 | 2022-05-06 | v1.1.4 | 2022-11-06 | 2023-03-02 (v1.3 released) | | v1.0 | 2022-01-31 | v1.0.2 | 2022-07-31 | 2022-11-30 (v1.2 released) | | v0.25 | 2021-03-13 | v0.25.2 | 2021-09-13 | 2022-03-13 (end of 1y support) | The Firecracker API follows the semantic versioning standard. For a new release, we will increment the: MAJOR version when we make breaking changes in our API; MINOR version when we add or change functionality in a backwards compatible manner; PATCH version when we make backwards compatible bug fixes. Given a Firecracker version X.Y.Z user-generated client, it is guaranteed to work as expected with all Firecracker binary versions X.V.W, where V >= Y. Firecracker uses in terms of deprecating and removing API elements. We will consider a deprecated API element to be an element which still has backing functionality and will be supported at least until the next MAJOR version, where they will be removed. The support period of deprecated API elements is tied to . A feature is \"in\" developer preview if its marked as such in the and/or in the . Features in developer preview should not be used in production as they are not supported. Firecracker team may not provide patch releases for critical bug fixes or security issues found in features marked as developer preview. Features in developer preview may be subject to changes at any time. Changes in existing user interface or user facing functionality of a feature marked as developer preview can be released without changing the major version. Firecracker feature planning is outlined in the ." } ]
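To make the date arithmetic in these examples concrete, here is a small illustrative shell sketch (GNU `date`) that computes the two support windows for one release date taken from the examples above. It simply mirrors the policy wording and is not an official tool:

```bash
release_date="2022-09-11"   # e.g. v2.12.0 from the example above
today=$(date +%F)

# Minimum support windows defined by the policy.
six_months=$(date -d "${release_date} +6 months" +%F)
one_year=$(date -d "${release_date} +1 year" +%F)

echo "6-month window ends: ${six_months}"
echo "1-year window ends:  ${one_year}"

# ISO 8601 dates compare correctly as strings.
if [[ "${today}" < "${six_months}" ]]; then
  echo "release is still inside the minimum 6-month patch window"
fi
```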
{ "category": "Runtime", "file_name": "RELEASE_POLICY.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "sidebar_position: 2 sidebar_label: \"Use HA Volumes\" When the HA module is enabled, HwameiStor Operator will generate a StorageClass of HA automatically. As an example, we will deploy a MySQL application by creating a highly available (HA) volume. :::note The yaml file for MySQL is learnt from ::: `StorageClass` \"hwameistor-storage-lvm-hdd-ha\" has parameter `replicaNumber: \"2\"`, which indicates a DRBD replication pair. ```console $ kubectl apply -f examples/sc_ha.yaml $ kubectl get sc hwameistor-storage-lvm-hdd-ha -o yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hwameistor-storage-lvm-hdd-ha parameters: replicaNumber: \"2\" convertible: \"false\" csi.storage.k8s.io/fstype: xfs poolClass: HDD poolType: REGULAR striped: \"true\" volumeKind: LVM provisioner: lvm.hwameistor.io reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true ``` With HwameiStor and its `StorageClass` ready, a MySQL StatefulSet and its volumes can be deployed by a single command: ```Console $ kubectl apply -f exapmles/sts-mysql_ha.yaml ``` Please note the `volumeClaimTemplates` uses `storageClassName: hwameistor-storage-lvm-hdd-ha`: ```yaml spec: volumeClaimTemplates: metadata: name: data labels: app: sts-mysql-ha app.kubernetes.io/name: sts-mysql-ha spec: storageClassName: hwameistor-storage-lvm-hdd-ha accessModes: [\"ReadWriteOnce\"] resources: requests: storage: 1Gi ``` In this example, the pod is scheduled on node `k8s-worker-3`. ```console $ kubectl get po -l app=sts-mysql-ha -o wide NAME READY STATUS RESTARTS AGE IP NODE sts-mysql-ha-0 2/2 Running 0 3m08s 10.1.15.151 k8s-worker-1 $ kubectl get pvc -l app=sts-mysql-ha NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE data-sts-mysql-ha-0 Bound pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 1Gi RWO hwameistor-storage-lvm-hdd 3m Filesystem ``` By listing `LocalVolume(LV)` objects with the same name as that of the `PV`, we can see that the `LV` object is created on two nodes: `k8s-worker-1` and `k8s-worker-2`. ```console $ kubectl get lv pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 NAME POOL REPLICAS CAPACITY ACCESSIBILITY STATE RESOURCE PUBLISHED AGE pvc-5236ee6f-8212-4628-9876-1b620a4c4c36 LocalStorage_PoolHDD 1 1073741824 Ready -1 k8s-worker-1,k8s-worker-2 3m ``` `LocalVolumeReplica (LVR)` further shows the backend logical volume devices on each node. ```concole $ kubectl get lvr NAME CAPACITY NODE STATE SYNCED DEVICE AGE 5236ee6f-8212-4628-9876-1b620a4c4c36-d2kn55 1073741824 k8s-worker-1 Ready true /dev/LocalStorage_PoolHDD-HA/5236ee6f-8212-4628-9876-1b620a4c4c36 4m 5236ee6f-8212-4628-9876-1b620a4c4c36-glm7rf 1073741824 k8s-worker-3 Ready true /dev/LocalStorage_PoolHDD-HA/5236ee6f-8212-4628-9876-1b620a4c4c36 4m ```" } ]
{ "category": "Runtime", "file_name": "ha.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "sidebar_position: 3 sidebar_label: \"MinIO\" MinIO is a high performance object storage solution with native support for Kubernetes deployments. It can provide distributed, S3-compatible, and multi-cloud storage service in public cloud, private cloud, and edge computing scenarios. MinIO is a software-defined product and released under . It can also run well on x86 and other standard hardware. MinIO is designed to meet private cloud's requirements for high performance, in addition to all required features of object storage. MinIO features easy to use, cost-effective, and high performance in providing scalable cloud-native object storage services. MinIO works well in traditional object storage scenarios, such as secondary storage, disaster recovery, and archiving. It also shows competitive capabilities in machine learning, big data, private cloud, hybrid cloud, and other emerging fields to well support data analysis, high performance workloads, and cloud-native applications. MinIO is designed for the cloud-native architecture, so it can be run as a lightweight container and managed by external orchestration tools like Kubernetes. The MinIO package comprises of static binary files less than 100 MB. This small package enables it to efficiently use CPU and memory resources even with high workloads and can host a large number of tenants on shared hardware. MinIO's architecture is as follows: MinIO can run on a standard server that have installed proper local drivers (JBOD/JBOF). An MinIO cluster has a totally symmetric architecture. In other words, each server provide same functions, without any name node or metadata server. MinIO can write both data and metadata as objects, so there is no need to use metadata servers. MinIO provides erasure coding, bitrot protection, encryption and other features in a strict and consistent way. Each MinIO cluster is a set of distributed MinIO servers, one MinIO process running on each node. MinIO runs in a userspace as a single process, and it uses lightweight co-routines for high concurrence. It divides drivers into erasure sets (generally 16 drivers in each set), and uses the deterministic hash algorithm to place objects into these erasure sets. MinIO is specifically designed for large-scale and multi-datacenter cloud storage service. Tenants can run their own MinIO clusters separately from others, getting rid of interruptions from upgrade or security problems. Tenants can scale up by connecting multi clusters across geographical" }, { "data": "A Kubernetes cluster was deployed with three virtual machines: one as the master node and two as worker nodes. The kubelet version is 1.22.0. Deploy HwameiStor local storage on Kubernetes: Allocate five disks (SDB, SDC, SDD, SDE, and SDF) for each worker node to support HwameiStor local disk management: Check node status of local storage: Create storageClass: This section will show how to deploy minio-operator, how to create a tenant, and how to configure HwameiStor local volumes. Copy minio-operator repo to your local environment ```git git clone <https://github.com/minio/operator.git> ``` Enter helm operator directory `/root/operator/helm/operator` Deploy the minio-operator instance ```shell helm install minio-operator \\ --namespace minio-operator \\ --create-namespace \\ --generate-name . --set persistence.storageClass=local-storage-hdd-lvm . 
``` Check minio-operator running status Enter the `/root/operator/examples/kustomization/base` directory and change `tenant.yaml` Enter the `/root/operator/helm/tenant/` directory and change `values.yaml` Enter `/root/operator/examples/kustomization/tenant-lite` directory and change `kustomization.yaml` Change `tenant.yaml` Change `tenantNamePatch.yaml` Create a tenant ```shell kubectl apply k . ``` Check resource status of the tenant minio-t1 To create another new tenant, you can first create a new directory `tenant` (in this example `tenant-lite-2`) under `/root/operator/examples/kustomization` and change the files listed above Run `kubectl apply k .` to create the new tenant `minio-t2` Run the following commands in sequence to finish this configuration: ```shell kubectl get statefulset.apps/minio-t1-pool-0 -nminio-tenant -oyaml ``` ```shell kubectl get pvc A ``` ```shell kubectl get pvc export-minio6-0 -nminio-6 -oyaml ``` ```shell kubectl get pv ``` ```shell kubectl get pvc data0-minio-t1-pool-0-0 -nminio-tenant -oyaml ``` ```shell kubectl get lv ``` ```shell kubect get lvr ``` With the above settings in place, now let's test basic features and tenant isolation. Log in to `minio console10.6.163.52:30401/login` Get JWT by `kubectl minio proxy -n minio-operator` Browse and manage information about newly-created tenants Log in as tenant `minio-t1` (Account: minio) Browse bucket `bk-1` Create a new bucket `bk-1-1` Create path `path-1-2` Upload the file Upload the folder Create a user with read-only permission Log in as tenant `minio-t2` Only `minio-t2` information is visible. You cannot see information about tenant `minio-t1`. Create bucket Create path Upload the file Create a user Configure user policies Delete a bucket In this test, we successfully deployed MinIO distributed object storage on the basis of Kubernetes 1.22 and the HwameiStor local storage. We performed the basic feature test, system security test, and operation and maintenance management test. All tests are passed, proving HwameiStor can well support for MinIO." } ]
{ "category": "Runtime", "file_name": "minio.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Debugging Restores\" layout: docs When Velero finishes a Restore, its status changes to \"Completed\" regardless of whether or not there are issues during the process. The number of warnings and errors are indicated in the output columns from `velero restore get`: ``` NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR backup-test-20170726180512 backup-test Completed 155 76 2017-07-26 11:41:14 -0400 EDT <none> backup-test-20170726180513 backup-test Completed 121 14 2017-07-26 11:48:24 -0400 EDT <none> backup-test-2-20170726180514 backup-test-2 Completed 0 0 2017-07-26 13:31:21 -0400 EDT <none> backup-test-2-20170726180515 backup-test-2 Completed 0 1 2017-07-26 13:32:59 -0400 EDT <none> ``` To delve into the warnings and errors into more detail, you can use `velero restore describe`: ```bash velero restore describe backup-test-20170726180512 ``` The output looks like this: ``` Name: backup-test-20170726180512 Namespace: velero Labels: <none> Annotations: <none> Backup: backup-test Namespaces: Included: * Excluded: <none> Resources: Included: serviceaccounts Excluded: nodes, events, events.events.k8s.io Cluster-scoped: auto Namespace mappings: <none> Label selector: <none> Restore PVs: auto Preserve Service NodePorts: auto Phase: Completed Validation errors: <none> Warnings: Velero: <none> Cluster: <none> Namespaces: velero: serviceaccounts \"velero\" already exists serviceaccounts \"default\" already exists kube-public: serviceaccounts \"default\" already exists kube-system: serviceaccounts \"attachdetach-controller\" already exists serviceaccounts \"certificate-controller\" already exists serviceaccounts \"cronjob-controller\" already exists serviceaccounts \"daemon-set-controller\" already exists serviceaccounts \"default\" already exists serviceaccounts \"deployment-controller\" already exists serviceaccounts \"disruption-controller\" already exists serviceaccounts \"endpoint-controller\" already exists serviceaccounts \"generic-garbage-collector\" already exists serviceaccounts \"horizontal-pod-autoscaler\" already exists serviceaccounts \"job-controller\" already exists serviceaccounts \"kube-dns\" already exists serviceaccounts \"namespace-controller\" already exists serviceaccounts \"node-controller\" already exists serviceaccounts \"persistent-volume-binder\" already exists serviceaccounts \"pod-garbage-collector\" already exists serviceaccounts \"replicaset-controller\" already exists serviceaccounts \"replication-controller\" already exists serviceaccounts \"resourcequota-controller\" already exists serviceaccounts \"service-account-controller\" already exists serviceaccounts \"service-controller\" already exists serviceaccounts \"statefulset-controller\" already exists serviceaccounts \"ttl-controller\" already exists default: serviceaccounts \"default\" already exists Errors: Velero: <none> Cluster: <none> Namespaces: <none> ``` Errors appear for incomplete or partial restores. Warnings appear for non-blocking issues, for example, the restore looks \"normal\" and all resources referenced in the backup exist in some form, although some of them may have been pre-existing. Both errors and warnings are structured in the same way: `Velero`: A list of system-related issues encountered by the Velero server. For example, Velero couldn't read a directory. `Cluster`: A list of issues related to the restore of cluster-scoped resources. `Namespaces`: A map of namespaces to the list of issues related to the restore of their respective resources." } ]
{ "category": "Runtime", "file_name": "debugging-restores.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Code quality: move magefile in its own subdir/submodule to remove magefile dependency on logrus consumer improve timestamp format documentation Fixes: fix race condition on logger hooks Correct versioning number replacing v1.7.1. Beware this release has introduced a new public API and its semver is therefore incorrect. Code quality: use go 1.15 in travis use magefile as task runner Fixes: small fixes about new go 1.13 error formatting system Fix for long time race condiction with mutating data hooks Features: build support for zos Fixes: the dependency toward a windows terminal library has been removed Features: a new buffer pool management API has been added a set of `<LogLevel>Fn()` functions have been added Fixes: end of line cleanup revert the entry concurrency bug fix whic leads to deadlock under some circumstances update dependency on go-windows-terminal-sequences to fix a crash with go 1.14 Features: add an option to the `TextFormatter` to completely disable fields quoting Code quality: add golangci linter run on travis Fixes: add mutex for hooks concurrent access on `Entry` data caller function field for go1.14 fix build issue for gopherjs target Feature: add an hooks/writer sub-package whose goal is to split output on different stream depending on the trace level add a `DisableHTMLEscape` option in the `JSONFormatter` add `ForceQuote` and `PadLevelText` options in the `TextFormatter` Fixes build break for plan9, nacl, solaris This new release introduces: Enhance TextFormatter to not print caller information when they are empty (#944) Remove dependency on golang.org/x/crypto (#932, #943) Fixes: Fix Entry.WithContext method to return a copy of the initial entry (#941) This new release introduces: Add `DeferExitHandler`, similar to `RegisterExitHandler` but prepending the handler to the list of handlers (semantically like `defer`) (#848). Add `CallerPrettyfier` to `JSONFormatter` and `TextFormatter` (#909, #911) Add `Entry.WithContext()` and `Entry.Context`, to set a context on entries to be used e.g. in hooks (#919). Fixes: Fix wrong method calls `Logger.Print` and `Logger.Warningln` (#893). Update `Entry.Logf` to not do string formatting unless the log level is enabled (#903) Fix infinite recursion on unknown `Level.String()` (#907) Fix race condition in `getCaller` (#916). 
This new release introduces: Log, Logf, Logln functions for Logger and Entry that take a Level Fixes: Building prometheus node_exporter on AIX (#840) Race condition in TextFormatter (#468) Travis CI import path (#868) Remove coloured output on Windows (#862) Pointer to func as field in JSONFormatter (#870) Properly marshal Levels (#873) This new release introduces: A new method `SetReportCaller` in the `Logger` to enable the file, line and calling function from which the trace has been issued A new trace level named `Trace` whose level is below `Debug` A configurable exit function to be called upon a Fatal trace The `Level` object now implements `encoding.TextUnmarshaler` interface This is a bug fix" }, { "data": "fix the build break on Solaris don't drop a whole trace in JSONFormatter when a field param is a function pointer which can not be serialized This new release introduces: several fixes: a fix for a race condition on entry formatting proper cleanup of previously used entries before putting them back in the pool the extra new line at the end of message in text formatter has been removed a new global public API to check if a level is activated: IsLevelEnabled the following methods have been added to the Logger object IsLevelEnabled SetFormatter SetOutput ReplaceHooks introduction of go module an indent configuration for the json formatter output colour support for windows the field sort function is now configurable for text formatter the CLICOLOR and CLICOLOR\\_FORCE environment variable support in text formater This new release introduces: a new api WithTime which allows to easily force the time of the log entry which is mostly useful for logger wrapper a fix reverting the immutability of the entry given as parameter to the hooks a new configuration field of the json formatter in order to put all the fields in a nested dictionnary a new SetOutput method in the Logger a new configuration of the textformatter to configure the name of the default keys a new configuration of the text formatter to disable the level truncation Fix hooks race (#707) Fix panic deadlock (#695) Fix race when adding hooks (#612) Fix terminal check in AppEngine (#635) Replace example files with testable examples bug: quote non-string values in text formatter (#583) Make (Logger) SetLevel a public method bug: fix escaping in text formatter (#575) Officially changed name to lower-case bug: colors on Windows 10 (#541) bug: fix race in accessing level (#512) feature: add writer and writerlevel to entry (#372) bug: fix undefined variable on solaris (#493) formatter: configure quoting of empty values (#484) formatter: configure quoting character (default is `\"`) (#484) bug: fix not importing io correctly in non-linux environments (#481) bug: fix windows terminal detection (#476) bug: fix tty detection with custom out (#471) performance: Use bufferpool to allocate (#370) terminal: terminal detection for app-engine (#343) feature: exit handler (#375) feature: Add a test hook (#180) feature: `ParseLevel` is now case-insensitive (#326) feature: `FieldLogger` interface that generalizes `Logger` and `Entry` (#308) performance: avoid re-allocations on `WithFields` (#335) logrus/text_formatter: don't emit empty msg logrus/hooks/airbrake: move out of main repository logrus/hooks/sentry: move out of main repository logrus/hooks/papertrail: move out of main repository logrus/hooks/bugsnag: move out of main repository logrus/core: run tests with `-race` logrus/core: detect TTY based on `stderr` logrus/core: support `WithError` 
on logger logrus/core: Solaris support logrus/core: fix possible race (#216) logrus/doc: small typo fixes and doc improvements hooks/raven: allow passing an initialized client logrus/core: revert #208 formatter/text: fix data race (#218) logrus/core: fix entry log level (#208) logrus/core: improve performance of text formatter by 40% logrus/core: expose `LevelHooks` type logrus/core: add support for DragonflyBSD and NetBSD formatter/text: print structs more verbosely logrus: fix more Fatal family functions logrus: fix not exiting on `Fatalf` and `Fatalln` logrus: defaults to stderr instead of stdout hooks/sentry: add special field for `http.Request` formatter/text: ignore Windows for colors formatter/\\: allow configuration of timestamp layout formatter/text: Add configuration option for time format (#158)" } ]
{ "category": "Runtime", "file_name": "CHANGELOG.md", "project_name": "Inclavare Containers", "subcategory": "Container Runtime" }
[ { "data": "title: \"Getting started\" layout: docs The following example sets up the Ark server and client, then backs up and restores a sample application. For simplicity, the example uses Minio, an S3-compatible storage service that runs locally on your cluster. NOTE The example lets you explore basic Ark functionality. In the real world, however, you would back your cluster up to external storage. See for how to configure Ark for a production environment. Access to a Kubernetes cluster, version 1.7 or later. Version 1.7.5 or later is required to run `ark backup delete`. A DNS server on the cluster `kubectl` installed Clone or fork the Ark repository: ``` git clone [email protected]:heptio/ark.git ``` NOTE: Make sure to check out the appropriate version. We recommend that you check out the latest tagged version. The main branch is under active development and might not be stable. Start the server and the local storage service. In the root directory of Ark, run: ```bash kubectl apply -f examples/common/00-prereqs.yaml kubectl apply -f examples/minio/ ``` NOTE: If you get an error about Config creation, wait for a minute, then run the commands again. Deploy the example nginx application: ```bash kubectl apply -f examples/nginx-app/base.yaml ``` Check to see that both the Ark and nginx deployments are successfully created: ``` kubectl get deployments -l component=ark --namespace=heptio-ark kubectl get deployments --namespace=nginx-example ``` . Make sure that you install somewhere in your PATH. Create a backup for any object that matches the `app=nginx` label selector: ``` ark backup create nginx-backup --selector app=nginx ``` Alternatively if you want to backup all objects except those matching the label `backup=ignore`: ``` ark backup create nginx-backup --selector 'backup notin (ignore)' ``` Simulate a disaster: ``` kubectl delete namespace nginx-example ``` To check that the nginx deployment and service are gone, run: ``` kubectl get deployments --namespace=nginx-example kubectl get services --namespace=nginx-example kubectl get namespace/nginx-example ``` You should get no results. NOTE: You might need to wait for a few minutes for the namespace to be fully cleaned up. Run: ``` ark restore create --from-backup nginx-backup ``` Run: ``` ark restore get ``` After the restore finishes, the output looks like the following: ``` NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR nginx-backup-20170727200524 nginx-backup Completed 0 0 2017-07-27 20:05:24 +0000 UTC <none> ``` NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`. After a successful restore, the `STATUS` column is `Completed`, and `WARNINGS` and `ERRORS` are 0. All objects in the `nginx-example` namespace should be just as they were before you deleted them. If there are errors or warnings, you can look at them in detail: ``` ark restore describe <RESTORE_NAME> ``` For more information, see . If you want to delete any backups you created, including data in object storage and persistent volume snapshots, you can run: ``` ark backup delete BACKUP_NAME ``` This asks the Ark server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Ark will allow you to delete multiple backups by name or label selector. 
Once fully removed, the backup is no longer visible when you run: ``` ark backup get BACKUP_NAME ``` If you want to uninstall Ark but preserve the backup data in object storage and persistent volume snapshots, it is safe to remove the `heptio-ark` namespace and everything else created for this example: ``` kubectl delete -f examples/common/ kubectl delete -f examples/minio/ kubectl delete -f examples/nginx-app/base.yaml ```" } ]
{ "category": "Runtime", "file_name": "quickstart.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "(networking)= ```{toctree} :maxdepth: 1 /explanation/networks Create and configure a network </howto/network_create> Configure a network </howto/network_configure> Configure network ACLs </howto/network_acls> Configure network forwards </howto/network_forwards> Configure network integrations </howto/network_integrations> Configure network zones </howto/network_zones> Configure Incus as BGP server </howto/network_bgp> Display Incus IPAM information </howto/network_ipam> /reference/network_bridge /reference/network_ovn /reference/network_external Increase bandwidth <howto/networkincreasebandwidth> ```" } ]
{ "category": "Runtime", "file_name": "networks.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "name: Enhancement Request about: Suggest an idea for this project Is your feature request related to a problem?/Why is this needed <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> Describe the solution you'd like in detail <!-- A clear and concise description of what you want to happen. --> Describe alternatives you've considered <!-- A clear and concise description of any alternative solutions or features you've considered. --> Additional context <!-- Add any other context or screenshots about the feature request here. -->" } ]
{ "category": "Runtime", "file_name": "enhancement.md", "project_name": "Carina", "subcategory": "Cloud Native Storage" }
[ { "data": "This document defines governance policies for the Kube-OVN project. Kube-OVN Maintainers have write access to the . They can merge their own patches or patches from others. The current maintainers can be found in . Maintainers collectively manage the project's resources and contributors. This privilege is granted with some expectation of responsibility: maintainers are people who care about the Kube-OVN project and want to help it grow and improve. A maintainer is not just someone who can make changes, but someone who has demonstrated their ability to collaborate with the team, get the most knowledgeable people to review code and docs, contribute high-quality code, and follow through to fix issues (in code or tests). A maintainer is a contributor to the project's success and a citizen helping the project succeed. <!-- If you have full Contributor Ladder documentation that covers becoming a Maintainer or Owner, then this section should instead be a reference to that documentation --> To become a Maintainer, you need to demonstrate the following: commitment to the project: participate in discussions, contributions, code and documentation reviews for 3 months or more, perform reviews for 10 non-trivial pull requests, contribute 10 non-trivial pull requests and have them merged, ability to write quality code and/or documentation, ability to collaborate with the team, understanding of how the team works (policies, processes for testing and code review, etc), understanding of the project's code base and coding and documentation style. A new Maintainer must be proposed by an existing maintainer by sending a message to the . A simple majority vote of existing Maintainers approves the application. Time zones permitting, Maintainers are expected to participate in the public developer meeting, which occurs [1, 15 every month]. Maintainers will also have closed meetings in order to discuss security reports or Code of Conduct violations. Such meetings should be scheduled by any Maintainer on receipt of a security issue or CoC report. All current Maintainers must be invited to such closed meetings, except for any Maintainer who is accused of a CoC violation. Any Maintainer may suggest a request for CNCF resources, either in the , or during a meeting. A simple majority of Maintainers approves the request. The Maintainers may also choose to delegate working with the CNCF to non-Maintainer community members. While most business in Kube-OVN is conducted by \"lazy consensus\", periodically the Maintainers may need to vote on specific actions or changes. Votes may be taken at . Any Maintainer may demand a vote be taken. Most votes require a simple majority of all Maintainers to succeed. Maintainers can be removed by a 2/3 majority vote of all Maintainers, and changes to this Governance require a 2/3 vote of all Maintainers." } ]
{ "category": "Runtime", "file_name": "GOVERNANCE.md", "project_name": "Kube-OVN", "subcategory": "Cloud Native Network" }
[ { "data": "title: Use JuiceFS on Kubernetes sidebar_position: 2 slug: /howtouseonkubernetes JuiceFS is an ideal storage layer for Kubernetes, read this chapter to learn how to use JuiceFS in Kubernetes. If you simply need to use JuiceFS inside Kubernetes pods, without any special requirements (e.g. isolation, permission control), then can be a good practice, which is also really easy to setup: Install and mount JuiceFS on all Kubernetes worker nodes, is recommended for this type of work. Use `hostPath` volume inside pod definition, and mount a JuiceFS sub-directory to container: ```yaml {8-16} apiVersion: v1 kind: Pod metadata: name: juicefs-app spec: containers: ... volumeMounts: name: jfs-data mountPath: /opt/app-data volumes: name: jfs-data hostPath: path: \"/jfs/myapp/\" type: Directory ``` In comparison to using JuiceFS CSI Driver, `hostPath` is a much more simple practice, and easier to debug when things go wrong, but notice that: For ease of management, generally all pods use the same host mount point. Lack of isolation may lead to data security issues, and obviously, you won't be able to adjust JuiceFS mount parameters separately for each application. Please evaluate carefully. All worker nodes should mount JuiceFS in advance, so when adding a new node to the cluster, JuiceFS needs to be installed and mounted during the initialization process, otherwise the new node does not have a JuiceFS mount point, and the container will not be created. The system resources (such as CPU, memory, etc.) occupied by the JuiceFS mounting process on the host are not controlled by Kubernetes, and may occupy too many host resources. You can consider using to properly adjust the system resource reservation of Kubernetes, to reserve more resources for the JuiceFS mount process. If the JuiceFS mount process on the host exits unexpectedly, the application pod will not be able to access the mount point" }, { "data": "In this case, the JuiceFS file system needs to be remounted and the application pod must be rebuilt. However, JuiceFS CSI Driver solves this problem well by providing the mechanism. If you're using Docker as Kubernetes container runtime, it's best to start JuiceFS mount prior to Docker in startup order, to avoid containers being created before JuiceFS is properly mounted. For systemd, you can use below unit file to manually control startup order: ```systemd title=\"/etc/systemd/system/docker.service.d/override.conf\" [Unit] After=network-online.target firewalld.service containerd.service jfs.mount ``` To use JuiceFS in Kubernetes, refer to . In some cases, you may need to mount JuiceFS volume directly in the container, which requires the use of the JuiceFS client in the container. 
You can refer to the following `Dockerfile` example to integrate the JuiceFS client into your application image: ```dockerfile title=\"Dockerfile\" FROM alpine:latest LABEL maintainer=\"Juicedata <https://juicefs.com>\" RUN apk add --no-cache curl && \\ JFSLATESTTAG=$(curl -s https://api.github.com/repos/juicedata/juicefs/releases/latest | grep 'tag_name' | cut -d '\"' -f 4 | tr -d 'v') && \\ wget \"https://github.com/juicedata/juicefs/releases/download/v${JFSLATESTTAG}/juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" && \\ tar -zxf \"juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" && \\ install juicefs /usr/bin && \\ rm juicefs \"juicefs-${JFSLATESTTAG}-linux-amd64.tar.gz\" && \\ rm -rf /var/cache/apk/* && \\ apk del curl ENTRYPOINT [\"/usr/bin/juicefs\", \"mount\"] ``` Since JuiceFS needs to use the FUSE device to mount the file system, it is necessary to allow the container to run in privileged mode when creating a Pod: ```yaml {19-20} apiVersion: apps/v1 kind: Deployment metadata: name: nginx-run spec: selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: name: nginx image: linuxserver/nginx ports: containerPort: 80 securityContext: privileged: true ``` :::caution With the privileged mode being enabled by `privileged: true`, the container has access to all devices of the host, that is, it has full control of the host's kernel. Improper uses will bring serious safety hazards. Please conduct a thorough safety assessment before using it. :::" } ]
{ "category": "Runtime", "file_name": "how_to_use_on_kubernetes.md", "project_name": "JuiceFS", "subcategory": "Cloud Native Storage" }
[ { "data": "A SpiderEndpoint resource represents IP address allocation details for the corresponding pod. This resource one to one pod, and it will inherit the pod name and pod namespace. Notice: For kubevirt VM static IP feature, the SpiderEndpoint object would inherit the kubevirt VM/VMI resource name and namespace. ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderEndpoint metadata: name: test-app-1-9dc78fb9-rs99d status: current: ips: cleanGateway: false interface: eth0 ipv4: 172.31.199.193/20 ipv4Gateway: 172.31.207.253 ipv4Pool: worker-172 vlan: 0 node: dc-test02 uid: e7b50a38-25c2-41d0-b332-7f619c69194e ownerControllerName: test-app-1 ownerControllerType: Deployment ``` | Field | Description | Schema | Validation | |--|--|--|| | name | the name of this SpiderEndpoint resource | string | required | | namespace | the namespace of this SpiderEndpoint resource | string | required | The IPPool status is a subresource that processed automatically by the system to summarize the current state. | Field | Description | Schema | Validation | ||-||| | current | the IP allocation details of the corresponding pod | | required | | ownerControllerType | the corresponding pod top owner controller type | string | required | | ownerControllerName | the corresponding pod top owner controller name | string | required | This property describes the SpiderEndpoint corresponding pod details. | Field | Description | Schema | Validation | |-|-|--|| | uid | corresponding pod uid | string | required | | node | total IP counts of this pool to use | string | required | | ips | current allocated IP counts | list of | required | This property describes single Interface allocation details. | Field | Description | Schema | Validation | Default | |--||-||| | interface | single interface name | string | required | | | ipv4 | single IPv4 allocated IP address | string | optional | | | ipv6 | single IPv6 allocated IP address | string | optional | | | ipv4Pool | the IPv4 allocated IP address corresponding pool | string | optional | | | ipv6Pool | the IPv6 allocated IP address corresponding pool | string | optional | | | vlan | vlan ID | int | optional | 0 | | ipv4Gateway | the IPv4 gateway IP address | string | optional | | | ipv6Gateway | the IPv6 gateway IP address | string | optional | | | cleanGateway | a flag to choose whether need default route by the gateway | boolean | optional | | | routes | the allocation routes | list if | optional | |" } ]
{ "category": "Runtime", "file_name": "crd-spiderendpoint.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "A one-OSD-per-Pod placement should be implemented to improve reliability and resource efficiency for Ceph OSD daemon. Currently in Rook 0.7, Rook Operator starts a ReplicaSet to run command (hereafter referred to as `OSD Provisioner`) on each storage node. The ReplicaSet has just one replica. `OSD Provisioner` scans and prepares devices, creates OSD IDs and data directories or devices, generates Ceph configuration. At last, `OSD Provisioner` starts all Ceph OSD, i.e. `ceph-osd`, daemons in foreground and tracks `ceph-osd` processes. As observed, all Ceph OSDs are running in the same Pod. The limitations of current design are: Reliability issue. One Pod for all OSDs doesn't have the highest reliability nor efficiency. If the Pod is deleted, accidentally or during maintenance, all OSDs are down till the ReplicaSet restart. Efficiency issue. Resource limits cannot be set effectively on the OSDs since the number of osds per in the pod could vary from node to node. The operator cannot make decisions about the topology because it doesn't know in advance what devices are available on the nodes. Tight Ceph coupling. The monolithic device discovery and provisioning code cannot be reused for other backends. Process management issue. Rook's process management is very simple. Using Kubernetes pod management is much more reliable. A more comprehensive discussions can be found at . Device Discovery. A DaemonSet that discovers unformatted devices on the host. The DaemonSet populates a per node Raw Device Configmap with device information. The daemonSet is running on nodes that are labelled as storage nodes. The DaemonSet can start independently of Rook Operator. Device Discovery is storage backend agnostic. Device Provisioner. A Pod that is given device or directory paths upon start and make backend specific storage types. For instance, the provisioner prepares OSDs for Ceph backend. It is a Kubernetes batch job and exits after the devices are prepared. We propose the following change to address the limitations. | Sequence |Rook Operator | Device Discovery | Device Provisioner | Ceph OSD Deployment | |||||| | 0 | | Start on labeled storage nodes, discover unformatted devices and store device information in per node Raw Device Configmap | | | 1 | Read devices and node filters from cluster CRD | | | | 2 | parse Raw Device Configmap, extract nodes and device paths and filters them based on cluster CRD, and create an Device Provisioner deployment for each device || | | | 3 | Watch device provisioning Configmap | | Prepare OSDs, Persist OSD ID, datapath, and node info in a per node device provisioning Configmap | | | 4 | Detect device provisioning Configmap change, parse Configmap, extract OSD info, construct OSD Pod command and arg | | | | | 5 | Create one deployment per OSD | | |Start `ceph-osd` Daemon one Pod per device | This change addresses the above limitations in the follow ways: High reliability. Each `ceph-osd` daemon runs its own Pod, their restart and upgrade are by Kubernetes controllers. Upgrading Device Provisioner Pod no longer restarts `ceph-osd` daemons. More efficient resource" }, { "data": "Once Device Discovery detects all devices, Rook Operator is informed of the topology and assigns appropriate resources to each Ceph OSD deployment. Reusable. Device discovery can be used for other storage backends. Each `Device Discovery` DaemonSet walks through device trees to unformatted block devices and stores the device information in a per node `Raw Device Configmap`. 
A sample of `Raw Device Configmap` from Node `node1` is as the following: ```yaml apiVersion: v1 kind: ConfigMap metadata: namespace: rook-system name: node1-raw-devices data: devices device-path: /dev/disk/by-id/scsi-dead # persistent device path size: 214748364800 # size in byte rotational: 0 # 0 for ssd/nvme, 1 for hdd, based on reading from sysfs extra: '{ \"vendor\": \"X\", \"model\": \"Y\" }' # extra information under sysfs about the device in json, such as vendor/model, scsi level, target info, etc. device-path: /dev/disk/by-id/scsi-beef # persistent device path size: 214748364800 # size in byte rotational: 1 # 0 for ssd/nvme, 1 for hdd, based on reading from sysfs ``` It is expected Device Discovery will be merged into Rook Operator once local PVs are supported in Rook Cluster CRD. Rook Operator can infer the device topology from local PV Configmaps. However, as long as raw devices or directories are still in use, a dedicated Device Discovery Pod is still needed. If the storage nodes are also compute nodes, it is possible that dynamically attached and unformatted devices to those nodes are discovered by Device Discovery DaemonSet. To avoid this race condition, admin can choose to use separate device tree directories: one for devices used for Rook and the other for compute. Or the Cluster CRD should explicitly identify which devices should be used for Rook. Alternatively, since `rook agent` is currently running as a DaemonSet on all nodes, it is conceivable to make `rook agent` to poll devices and update device orchestration Configmap. This approach, however, needs to give `rook agent` the privilege to modify Configmaps. Moreover, `Device Discovery` Pod doesn't need privileged mode, host network, or write access to hosts' `/dev` directory, all of which are required by `rook agent`. Security. Device Provisioner Pod needs privilege to access Configmaps but Ceph OSD Pod don't need to access Kubernetes resources and thus don't need any RBAC rules. Rook Operator. Rook Operator watches two Configmaps: the raw device Configmaps that created by Device Discovery Pod and storage specific device provisioning Configmaps that are created by Device Provisioner Pod. For raw device Configmap, Operator creates storage specific device provisioner deployment to prepare these devices. For device provisioning Configmaps, Operator creates storage specific daemon deployment (e.g. Ceph OSD Daemon deployments) with the device information in Configmaps and resource information in Cluster CRD. Device Discovery. It is a new long running process in a DaemonSet that runs on each node that has matching labels. It discovers storage devices on the nodes and populates the raw devices Configmaps. Device Provisioner. Device Provisioner becomes a batch job, it no longer exec Ceph OSD daemon. Ceph OSD Daemon. `ceph-osd` is no longer exec'ed by Device Provisioner, it becomes the Pod entrypoint. Ceph OSD Pod naming. Rook Operator creates Ceph OSD Pod metadata using cluster name, node name, and OSD ID." } ]
{ "category": "Runtime", "file_name": "dedicated-osd-pod.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "Please refer to for details. For backward compatibility, Cobra still supports its legacy dynamic completion solution (described below). Unlike the `ValidArgsFunction` solution, the legacy solution will only work for Bash shell-completion and not for other shells. This legacy solution can be used along-side `ValidArgsFunction` and `RegisterFlagCompletionFunc()`, as long as both solutions are not used for the same command. This provides a path to gradually migrate from the legacy solution to the new solution. Note: Cobra's default `completion` command uses bash completion V2. If you are currently using Cobra's legacy dynamic completion solution, you should not use the default `completion` command but continue using your own. The legacy solution allows you to inject bash functions into the bash completion script. Those bash functions are responsible for providing the completion choices for your own completions. Some code that works in kubernetes: ```bash const ( bashcompletionfunc = `kubectlparseget() { local kubectl_output out if kubectl_output=$(kubectl get --no-headers \"$1\" 2>/dev/null); then out=($(echo \"${kubectl_output}\" | awk '{print $1}')) COMPREPLY=( $( compgen -W \"${out[*]}\" -- \"$cur\" ) ) fi } kubectlgetresource() { if [[ ${#nouns[@]} -eq 0 ]]; then return 1 fi kubectlparseget ${nouns[${#nouns[@]} -1]} if [[ $? -eq 0 ]]; then return 0 fi } kubectlcustomfunc() { case ${last_command} in kubectlget | kubectldescribe | kubectldelete | kubectlstop) kubectlgetresource return ;; *) ;; esac } `) ``` And then I set that in my command definition: ```go cmds := &cobra.Command{ Use: \"kubectl\", Short: \"kubectl controls the Kubernetes cluster manager\", Long: `kubectl controls the Kubernetes cluster manager. Find more information at https://github.com/GoogleCloudPlatform/kubernetes.`, Run: runHelp, BashCompletionFunction: bashcompletionfunc, } ``` The `BashCompletionFunction` option is really only valid/useful on the root command. Doing the above will cause `kubectl_custom_func()` (`<command-use>customfunc()`) to be called when the built in processor was unable to find a solution. In the case of kubernetes a valid command might look something like `kubectl get pod ` the `kubectl_customc_func()` will run because the cobra.Command only understood \"kubectl\" and \"get.\" `kubectlcustomfunc()` will see that the cobra.Command is \"kubectlget\" and will thus call another helper `kubectlgetresource()`. `kubectlgetresource` will look at the 'nouns' collected. In our example the only noun will be `pod`. So it will call `kubectlparseget pod`. `kubectlparse_get` will actually call out to kubernetes and get any pods. It will then set `COMPREPLY` to valid pods! Similarly, for flags: ```go annotation := make(mapstring) annotation[cobra.BashCompCustom] = []string{\"kubectlgetnamespaces\"} flag := &pflag.Flag{ Name: \"namespace\", Usage: usage, Annotations: annotation, } cmd.Flags().AddFlag(flag) ``` In addition add the `kubectlgetnamespaces` implementation in the `BashCompletionFunction` value, e.g.: ```bash kubectlgetnamespaces() { local template template=\"{{ range .items }}{{ .metadata.name }} {{ end }}\" local kubectl_out if kubectl_out=$(kubectl get -o template --template=\"${template}\" namespace 2>/dev/null); then COMPREPLY=( $( compgen -W \"${kubectl_out}[*]\" -- \"$cur\" ) ) fi } ```" } ]
{ "category": "Runtime", "file_name": "bash_completions.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- Thanks for sending the pull request! --> <!-- PR title format should be type(scope): subject. For details, see . Each pull request should address only one issue, not mix up code from multiple issues. Each commit in the pull request has a meaningful commit message. For details, see . Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review. Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below. --> <!-- Either this PR fixes an issue, --> Fixes: #xyz <!-- or this PR is one task of an issue. --> Main Issue: #xyz <!-- Explain here the context, and why you're making that change. What is the problem you're trying to solve. --> blaaaaa <!-- Describe the modifications you've done. --> ``` text blaaaaa ``` <!-- Show in a checkbox-style, the expected types of changes your project is supposed to have: --> <!-- Put an `x` in the boxes that apply --> [ ] New feature (non-breaking change which adds functionality) [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) [ ] Bugfix (non-breaking change which fixes an issue) [ ] Documentation Update (if none of the other choices apply) [ ] So on... <!-- Please pick either of the following options. --> [ ] Make sure that the change passes the testing checks. This change is a trivial rework / code cleanup without any test coverage. (or) This change is already covered by existing tests, such as (please describe tests). (or) This change added tests and can be verified as follows: (example:) This can be verified in development debugging This can be realized in a mocked environment, like a test cluster consisting in docker (or) This change `MUST` reappear in online clusters, or occur in that specific scenarios. <!-- Which of the following parts are affected by this change? --> [ ] Master [ ] MetaNode [ ] DataNode [ ] ObjectNode [ ] AuthNode [ ] LcNode [ ] Blobstore [ ] Client [ ] Cli [ ] SDK [ ] Other Tools [ ] Common Packages [ ] Dependencies [ ] Anything that affects deployment <!-- Is there a chinese and english document modification? --> [ ] `doc` <!-- Your PR contains doc changes. --> [ ] `doc-required` <!-- Your PR changes impact docs and you will update later --> [ ] `doc-not-needed` <!-- Your PR changes do not impact docs --> [ ] `doc-complete` <!-- Docs have been already added --> <!-- How long would you like the team to be completed in your contributing? --> [ ] `in-two-days` [ ] `weekly` [ ] `free-time` [ ] `whenever` <!-- enter the url if has PR in forked repository. --> PR in forked repository: <!-- ENTER URL HERE --> <!-- Thanks for contributing, best days! -->" } ]
{ "category": "Runtime", "file_name": "pull_request_template.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"ark backup download\" layout: docs Download a backup Download a backup ``` ark backup download NAME [flags] ``` ``` --force forces the download and will overwrite file if it exists already -h, --help help for download -o, --output string path to output file. Defaults to <NAME>-data.tar.gz in the current directory --timeout duration maximum time to wait to process download request (default 1m0s) ``` ``` --alsologtostderr log to standard error as well as files --kubeconfig string Path to the kubeconfig file to use to talk to the Kubernetes apiserver. If unset, try the environment variable KUBECONFIG, as well as in-cluster configuration --kubecontext string The context to use to talk to the Kubernetes apiserver. If unset defaults to whatever your current-context is (kubectl config current-context) --logbacktraceat traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files -n, --namespace string The namespace in which Ark should operate (default \"heptio-ark\") --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level log level for V logs --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging ``` - Work with backups" } ]
{ "category": "Runtime", "file_name": "ark_backup_download.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently, Longhorn does not have the capability to support online replica rebuilding for volumes utilizing the V2 Data Engine. However, an automatic offline replica rebuilding mechanism has been implemented as a solution to address this limitation. https://github.com/longhorn/longhorn/issues/6071 Support volumes using v2 data engine Support volumes using v1 data engine In the event of abnormal power outages or network partitions, replicas of a volume may be lost, resulting in volume degradation. Unfortunately, volumes utilizing the v2 data engine do not currently have the capability for online replica rebuilding. As a solution to address this limitation, Longhorn has implemented an automatic offline replica rebuilding mechanism. When a degraded volume is detached, this mechanism places the volume in maintenance mode and initiates the rebuilding process. After the rebuilding is successfully completed, the volume is detached according to the user's specified expectations. If a volume using the v2 data engine is degraded, the online replica rebuilding process is currently unsupported. If offline replica rebuilding feature is enabled when one of the conditions is met Global setting `offline-replica-rebuild` is `enabled` and `Volume.Spec.OfflineReplicaRebuilding` is `ignored` `Volume.Spec.OfflineReplicaRebuilding` is `enabled` The volume's `Spec.OfflineReplicaRebuildingRequired` is set to `true` if a volume is degraded. When a degraded volume is detached, this mechanism places the volume in maintenance mode and initiates the rebuilding process. After the rebuilding is successfully completed, the volume is detached according to the user's specified expectations. If a user attaches the volume without enabling maintenance mode while the replica rebuilding process is in progress, the ongoing replica rebuilding operation will be terminated. Settings Add global setting `offline-replica-rebuilding`. Default value is `enabled`. The available options are: `enabled` `disable` CRD Add `Volume.Spec.OfflineReplicaRebuilding`. The available options are: ignored`: The volume's offline replica rebuilding behavior follows the settings defined by the global setting `offline-replica-rebuilding`. `enabled`: Offline replica rebuilding of the volume is always enabled. `disabled`: Offline replica rebuilding of the volume is always disabled. Add `Volume.Status.OfflineReplicaRebuildingRequired` Controller Add `volume-rebuilding-controller` for creating and deleting `volume-rebuilding-controller` attachment ticket. Logics A volume-controller sets 'Volume.Status.OfflineReplicaRequired' to 'true' when it realizes a v2 data engine is degraded. If a volume's `Volume.Status.OfflineReplicaRebuildingRequired` is `true`, volume-rebuilding-controller creates a `volume-rebuilding-controller` attachment ticket with frontend disabled and lower priority than tickets with workloads. When the volume is detached, volume-attachment-controller attaches the volume with a `volume-rebuilding-controller` attachment ticket in maintenance mode. volume-controller triggers replica rebuilding. After finishing the replica rebuilding, the volume-controller sets `Volume.Status.OfflineReplicaRebuildingRequired` to `false` if a number of healthy replicas is expected. volume-rebuilding-controller deletes the 'volume-rebuilding-controller' attachment ticket. volume-attachment-controller is aware of the deletion of the `volume-rebuilding-controller` attachment ticket, which causes volume detachment. 
Degraded Volume lifecycle (creation, attachment, detachment and deletion) and automatic replica rebuilding" } ]
{ "category": "Runtime", "file_name": "20230616-automatic-offline-replica-rebuild.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- toc --> - - - <!-- /toc --> This document proposes the changes required within Kanister to integrate and enhance the observability of action sets. Kanister controller already has a registered metrics endpoint `/metrics`. There are no metrics exported other than the default Prometheus metrics that the default handler provides. Adding metrics to track the ActionSets and Blueprints workflow will help improve the overall observability. To achieve this, we need to build a framework for exporting metrics from the Kanister controller, and to start with, export some metrics to Prometheus. This framework simplifies the common need for Prometheus counters to publish 0 values at startup for all permutations of labels and label values. This ensures that Kanister controller restarts are recognized by Prometheus and that the PromQL rate() and increase() functions work properly across restarts. Some example metrics include: ActionSets succeeded ActionSets failed Action duration Phase duration, etc. Design a framework that allows us to export new Kanister metrics to Prometheus easily. Add a few fundamental metrics related to ActionSets and Blueprints to start with. The initializer of the consumer package calls `newMetrics`, a helper method that talks to Kanisters metrics package. The result is a new metrics struct that owns all the Prometheus metrics. In order to initialize all the required Prometheus metrics, the `new_metrics` method calls the `InitCounterVec`, `InitGaugeVec`, `InitHistogramVec` in the metrics package. It passes the metric names and the specific label names and label values as `BoundedLabels` to the metrics package. Once it initializes all the Prometheus metrics successfully, it returns a struct that wraps all the metrics for the consumer package to use. The metrics package internally initializes the Prometheus metrics and registers them with Prometheus. If the registration fails because the specific metric with label header already exists, the metric will simply be returned to the caller. If the registration fails due to other reasons, then the metrics package will cause a panic, signaling programmer error. In case of the `CounterVec`, the `InitCounterVec` function generates all possible permutations of label values and initializes each counter within the `CounterVec` with a value of 0. Once the collector is created in the metrics package, it will be returned to the consumer packages `newMetrics` helper method. Once the initialization of all Prometheus metrics are complete, a new metrics struct will be returned to the consumer package's initializer. The consumer package may find it useful to implement a helper method that constructs a `prometheus.Labels` mapping to access a specific counter from a `CounterVec` and perform an increment operation. ```golang // BoundedLabel is a type that represents a label and its associated // valid values type BoundedLabel struct { LabelName string LabelValues []string } ``` An example of a `BoundedLabel` is in the scenario of ActionSet resolutions. Suppose we want to track these resolutions across different blueprints, we would create the bounded labels in the following way: ```golang BoundedLabel{ LabelName: \"operation_type\", LabelValues: []string{ \"backup\", \"restore\", }, } BoundedLabel{ LabelName: \"actionsetresolution\", LabelValues: []string{ \"success\", \"failure\", }, } ``` ```golang // InitCounterVec initializes and registers the counter metrics vector. 
It // takes a list of BoundedLabel objects - if any label value or label name is // nil, then this method will panic. Based on the combinations returned by // generateCombinations, it will set each counter value to 0. // If a nil counter is returned during registration, the method will panic. func InitCounterVec(r prometheus.Registerer, opts prometheus.CounterOpts, boundedLabels []BoundedLabel) *prometheus.CounterVec // InitGaugeVec initializes the gauge metrics vector. It takes a list of // BoundedLabels, but the LabelValue field of each BoundedLabel will be //" }, { "data": "If a nil counter is returned during registration, the method will // panic. func InitGaugeVec(r prometheus.Registerer, opts prometheus.CounterOpts, boundedLabels []BoundedLabel) *prometheus.GaugeVec // InitHistogramVec initializes the histogram metrics vector. It takes a list // of BoundedLabels, but the LabelValue field of each BoundedLabel will be // ignored. If a nil counter is returned during registration, the method will // panic. func InitHistogramVec(r prometheus.Registerer, opts prometheus.CounterOpts, boundedLabels []BoundedLabel) *prometheus.HistogramVec // InitCounter initializes a new counter. // If a nil counter is returned during registration, the method will panic. func InitCounter(r prometheus.Registerer, opts prometheus.CounterOpts) prometheus.Counter // InitGauge initializes a new gauge. // If a nil counter is returned during registration, the method will panic. func InitGauge(r prometheus.Registerer, opts prometheus.GaugeOpts) prometheus.Gauge // InitHistogram initializes a new histogram. // If a nil counter is returned during registration, the method will panic. func InitHistogram(r prometheus.Registerer, opts prometheus.HistogramOpts) prometheus.Histogram ``` Initialize a new `CounterVec` with relevant options and label names. Attempt to register the new `CounterVec`. If successful, Generate combinations of label names. Create counters for each combination and set the counter to 0. If not successful, check if the error is an `AlreadyRegisteredError`. If yes, return the `CounterVec` and ignore the error. If no, then panic, signalling programmer error. If received a `CounterVec` from registration, it is guaranteed that the registration is successful. The below example will walk through how a consumer package will be integrated with the metrics package: Each consumer package in Kanister will have a main struct and a `metrics.go` file. An example of this would be the controller package: controller/controller.go ```golang type Controller struct { config *rest.Config crClient versioned.Interface clientset kubernetes.Interface dynClient dynamic.Interface osClient osversioned.Interface recorder record.EventRecorder actionSetTombMap sync.Map metrics *metrics // add a new member to the existing struct } ``` ```golang // New creates a controller for watching Kanister custom resources created. func New(c rest.Config) Controller { return &Controller{ config: c, metrics: newMetrics(prometheus.DefaultRegistry), // this helper method call will be made during init } } ``` controller/metrics.go ```golang const ( ACTIONSETCOUNTERVECLABEL_RES = \"resolution\" ACTIONSETCOUNTERVECLABELOPTYPE = \"operation_type\" ) type metrics struct { ActionSetCounterVec *prometheus.CounterVec } // getActionSetCounterVecLabels is a helper method to construct the correct // \"LabelHeaders\":\"LabelValues\" mapping to ensure type safety. 
func getActionSetCounterVecLabels() []kanistermetrics.BoundedLabel { bl := make([]kanistermetrics.BoundedLabel, 2) bl[0] = kanistermetrics.BoundedLabel{ LabelName: ACTIONSETCOUNTERVECLABEL_RES, LabelValues: []string{\"success\", \"failure\"}, } bl[1] = kanistermetrics.BoundedLabel{ LabelName: ACTIONSETCOUNTERVECLABEL_BLUEPRINT, LabelValues: []string{\"backup\", \"restore\"}, } return bl } // constructActionSetCounterVecLabels is a helper method to construct the // labels correctly. func constructActionSetCounterVecLabels(operation_type string, resolution string) prometheus.Labels { return prometheus.Labels{ ACTIONSETCOUNTERVECLABELOPTYPE: operation_type, ACTIONSETCOUNTERVECLABEL_RES: resolution, } } // newMetrics is a helper method to create a Metrics interface. func newMetrics(gatherer prometheus.Gatherer) *metrics { actionSetCounterOpts := prometheus.CounterOpts{ Name: \"actionsetresolutions_total\", Help: \"Total number of action set resolutions\", } actionSetCounterVec := kanistermetrics.InitCounterVec( gatherer, actionSetCounterOpts, getActionSetCounterVecLabels(), ) return &metrics{ActionSetCounterVec: actionSetCounterVec} } ``` The below example will show how the above created `ActionSetCounterVec` will be incremented in a method: ```golang func (c *Controller) handleActionSet(ctx context.Context) { c.metrics.ActionSetCounterVec.With(constructActionSetCounterVecLabels(\"backup\", \"success\")).Inc() } ``` Alternatively, one can also directly call the Prometheus API with positional arguments: ```golang func (c *Controller) handleActionSet(ctx context.Context) { c.metrics.ActionSetCounterVec.WithLabelValues(\"backup\", \"success\").Inc() } ``` The testing will include manual testing of whether the metrics added are successfully getting exported to Kanister. The interfaces listed above in the metrics package, apart from `InitCounterVec` and `generateLabelCombinations`, will not be unit-tested, since they would be testing the behavior of the Prometheus API itself, which breaks the chain of trust principle with dependencies in unit testing. `InitCounterVec` will be unit tested using the package in Prometheus. Integration tests will be added for code that exports new metrics, to ensure that the behavior of exporting metrics is correct." } ]
{ "category": "Runtime", "file_name": "kanister-prometheus-integration.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "OpenEBS follows a monthly release cadence. The scope of the release is determined by contributor availability and the pending items listed in the . The scope is published in the . Release Manager is identified at the beginning of each release from the contributor community, who will work with one of the maintainers of the OpenEBS project. Release manager is responsible for tracking the scope, coordinating with various stack holders and helping root out the risks to the release as early as possible. Release manager uses the and the to help everyone stay focused on the release. Each release has the following important milestones that have organically developed to help maintain the monthly release cadence. First Week - Release planning is completed with leads identified for all the features targeted for the release. At the end of the first week, there should not be any \"To do\" items in the release tracker. If there are no leads available, the \"To do\" items will be pushed to the next release backlog. It is possible that some of the contributors will continue to work on the items that may not make it into current release, but will be pushed into the upcoming release. Second Week - Depending on the feature status - alpha, beta or stable, the various tasks from development, review, e2e and docs are identified. Any blockers in terms of design or implementation are discussed and mitigated. Third Week - is created and the first are made available. Post this stage only bug fixes are accepted into the release. Upgrades are tested. Staging documentation is updated. CI Pipelines are automated with new tests. Fourth Week - Release notes are prepared. The final RC build is pushed to Dogfooding and beta testing with some of the users. The installers are updated. Release, Contributor and User Documentation are published. Fifth Week - Post release activities like pushing the new release to the partner charts like Rancher Catalog, Amazon marketplace and so forth are worked on. OpenEBS Components are developed on several different code repositories. Active development happens on the master branch. When all the development activities scoped for a given release are completed or the deadline for code-freeze (third week) has approached, a release branch is created from the master. The branch is named after major and minor version of the release. Example for `1.10.0`, release, the release branch will be" }, { "data": "Post creating the release branch, if any critical fixes are identified for the release, then the fixes will be pushed to master as well as cherry-picked into the corresponding release branches. Release Branch will be used for tagging the release belonging to a given major and minor version and all the subsequent patch releases. For example, `v1.10.0`, `v1.10.1` and so forth will be done from `v1.10.x` release branch. Release branches need to be created for the following repositories: openebs/jiva openebs/libcstor openebs/cstor openebs/istgt openebs/velero-plugin openebs/cstor-csi openebs/jiva-operator openebs/api openebs/cstor-operators openebs/upgrade openebs/dynamic-localpv-provisioner openebs/m-exporter The following repositories currently follow a custom release version. openebs/node-disk-manager openebs/zfs-localpv openebs/lvm-localpv openebs/rawfile-localpv openebs/Mayastor The following repositories are under active development and releases are created from main branch. 
openebs/linux-utils openebs/monitor-pv openebs/dynamic-nfs-provisioner openebs/device-localpv openebs/openebsctl openebs/monitoring To verify that release branches are created, you can run the following script: ``` git clone https://github.com/openebs/charts cd charts git checkout gh-pages cd scripts/release ./check-release-branch.sh <release-branch> ``` OpenEBS Components are released as container images with versioned tags. OpenEBS components are spread across various repositories. Creating a release tag on the repository will trigger the build, integration tests and pushing of the docker image to and container repositories. The format of the GitHub release tag is either \"Release-Name-RC1\" or \"Release-Name\" depending on whether the tag is a release candidate or a release. (Example: v1.0.0-RC1 is a GitHub release tag for OpenEBS release candidate build. v1.0.0 is the release tag that is created after the release criteria are satisfied by the release candidate builds.) Each repository has the automation scripts setup to push the container images to docker and quay container repositories. The container image tag will be derived from GitHub release tag by truncating `v` from the release name. (Example: v1.0.0-RC1 will create container images for 1.0.0-RC1). Once a release made on a repository, Travis will trigger the release on the dependent repositories. The release tree looks as follows: openebs/linux-utils openebs/dynamic-localpv-provisioner openebs/jiva openebs/jiva-operator openebs/cstor openebs/libcstor openebs/m-exporter openebs/istgt openebs/cstor-operators openebs/velero-plugin openebs/cstor-csi openebs/upgrade The following repositories currently follow a different release versioning than other components, so these are triggered parallelly. openebs/node-disk-manager openebs/zfs-localpv openebs/Mayastor openebs/lvm-localpv openebs/rawfile-localpv The following repositories are under active development and are not yet added into the release process. These needs to be manually tagged on-demand. openebs/api openebs/monitor-pv openebs/dynamic-nfs-provisioner openebs/device-localpv openebs/openebsctl openebs/monitoring The following repositories are being deprecated and will be manually tagged. openebs/maya openebs/openebs-k8s-provisioner Once the release is triggered, Travis build process has to be monitored. Once Travis builds are passed, images are pushed to docker hub and quay.io. Images can be verified by going through docker hub and quay.io. Also the images shouldn't have any critical security vulnerabilities. Example: https://quay.io/repository/openebs/cstor-pool?tab=tags https://hub.docker.com/r/openebs/openebs-k8s-provisioner/tags Each minor and major release comprises of one or more release candidate builds, followed by a final" }, { "data": "Release Candidate builds are started from the third week into the release cycle. These release candidate builds help to freeze the scope and maintain the quality of the release. A release branch is created prior to generating the first release candidate build. Any issues found during the verification of the release candidate build are fixed in the master and cherry-picked into the release branch, prior to next release candidate build or the release build. Once the release candidate or release images are generated, raise a to kick-start the build validation via automated and manual e2e tests. The e2e tests include the following: Platform Verification Regression and Feature Verification Automated tests. 
Exploratory testing by QA engineers Strict security scanners on the container images Upgrade from previous releases Beta testing by users on issues that they are interested in. Dogfooding on OpenEBS workload and e2e infrastructure clusters Once all the above tests are completed successfully on release candidate builds, the final release build is triggered. E2e tests are repeated on the final release build. The status of the E2e on the release builds can be tracked in Once the release build validation is complete, helm and operator YAMLs are published to . All release blockers found by are resolved. The updated Install and Feature documentation are verified. Release notes with changes summary, changelog are updated. Verify that releases under each individual repository are updated with commit and CHANGELOG. openebs-operator and helm charts are published. Release is tagged on openebs/openebs repository. Release is announced on slack, distribution lists and social media. For each individual repositories: Update the Github releases with the commit log. The following commands can be used: ``` git checkout <release-branch> git log --pretty=format:'- %s (%h) (@%an)' --date=short --since=\"1 month\" ``` In case there are no changes, update it with \"No changes\". For RC tags, update the commit log with changes since the last tag. For Release tag, update the commit log with changes since the last release tag. This will ideally be sum of all the commits from the RC tags. Raise a PR to update the CHANGELOG.md. Create an aggregate Change Summary for the release under . Create an Release Summary with highlights from the release that will be used with . Blogs on new features are published Update the newsletter contents Release Webinar Update blogs, documentation with new content or examples. For example update the readme with change in status or process. Update the charts like Helm stable and other partner charts. (Rancher, OpenShift, IBM ICP Community Charts, Netapp NKS Trusted Charts (formerly StackPointCloud), AWS Marketplace, OperatorHub and DigitalOcean)" } ]
{ "category": "Runtime", "file_name": "release-management.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "The VFIO driver is an IOMMU/device agnostic framework for exposing direct access to userspace, in a secure, IOMMU protected environment. Virtual machine often makes use of direct device access when configured for the highest possible I/O performance. In order to successfully use VFIO device, it is mandatory that hardware supports virtualization and IOMMU groups. Execute the following command on your host OS to check whether the IOMMU has been turned on. ```shell ``` If the IOMMU is turned on, the terminal display as follows: ```shell iommu: Default domain type: Translated hibmc-drm 0000:0a:00.0: Adding to iommu group 0 ehci-pci 0000:7a:01.0: Adding to iommu group 1 ehci-pci 0000:ba:01.0: Adding to iommu group 2 ohci-pci 0000:7a:00.0: Adding to iommu group 3 ohci-pci 0000:ba:00.0: Adding to iommu group 4 xhci_hcd 0000:7a:02.0: Adding to iommu group 5 ... ``` Assume user wants to access PCI device 0000:1a:00.3. The device is attached to PCI bus, therefore user will make use of vfio-pci to manage the group: ```shell ``` Binding this device to the vfio-pci driver, it will create the VFIO group character devices for this group. ```shell ``` Four properties are supported for VFIO device host: PCI device info in the system that contains domain, bus number, slot number and function number. id: VFIO device name. bus: bus number of VFIO device. addr: including slot number and function number. ```shell -device vfio-pci,host=0000:1a:00.3,id=net,bus=pcie.0,addr=0x03.0x0[,multifunction=on] ``` Note: the kernel must contain physical device drivers, otherwise it cannot be loaded normally. Note: avoid using balloon devices and vfio devices together. StratoVirt standard VM supports hot-plug VFIO devices with QMP. Refer to qmp.md for specific command line parameters. hot plug VFIO device: ```json -> {\"execute\":\"device_add\", \"arguments\":{\"id\":\"vfio-0\", \"driver\":\"vfio-pci\", \"bus\": \"pcie.1\", \"addr\":\"0x0\", \"host\": \"0000:1a:00.3\"}} <- {\"return\": {}} ``` hot unplug VFIO device: ```json -> {\"execute\": \"device_del\", \"arguments\": {\"id\": \"vfio-0\"}} <- {\"event\":\"DEVICE_DELETED\",\"data\":{\"device\":\"vfio-0\",\"path\":\"vfio-0\"},\"timestamp\":{\"seconds\":1614310541,\"microseconds\":554250}} <- {\"return\": {}} ``` If it is necessary to unbind VFIO device directly, you can execute the following command. Note: assume uses hinic driver ```shell ```" } ]
{ "category": "Runtime", "file_name": "vfio.md", "project_name": "StratoVirt", "subcategory": "Container Runtime" }
[ { "data": "The ttrpc protocol is client/server protocol to support multiple request streams over a single connection with lightweight framing. The client represents the process which initiated the underlying connection and the server is the process which accepted the connection. The protocol is currently defined as asymmetrical, with clients sending requests and servers sending responses. Both clients and servers are able to send stream data. The roles are also used in determining the stream identifiers, with client initiated streams using odd number identifiers and server initiated using even number. The protocol may be extended in the future to support server initiated streams, that is not supported in the latest version. The ttrpc protocol is designed to be lightweight and optimized for low latency and reliable connections between processes on the same host. The protocol does not include features for handling unreliable connections such as handshakes, resets, pings, or flow control. The protocol is designed to make low-overhead implementations as simple as possible. It is not intended as a suitable replacement for HTTP2/3 over the network. Each Message Frame consists of a 10-byte message header followed by message data. The data length and stream ID are both big-endian 4-byte unsigned integers. The message type is an unsigned 1-byte integer. The flags are also an unsigned 1-byte integer and use is defined by the message type. ++ | Data Length (32) | ++ | Stream ID (32) | ++--+ | Msg Type (8) | ++ | Flags (8) | ++--+ | Data (*) | ++ The Data Length field represents the number of bytes in the Data field. The total frame size will always be Data Length + 10 bytes. The maximum data length is 4MB and any larger size should be rejected. Due to the maximum data size being less than 16MB, the first frame byte should always be zero. This first byte should be considered reserved for future use. The Stream ID must be odd for client initiated streams and even for server initiated streams. Server initiated streams are not currently supported. | Message Type | Name | Description | |--|-|-| | 0x01 | Request | Initiates stream | | 0x02 | Response | Final stream data and terminates | | 0x03 | Data | Stream data | The request message is used to initiate stream and send along request data for properly routing and handling the stream. The stream may indicate unary without any inbound or outbound stream data with only a response is expected on the stream. The request may also indicate the stream is still open for more data and no response is expected until data is finished. If the remote indicates the stream is closed, the request may be considered non-unary but without anymore stream data" }, { "data": "In the case of `remote closed`, the remote still expects to receive a response or stream data. For compatibility with non streaming clients, a request with empty flags indicates a unary request. | Flag | Name | Description | ||--|--| | 0x01 | `remote closed` | Non-unary, but no more data expected from remote | | 0x02 | `remote open` | Non-unary, remote is still sending data | The response message is used to end a stream with data, an empty response, or an error. A response message is the only expected message after a unary request. A non-unary request does not require a response message if the server is sending back stream data. A non-unary stream may return a single response message but no other stream data may follow. No response flags are defined at this time, flags should be empty. 
The data message is used to send data on an already initialized stream. Either client or server may send data. A data message is not allowed on a unary stream. A data message should not be sent after indicating `remote closed` to the peer. The last data message on a stream must set the `remote closed` flag. The `no data` flag is used to indicate that the data message does not include any data. This is normally used with the `remote closed` flag to indicate the stream is now closed without transmitting any data. Since ttrpc normally transmits a single object per message, a zero length data message may be interpreted as an empty object. For example, transmitting the number zero as a protobuf message ends up with a data length of zero, but the message is still considered data and should be processed. | Flag | Name | Description | ||--|--| | 0x01 | `remote closed` | No more data expected from remote | | 0x04 | `no data` | This message does not have data | All ttrpc requests use streams to transfer data. Unary streams will only have two messages sent per stream, a request from a client and a response from the server. Non-unary streams, however, may send any numbers of messages from the client and the server. This makes stream management more complicated than unary streams since both client and server need to track additional state. To keep this management as simple as possible, ttrpc minimizes the number of states and uses two flags instead of control frames. Each stream has two states while a stream is still alive: `local closed` and `remote closed`. Each peer considers local and remote from their own perspective and sets flags from the other peer's perspective. For example, if a client sends a data frame with the `remote closed` flag, that is indicating that the client is now `local closed` and the server will be `remote" }, { "data": "A unary operation does not need to send these flags since each received message always indicates `remote closed`. Once a peer is both `local closed` and `remote closed`, the stream is considered finished and may be cleaned up. Due to the asymmetric nature of the current protocol, a client should always be in the `local closed` state before `remote closed` and a server should always be in the `remote closed` state before `local closed`. This happens because the client is always initiating requests and a client always expects a final response back from a server to indicate the initiated request has been fulfilled. This may mean server sends a final empty response to finish a stream even after it has already completed sending data before the client. 
+--+ +--+ | Client | | Server | ++-+ +-++ | ++ | local >+ Request +--> remote closed | ++ | closed | | | +-+ | finished <--+ Response +--< finished | +-+ | | | RC: `remote closed` flag RO: `remote open` flag +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | >-+ Request [RO] +--> | +--+ | | | | ++ | >--+ Data +> | ++ | | | | +--+ | local >+ Data [RC] +> remote closed | +--+ | closed | | | +-+ | finished <--+ Response +--< finished | +-+ | | | +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | local >-+ Request [RC] +--> remote closed | +--+ | closed | | | ++ | <--+ Data +< | ++ | | | | +--+ | finished <+ Data [RC] +< finished | +--+ | | | +--+ +--+ | Client | | Server | ++-+ +-++ | +--+ | >-+ Request [RO] +--> | +--+ | | | | ++ | >--+ Data +> | ++ | | | | ++ | <--+ Data +< | ++ | | | | ++ | >--+ Data +> | ++ | | | | +--+ | local >+ Data [RC] +> remote closed | +--+ | closed | | | ++ | <--+ Data +< | ++ | | | | +--+ | finished <+ Data [RC] +< finished | +--+ | | | While this protocol is defined primarily to support Remote Procedure Calls, the protocol does not define the request and response types beyond the messages defined in the protocol. The implementation provides a default protobuf definition of request and response which may be used for cross language rpc. All implementations should at least define a request type which support routing by procedure name and a response type which supports call status. | Version | Features | ||| | 1.0 | Unary requests only | | 1.2 | Streaming support |" } ]
{ "category": "Runtime", "file_name": "PROTOCOL.md", "project_name": "CRI-O", "subcategory": "Container Runtime" }
[ { "data": "We would like to acknowledge previous runc maintainers and their huge contributions to our collective success: Alexander Morozov (@lk4d4) Andrei Vagin (@avagin) Rohit Jnagal (@rjnagal) Victor Marmol (@vmarmol) We thank these members for their service to the OCI community." } ]
{ "category": "Runtime", "file_name": "EMERITUS.md", "project_name": "runc", "subcategory": "Container Runtime" }
[ { "data": "Welcome to Kubernetes. We are excited about the prospect of you joining our ! The Kubernetes community abides by the CNCF . Here is an excerpt: As contributors and maintainers of this project, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. We have full documentation on how to get started contributing here: <! If your repo has certain guidelines for contribution, put them here ahead of the general k8s resources --> Kubernetes projects require that you sign a Contributor License Agreement (CLA) before we can accept your pull requests - Main contributor documentation, or you can just jump directly to the - Common resources for existing developers - We have a diverse set of mentorship programs available that are always looking for volunteers! <! Custom Information - if you're copying this template for the first time you can add custom content here, for example: - Replace `kubernetes-users` with your slack channel string, this will send users directly to your channel. -->" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "HwameiStor", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List current pcap recorders ``` cilium-dbg recorder list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Introspect or mangle pcap recorder" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_recorder_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "name: Bug report about: Tell us about a problem you are experiencing What steps did you take and what happened: <!--A clear and concise description of what the bug is, and what commands you ran.--> What did you expect to happen: The following information will help us better understand what's going on: If you are using velero v1.7.0+: Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle, and attach to this issue, more options please refer to `velero debug --help` If you are using earlier versions: Please provide the output of the following commands (Pasting long output into a or other pastebin is fine.) `kubectl logs deployment/velero -n velero` `velero backup describe <backupname>` or `kubectl get backup/<backupname> -n velero -o yaml` `velero backup logs <backupname>` `velero restore describe <restorename>` or `kubectl get restore/<restorename> -n velero -o yaml` `velero restore logs <restorename>` Anything else you would like to add: <!--Miscellaneous information that will assist in solving the issue.--> Environment: Velero version (use `velero version`): Velero features (use `velero client config get features`): Kubernetes version (use `kubectl version`): Kubernetes installer & version: Cloud provider or hardware configuration: OS (e.g. from `/etc/os-release`): Vote on this issue! This is an invitation to the Velero community to vote on issues, you can see the project's . Use the \"reaction smiley face\" up to the right of this comment to vote. :+1: for \"I would like to see this bug fixed as soon as possible\" :-1: for \"There are more important bugs to focus on right now\"" } ]
{ "category": "Runtime", "file_name": "bug_report.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: Velero v1.1 backing up and restoring Stateful apps on vSphere slug: Velero-v1-1-Stateful-Backup-vSphere image: /img/posts/cassandra.gif excerpt: This post demonstrates how Velero can be used on Kubernetes running on vSphere to backup a Stateful application. For the purposes of this example, we will backup and restore a Cassandra NoSQL database management system. author_name: Cormac Hogan author_avatar: /img/contributors/cormac-pic.png categories: ['kubernetes'] tags: ['Velero', 'Cormac Hogan', 'how-to'] Velero version 1.1 provides support to backup applications orchestrated on upstream Kubernetes running natively on vSphere. This post will provide detailed information on how to use Velero v1.1 to backup and restore a stateful application (`Cassandra`) that is running in a Kubernetes cluster deployed on vSphere. At this time there is no vSphere plugin for snapshotting stateful applications during a Velero backup. In this case, we rely on a third party program called `restic` to copy the data contents from Persistent Volumes. The data is stored in the same S3 object store where the Kubernetes object metadata is stored. Download and deploy Cassandra Create and populate a database and table in Cassandra Prepare Cassandra for a Velero backup by adding appropriate annotations Use Velero to take a backup Destroy the Cassandra deployment Use Velero to restore the Cassandra application Verify that the Cassandra database and table of contents have been restored This tutorial does not show how to deploy Velero v1.1 on vSphere. This is available in other tutorials. For this backup to be successful, Velero needs to be installed with the `use-restic` flag. The assumption is that the Kubernetes nodes in your cluster have internet access in order to pull the necessary Velero images. This guide does not show how to pull images using a local repository. For instructions on how to download and deploy a simple Cassandra StatefulSet, please refer to . This will show you how to deploy a Cassandra StatefulSet which we can use to do our Stateful application backup and restore. The manifests use an earlier version of Cassandra (v11) that includes the `cqlsh` tool which we will use now to create a database and populate a table with some sample data. If you follow the instructions above on how to deploy Cassandra on Kubernetes, you should see a similar response if you run the following command against your deployment: ```bash kubectl exec -it cassandra-0 -n cassandra -- nodetool status Datacenter: DC1-K8Demo ====================== Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.244.1.18 162.95 KiB 32 66.9% 2fc03eff-27ee-4934-b483-046e096ba116 Rack1-K8Demo UN 10.244.1.19 174.32 KiB 32 61.4% 83867fd7-bb6f-45dd-b5ea-cdf5dcec9bad Rack1-K8Demo UN 10.244.2.14 161.04 KiB 32 71.7% 8d88d0ec-2981-4c8b-a295-b36eee62693c Rack1-K8Demo ``` Now we will populate Cassandra with some data. Here we are connecting to the first Pod, cassandra-0 and running the `cqlsh` command which will allow us to create a Keyspace and a table. ```bash $ kubectl exec -it cassandra-0 -n cassandra -- cqlsh Connected to K8Demo at 127.0.0.1:9042. 
[cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4] Use HELP for" }, { "data": "cqlsh> CREATE KEYSPACE demodb WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 3 }; cqlsh> use demodb; cqlsh:demodb> CREATE TABLE emp(empid int PRIMARY KEY, empname text, empcity text, empsal varint,emp_phone varint); cqlsh:demodb> INSERT INTO emp (empid, empname, empcity, empphone, emp_sal) VALUES (100, 'Cormac', 'Cork', 999, 1000000); cqlsh:demodb> select * from emp; empid | empcity | empname | empphone | emp_sal --+-+-+--+ 100 | Cork | Cormac | 999 | 1000000 (1 rows) cqlsh:demodb> exit ``` Now that we have populated the application with some data, let's annotate each of the Pods, back it up, destroy the application and then try to restore it using Velero v1.1. The first step is to add annotations to each of the Pods in the StatefulSet to indicate that the contents of the persistent volumes, mounted on cassandra-data, needs to be backed up as well. As mentioned previously, Velero uses the `restic` program at this time for capturing state/data from Kubernetes running on vSphere. ```bash $ kubectl -n cassandra describe pod/cassandra-0 | grep Annotations Annotations: <none> ``` ```bash $ kubectl -n cassandra annotate pod/cassandra-0 backup.velero.io/backup-volumes=cassandra-data pod/cassandra-0 annotated ``` ```bash $ kubectl -n cassandra describe pod/cassandra-0 | grep Annotations Annotations: backup.velero.io/backup-volumes: cassandra-data ``` Repeat this action for the other Pods, in this example, Pods cassandra-1 and cassandra-2. This is an indication that we need to backup the persistent volume contents associated with each Pod. ```bash $ velero backup create cassandra --include-namespaces cassandra Backup request \"cassandra\" submitted successfully. Run `velero backup describe cassandra` or `velero backup logs cassandra` for more details. ``` ```bash $ velero backup describe cassandra Name: cassandra Namespace: velero Labels: velero.io/storage-location=default Annotations: <none> Phase: InProgress Namespaces: Included: cassandra Excluded: <none> Resources: Included: * Excluded: <none> Cluster-scoped: auto Label selector: <none> Storage Location: default Snapshot PVs: auto TTL: 720h0m0s Hooks: <none> Backup Format Version: 1 Started: 2019-09-02 15:37:19 +0100 IST Completed: <n/a> Expiration: 2019-10-02 15:37:19 +0100 IST Persistent Volumes: <none included> Restic Backups (specify --details for more information): In Progress: 1 ``` ```bash $ velero backup describe cassandra Name: cassandra Namespace: velero Labels: velero.io/storage-location=default Annotations: <none> Phase: Completed Namespaces: Included: cassandra Excluded: <none> Resources: Included: * Excluded: <none> Cluster-scoped: auto Label selector: <none> Storage Location: default Snapshot PVs: auto TTL: 720h0m0s Hooks: <none> Backup Format Version: 1 Started: 2019-09-02 15:37:19 +0100 IST Completed: 2019-09-02 15:37:34 +0100 IST Expiration: 2019-10-02 15:37:19 +0100 IST Persistent Volumes: <none included> Restic Backups (specify --details for more information): Completed: 3 ``` If we include the option `--details` to the previous command, we can see the various objects that were backed up. 
```bash $ velero backup describe cassandra --details Name: cassandra Namespace: velero Labels: velero.io/storage-location=default Annotations: <none> Phase: Completed Namespaces: Included: cassandra Excluded: <none> Resources: Included: * Excluded: <none> Cluster-scoped: auto Label selector: <none> Storage Location: default Snapshot PVs: auto TTL: 720h0m0s Hooks: <none> Backup Format Version: 1 Started: 2019-09-02 15:37:19 +0100 IST Completed: 2019-09-02 15:37:34 +0100 IST Expiration: 2019-10-02 15:37:19 +0100 IST Resource List: apps/v1/ControllerRevision: cassandra/cassandra-55b978b564 apps/v1/StatefulSet: cassandra/cassandra v1/Endpoints: cassandra/cassandra v1/Namespace: cassandra v1/PersistentVolume: pvc-2b574305-ca52-11e9-80e4-005056a239d9 pvc-51a681ad-ca52-11e9-80e4-005056a239d9 pvc-843241b7-ca52-11e9-80e4-005056a239d9 v1/PersistentVolumeClaim: cassandra/cassandra-data-cassandra-0 cassandra/cassandra-data-cassandra-1 cassandra/cassandra-data-cassandra-2 v1/Pod: cassandra/cassandra-0 cassandra/cassandra-1 cassandra/cassandra-2 v1/Secret: cassandra/default-token-bzh56 v1/Service: cassandra/cassandra v1/ServiceAccount: cassandra/default Persistent Volumes: <none included> Restic Backups: Completed: cassandra/cassandra-0: cassandra-data cassandra/cassandra-1: cassandra-data cassandra/cassandra-2: cassandra-data ``` The command `velero backup logs` can be used to get additional information about the backup progress. Now that we have successfully taken a backup, which includes the `Restic` backups of the data, we will now go ahead and destroy the Cassandra namespace, and restore it once again. ```bash $ kubectl delete ns cassandra namespace \"cassandra\" deleted $ kubectl get pv No resources found. $ kubectl get pods -n cassandra No resources found. $ kubectl get pvc -n cassandra No resources found. ``` Now use Velero to restore the application and" }, { "data": "The name of the backup must be specified at the command line using the `--from-backup` option. You can get the backup name from the following command: ```bash $ velero backup get NAME STATUS CREATED EXPIRES STORAGE LOCATION SELECTOR cassandra1 Completed 2019-10-02 15:37:34 +0100 IST 31d default <none> ``` Next, initiate the restore: ```bash $ velero restore create cassandra1 --from-backup cassandra1 Restore request \"cassandra1\" submitted successfully. Run `velero restore describe cassandra1` or `velero restore logs cassandra1` for more details. ``` ```bash $ velero restore describe cassandra1 Name: cassandra1 Namespace: velero Labels: <none> Annotations: <none> Phase: InProgress Backup: cassandra1 Namespaces: Included: * Excluded: <none> Resources: Included: * Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io Cluster-scoped: auto Namespace mappings: <none> Label selector: <none> Restore PVs: auto Restic Restores (specify --details for more information): New: 3 ``` Let's get some further information by adding the `--details` option. 
```bash $ velero restore describe cassandra1 --details Name: cassandra1 Namespace: velero Labels: <none> Annotations: <none> Phase: InProgress Backup: cassandra1 Namespaces: Included: * Excluded: <none> Resources: Included: * Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io Cluster-scoped: auto Namespace mappings: <none> Label selector: <none> Restore PVs: auto Restic Restores: New: cassandra/cassandra-0: cassandra-data cassandra/cassandra-1: cassandra-data cassandra/cassandra-2: cassandra-data ``` When the restore completes, the `Phase` and `Restic Restores` should change to `Completed` as shown below. ```bash $ velero restore describe cassandra1 --details Name: cassandra1 Namespace: velero Labels: <none> Annotations: <none> Phase: Completed Backup: cassandra1 Namespaces: Included: * Excluded: <none> Resources: Included: * Excluded: nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io Cluster-scoped: auto Namespace mappings: <none> Label selector: <none> Restore PVs: auto Restic Restores: Completed: cassandra/cassandra-0: cassandra-data cassandra/cassandra-1: cassandra-data cassandra/cassandra-2: cassandra-data ``` The `velero restore logs` command can also be used to track restore progress. Use some commands seen earlier to validate that not only is the application restored, but also the data. ```bash $ kubectl get ns NAME STATUS AGE cassandra Active 2m35s default Active 13d kube-node-lease Active 13d kube-public Active 13d kube-system Active 13d velero Active 35m wavefront-collector Active 7d5h ``` ```bash $ kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-51ae99a9-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-0 cass-sc-csi 2m28s pvc-51b15558-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-1 cass-sc-csi 2m22s pvc-51b4079c-cd91-11e9-80e4-005056a239d9 1Gi RWO Delete Bound cassandra/cassandra-data-cassandra-2 cass-sc-csi 2m27s ``` ```bash $ kubectl get pvc -n cassandra NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE cassandra-data-cassandra-0 Bound pvc-51ae99a9-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s cassandra-data-cassandra-1 Bound pvc-51b15558-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s cassandra-data-cassandra-2 Bound pvc-51b4079c-cd91-11e9-80e4-005056a239d9 1Gi RWO cass-sc-csi 2m49s ``` ```bash $ kubectl exec -it cassandra-0 -n cassandra -- nodetool status Datacenter: DC1-K8Demo ====================== Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.244.1.21 138.53 KiB 32 66.9% 2fc03eff-27ee-4934-b483-046e096ba116 Rack1-K8Demo UN 10.244.1.22 166.45 KiB 32 71.7% 8d88d0ec-2981-4c8b-a295-b36eee62693c Rack1-K8Demo UN 10.244.2.23 160.43 KiB 32 61.4% 83867fd7-bb6f-45dd-b5ea-cdf5dcec9bad Rack1-K8Demo ``` ```bash $ kubectl exec -it cassandra-0 -n cassandra -- cqlsh Connected to K8Demo at 127.0.0.1:9042. [cqlsh 5.0.1 | Cassandra 3.9 | CQL spec 3.4.2 | Native protocol v4] Use HELP for help. cqlsh> use demodb; cqlsh:demodb> select * from emp; empid | empcity | empname | empphone | emp_sal --+-+-+--+ 100 | Cork | Cormac | 999 | 1000000 (1 rows) cqlsh:demodb> ``` It looks like the restore has been successful. Velero v1.1 has successfully restored the Kubernetes objects for the Cassandra application, as well as restored the database and table contents. 
As always, we welcome feedback and participation in the development of Velero. You can find us on , and follow us on Twitter at ." } ]
{ "category": "Runtime", "file_name": "2019-10-10-Velero-v1-1-Stateful-Backup-vSphere.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "A SpiderReservedIP resource represents a collection of IP addresses that Spiderpool expects not to be allocated. For details on using this CRD, please read the . ```yaml apiVersion: spiderpool.spidernet.io/v2beta1 kind: SpiderIPPool metadata: name: exclude-ips spec: subnet: 172.18.41.0/24 ips: 172.18.41.40-172.18.41.44 172.18.41.46-172.18.41.50 ``` | Field | Description | Schema | Validation | |-|--|--|| | name | the name of this SpiderReservedIP resource | string | required | This is the SpiderReservedIP spec for users to configure. | Field | Description | Schema | Validation | Values | |-|-|||| | ipVersion | IP version of this resource | int | optional | 4,6 | | ips | IP ranges for this resource that we expect not to use | list of strings | optional | array of IP ranges and single IP address |" } ]
{ "category": "Runtime", "file_name": "crd-spiderreservedip.md", "project_name": "Spiderpool", "subcategory": "Cloud Native Network" }
[ { "data": "Kilo enables peers outside of a Kubernetes cluster to connect to the created WireGuard network. This enables several use cases, for example: giving cluster applications secure access to external services, e.g. services behind a corporate VPN; improving the development flow of applications by running them locally and connecting them to the cluster; allowing external services to access the cluster; and enabling developers and support to securely debug cluster resources. In order to declare a peer, start by defining a Kilo Peer resource. See the following `peer.yaml`, where the `publicKey` field holds a : ```yaml apiVersion: kilo.squat.ai/v1alpha1 kind: Peer metadata: name: squat spec: allowedIPs: 10.5.0.1/32 # Example IP address on the peer's interface. publicKey: GY5aT1N9dTR/nJnT1N2f4ClZWVj0jOAld0r8ysWLyjg= persistentKeepalive: 10 ``` Then, apply the resource to the cluster: ```shell kubectl apply -f peer.yaml ``` Now, the `kgctl` tool can be used to generate the WireGuard configuration for the newly defined peer: ```shell PEER=squat kgctl showconf peer $PEER ``` This will produce some output like: ```ini [Peer] PublicKey = 2/xU029dz/WtvMZAbnSzmhicl8U1/Y3NYmunRr8EJ0Q= AllowedIPs = 10.4.0.2/32, 10.2.3.0/24, 10.1.0.3/32 Endpoint = 108.61.142.123:51820 ``` The configuration can then be applied to a local WireGuard interface, e.g. `wg0`: ```shell IFACE=wg0 kgctl showconf peer $PEER > peer.ini sudo wg setconf $IFACE peer.ini ``` Finally, in order to access the cluster, the client will need appropriate routes for the new configuration. For example, on a Linux machine, the creation of these routes could be automated by running: ```shell for ip in $(kgctl showconf peer $PEER | grep AllowedIPs | cut -f 3- -d ' ' | tr -d ','); do sudo ip route add $ip dev $IFACE done ``` Once the routes are in place, the connection to the cluster can be tested. For example, try connecting to the API server: ```shell curl -k https://$(kubectl get endpoints kubernetes | tail -n +2 | tr , \\\\t | awk '{print $2}') ``` Likewise, the cluster now also has layer 3 access to the newly added peer. From any node or Pod on the cluster, one can now ping the peer: ```shell ping 10.5.0.1 ``` If the peer exposes a layer 4 service, for example an HTTP server listening on TCP port 80, then one could also make requests against that endpoint from the cluster: ```shell curl http://10.5.0.1 ``` Kubernetes Services can be created to provide better discoverability to cluster workloads for services exposed by peers, for example: ```shell cat <<'EOF' | kubectl apply -f - apiVersion: v1 kind: Service metadata: name: important-service spec: ports: port: 80 apiVersion: v1 kind: Endpoints metadata: name: important-service subsets: addresses: ip: 10.5.0.1 ports: port: 80 EOF ``` . Although it is not a primary goal of the project, the VPN created by Kilo can also be ." } ]
{ "category": "Runtime", "file_name": "vpn.md", "project_name": "Kilo", "subcategory": "Cloud Native Network" }
[ { "data": "`bunyan` (without redir) ^C should stop, doesn't since recent change man page for the bunyan CLI (refer to it in the readme) `tail -f`-like support 2.0 (?) with `v: 1` in log records. Fwd/bwd compat in `bunyan` CLI document log.addStream() and log.addSerializers() full-on docs better examples/ better coloring \"template\" support for 'rotating-file' stream to get dated rolled files \"all\" or \"off\" levels? log4j? logging.py? logging.py has NOTSET === 0. I think that is only needed/used for multi-level hierarchical effective level. buffered writes to increase speed: I'd start with a tools/timeoutput.js for some numbers to compare before/after. Sustained high output to a file. perhaps this would be a \"buffered: true\" option on the stream object then wrap the \"stream\" with a local class that handles the buffering to finish this, need the 'log.close' and `process.on('exit', ...)` work that Trent has started. \"canWrite\" handling for full streams. Need to buffer a la log4js test file log with logadm rotation: does it handle that? test suite: test for a cloned logger double-`stream.end()` causing problems. Perhaps the \"closeOnExit\" for existing streams should be false for clones. test that a `log.clone(...)` adding a new field matching a serializer works and that an existing field in the parent is not re-serialized. split out `bunyan` cli to a \"bunyan\" or \"bunyan-reader\" or \"node-bunyan-reader\" as the basis for tools to consume bunyan logs. It can grow indep of node-bunyan for generating the logs. It would take a Bunyan log record object and be expected to emit it. node-bunyan-reader .createReadStream(path, [options]) ? coloring bug: in less the indented extra info lines only have the first line colored. Do we need the ANSI char on each line? That'll be slower. document \"well-known\" keys from bunyan CLI p.o.v.. Add \"client_req\". More `bunyan` output formats and filtering features. Think about a bunyan dashboard that supports organizing and viewing logs from multiple hosts and services. doc the restify RequestCaptureStream usage of RingBuffer. Great example. A vim plugin (a la http://vim.cybermirror.org/runtime/autoload/zip.vim ?) to allow browsing (read-only) a bunyan log in rendered form. Some speed comparisons with others to get a feel for Bunyan's speed. what about promoting 'latency' field and making that easier? `log.close` to close streams and shutdown and `this.closed` process.on('exit', log.close) -> 'end' for the name bunyan cli: more layouts (http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/EnhancedPatternLayout.html) Custom log formats (in config file? in '-f' arg) using printf or hogan.js or whatever. Dap wants field width control for lining up. Hogan.js is probably overkill for this. loggly example using raw streams, hook.io?, whatever. serializer support: restify-server.js example -> restifyReq ? or have `req` detect that. That is nicer for the \"use all standard ones\". Does restify req have anything special? differential HTTP client req/res with server req/res. statsd stream? http://codeascraft.etsy.com/2011/02/15/measure-anything-measure-everything/ Think about it. web ui. Ideas: http://googlecloudplatform.blogspot.ca/2014/04/a-new-logs-viewer-for-google-cloud.html" } ]
{ "category": "Runtime", "file_name": "TODO.md", "project_name": "SmartOS", "subcategory": "Container Runtime" }
[ { "data": "<BR> OpenEBS is an \"umbrella project\". Every project, repository and file in the OpenEBS organization adopts and follows the policies found in the Community repo umbrella project files. <BR> This project follows the" } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "OpenEBS", "subcategory": "Cloud Native Storage" }
[ { "data": "Currently confd ships binaries for OS X and Linux 64bit systems. You can download the latest release from ``` $ wget https://github.com/bacongobbler/confd/releases/download/v0.12.1/confd-0.12.1-darwin-amd64 ``` ``` $ wget https://github.com/bacongobbler/confd/releases/download/v0.12.1/confd-0.12.1-linux-amd64 ``` ``` $ ./build $ sudo ./install ``` Since many people are using Alpine Linux as their base images for Docker there's support to build Alpine package also. Naturally by using Docker itself. :) ``` $ docker build -t confd_builder -f Dockerfile.build.alpine . $ docker run -ti --rm -v $(pwd):/app confd_builder ./build ``` The above docker commands will produce binary in the local bin directory. Get up and running with the ." } ]
{ "category": "Runtime", "file_name": "installation.md", "project_name": "Project Calico", "subcategory": "Cloud Native Network" }
[ { "data": "This document discusses rkt architecture in detail. For a more hands-on guide to inspecting rkt's internals, check . rkt's primary interface is a command-line tool, `rkt`, which does not require a long running daemon. This architecture allows rkt to be updated in-place without affecting application containers which are currently running. It also means that levels of privilege can be separated out between different operations. All state in rkt is communicated via the filesystem. Facilities like file-locking are used to ensure co-operation and mutual exclusion between concurrent invocations of the `rkt` command. Execution with rkt is divided into several distinct stages. NB The goal is for the ABI between stages to be relatively fixed, but while rkt is still under heavy development this is still evolving. After calling `rkt` the execution chain follows the numbering of stages, having the following general order: invoking process -> stage0: The invoking process uses its own mechanism to invoke the rkt binary (stage0). When started via a regular shell or a supervisor, stage0 is usually forked and exec'ed becoming a child process of the invoking shell or supervisor. stage0 -> stage1: An ordinary is being used to replace the stage0 process with the stage1 entrypoint. The entrypoint is referenced by the `coreos.com/rkt/stage1/run` annotation in the stage1 image manifest. stage1 -> stage2: The stage1 entrypoint uses its mechanism to invoke the stage2 app executables. The app executables are referenced by the `apps.app.exec` settings in the stage2 image manifest. The details of the execution flow varies across the different stage1 implementations. The first stage is the actual `rkt` binary itself. When running a pod, this binary is responsible for performing a number of initial preparatory tasks: Fetching the specified ACIs, including the stage1 ACI of --stage1-{url,path,name,hash,from-dir} if specified. Generating a Pod UUID Generating a Pod Manifest Creating a filesystem for the pod Setting up stage1 and stage2 directories in the filesystem Unpacking the stage1 ACI into the pod filesystem Unpacking the ACIs and copying each app into the stage2 directories Given a run command such as: ``` ``` a pod manifest compliant with the ACE spec will be generated, and the filesystem created by stage0 should be: ``` /pod /stage1 /stage1/manifest /stage1/rootfs/init /stage1/rootfs/opt /stage1/rootfs/opt/stage2/${app1-name} /stage1/rootfs/opt/stage2/${app2-name} ``` where: `pod` is the pod manifest file `stage1` is a copy of the stage1 ACI that is safe for read/write `stage1/manifest` is the manifest of the stage1 ACI `stage1/rootfs` is the rootfs of the stage1 ACI `stage1/rootfs/init` is the actual stage1 binary to be executed (this path may vary according to the `coreos.com/rkt/stage1/run` annotation of the stage1 ACI) `stage1/rootfs/opt/stage2` are copies of the unpacked ACIs At this point the stage0 execs `/stage1/rootfs/init` with the current working directory set to the root of the new" }, { "data": "The next stage is a binary that the user trusts, and has the responsibility of taking the pod filesystem that was created by stage0, create the necessary container isolation, network, and mounts to launch the pod. Specifically, it must: Read the Image and Pod Manifests. The Image Manifest defines the default `exec` specifications of each application; the Pod Manifest defines the ordering of the units, as well as any overrides. 
Set up/execute the actual isolation environment for the target pod, called the \"stage1 flavor\". Currently there are three flavors implemented: fly: a simple chroot only environment. systemd/nspawn: a cgroup/namespace based isolation environment using systemd, and systemd-nspawn. kvm: a fully isolated kvm environment. There are also out of tree stage1: , stage1 based on the Xen hypervisor , stage1 written in Rust, akin to stage1-fly , a weaker version of stage1-fly using `LD_PRELOAD` tricks instead of chroot , a stage1 used internally by the container build and runtime tool dgr , scripts and tools to generate KVM-based stage1 with customized Linux kernels The final stage, stage2, is the actual environment in which the applications run, as launched by stage1. The \"host\", \"src\", and \"coreos\" flavors (referenced to as systemd/nspawn flavors) use `systemd-nspawn`, and `systemd` to set up the execution chain. They include a very minimal systemd that takes care of launching the apps in each pod, apply per-app resource isolators and makes sure the apps finish in an orderly manner. These flavors will: Read the image and pod manifests Generate systemd unit files from those Manifests Create and enter network namespace if rkt is not started with `--net=host` Start systemd-nspawn (which takes care of the following steps) Set up any external volumes Launch systemd as PID 1 in the pod within the appropriate cgroups and namespaces Have systemd inside the pod launch the app(s). This process is slightly different for the qemu-kvm stage1 but a similar workflow starting at `exec()`'ing kvm instead of an nspawn. We will now detail how the starting, shutdown, and exit status collection of the apps in a pod are implemented internally. rkt supports two kinds of pod runtime environments: an immutable pod runtime environment, and a new, experimental mutable pod runtime environment. The immutable runtime environment is currently the default, i.e. when executing any `rkt prepare` or `rkt run` command. Once a pod has been created in this mode, no modifications can be applied. Conversely, the mutable runtime environment allows users to add, remove, start, and stop applications after a pod has been started. Currently this mode is only available in the experimental `rkt app` family of subcommands; see the for a more detailed description. Both runtime environments are supervised internally by systemd, using a custom dependency graph. The differences between both dependency graphs are described below. There's a systemd rkt apps target (`default.target`) which has a and dependency on each app's service file, making sure they all start. Once this target is reached, the pod is in its steady-state. This is signaled by the pod supervisor via a dedicated `supervisor-ready.service`, which is triggered by" }, { "data": "with a dependency on it. Each app's service has a Wants dependency on an associated reaper service that deals with writing the app's status exit. Each reaper service has a Wants and After dependency with `shutdown.service` that simply shuts down the pod. The reaper services and the `shutdown.service` all start at the beginning but do nothing and remain after exit (with the flag). By using the flag, whenever they stop being referenced, they'll do the actual work via the ExecStop command. This means that when an app service is stopped, its associated reaper will run and will write its exit status to `/rkt/status/${app}` and the other apps will continue running. 
When all apps' services stop, their associated reaper services will also stop and will cease referencing `shutdown.service` causing the pod to exit. Every app service has an flag that starts the `halt.target`. This means that if any app in the pod exits with a failed status, the systemd shutdown process will start, the other apps' services will automatically stop and the pod will exit. In this case, the failed app's exit status will get propagated to rkt. A dependency was also added between each reaper service and the halt and poweroff targets (they are triggered when the pod is stopped from the outside when rkt receives `SIGINT`). This will activate all the reaper services when one of the targets is activated, causing the exit statuses to be saved and the pod to finish like it was described in the previous paragraph. The initial mutable runtime environment is very simple and resembles a minimal systemd system without any applications installed. Once `default.target` has been reached, apps can be added/removed. Unlike the immutable runtime environment, the `default.target` has no dependencies on any apps, but only on `supervisor-ready.service` and `systemd-journald.service`, to ensure the journald daemon is started before apps are added. In order for the pod to not shut down immediately on its creation, the `default.target` has `Before` and `Conflicts` dependencies on `halt.target`. This \"deadlock\" state between `default.target` and `halt.target` keeps the mutable pod alive. `halt.target` has `After` and `Requires` dependencies on `shutdown.service`. When adding an app, the corresponding application service units `[app].service` and `reaper-[app].service` are generated (where `[app]` is the actual app name). In order for the pod to not shut down when all apps stop, there is no dependency on `shutdown.service`. The `OnFailure` behavior is the same as in an immutable environment. When an app fails, `halt.target`, and `shutdown.service` will be started, and `default.target` will be stopped. The following table enumerates the service unit behavior differences in the two environments: Unit | Immutable | Mutable |--|-- `shutdown.service` | In Started state when the pod starts. Stopped, when there is no dependency on it (`StopWhenUnneeded`) or `OnFailure` of any app. | In Stopped state when the pod starts. Started at explicit shutdown or `OnFailure` of any app. | `reaper-app.service` | `Wants`, and `After` dependency on" }, { "data": "`Conflicts`, and `Before` dependency on `halt.target`. | `Conflicts`, and `Before` dependency on `halt.target`. | We will now detail the execution chain for the stage1 systemd/nspawn flavors. The entrypoint is implemented in the `stage1/init/init.go` binary and sets up the following execution chain: \"ld-linux-.so.\": Depending on the architecture the appropriate loader helper in the stage1 rootfs is invoked using \"exec\". This makes sure that subsequent binaries load shared libraries from the stage1 rootfs and not from the host file system. \"systemd-nspawn\": Used for starting the actual container. systemd-nspawn registers the started container in \"systemd-machined\" on the host, if available. It is parametrized with the `--boot` option to instruct it to \"fork+exec\" systemd as the supervisor in the started container. \"systemd\": Used as the supervisor in the started container. Similar as on a regular host system, it uses \"fork+exec\" to execute the child app processes. 
The following diagram illustrates the execution chain: The resulting process tree reveals the parent-child relationships. Note that \"exec\"ing processes do not appear in the tree: ``` $ ps auxf ... \\_ -bash \\_ stage1/rootfs/usr/lib/ld-linux-x86-64.so.2 stage1/rootfs/usr/bin/systemd-nspawn \\_ /usr/lib/systemd/systemd \\_ /usr/lib/systemd/systemd-journald \\_ nginx ``` Depending on how rkt is executed, certain external resource limits will be applied or not. If rkt is executed within a systemd service, the container will inherit the cgroup resource limits applied to the service itself and any ulimit-like limits. If rkt is executed, say, from a terminal, the container will inherit ulimit-like limits, but not cgroup resource limits. The reason for this is that systemd will move the container to a new `machine` slice. The \"fly\" flavor uses a very simple mechanism being limited to only execute one child app process. The entrypoint is implemented in `stage1_fly/run/main.go`. After setting up a chroot'ed environment it simply exec's the target app without any further internal supervision: The resulting example process tree shows the target process as a direct child of the invoking process: ``` $ ps auxf ... \\_ -bash \\_ nginx ``` rkt commands like prepare and run, as a first step, need to retrieve all the images requested in the command line and prepare the stage2 directories with the application contents. This is done with the following chain: Fetch: in the fetch phase rkt retrieves the requested images. The fetching implementation depends on the provided image argument such as an image string/hash/https URL/file (e.g. `example.com/app:v1.0`). Store: in the store phase the fetched images are saved to the local store. The local store is a cache for fetched images and related data. Render: in the render phase, a renderer pulls the required images from the store and renders them so they can be easily used as stage2 content. These three logical blocks are implemented inside rkt in this way: Currently rkt implements the internally, converting to it from other container image formats for" }, { "data": "In the future, additional formats like the may be added to rkt, keeping the same basic scheme for fetching, storing, and rendering application container images. Fetchers: Fetchers retrieve images from either a provided URL, or a URL found by on a given image string. Fetchers read data from the Image Store to check if an image is already present. Once fetched, images are verified with their signatures, then saved in the Image Store. An image's are also discovered and fetched. For details, see the documentation. Image Store: the Image Store is used to store images (currently ACIs) and their related information. The render phase can be done in different ways: Directly render the stage1-2 contents inside a pod. This will require more disk space and more stage1-2 preparation time. Render in the treestore. The treestore is a cache of rendered images (currently ACIs). When using the treestore, rkt mounts an overlayfs with the treestore rendered image as its lower directory. When using stage1-2 with overlayfs a pod will contain references to the required treestore rendered images. So there's an hard connection between pods and the treestore. Aci Renderer Both stage1-2 render modes internally uses the . Since an ACI may depend on other ones the acirenderer may require other ACIs. The acirenderer only relies on the ACIStore, so all the required ACIs must already be available in the store. 
Additionally, since appc dependencies can be found only via discovery, a dependency may be updated and so there can be multiple rendered images for the same ACI. Given this 1:N relation between an ACI and their rendered images, the ACIStore and TreeStore are decoupled. Applications running inside a rkt pod can produce output on stdout/stderr, which can be redirected at runtime. Optionally, they can receive input on stdin from an external component that can be attached/detached during execution. The internal architecture for attaching (TTY and single streams) and logging is described in full details in the . For each application, rkt support separately configuring stdin/stdout/stderr via runtime command-line flags. The following modes are available: interactive: application will be run under the TTY of the parent process. A single application is allowed in the pod, which is tied to the lifetime of the parent terminal and cannot be later re-attached. TTY: selected I/O streams will be run under a newly allocated TTY, which can be later used for external attaching. streaming: selected I/O streams will be supervised by a separate multiplexing process (running in the pod context). They can be later externally attached. logging: selected output streams will be supervised by a separate logging process (running in the pod context). Output entries will be handled as log entries, and the application cannot be later re-attached. null: selected I/O streams will be closed. Application will not received the file-descriptor for the corresponding stream, and it cannot be later re-attached. From a UI perspective, main consumers of the logging and attaching subsystem are the `rkt attach` subcommand and the `--stdin`, `--stdout`, `--stderr` runtime options." } ]
{ "category": "Runtime", "file_name": "architecture.md", "project_name": "rkt", "subcategory": "Container Runtime" }
[ { "data": "name: Test about: Create or update test title: \"[TEST] \" labels: kind/test assignees: '' <!--A clear and concise description of what test you want to develop.--> <!-- Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists [ ] `item 1` --> <!--Add any other context or screenshots about the test request here.-->" } ]
{ "category": "Runtime", "file_name": "test.md", "project_name": "Longhorn", "subcategory": "Cloud Native Storage" }
[ { "data": "Welcome, and thank you for considering contributing to Kanister. We welcome all help in raising issues, improving documentation, fixing bugs, or adding new features. If you are interested in contributing, start by reading this document. Please also take a look at our . If you have any questions at all, do not hesitate to reach out to us on . We look forward to working together! To contribute to this project, you must agree to the Developer Certificate of Origin (DCO) for each commit you make. The DCO is a simple statement that you, as a contributor, have the legal right to make the contribution. See the file for the full text of what you must agree to. The most common way to signify your agreement to the DCO is to add a signoff to the git commit message, using the `-s` flag of the `git commit` command. It will add a signoff the looks like following line to your commit message: ```txt Signed-off-by: John Smith <[email protected]> ``` You must use your real name and a reachable email address in the signoff. Alternately, instead of commits signoff, you can also leave a comment on the PR with the following statement: I agree to the DCO for all the commits in this PR. Note that this option still requires that your commits are made under your real name and a reachable email address. Generally, pull requests that address and fix existing GitHub issues are assigned higher priority over those that don't. Use the existing issue labels to help you identify relevant and interesting issues. If you think something ought to be fixed but none of the existing issues sufficiently address the problem, feel free to . For new contributors, we encourage you to tackle issues that are labeled as `good first issues`. Regardless of your familiarity with this project, documentation help is always appreciated. Once you found an issue that interests you, post a comment to the issue, asking the maintainers to assign it to you. In this project, we adhere to the style and best practices established by the Go project and its community. Specifically, this means: adhering to guidelines found in the document following the common The tool is used to enforce many styling and safety rules. See the document for instructions on how to build, test and run Kanister locally. If your changes involve the Kanister API types, generate the API documentation using the `make crd_docs` command and push the updated `API.md` file along with any other changes. The basic idea is that we ask all contributors to practice to make reviews and retrospection easy. Use your git commits to provide context for the reviewers, and the folks who will be reading the codebase in the months and years to" }, { "data": "We're trying to keep all commits in `master` to follow format. See for more info on types and scopes. We are using squash and merge approach to PRs which means that commit descriptions are generated from PR titles. It's recommended to use conventional commits when strarting a PR, but follow-up commits in the PR don't have to follow the convention. PR titles should be in following format: ```text <type>[optional scope]: <description> ``` See for more info on types and scopes. 
When submitting a pull request, it's important that you communicate your intent, by clearly: describing the problem you are trying to solve with links to the relevant GitHub issues describing your solution with links to any design documentation and discussion defining how you test and validate your solution updating the relevant documentation and examples where appropriate The pull request template is designed to help you convey this information. In general, smaller pull requests are easier to review and merge than bigger ones. It's always a good idea to collaborate with the maintainers to determine how best to break up a big pull request. Once the maintainers approve your PR, they will label it as `kueue`. The `mergify` bot will then squash the commits in your PR, and add it to the merge queue. The bot will auto-merge your work when it's ready. Congratulations! Your pull request has been successfully merged! Thank you for reading through our contributing guide to ensure your contributions are high quality and easy for our community to review and accept. Please don't hesitate to reach out to us on . if you have any questions about contributing! `feat` - new feature/functionality it's recommended to link a GH issue or discussion which describes the feature request or describe it in the commit message `fix` - bugfix it's recommended to link a GH issue or discussion to describe the bug or describe it in the commit message `refactor` - code restructure/refactor which does not affect the (public) behaviour `docs` - changes in documentation `test` - adding, improving, removing tests `build` - changes to build scripts, dockerfiles, ci pipelines `deps` - updates to dependencies configuration `chore` - none of the above use is generally discuraged `revert` - revert previous commit There is no strict list of scopes to be used, suggested scopes are: `build(ci)` - changes in github workflows `build(release)` - changes in release process `deps(go)` - dependabot updating go library `docs(examples)` - changes in examples `docs(readme)` - changes in MD files at the repo root `feat(kanctl)` - new functionality for `kanctl` (e.g. new command) `refactor(style)` - formatting, adding newlines, etc. in code There can be optional `!` after the type and scope to indicate breaking changes `fix(scope)!: fix with breaking changes` Short description of WHAT was changed in the commit. SHOULD start with lowercase. MUST NOT have a `.` at the end." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "Kanister", "subcategory": "Cloud Native Storage" }
[ { "data": "title: CephRBDMirror CRD Rook allows creation and updating rbd-mirror daemon(s) through the custom resource definitions (CRDs). RBD images can be asynchronously mirrored between two Ceph clusters. For more information about user management and capabilities see the . To get you started, here is a simple example of a CRD to deploy an rbd-mirror daemon. ```yaml apiVersion: ceph.rook.io/v1 kind: CephRBDMirror metadata: name: my-rbd-mirror namespace: rook-ceph spec: count: 1 ``` This guide assumes you have created a Rook cluster as explained in the main If any setting is unspecified, a suitable default will be used automatically. `name`: The name that will be used for the Ceph RBD Mirror daemon. `namespace`: The Kubernetes namespace that will be created for the Rook cluster. The services, pods, and other resources created by the operator will be added to this namespace. `count`: The number of rbd mirror instance to run. `placement`: The rbd mirror pods can be given standard Kubernetes placement restrictions with `nodeAffinity`, `tolerations`, `podAffinity`, and `podAntiAffinity` similar to placement defined for daemons configured by the .. `annotations`: Key value pair list of annotations to add. `labels`: Key value pair list of labels to add. `resources`: The resource requirements for the rbd mirror pods. `priorityClassName`: The priority class to set on the rbd mirror pods. Configure mirroring peers individually for each CephBlockPool. Refer to the for more detail." } ]
{ "category": "Runtime", "file_name": "ceph-rbd-mirror-crd.md", "project_name": "Rook", "subcategory": "Cloud Native Storage" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List local endpoint entries ``` cilium-dbg bpf endpoint list [flags] ``` ``` -h, --help help for list -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Local endpoint map" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_endpoint_list.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Firecracker offers the possiblity of choosing the block device caching strategy. Caching strategy affects the path data written from inside the microVM takes to the host persistent storage. When installing a block device through a PUT /drives API call, users can choose the caching strategy by inserting a `cache_type` field in the JSON body of the request. The available cache types are: `Unsafe` `Writeback` When configuring the block caching strategy to `Unsafe`, the device will not advertise the VirtIO `flush` feature to the guest driver. When configuring the block caching strategy to `Writeback`, the device will advertise the VirtIO `flush` feature to the guest driver. If negotiated when activating the device, the guest driver will be able to send flush requests to the device. When the device executes a flush request, it will perform an `fsync` syscall on the backing block file, committing all data in the host page cache to disk. The caching strategy should be used in order to make a trade-off: `Unsafe` enhances performance as fewer syscalls and IO operations are performed when running workloads sacrifices data integrity in situations where the host simply loses the contents of the page cache without committing them to the backing storage (such as a power outage) recommended for use cases with ephemeral storage, such as serverless environments `Writeback` ensures that once a flush request was acknowledged by the host, the data is committed to the backing storage sacrifices performance, from boot time increases to greater emulation-related latencies when running workloads recommended for use cases with low power environments, such as embedded environments Example sequence that configures a block device with a caching strategy: ```bash curl --unix-socket ${socket} -i \\ -X PUT \"http://localhost/drives/dummy\" \\ -H \"accept: application/json\" \\ -H \"Content-Type: application/json\" \\ -d \"{ \\\"drive_id\\\": \\\"dummy\\\", \\\"pathonhost\\\": \\\"${drive_path}\\\", \\\"isrootdevice\\\": false, \\\"isreadonly\\\": false, \\\"cache_type\\\": \\\"Writeback\\\" }\" ```" } ]
{ "category": "Runtime", "file_name": "block-caching.md", "project_name": "Firecracker", "subcategory": "Container Runtime" }
[ { "data": "layout: global title: Huawei Object Storage Service This guide describes the instructions to configure {:target=\"_blank\"} as Alluxio's under storage system. Huawei Object Storage Service (OBS) is a scalable service that provides secure, reliable, and cost-effective cloud storage for massive amounts of data. OBS provides unlimited storage capacity for objects of any format, catering to the needs of common users, websites, enterprises, and developers. For more information about Huawei OBS, please read its {:target=\"_blank\"}. If you haven't already, please see before you get started. In preparation for using Huawei OBS with Alluxio: <table class=\"table table-striped\"> <tr> <td markdown=\"span\" style=\"width:30%\">`<OBS_BUCKET>`</td> <td markdown=\"span\">{:target=\"_blank\"} or use an existing bucket</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OBS_DIRECTORY>`</td> <td markdown=\"span\">The directory you want to use in the bucket, either by creating a new directory or using an existing one</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OBSACCESSKEY>`</td> <td markdown=\"span\">Used to authenticate the identity of a requester. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OBSSECRETKEY>`</td> <td markdown=\"span\">Used to authenticate the identity of a requester. See {:target=\"_blank\"}</td> </tr> <tr> <td markdown=\"span\" style=\"width:30%\">`<OBS_ENDPOINT>`</td> <td markdown=\"span\">Domain name to access OBS in a region and is used to process requests of that region. See {:target=\"_blank\"}</td> </tr> </table> To use Huawei OBS as the UFS of Alluxio root mount point, you need to configure Alluxio to use under storage systems by modifying `conf/alluxio-site.properties`. If it does not exist, create the configuration file from the template. ```shell $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties ``` Specify an existing OBS bucket and directory as the underfs addresss system by modifying `conf/alluxio-site.properties` to include: ```properties alluxio.dora.client.ufs.root=obs://<OBSBUCKET>/<OBSDIRECTORY> ``` Note that if you want to mount the whole obs bucket, add a trailing slash after the bucket name (e.g. `obs://OBS_BUCKET/`). Specify credentials for OBS access by setting `fs.obs.accessKey` and `fs.obs.secretKey` in `alluxio-site.properties`. ```properties fs.obs.accessKey=<OBSACCESSKEY> fs.obs.secretKey=<OBSSECRETKEY> ``` Specify the OBS region by setting `fs.obs.endpoint` in `alluxio-site.properties` (e.g. obs.cn-north-4.myhuaweicloud.com). ```properties fs.obs.endpoint=<OBS_ENDPOINT> ``` Once you have configured Alluxio to Azure Blob Store, try to see that everything works. The default upload method uploads one file completely from start to end in one go. We use multipart-upload method to upload one file by multiple parts, every part will be uploaded in one thread. It won't generate any temporary files while uploading. To enable OBS multipart upload, you need to modify `conf/alluxio-site.properties` to include: ```properties alluxio.underfs.obs.multipart.upload.enabled=true ``` There are other parameters you can specify in `conf/alluxio-site.properties` to make the process faster and better. ```properties alluxio.underfs.object.store.multipart.upload.timeout ``` ```properties alluxio.underfs.obs.multipart.upload.threads ``` ```properties alluxio.underfs.obs.multipart.upload.partition.size ``` Huawei OBS UFS integration is contributed and maintained by the Alluxio community. 
The source code is located {:target=\"_blank\"}. Feel free to submit pull requests to improve the integration and update the documentation {:target=\"_blank\"} if any information is missing or out of date." } ]
{ "category": "Runtime", "file_name": "Huawei-OBS.md", "project_name": "Alluxio", "subcategory": "Cloud Native Storage" }
[ { "data": "NRI, the Node Resource Interface, is a common framework for plugging extensions into OCI-compatible container runtimes. It provides basic mechanisms for plugins to track the state of containers and to make limited changes to their configuration. NRI itself is agnostic to the internal implementation details of any container runtime. It provides an adaptation library which runtimes use to integrate to and interact with NRI and plugins. In principle any NRI plugin should be able to work with NRI-enabled runtimes. For a detailed description of NRI and its capabilities please take a look at the . <details> <summary>see the containerd/NRI integration diagram</summary> <img src=\"./containerd-nri-integration.png\" title=\"Containerd/NRI Integration\"> </details> NRI support in containerd is split into two parts both logically and physically. These parts are a common plugin (/nri/*) to integrate to NRI and CRI-specific bits (/pkg/cri/server/nri-api) which convert data between the runtime-agnostic NRI representation and the internal representation of the CRI plugin. The containerd common NRI plugin implements the core logic of integrating to and interacting with NRI. However, it does this without any knowledge about the internal representation of containers or pods within containerd. It defines an additional interface, Domain, which is used whenever the internal representation of a container or pod needs to be translated to the runtime agnostic NRI one, or when a configuration change requested by an external NRI plugin needs to be applied to a container within containerd. `Domain` can be considered as a short-cut name for Domain-Namespace as Domain implements the functions the generic NRI interface needs to deal with pods and containers from a particular containerd namespace. As a reminder, containerd namespaces isolate state between clients of containerd. E.g. \"k8s.io\" for the kubernetes CRI clients, \"moby\" for docker clients, ... and \"containerd\" as the default for containerd/ctr. The containerd CRI plugin registers itself as an above mentioned NRI Domain for the \"k8s.io\" namespace, to allow container configuration to be customized by external NRI plugins. Currently this Domain interface is only implemented for the original CRI `pkg/cri/server` implementation. Implementing it for the more recent experimental `pkg/cri/sbserver` implementation is on the TODO list. The main reason for this split of functionality is to allow NRI plugins for other types of sandboxes and for other container clients other than just for CRI containers in the \"k8s.io\" namespace. Enabling and disabling NRI support in containerd happens by enabling or disabling the common containerd NRI plugin. Starting with containerd" }, { "data": "The plugin, and consequently NRI functionality, is enabled by default. It can be disabled by editing the `[plugins.\"io.containerd.nri.v1.nri\"]` section in the containerd configuration file, which by default is `/etc/containerd/config.toml`, and changing `disable = false` to `disable = true`. The NRI section to disable NRI functionality should look something like this: ```toml [plugins.\"io.containerd.nri.v1.nri\"] disable = true disable_connections = false pluginconfigpath = \"/etc/nri/conf.d\" plugin_path = \"/opt/nri/plugins\" pluginregistrationtimeout = \"5s\" pluginrequesttimeout = \"2s\" socket_path = \"/var/run/nri/nri.sock\" ``` There are two ways how an NRI plugin can be started. 
Plugins can be pre-registered in which case they are automatically started when the NRI adaptation is instantiated (or in our case when containerd is started). Plugins can also be started by external means, for instance by systemd. Pre-registering a plugin happens by placing a symbolic link to the plugin executable into a well-known NRI-specific directory, `/opt/nri/plugins` by default. A pre-registered plugin is started with a socket pre-connected to NRI. Externally launched plugins connect to a well-known NRI-specific socket, `/var/run/nri/nri.sock` by default, to register themselves. The only difference between pre-registered and externally launched plugins is how they get started and connected to NRI. Once a connection is established all plugins are identical. NRI can be configured to disable connections from externally launched plugins, in which case the well-known socket is not created at all. The configuration fragment shown above ensures that external connections are enabled regardless of the built-in NRI defaults. This is convenient for testing as it allows one to connect, disconnect and reconnect plugins at any time. Note that you can't run two NRI-enabled runtimes on a single node with the same default socket configuration. You need to either disable NRI or change the NRI socket path in one of the runtimes. You can verify that NRI integration is properly enabled and functional by configuring containerd and NRI as described above, taking the NRI logger plugin from the on github, compiling it and starting it up. ```bash git clone https://github.com/containerd/nri cd nri make ./build/bin/logger -idx 00 ``` You should see the logger plugin receiving receiving a list of existing pods and containers. If you then create or remove further pods and containers using crictl or kubectl you should see detailed logs of the corresponding NRI events printed by the logger. You can enable backward compatibility with NRI v0.1.0 plugins using the . ```bash git clone https://github.com/containerd/nri cd nri make sudo cp build/bin/v010-adapter /usr/local/bin sudo mkdir -p /opt/nri/plugins sudo ln -s /usr/local/bin/v010-adapter /opt/nri/plugins/00-v010-adapter ```" } ]
{ "category": "Runtime", "file_name": "NRI.md", "project_name": "containerd", "subcategory": "Container Runtime" }
[ { "data": "(network-load-balancers)= ```{note} Network load balancers are currently available for the {ref}`network-ovn`. ``` Network load balancers are similar to forwards in that they allow specific ports on an external IP address to be forwarded to specific ports on internal IP addresses in the network that the load balancer belongs to. The difference between load balancers and forwards is that load balancers can be used to share ingress traffic between multiple internal backend addresses. This feature can be useful if you have limited external IP addresses or want to share a single external address and ports over multiple instances. A load balancer is made up of: A single external listen IP address. One or more named backends consisting of an internal IP and optional port ranges. One or more listen port ranges that are configured to forward to one or more named backends. Use the following command to create a network load balancer: ```bash incus network load-balancer create <networkname> <listenaddress> [configuration_options...] ``` Each load balancer is assigned to a network. It requires a single external listen address (see {ref}`network-load-balancers-listen-addresses` for more information about which addresses can be load-balanced). Network load balancers have the following properties: Property | Type | Required | Description :-- | :-- | :-- | :-- `listen_address` | string | yes | IP address to listen on `description` | string | no | Description of the network load balancer `config` | string set | no | Configuration options as key/value pairs (only `user.*` custom keys supported) `backends` | backend list | no | List of {ref}`backend specifications <network-load-balancers-backend-specifications>` `ports` | port list | no | List of {ref}`port specifications <network-load-balancers-port-specifications>` (network-load-balancers-listen-addresses)= The following requirements must be met for valid listen addresses: Allowed listen addresses must be defined in the uplink network's `ipv{n}.routes` settings or the project's {config:option}`project-restricted:restricted.networks.subnets` setting (if set). The listen address must not overlap with a subnet that is in use with another network or entity in that network. (network-load-balancers-backend-specifications)= You can add backend specifications to the network load balancer to define target addresses (and optionally ports). The backend target address must be within the same subnet as the network that the load balancer is associated" }, { "data": "Use the following command to add a backend specification: ```bash incus network load-balancer backend add <networkname> <listenaddress> <backendname> <listenports> <targetaddress> [<targetports>] ``` The target ports are optional. If not specified, the load balancer will use the listen ports for the backend for the backend target ports. If you want to forward the traffic to different ports, you have two options: Specify a single target port to forward traffic from all listen ports to this target port. Specify a set of target ports with the same number of ports as the listen ports to forward traffic from the first listen port to the first target port, the second listen port to the second target port, and so on. Network load balancer backends have the following properties: Property | Type | Required | Description :-- | :-- | :-- | :-- `name` | string | yes | Name of the backend `target_address` | string | yes | IP address to forward to `targetport` | string | no | Target port(s) (e.g. 
`70,80-90` or `90`), same as the {ref}`port <network-load-balancers-port-specifications>`'s `listenport` if empty `description` | string | no | Description of backend (network-load-balancers-port-specifications)= You can add port specifications to the network load balancer to forward traffic from specific ports on the listen address to specific ports on one or more target backends. Use the following command to add a port specification: ```bash incus network load-balancer port add <networkname> <listenaddress> <protocol> <listenports> <backendname>[,<backend_name>...] ``` You can specify a single listen port or a set of ports. The backend(s) specified must have target port(s) settings compatible with the port's listen port(s) setting. Network load balancer ports have the following properties: Property | Type | Required | Description :-- | :-- | :-- | :-- `protocol` | string | yes | Protocol for the port(s) (`tcp` or `udp`) `listen_port` | string | yes | Listen port(s) (e.g. `80,90-100`) `target_backend` | backend list | yes | Backend name(s) to forward to `description` | string | no | Description of port(s) Use the following command to edit a network load balancer: ```bash incus network load-balancer edit <networkname> <listenaddress> ``` This command opens the network load balancer in YAML format for editing. You can edit both the general configuration, backend and the port specifications. Use the following command to delete a network load balancer: ```bash incus network load-balancer delete <networkname> <listenaddress> ```" } ]
{ "category": "Runtime", "file_name": "network_load_balancers.md", "project_name": "lxd", "subcategory": "Container Runtime" }
[ { "data": "<!-- Thanks for your contribution! please review https://github.com/v6d-io/v6d/blob/main/CONTRIBUTING.rst before opening an issue. --> Describe your problem A clear and concise description of what your problem is. It might be a bug, a feature request, or just a problem that need support from the vineyard team. If is is a bug report, to help us reproducing this bug, please provide information below: Your Operation System version (`uname -a`): The version of vineyard you use (`vineyard.version`): Versions of crucial packages, such as gcc, numpy, pandas, etc.: Full stack of the error (if there are a crash): Minimized code to reproduce the error: If it is a feature request, please provides a clear and concise description of what you want to happen: Additional context Add any other context about the problem here." } ]
{ "category": "Runtime", "file_name": "ISSUE_TEMPLATE.md", "project_name": "Vineyard", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Release Schedule\" layout: docs toc: \"true\" Definitions borrowed from General phases for a Velero release Enhancement/Design freeze Implementation phase Feature freeze & pruning Code freeze & prerelease Release" } ]
{ "category": "Runtime", "file_name": "release-schedule.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "title: How Weave Net Interprets Network Topology menu_order: 20 search_type: Documentation This section contains the following topics: * * Topology messages capture which peers are connected to other peers. Weave peers communicate their knowledge of the topology (and changes to it) to others, so that all peers learn about the entire topology. Communication between peers occurs over TCP links using: a) a spanning-tree based broadcast mechanism, and b) a neighbor gossip mechanism. Topology messages are sent by a peer in the following instances: when a connection has been added; if the remote peer appears to be new to the network, the entire topology is sent to it, and an incremental update, containing information on just the two peers at the ends of the connection, is broadcast, when a connection has been marked as 'established', indicating that the remote peer can receive UDP traffic from the peer; an update containing just information about the local peer is broadcast, when a connection has been torn down; an update containing just information about the local peer is broadcast, periodically, on a timer, the entire topology is \"gossiped\" to a subset of neighbors, based on a topology-sensitive random distribution. This is done in case some of the aforementioned broadcasts do not reach all peers, due to rapid changes in the topology causing broadcast routing tables to become outdated. The receiver of a topology update merges that update with its own topology model, adding peers hitherto unknown to it, and updating peers for which the update contains a more recent version than known to it. If there were any such new/updated peers, and the topology update was received over gossip (rather than broadcast), then an improved update containing them is gossiped. If the update mentions a peer that the receiver does not know, then the entire update is" }, { "data": "Every gossip message is structured as follows: +--+ | 1-byte message type - Gossip | +--+ | 4-byte Gossip channel - Topology | +--+ | Peer Name of source | +--+ | Gossip payload (topology update) | +--+ The topology update payload is laid out like this: +--+ | Peer 1: Name | +--+ | Peer 1: NickName | +--+ | Peer 1: UID | +--+ | Peer 1: Version number | +--+ | Peer 1: List of connections | +--+ | ... | +--+ | Peer N: Name | +--+ | Peer N: NickName | +--+ | Peer N: UID | +--+ | Peer N: Version number | +--+ | Peer N: List of connections | +--+ Each List of connections is encapsulated as a byte buffer, within which the structure is: +--+ | Connection 1: Remote Peer Name | +--+ | Connection 1: Remote IP address | +--+ | Connection 1: Outbound | +--+ | Connection 1: Established | +--+ | Connection 2: Remote Peer Name | +--+ | Connection 2: Remote IP address | +--+ | Connection 2: Outbound | +--+ | Connection 2: Established | +--+ | ... | +--+ | Connection N: Remote Peer Name | +--+ | Connection N: Remote IP address | +--+ | Connection N: Outbound | +--+ | Connection N: Established | +--+ If a peer, after receiving a topology update, sees that another peer no longer has any connections within the network, it drops all knowledge of that second peer. The propagation of topology changes to all peers is not instantaneous. Therefore, it is very possible for a node elsewhere in the network to have an out-of-date view. If the destination peer for a packet is still reachable, then out-of-date topology can result in it taking a less efficient route. 
If the out-of-date topology makes it look as if the destination peer is not reachable, then the packet is dropped. For most protocols (for example, TCP), the transmission will be retried a short time later, by which time the topology should have updated. In a fully connected mesh of N peers, this would mean N^2 connections. To constrain resource consumption, communication between the peers, and the complexity of route calculation, there is a configurable upper limit on the maximum number of connections that a peer can make to remote peers. There is a safe default value (100) which works for most deployments. However, if you would like to increase the number of peers, you can explicitly override the default value by passing the value through the `connlimit` flag, e.g. `weave launch --connlimit=100`. If you are using Weave Net with Kubernetes, you can set the CONN_LIMIT environment variable to set the connection limit. See Also" } ]
{ "category": "Runtime", "file_name": "network-topology.md", "project_name": "Weave Net", "subcategory": "Cloud Native Network" }
[ { "data": "Previous change logs can be found at , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . , thanks to . 3 nodes (3mds, 9metaserver), each with: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz 256G RAM disk cache: INTEL SSDSC2BB80 800G (IOPS is about 30000+, bandwidth is about 300 MiB) ```yaml fs.cto: true fs.lookupCache.negativeTimeoutSec: 1 fs.lookupCache.minUses: 3 fuseClient.supportKVcache: true client.loglevel: 0 ``` ```bash [global] rw=randread direct=1 size=50G iodepth=128 ioengine=libaio bsrange=4k-4k ramp_time=10 runtime=300 group_reporting [disk01] filename=/path/to/mountpoint/1.txt ``` | fio | IOPS/bandwidth | avg-latency(ms) | clat 99.00th (ms) | clat 99.99th (ms) | | :-: | :-: | :-: | :-: | :-: | | numjobs=1 / size=50GB / 4k randwrite | 4243 | 0.23 | 0.176 | 2 | | numjobs=1 / size=50GB / 4k randwrite | 908 | 1.0 | 3.5 | 104 | | numjobs=1 / size=50GB / 512k write | 412 MiB/s | 2.4 | 19 | 566 | | numjobs=1 / size=50GB / 512k read | 333 MiB/s | 2.9 | 20 | 115 | ```bash for i in 1 4 8; do mpirun --allow-run-as-root -np $i mdtest -z 2 -b 3 -I 10000 -d /path/to/mountpoint; done ``` | Case | Dir creation | Dir stat | Dir removal | File creation | File stat | File read | File removal | Tree creation | Tree removal | | | | | | | | | | | | | client*1 | 341 | 395991 | 291 | 334 | 383844 | 3694 | 309 | 322 | 851 | | client*4 | 385 | 123266 | 288 | 361 | 1515592 | 15056 | 310 | 363 | 16 | | client*8 | 415 | 22138 | 314 | 400 | 2811416 | 20976 | 347 | 355 | 8 |" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-2.6.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }
[ { "data": "Thank you for taking the time out to contribute to the kube-vip project! This guide will walk you through the process of making your first commit and how to effectively get it merged upstream. <!-- toc --> - - - - <!-- /toc --> To get started, let's ensure you have completed the following prerequisites for contributing to project kube-vip: Read and observe the . Check out the for the kube-vip architecture and design. Set up your Now that you're setup, skip ahead to learn how to . Also, A GitHub account will be required in order to submit code changes and interact with the project. For committing any changes in Github it is required for you to have a . There are multiple ways in which you can contribute, either by contributing code in the form of new features or bug-fixes or non-code contributions like helping with code reviews, triaging of bugs, documentation updates, filing or writing blogs/manuals etc. Developers work in their own forked copy of the repository and when ready, submit pull requests to have their changes considered and merged into the project's repository. Fork your own copy of the repository to your GitHub account by clicking on `Fork` button on . Clone the forked repository on your local setup. ```bash git clone https://github.com/$user/kube-vip ``` Add a remote upstream to track upstream kube-vip repository. ```bash git remote add upstream https://github.com/kube-vip/kube-vip ``` Never push to upstream remote ```bash git remote set-url --push upstream no_push ``` Create a topic branch. ```bash git checkout -b branchName ``` Make changes and commit it locally. Make sure that your commit is . ```bash git add <modifiedFile> git commit -s ``` Update the \"Unreleased\" section of the for any significant change that impacts users. Keeping branch in sync with upstream. ```bash git checkout branchName git fetch upstream git rebase upstream/main ``` Push local branch to your forked repository. ```bash git push -f $remoteBranchName branchName ``` Create a Pull request on GitHub. Visit your fork at `https://github.com/kube-vip/kube-vip` and click `Compare & Pull Request` button next to your `remoteBranchName` branch. Once you have opened a Pull Request (PR), reviewers will be assigned to your PR and they may provide review comments which you need to address. Commit changes made in response to review comments to the same branch on your fork. Once a PR is ready to merge, squash any fix review feedback, typo and merged sorts of commits. To make it easier for reviewers to review your PR, consider the following: Follow the golang . Format your code with `make golangci-fix`; if the flag an issue that cannot be fixed automatically, an error message will be displayed so you can address the issue. Follow guidelines. Follow" }, { "data": "If your PR fixes a bug or implements a new feature, add the appropriate test cases to our to guarantee enough coverage. A PR that makes significant code changes without contributing new test cases will be flagged by reviewers and will not be accepted. For symbol names and documentation, do not introduce new usage of harmful language such as 'master / slave' (or 'slave' independent of 'master') and 'blacklist / whitelist'. For more information about what constitutes harmful language and for a reference word replacement list, please refer to the . We are committed to removing all harmful language from the project. 
If you detect existing usage of harmful language in code or documentation, please report the issue to us or open a Pull Request to address it directly. Thanks! To build the kube-vip Docker image together with all kube-vip bits, you can simply do: Checkout your feature branch and `cd` into it. Run `make dockerx86` The second step will compile the kube-vip code in a `golang` container, and build a `Ubuntu 20.04` Docker image that includes all the generated binaries. must be installed on your local machine in advance. Alternatively, you can build the kube-vip code in your local Go environment. The kube-vip project uses the which was introduced in Go 1.11. It facilitates dependency tracking and no longer requires projects to live inside the `$GOPATH`. To develop locally, you can follow these steps: 2. Checkout your feature branch and `cd` into it. To build all Go files and install them under `bin`, run `make bin` To run all Go unit tests, run `make test-unit` To build the kube-vip Ubuntu Docker image separately with the binaries generated in step 2, run `make ubuntu` For more information about the tests we run as part of CI, please refer to . Create a branch in your forked repo ```bash git checkout -b revertName ``` Sync the branch with upstream ```bash git fetch upstream git rebase upstream/main ``` Create a revert based on the SHA of the commit. The commit needs to be . ```bash git revert -s SHA ``` Push this new commit. ```bash git push $remoteRevertName revertName ``` Create a Pull Request on GitHub. Visit your fork at `https://github.com/kube-vip/kube-vip` and click `Compare & Pull Request` button next to your `remoteRevertName` branch. It is recommended to sign your work when contributing to the kube-vip repository. Git provides the `-s` command-line option to append the required line automatically to the commit message: ```bash git commit -s -m 'This is my commit message' ``` For an existing commit, you can also use this option with `--amend`: ```bash git commit -s --amend ``` If more than one person works on something it's possible for more than one person to sign-off on it. For example: ```bash Signed-off-by: Some Developer [email protected] Signed-off-by: Another Developer" }, { "data": "``` We use the to enforce that all commits in a Pull Request include the required `Signed-off-by` line. If this is not the case, the app will report a failed status for the Pull Request and it will be blocked from being merged. Compared to our earlier CLA, DCO tends to make the experience simpler for new contributors. If you are contributing as an employee, there is no need for your employer to sign anything; the DCO assumes you are authorized to submit contributions (it's your responsibility to check with your employer). We use labels and workflows (some manual, some automated with GitHub Actions) to help us manage triage, prioritize, and track issue progress. For a detailed discussion, see . Help is always appreciated. If you find something that needs fixing, please file an issue . Please ensure that the issue is self explanatory and has enough information for an assignee to get started. Before picking up a task, go through the existing and make sure that your change is not already being worked on. If it does not exist, please create a new issue and discuss it with other members. For simple contributions to kube-vip, please ensure that this minimum set of labels are included on your issue: kind* -- common ones are `kind/feature`, `kind/support`, `kind/bug`, `kind/documentation`, or `kind/design`. 
For an overview of the different types of issues that can be submitted, see [Issue and PR Kinds](#issue-and-pr-kinds). The kind of issue will determine the issue workflow. area* (optional) -- if you know the area the issue belongs in, you can assign it. Otherwise, another community member will label the issue during triage. The area label will identify the area of interest an issue or PR belongs in and will ensure the appropriate reviewers shepherd the issue or PR through to its closure. For an overview of areas, see the . size* (optional) -- if you have an idea of the size (lines of code, complexity, effort) of the issue, you can label it using a . The size can be updated during backlog grooming by contributors. This estimate is used to guide the number of features selected for a milestone. All other labels will be assigned during issue triage. Once an issue has been submitted, the CI (GitHub actions) or a human will automatically review the submitted issue or PR to ensure that it has all relevant information. If information is lacking or there is another problem with the submitted issue, an appropriate `triage/<?>` label will be applied. After an issue has been triaged, the maintainers can prioritize the issue with an appropriate `priority/<?>` label. Once an issue has been submitted, categorized, triaged, and prioritized it is marked as `ready-to-work`. A ready-to-work issue should have labels indicating assigned areas, prioritization, and should not have any remaining triage labels." } ]
{ "category": "Runtime", "file_name": "CONTRIBUTING.md", "project_name": "kube-vip", "subcategory": "Cloud Native Network" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> Manage the multicast subscribers. ``` -h, --help help for subscriber ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Manage multicast BPF programs - Add a remote subscriber to the multicast group. - Delete a subscriber from the multicast group. - List the multicast subscribers for the given group." } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bpf_multicast_subscriber.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Name | Type | Description | Notes | - | - | - Id | string | | Size | int64 | | File | Pointer to string | | [optional] Mergeable | Pointer to bool | | [optional] [default to false] Shared | Pointer to bool | | [optional] [default to false] Hugepages | Pointer to bool | | [optional] [default to false] HugepageSize | Pointer to int64 | | [optional] HostNumaNode | Pointer to int32 | | [optional] HotplugSize | Pointer to int64 | | [optional] HotpluggedSize | Pointer to int64 | | [optional] Prefault | Pointer to bool | | [optional] [default to false] `func NewMemoryZoneConfig(id string, size int64, ) *MemoryZoneConfig` NewMemoryZoneConfig instantiates a new MemoryZoneConfig object This constructor will assign default values to properties that have it defined, and makes sure properties required by API are set, but the set of arguments will change when the set of required properties is changed `func NewMemoryZoneConfigWithDefaults() *MemoryZoneConfig` NewMemoryZoneConfigWithDefaults instantiates a new MemoryZoneConfig object This constructor will only assign default values to properties that have it defined, but it doesn't guarantee that properties required by API are set `func (o *MemoryZoneConfig) GetId() string` GetId returns the Id field if non-nil, zero value otherwise. `func (o MemoryZoneConfig) GetIdOk() (string, bool)` GetIdOk returns a tuple with the Id field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetId(v string)` SetId sets Id field to given value. `func (o *MemoryZoneConfig) GetSize() int64` GetSize returns the Size field if non-nil, zero value otherwise. `func (o MemoryZoneConfig) GetSizeOk() (int64, bool)` GetSizeOk returns a tuple with the Size field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetSize(v int64)` SetSize sets Size field to given value. `func (o *MemoryZoneConfig) GetFile() string` GetFile returns the File field if non-nil, zero value otherwise. `func (o MemoryZoneConfig) GetFileOk() (string, bool)` GetFileOk returns a tuple with the File field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetFile(v string)` SetFile sets File field to given value. `func (o *MemoryZoneConfig) HasFile() bool` HasFile returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetMergeable() bool` GetMergeable returns the Mergeable field if non-nil, zero value otherwise. `func (o MemoryZoneConfig) GetMergeableOk() (bool, bool)` GetMergeableOk returns a tuple with the Mergeable field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetMergeable(v bool)` SetMergeable sets Mergeable field to given value. `func (o *MemoryZoneConfig) HasMergeable() bool` HasMergeable returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetShared() bool` GetShared returns the Shared field if non-nil, zero value" }, { "data": "`func (o MemoryZoneConfig) GetSharedOk() (bool, bool)` GetSharedOk returns a tuple with the Shared field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetShared(v bool)` SetShared sets Shared field to given value. `func (o *MemoryZoneConfig) HasShared() bool` HasShared returns a boolean if a field has been set. 
`func (o *MemoryZoneConfig) GetHugepages() bool` GetHugepages returns the Hugepages field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetHugepagesOk() (*bool, bool)` GetHugepagesOk returns a tuple with the Hugepages field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetHugepages(v bool)` SetHugepages sets Hugepages field to given value. `func (o *MemoryZoneConfig) HasHugepages() bool` HasHugepages returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetHugepageSize() int64` GetHugepageSize returns the HugepageSize field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetHugepageSizeOk() (*int64, bool)` GetHugepageSizeOk returns a tuple with the HugepageSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetHugepageSize(v int64)` SetHugepageSize sets HugepageSize field to given value. `func (o *MemoryZoneConfig) HasHugepageSize() bool` HasHugepageSize returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetHostNumaNode() int32` GetHostNumaNode returns the HostNumaNode field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetHostNumaNodeOk() (*int32, bool)` GetHostNumaNodeOk returns a tuple with the HostNumaNode field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetHostNumaNode(v int32)` SetHostNumaNode sets HostNumaNode field to given value. `func (o *MemoryZoneConfig) HasHostNumaNode() bool` HasHostNumaNode returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetHotplugSize() int64` GetHotplugSize returns the HotplugSize field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetHotplugSizeOk() (*int64, bool)` GetHotplugSizeOk returns a tuple with the HotplugSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetHotplugSize(v int64)` SetHotplugSize sets HotplugSize field to given value. `func (o *MemoryZoneConfig) HasHotplugSize() bool` HasHotplugSize returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetHotpluggedSize() int64` GetHotpluggedSize returns the HotpluggedSize field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetHotpluggedSizeOk() (*int64, bool)` GetHotpluggedSizeOk returns a tuple with the HotpluggedSize field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetHotpluggedSize(v int64)` SetHotpluggedSize sets HotpluggedSize field to given value. `func (o *MemoryZoneConfig) HasHotpluggedSize() bool` HasHotpluggedSize returns a boolean if a field has been set. `func (o *MemoryZoneConfig) GetPrefault() bool` GetPrefault returns the Prefault field if non-nil, zero value otherwise. `func (o *MemoryZoneConfig) GetPrefaultOk() (*bool, bool)` GetPrefaultOk returns a tuple with the Prefault field if it's non-nil, zero value otherwise and a boolean to check if the value has been set. `func (o *MemoryZoneConfig) SetPrefault(v bool)` SetPrefault sets Prefault field to given value. `func (o *MemoryZoneConfig) HasPrefault() bool` HasPrefault returns a boolean if a field has been set." } ]
{ "category": "Runtime", "file_name": "MemoryZoneConfig.md", "project_name": "Kata Containers", "subcategory": "Container Runtime" }
[ { "data": "<!-- This file was autogenerated via cilium cmdref, do not edit manually--> List routes in the BGP Control Plane's RIBs List routes in the BGP Control Plane's Routing Information Bases (RIBs) ``` cilium-dbg bgp routes <available | advertised> <afi> <safi> [vrouter <asn>] [peer|neighbor <address>] [flags] ``` ``` Get all IPv4 unicast routes available: cilium-dbg bgp routes available ipv4 unicast Get all IPv6 unicast routes available for a specific vrouter: cilium-dbg bgp routes available ipv6 unicast vrouter 65001 Get IPv4 unicast routes advertised to a specific peer: cilium-dbg bgp routes advertised ipv4 unicast peer 10.0.0.1 ``` ``` -h, --help help for routes -o, --output string json| yaml| jsonpath='{}' ``` ``` --config string Config file (default is $HOME/.cilium.yaml) -D, --debug Enable debug messages -H, --host string URI to server-side API ``` - Access to BGP control plane" } ]
{ "category": "Runtime", "file_name": "cilium-dbg_bgp_routes.md", "project_name": "Cilium", "subcategory": "Cloud Native Network" }
[ { "data": "Generating Markdown pages from a cobra command is incredibly easy. An example is as follows: ```go package main import ( \"log\" \"github.com/spf13/cobra\" \"github.com/spf13/cobra/doc\" ) func main() { cmd := &cobra.Command{ Use: \"test\", Short: \"my test program\", } err := doc.GenMarkdownTree(cmd, \"/tmp\") if err != nil { log.Fatal(err) } } ``` That will get you a Markdown document `/tmp/test.md` This program can actually generate docs for the kubectl command in the kubernetes project ```go package main import ( \"log\" \"io/ioutil\" \"os\" \"k8s.io/kubernetes/pkg/kubectl/cmd\" cmdutil \"k8s.io/kubernetes/pkg/kubectl/cmd/util\" \"github.com/spf13/cobra/doc\" ) func main() { kubectl := cmd.NewKubectlCommand(cmdutil.NewFactory(nil), os.Stdin, ioutil.Discard, ioutil.Discard) err := doc.GenMarkdownTree(kubectl, \"./\") if err != nil { log.Fatal(err) } } ``` This will generate a whole series of files, one for each command in the tree, in the directory specified (in this case \"./\") You may wish to have more control over the output, or only generate for a single command, instead of the entire command tree. If this is the case you may prefer to `GenMarkdown` instead of `GenMarkdownTree` ```go out := new(bytes.Buffer) err := doc.GenMarkdown(cmd, out) if err != nil { log.Fatal(err) } ``` This will write the markdown doc for ONLY \"cmd\" into the out, buffer. Both `GenMarkdown` and `GenMarkdownTree` have alternate versions with callbacks to get some control of the output: ```go func GenMarkdownTreeCustom(cmd *Command, dir string, filePrepender, linkHandler func(string) string) error { //... } ``` ```go func GenMarkdownCustom(cmd Command, out bytes.Buffer, linkHandler func(string) string) error { //... } ``` The `filePrepender` will prepend the return value given the full filepath to the rendered Markdown file. A common use case is to add front matter to use the generated documentation with : ```go const fmTemplate = ` date: %s title: \"%s\" slug: %s url: %s ` filePrepender := func(filename string) string { now := time.Now().Format(time.RFC3339) name := filepath.Base(filename) base := strings.TrimSuffix(name, path.Ext(name)) url := \"/commands/\" + strings.ToLower(base) + \"/\" return fmt.Sprintf(fmTemplate, now, strings.Replace(base, \"_\", \" \", -1), base, url) } ``` The `linkHandler` can be used to customize the rendered internal links to the commands, given a filename: ```go linkHandler := func(name string) string { base := strings.TrimSuffix(name, path.Ext(name)) return \"/commands/\" + strings.ToLower(base) + \"/\" } ```" } ]
{ "category": "Runtime", "file_name": "md_docs.md", "project_name": "CubeFS", "subcategory": "Cloud Native Storage" }
[ { "data": "title: \"Use Oracle Cloud as a Backup Storage Provider for Velero\" layout: docs is a tool used to backup and migrate Kubernetes applications. Here are the steps to use as a destination for Velero backups. Download the of Velero to your development environment. This includes the `velero` CLI utility and example Kubernetes manifest files. For example: ``` wget https://github.com/vmware-tanzu/velero/releases/download/v1.0.0/velero-v1.0.0-linux-amd64.tar.gz ``` NOTE: Its strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable! Untar the release in your `/usr/bin` directory: `tar -xzvf <RELEASE-TARBALL-NAME>.tar.gz` You may choose to rename the directory `velero` for the sake of simplicity: `mv velero-v1.0.0-linux-amd64 velero` Add it to your PATH: `export PATH=/usr/local/bin/velero:$PATH` Run `velero` to confirm the CLI has been installed correctly. You should see an output like this: ``` $ velero Velero is a tool for managing disaster recovery, specifically for Kubernetes cluster resources. It provides a simple, configurable, and operationally robust way to back up your application state and associated data. If you're familiar with kubectl, Velero supports a similar model, allowing you to execute commands such as 'velero get backup' and 'velero create schedule'. The same operations can also be performed as 'velero backup get' and 'velero schedule create'. Usage: velero [command] ``` Oracle Object Storage provides an API to enable interoperability with Amazon S3. To use this Amazon S3 Compatibility API, you need to generate the signing key required to authenticate with Amazon S3. This special signing key is an Access Key/Secret Key pair. Follow these steps to . Refer to this link for more information about . Create a Velero credentials file with your Customer Secret Key: ``` $ vi credentials-velero [default] awsaccesskey_id=bae031188893d1eb83719648790ac850b76c9441 awssecretaccess_key=MmY9heKrWiNVCSZQ2Mf5XTJ6Ys93Bw2d2D6NMSTXZlk= ``` Create an Oracle Cloud Object Storage bucket called `velero` in the root compartment of your Oracle Cloud tenancy. Refer to this page for . You will need the following information to install Velero into your Kubernetes cluster with Oracle Object Storage as the Backup Storage provider: ``` velero install \\ --provider [provider name] \\ --bucket [bucket name] \\ --prefix [tenancy name] \\ --use-volume-snapshots=false \\ --secret-file [secret file location] \\ --backup-location-config region=[region],s3ForcePathStyle=\"true\",s3Url=[storage API endpoint] ``` `--provider` This example uses the S3-compatible API, so use `aws` as the provider. `--bucket` The name of the bucket created in Oracle Object Storage - in our case this is named `velero`. ` --prefix` The name of your Oracle Cloud tenancy - in our case this is named `oracle-cloudnative`. `--use-volume-snapshots=false` Velero does not have a volume snapshot plugin for Oracle Cloud, so creating volume snapshots is disabled. `--secret-file` The path to your `credentials-velero` file. `--backup-location-config` The path to your Oracle Object Storage bucket. 
This consists of your `region` which corresponds to your Oracle Cloud region name () and the `s3Url`, the S3-compatible API endpoint for Oracle Object Storage based on your region: `https://oracle-cloudnative.compat.objectstorage.[region name].oraclecloud.com` For example: ``` velero install \\ --provider aws \\ --bucket velero \\ --prefix oracle-cloudnative \\ --use-volume-snapshots=false \\ --secret-file /Users/mboxell/bin/velero/credentials-velero \\ --backup-location-config region=us-phoenix-1,s3ForcePathStyle=\"true\",s3Url=https://oracle-cloudnative.compat.objectstorage.us-phoenix-1.oraclecloud.com ``` This will create a `velero` namespace in your cluster along with a number of CRDs, a ClusterRoleBinding, ServiceAccount, Secret, and Deployment for Velero. If your pod fails to successfully provision, you can troubleshoot your installation by running: `kubectl logs [velero pod" }, { "data": "To remove Velero from your environment, delete the namespace, ClusterRoleBinding, ServiceAccount, Secret, and Deployment and delete the CRDs, run: ``` kubectl delete namespace/velero clusterrolebinding/velero kubectl delete crds -l component=velero ``` This will remove all resources created by `velero install`. After creating the Velero server in your cluster, try this example: Start the sample nginx app: `kubectl apply -f examples/nginx-app/base.yaml` This will create an `nginx-example` namespace with a `nginx-deployment` deployment, and `my-nginx` service. ``` $ kubectl apply -f examples/nginx-app/base.yaml namespace/nginx-example created deployment.apps/nginx-deployment created service/my-nginx created ``` You can see the created resources by running `kubectl get all` ``` $ kubectl get all NAME READY STATUS RESTARTS AGE pod/nginx-deployment-67594d6bf6-4296p 1/1 Running 0 20s pod/nginx-deployment-67594d6bf6-f9r5s 1/1 Running 0 20s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/my-nginx LoadBalancer 10.96.69.166 <pending> 80:31859/TCP 21s NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/nginx-deployment 2 2 2 2 21s NAME DESIRED CURRENT READY AGE replicaset.apps/nginx-deployment-67594d6bf6 2 2 2 21s ``` Create a backup: `velero backup create nginx-backup --include-namespaces nginx-example` ``` $ velero backup create nginx-backup --include-namespaces nginx-example Backup request \"nginx-backup\" submitted successfully. Run `velero backup describe nginx-backup` or `velero backup logs nginx-backup` for more details. ``` At this point you can navigate to appropriate bucket, called `velero`, in the Oracle Cloud Object Storage console to see the resources backed up using Velero. Simulate a disaster by deleting the `nginx-example` namespace: `kubectl delete namespaces nginx-example` ``` $ kubectl delete namespaces nginx-example namespace \"nginx-example\" deleted ``` Wait for the namespace to be deleted. To check that the nginx deployment, service, and namespace are gone, run: ``` kubectl get deployments --namespace=nginx-example kubectl get services --namespace=nginx-example kubectl get namespace/nginx-example ``` This should return: `No resources found.` Restore your lost resources: `velero restore create --from-backup nginx-backup` ``` $ velero restore create --from-backup nginx-backup Restore request \"nginx-backup-20190604102710\" submitted successfully. Run `velero restore describe nginx-backup-20190604102710` or `velero restore logs nginx-backup-20190604102710` for more details. 
``` Running `kubectl get namespaces` will show that the `nginx-example` namespace has been restored along with its contents. Run: `velero restore get` to view the list of restored resources. After the restore finishes, the output looks like the following: ``` $ velero restore get NAME BACKUP STATUS WARNINGS ERRORS CREATED SELECTOR nginx-backup-20190604104249 nginx-backup Completed 0 0 2019-06-04 10:42:39 -0700 PDT <none> ``` NOTE: The restore can take a few moments to finish. During this time, the `STATUS` column reads `InProgress`. After a successful restore, the `STATUS` column shows `Completed`, and `WARNINGS` and `ERRORS` will show `0`. All objects in the `nginx-example` namespace should be just as they were before you deleted them. If there are errors or warnings, for instance if the `STATUS` column displays `FAILED` instead of `InProgress`, you can look at them in detail with `velero restore describe <RESTORE_NAME>` Clean up the environment with `kubectl delete -f examples/nginx-app/base.yaml` ``` $ kubectl delete -f examples/nginx-app/base.yaml namespace \"nginx-example\" deleted deployment.apps \"nginx-deployment\" deleted service \"my-nginx\" deleted ``` If you want to delete any backups you created, including data in object storage, you can run: `velero backup delete BACKUP_NAME` ``` $ velero backup delete nginx-backup Are you sure you want to continue (Y/N)? Y Request to delete backup \"nginx-backup\" submitted successfully. The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed. ``` This asks the Velero server to delete all backup data associated with `BACKUP_NAME`. You need to do this for each backup you want to permanently delete. A future version of Velero will allow you to delete multiple backups by name or label selector. Once fully removed, the backup is no longer visible when you run: `velero backup get BACKUP_NAME` or more generally `velero backup get`: ```" } ]
{ "category": "Runtime", "file_name": "oracle-config.md", "project_name": "Velero", "subcategory": "Cloud Native Storage" }
[ { "data": "Previous change logs can be found at <hr> This version is mainly for performance optimization. - - Hardware: 6 nodes, each with: 20x SATA SSD Intel SSD DC S3500 Series 800G 2x Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz 2x Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection, bond mode is 802.3ad with layer2+3 hash policy 251G RAM Performance test is based on curve-nbd, the size of each block device is 200GB, all configurations are default, and each Chunkserver is deployed on one SSD. 1 NBD block device: | item | iops/bandwidth | avg-latency | 99th-latency | release-1.0<br>iops/bandwidth | release-1.0<br>avg-latency | release-1.0<br>99th-latency | iops/bandwidth improvement percentage | | :-: | :-: | :-: | :-: | :-: |:-: |:-: | :-: | | 4K randwrite, 128 depth | 109,000 iops | 1,100 us | 2,040 us | 62,900 iops | 2,000 us | 3,000 us | 73% | | 4K randread, 128 depth | 128,000 iops | 1,000 us | 1,467 us | 76,600 iops | 1,600 us | 2,000us | 67% | | 512K write, 128 depth | 204 MB/s | 314 ms | 393 ms | 147 MB/s | 435 ms | 609 ms| 38% | | 512K read, 128 depth | 995 MB/s | 64 ms | 92 ms | 757 MB/s | 84 ms | 284 ms| 31% | 10 NBD block device: | item | iops/bandwidth | avg-latency | 99th-latency | release-1.0<br>iops/bandwidth | release-1.0<br>avg-latency | release-1.0<br>99th-latency | iops/bandwidth improvement percentage | | :-: | :-: | :-: | :-: | :-: |:-: |:-: | :-: | | 4K randwrite, 128 depth | 262,000 iops | 4.9 ms | 32 ms | 176,000 iops | 7.2 ms | 16 ms | 48% | | 4K randread, 128 depth | 497,000 iops | 2.69 ms | 6 ms | 255,000 iops | 5.2 ms | 22 ms | 94% | | 512K write, 128 depth | 1,122 MB/s | 569 ms | 1,101 ms | 899 MB/s | 710 ms | 1,502 ms | 24% | | 512K read, 128 depth | 3,241 MB/s | 200 ms | 361 ms | 1,657 MB/s | 386 ms | 735 ms | 95% |" } ]
{ "category": "Runtime", "file_name": "CHANGELOG-1.1.md", "project_name": "Curve", "subcategory": "Cloud Native Storage" }